Neural representations of graspable objects: are tools special?

Cognitive Brain Research 22 (2005) 457-469

Research report

Sarah H. Creem-Regehr a,*, James N. Lee b

a 380 S. 1530 E. Rm 502, Department of Psychology, University of Utah, Salt Lake City, UT 84112, USA
b Department of Radiology, University of Utah, USA

Accepted 12 October 2004
Available online 24 November 2004

www.elsevier.com/locate/cogbrainres

Abstract

Recent cognitive and neuroimaging studies have examined the relationship between perception and action in the context of tools. These studies suggest that tools "potentiate" actions even when overt actions are not required in a task. Tools are unique objects because they have a visual structure that affords action and also a specific functional identity. The present studies investigated the extent to which a tool's representation for action is tied to its graspability or its functional use. Functional magnetic resonance imaging (fMRI) was used to examine the motor representations associated with different classes of graspable objects. Participants viewed and imagined grasping images of 3D tools with handles or neutral graspable shapes. During the viewing task, motor-related regions of cortex (posterior middle temporal, ventral premotor, and posterior parietal) were associated with tools compared to shapes. During the imagined grasping task, a frontal-parietal-temporal network of activation was seen with both types of objects. However, differences were found in the extent and location of premotor and parietal activation, with additional activation in the middle temporal and fusiform gyri for tools compared to shapes. We suggest that the functional identity of graspable objects influences the extent of motor representations associated with them. These results have implications for understanding the interactions between the "what" and "how" visual processing systems. © 2004 Elsevier B.V. All rights reserved.
Theme: Neural basis of behavior
Topic: Cognition
Keywords: fMRI; Imagined grasping; Mental imagery; Premotor cortex; Parietal cortex

1. Introduction

Tools are a special class of objects. Not only can they be processed for what they are, but also for how they can be used. Gibson [17] defined the word "affordances" as properties in the environment that are relevant for an animal's goals. Objects can have multiple affordances that define the way they will be grasped. For example, a toothbrush might afford brushing teeth if held by its handle or poking a small hole if held by its bristles. However, a toothbrush reminds us that, although an object may have multiple affordances, it usually has one specific use that is associated with its identity. This functional specificity of tools distinguishes them from other types of objects (e.g., a rock) that may be graspable but do not have a semantic identity tied to an action representation. The unique relationship between object identity and action in tools can help to address questions about the separability and interaction of different visual processing streams for "what" and "how" [44]. Research from neuropsychology [18,32] and psychophysics [3,5,18,27] supports the notion that systems for phenomenal awareness of objects and visually guided actions are dissociable. However, it is also clear that the systems interact. Creem and Proffitt [8] demonstrated that one condition for interaction is when a visually guided action must conform to a tool's functional identity. The present studies aimed to investigate the contribution of knowledge about an object's function to representations for actions associated with graspable objects.

* Corresponding author. Fax: +1 801 581 5841. E-mail address: sarah.creem@psych.utah.edu (S.H. Creem-Regehr).

0926-6410/$ - see front matter © 2004 Elsevier B.V. All rights reserved.
doi:10.1016/j.cogbrainres.2004.10.006

Recent research using behavioral and neuroimaging paradigms has demonstrated that the visual perception of tools outside of the context of the execution of a grasp is linked to action representations. For example, with a series of behavioral tasks, Tucker and Ellis [62,63] posited that objects "potentiate" actions even when the goal of a task is not to directly interact with the object. In one study, they demonstrated a Simon effect, finding that the position of visual objects' handles had a significant effect on the speed of key-press responses, although the handle position was irrelevant to the task (deciding if the object was upright or inverted). For example, handle orientation toward the right facilitated the key-press response made with the right hand. The result that viewing an object in a certain position affected the potential for subsequent action suggests that action-related information about objects is represented automatically when an object is viewed. In a more recent study [63], participants viewed objects and decided if they were natural or manufactured. The response measure was to perform a precision or power grasp to "answer" natural or manufactured. Thus, the grasp itself (power or precision) was irrelevant to the task but could be compatible or incompatible with the visual object presented. The results showed that grasp compatibility influenced speed of response, again suggesting that objects may be automatically perceived for their potential actions. These findings were recently supported by a functional magnetic resonance imaging (fMRI) paradigm [25], which found greater activation in parietal, premotor, and inferior prefrontal cortex, along with a greater reaction-time difference between compatible and incompatible trials.
Over recent years, there has been a burst of functional neuroimaging studies that have examined the relationship between perception and action in the context of tools using a variety of tasks such as object viewing and naming, action observation, imagined actions, and decisions about object function [1,4,6,13,21,23,37]. Furthermore, research in monkey neurophysiology has elegantly demonstrated significant links between vision and motor control by defining neurons that are responsive to both perception and action in premotor and parietal cortex [53]. Together, these studies suggest some functional equivalence between observed, imagined, and real actions, but they leave open the question of whether representations associated with tools are influenced by human knowledge of tool function or by the tool's visual structure that indicates graspability. We briefly review the literature on the neural representations associated with perception and action in three categories that are directly relevant to the present studies: perception of action-related objects, perception of action, and imagined action.

1.1. Perceiving objects

Recent fMRI and PET studies have explored neural distinctions in the visual recognition of different categories of objects such as animals, faces, houses, and artifacts. A number of these studies have examined the visual recognition of tools. These visual tool studies have varied in their task-goals and control images. Studies have involved viewing and naming tools compared with nonsense objects [43], fractal patterns [21], or other nonmanipulable objects such as animals, faces, and houses [6,7,48]. Other recent paradigms have assessed visual tools' influence on attention [29], the processing of a tool's motion [1], or decisions about a tool's function [37] or orientation [23,64].
In all, these studies have concluded that there are distinct regions in both the ventral and dorsal streams associated with the visual recognition of tools versus other types of objects. Namely, activation has been found in the middle temporal cortex and more medial regions of the fusiform gyrus, as well as the dorsal and ventral premotor and the posterior parietal cortex. Chao and Martin [6] have suggested that the premotor and parietal activation may be associated with the retrieval of information about hand movements associated with manipulable objects. This claim is consistent with similar patterns of activation seen in imagined hand movement tasks.

1.2. Perceiving actions

Primate studies of "mirror neurons," neurons in the ventral premotor area F5 that fire when a monkey performs a goal-directed action or observes someone else performing that action [53], have contributed to the goal of researchers to define a similar action recognition system in humans. This link between perception and execution of actions has been proposed as one account for the early-developing ability of humans to imitate. Human neuroimaging studies have examined the perception of actions with tasks such as observation of grasping [24,54] and recognition or imitation of actions [4,13,24,28,30,38]. Neural activation has been commonly reported in the posterior parietal cortex and the posterior inferior frontal cortex in the region of Broca's area. This pattern of activity in the inferior frontal cortex has led some to suggest that Broca's area is a human homologue of the ventral premotor area F5 in monkeys [30] and that action recognition and language production share common neural substrates [28]. However, not all action observation studies have found activation in the inferior frontal gyrus; Grezes et al.'s [24] recent data suggest that the human ventral precentral sulcus may better characterize monkey F5 mirror neuron function.

1.3. Imagining actions

Imagined actions can be categorized into two broad types of tasks: explicit goal-directed actions and spatial decisions that recruit mental body transformations. Goal-directed actions involving explicit imagined grasping, imagined joystick control, and imagined hand/finger-movement tasks have produced activity in the supplementary motor area (SMA), anterior cingulate, lateral premotor cortex, inferior frontal gyrus, posterior parietal cortex, and the cerebellum [12,16,19]. Some have also found dorsal prefrontal cortex,

basal ganglia [16], and primary motor activation [51]. Johnson et al. [33] recently distinguished between motor planning processes and movement simulation in an event-related fMRI imagined grasping task. They found that predominantly left-hemisphere motor-related cortical regions (and right cerebellum) were active in the hand preparation component of the task for both hands, compared to activation in bilateral dorsal premotor cortex and posterior parietal cortex contralateral to the imagined limb in grip selection. These results suggest that studies examining motor imagery may be combining neural representations involved in both the planning and execution of imagined actions. A second category of imagined action tasks includes implicit motor imagery, in which an observer is asked to make a spatial decision and, in doing so, recruits motor processing. These types of tasks typically involve handedness or same/different decisions about visually presented hands or objects [39,41,47,52,66]. These tasks have led to similar regions of activation as explicit goal-directed tasks, e.g., posterior parietal cortex, posterior temporal cortex, premotor, and some primary motor cortex and cerebellar activation. However, more specific neural distinctions have been found based on strategies and spatial frames of reference used in the transformation. The evidence of shared neural representations for real, imagined, and potential action is supported by Grezes and Decety's [22] meta-analysis of neuroimaging tasks involving motor execution, simulation, observation, and verb generation/tool naming. They found overlapping networks of activation in the SMA, dorsal and ventral premotor cortex, and inferior and superior regions of the parietal cortex. Furthermore, in a recent PET study, Grezes and Decety [23] compared several different types of tasks involving tools.
In tasks of tool orientation judgment, mental simulation of grasping and using tools, silent tool naming, and silent verb generation, they found a common network of activation consistent with the findings of the meta-analysis described above. They suggested that representations for action are automatically activated by visual tools regardless of whether the subject had an intention to act, in agreement with the findings of Tucker and Ellis and Tucker et al. [25,62,63].

1.4. Overview of the study

An unanswered question involves the nature of the tool representation that activates motor processing; it could be the semantic knowledge about function that is associated with tools, the inherent graspability of tools based on their visual structure, or an interaction between these variables. Neuroimaging studies finding premotor cortex activation with the visual presentation of graspable objects have used only familiar tools. However, single-unit recording studies in nonhuman primates have found ventral premotor cortex and anterior intraparietal neurons that respond to the visual presentation of many different-shaped graspable objects [45,46]. Furthermore, some research indicates that there is a direct route from the visual properties of an object to action that bypasses the recruitment of semantic knowledge [56]. The present studies addressed the question of whether representations for action differ for graspable objects that are associated with familiar functions and those that are not. We examined motor representations associated with graspable objects using fMRI by presenting two classes of objects that varied in their association with specific functions. Images of 3D tools (objects with a familiar functional identity) and 3D shapes (graspable objects with no known function) were presented while participants performed two different tasks, passive viewing and imagined grasping.
Fig. 1. An example of the tools, shapes, and scrambled images used in the viewing and imagined grasping experiments.

In the passive viewing task, our goal was to assess whether object and motor processing regions would be activated to the same extent for function-specific and neutral graspable objects. In the imagined grasping task, we

examined whether additional activation associated with planning a meaningful action (e.g., grasping a hammer versus grasping a cylinder) would be associated with functionally familiar objects. As a side question, participants were also asked to perform a real finger-clenching task with both hands to identify the hand regions of the primary motor cortex and to assess whether imagined and executed actions shared similar representations in primary motor cortex. In all, our results suggest that tools are special graspable objects. In the passive viewing task, we found evidence for greater motor processing (activation in parietal and premotor cortex) associated with viewing tools compared to neutral graspable shapes. In the imagined grasping task, both tools and shapes led to a network of premotor, posterior parietal, and posterior temporal regions predicted for simulated visually guided actions. However, differences emerged in the region and extent of activation in both the dorsal and ventral streams. These results suggest that an object's functional identity influences its perceived potential for action.

2. Materials and methods

2.1. Subjects

Twelve healthy right-handed subjects (aged 21-36, seven male) participated in the experiment. All subjects were naive as to the purpose of the experiment. The experimental procedures were approved by the University of Utah Institutional Review Board, and all participants gave their informed consent before beginning the study.

2.2. MRI acquisition

Functional MRI tasks were performed on a Picker Eclipse 1.5-T scanner. EPI images were acquired in a quadrature head coil with a slice thickness of 5 mm, FOV of 55.4 × 25.6 cm, data matrix of 128 × 64, repetition time of 2.2 s, echo time of 35 ms, and flip angle of 90°. Twenty-five images were acquired during each repetition time.
Anatomical images were acquired using a 3D RF-FAST sequence with TE of 4.47 ms, TR of 15 ms, flip angle of 25°, bandwidth of 25 kHz, FOV of 25.6 cm, image matrix of 256 × 256, and slice thickness of 2 mm.

2.3. Experimental protocol

All subjects participated in three functional runs, each lasting 352 s. (Each task was 320 s; five extra images were acquired at the beginning of the run and nine at the end, for a total of 352 s.)

In the viewing and imagined grasping tasks, subjects viewed grayscale images of 3D tools and 3D shapes presented at different orientations (see Fig. 1). The images were provided courtesy of Michael J. Tarr, Brown University. The tools were all graspable objects with handles (see Appendix A for the list of tools). The shapes were neutral graspable objects such as a cylinder or a cone. There were 20 different objects, 10 tools and 10 shapes. Each object was presented four times, always at a new orientation. "Scrambled" images, which served as baseline images, were created by placing a 10 × 10 grid over the intact images

Table 1
Clusters of activation in the viewing objects task for tool and shape images (p < 0.0001, uncorrected)

Contrast / Region                 Cluster size (voxels)  MNI coordinates (x, y, z)  t value
Tool > ScrTool
  Occipital/temporal activation
    Left inferior                 1942                   44, 50, 18                 12.65
    Left fusiform                                        34, 44, 22                 12.56
    Right middle                  753                    52, 62, 2                  14.73
    Right inferior                                       44, 66, 4                  13.02
    Right fusiform                377                    38, 42, 22                 9.43
    Right middle                  125                    48, 62, 18                 8.99
  Frontal/parietal activation
    Left postcentral              156                    42, 28, 56                 7.09
    Right postcentral             24                     28, 46, 60                 5.73
    Left precentral               30                     54, 0, 26                  6.49
    Left medial frontal           13                     8, 20, 48                  5.93
  Deactivations (ScrTool > Tool)
    Right superior occipital      190                    20, 92, 18                 11.26
    Left posterior fusiform       211                    20, 84, 12                 10.14
    Left superior occipital       50                     14, 96, 10                 7.16
    Right fusiform                15                     28, 78, 14                 6.17
Shape > ScrShape
    Left inferior                 85                     44, 68, 6                  7.19
  Deactivations (ScrShape > Shape)
    Left lingual                  1813                   16, 86, 14                 15.60
    Right cuneus                                         16, 96, 12                 12.50
    Left posterior fusiform                              28, 78, 16                 12.39
Tool > Shape (a)
    Left fusiform                 218                    36, 48, 22                 7.06

(a) There were no significant clusters of activation for Shape > Tool.

and randomly mixing the squares. (Although this method of scrambling images did not preserve the spatial frequency of the stimuli, it has been used effectively in a number of studies defining higher-level visual areas associated with object recognition; e.g., Ref. [42].)

The images were presented on a screen (37 × 27 in.) at the foot of the scanner using an LCD projector (Sharp XG-E12004) and a Macintosh iBook running the stimulus presentation software SuperLab (Cedrus). Participants viewed the screen through a mirror placed above their eyes. The images were projected at approximately 5 in. in height.

The tasks used a standard boxcar design with 16 s epochs. Eight images were presented in each block for 2 s each. The order of blocks alternated between scrambled tools (ScrTool), tools (Tool), scrambled shapes (ScrShape), and shapes (Shape), and the entire 64 s sequence was repeated five times.

In the viewing task, subjects were instructed to fixate on and pay attention to all of the images presented. In the imagined grasping task, subjects were instructed to imagine grasping and picking up each object with their right hand. Subjects were told to keep their hands still, resting on their legs; no overt motor responses were required. Verbal debriefing after the scanning session indicated that each subject followed the directions to view or imagine grasping in the given task and that they were able to imagine grasping the shapes when instructed to do so. A final finger-clenching task alternated finger clenching with both hands with a rest condition using 33 s blocks.

Fig. 2. Statistical parametric maps (SPMs) of the group random effects analysis for viewing objects (top) and imagined grasping objects (bottom) viewed in sagittal, coronal, and transverse transparent surfaces. SPMs were thresholded at t = 5.45, p < 0.0001, uncorrected. Higher t values are darker gray.
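The scrambled baseline images described above (a 10 × 10 grid placed over each intact image, with the cells randomly mixed) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' stimulus code, and it assumes a 2-D grayscale image whose sides are divisible by the grid size.

```python
import numpy as np

def scramble(img: np.ndarray, grid: int = 10, seed: int = 0) -> np.ndarray:
    """Cut a grayscale image into grid x grid cells and shuffle the cells.

    Assumes img is 2-D with sides divisible by `grid`. As the authors note,
    this kind of scrambling does not preserve the spatial frequency content.
    """
    h, w = img.shape
    ch, cw = h // grid, w // grid
    # Split into cells: shape (grid*grid, ch, cw)
    cells = (img.reshape(grid, ch, grid, cw)
                .swapaxes(1, 2)
                .reshape(grid * grid, ch, cw))
    # Randomly permute the cells
    rng = np.random.default_rng(seed)
    cells = cells[rng.permutation(grid * grid)]
    # Reassemble the shuffled cells into a full image
    return (cells.reshape(grid, grid, ch, cw)
                 .swapaxes(1, 2)
                 .reshape(h, w))

img = np.arange(100 * 100, dtype=float).reshape(100, 100)
scr = scramble(img)
```

Because the operation is a pure permutation of grid cells, the scrambled image contains exactly the same pixel population as the original, so low-level luminance is matched while object structure is destroyed.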
Participants were asked to clench their fingers with both hands to obtain information about the localization of overt hand movement in both hemispheres, as some studies have indicated bilateral motor representations involved in motor imagery. The three tasks were always

performed in the same order: passive viewing, imagined grasping, and finger clenching.

Fig. 3. Lateral premotor and posterior parietal activation in imagined grasping of tools (left) and shapes (right). The extent of left posterior parietal activation is greater when imagining grasping tools (versus viewing scrambled images) than when imagining grasping shapes (versus viewing scrambled images).

2.4. Image analysis

Raw EPI data were ghost-corrected, distortion-corrected, and reconstructed with in-house MATLAB routines to a 64 × 64 matrix with a 25.6 cm² field of view and in-plane resolution of 4 mm. Statistical analyses were performed using MATLAB (MathWorks, Natick, MA, USA) and statistical parametric mapping (SPM99, Wellcome Department of Cognitive Neurology, London, UK). The first five images of each task were discarded to ensure that the signal had reached equilibrium. EPI images were aligned to correct for head motion, and anatomical images were coregistered with the EPI images. All images were spatially normalized to the standard Montreal Neurological Institute (MNI) template and smoothed using isotropic Gaussian kernels of 10 mm. Individual and group analyses were performed. For the viewing and grasping runs, we applied a boxcar model convolved with the hemodynamic response function using a general linear model with four stimulus conditions for each participant. Two linear contrasts were defined to test for specific condition effects for Tools (Tool > ScrTool) and Shapes (Shape > ScrShape). The individual subject contrasts were used in subsequent group random effects analyses, using one-sample t tests to assess the Tool and Shape effects relative to the scrambled images and paired-sample t tests to assess the difference between the Tool and Shape effects.
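The per-subject boxcar-plus-HRF model and the group random-effects threshold can be illustrated with a short NumPy/SciPy sketch. This is a hedged reconstruction, not the authors' SPM99 pipeline: the double-gamma HRF parameters and the synthetic voxel time series are assumptions for illustration; only the block timing (16-s epochs in a 64-s cycle), the TR (2.2 s), and the 12-subject (df = 11) threshold come from the text.

```python
import numpy as np
from math import gamma
from scipy import stats

TR = 2.2        # repetition time (s), from the acquisition parameters
N_SCANS = 145   # ~320 s of task at 2.2 s per volume (illustrative)

def hrf(t):
    """Double-gamma HRF with conventional SPM-style parameters (assumed)."""
    def g(t, a, b):
        return (b ** a) * t ** (a - 1) * np.exp(-b * t) / gamma(a)
    return g(t, 6, 1) - g(t, 16, 1) / 6.0

# Boxcar for one condition: blocks cycle ScrTool/Tool/ScrShape/Shape in
# 16-s epochs, so e.g. the Tool condition is "on" during seconds 16-32
# of every 64-s cycle.
times = np.arange(N_SCANS) * TR
box = ((times % 64) >= 16) & ((times % 64) < 32)

# Convolve the boxcar with the HRF sampled at the TR to get the regressor.
h = hrf(np.arange(0, 30, TR))
reg = np.convolve(box.astype(float), h)[:N_SCANS]

# GLM for one (synthetic) voxel: y = X @ beta + noise, solved by least squares.
rng = np.random.default_rng(1)
y = 2.0 * reg + rng.normal(0, 0.5, N_SCANS)   # fake "active" voxel
X = np.column_stack([reg, np.ones(N_SCANS)])  # regressor + constant term
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Group random-effects threshold: one-sample t over 12 subjects (df = 11)
# at p < 0.0001, uncorrected.
t_thresh = stats.t.ppf(1 - 0.0001, df=11)
print(f"t threshold (df=11, p<1e-4): {t_thresh:.2f}")
```

Note that `stats.t.ppf(1 - 0.0001, df=11)` reproduces the t > 5.45 cutoff the paper reports for its random-effects analyses.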
The statistical threshold for the random effects analyses was set to p < 0.0001 (t > 5.45, uncorrected), with a minimum cluster size of 10 voxels. For the real finger-clenching task, one active task was modeled as described above, and the t contrast was again submitted to a group random effects one-sample t test with the same threshold criterion as the viewing and imagined tasks. All results are reported in MNI coordinate space.

3. Results

3.1. Viewing graspable objects

In all, the results indicated a distinction in the recruitment of motor processing areas associated with viewing tools and

shapes (see Fig. 2 and Table 1). Viewing tools compared to their baseline scrambled images resulted in clusters of activation bilaterally in the ventral temporal lobes as well as the left postcentral gyrus, left precentral gyrus (ventral premotor cortex), and the medial frontal gyrus (pre-SMA). This activation in the posterior temporal cortex, posterior parietal cortex, and premotor cortex is consistent with results from several previous studies involving viewing and naming tools. The bilateral clusters of activation in the ventral temporal lobe showed peaks of activation at the middle fusiform, inferior, and middle temporal gyri. Specifically, the locus at the middle temporal gyrus has been associated with tool representations in object viewing and naming tasks [7], viewing tool motion [1], and decisions about manipulable objects [7,37]. The activation in the fusiform gyrus is consistent with the notion of this region's involvement in semantic processing of objects [7,34].

Table 2
Clusters of activation in the imagined grasping task for tool images (p < 0.0001, uncorrected)

Contrast / Region                  Cluster size (voxels)  MNI coordinates (x, y, z)  t value
Tool > ScrTool
  Occipital/temporal activation
    Left inferior occipital        1827                   42, 74, 6                  8.94
    Left fusiform                                         40, 34, 18                 8.84
    Right middle                   616                    44, 62, 2                  9.12
    Right fusiform                                        44, 38, 18                 9.02
    Right inferior                                        46, 66, 6                  7.58
    Right middle                   38                     40, 50, 12                 8.94
  Frontal/parietal activation
    Left inferior parietal         749                    40, 52, 58                 8.88
    Left superior parietal                                20, 62, 62                 8.41
    Left superior frontal          262                    26, 10, 60                 8.28
    Right superior frontal         85                     24, 8, 60                  8.35
    Left medial frontal            70                     10, 2, 64                  7.23
    Right postcentral              12                     42, 40, 64                 7.01
    Left inferior frontal operculum  25                   52, 4, 8                   6.66
    Right medial frontal           12                     12, 6, 56                  6.37
    Left putamen                   33                     16, 10, 4                  5.79
  Deactivations (ScrTool > Tool)
    Left lingual                   213                    14, 84, 18                 8.78
    Right superior occipital       205                    24, 84, 24                 8.58
    Right cuneus                                          12, 90, 18                 5.63
    Left superior occipital        43                     14, 98, 12                 6.49
    Left cuneus                    11                     10, 88, 22                 5.85
In contrast, the shapes versus scrambled shapes contrast led to activation only in the left inferior temporal gyrus (Fig. 2 and Table 1). The paired t test between the Tool > ScrTool and Shape > ScrShape contrasts indicated greater activation in the left fusiform gyrus for tools versus shapes. Deactivations (an active state during the scrambled images), analyzed separately for the tools and shapes tasks, were consistent with recent reviews [26,58] showing activity in posterior occipital and temporal regions.

Table 3
Clusters of activation in the imagined grasping task for shape images (p < 0.0001, uncorrected)

Contrast: Shape > ScrShape
Region                        Cluster size (voxels)   MNI (x, y, z)   t value
Occipital/temporal activation
Left inferior temporal        210                     42, 60, 6       7.82
Right inferior temporal       56                      48, 62, 6       7.36
Frontal/parietal activation
Left superior frontal         436                     26, 10, 62      12.64
Left medial frontal                                   14, 0, 62       6.39
Right superior frontal        180                     26, 10, 56      9.86
Left precentral               107                     56, 6, 30       7.48
Left inferior parietal        104                     44, 48, 58      6.53
Left superior parietal                                34, 48, 64      5.51
Left inferior parietal        66                      56, 34, 40      6.62
Left precentral               44                      46, 2, 46       6.29
Left medial superior frontal  43                      8, 20, 42       9.15
Left cingulate                28                      12, 12, 44      8.41
Left medial frontal           11                      6, 10, 50       5.70
Deactivations (ScrShape > Shape)
Right superior occipital      771                     22, 94, 18      9.27
Right fusiform                                        26, 78, 12      7.97
Left lingual                  449                     16, 86, 14      9.35
Left fusiform                                         24, 80, 10      8.11
Left precuneus                360                     8, 64, 36       7.54
Left superior occipital       144                     18, 96, 16      8.93
Left superior temporal        27                      54, 10, 6       8.95
Left middle temporal          22                      46, 8, 24       6.97
Right superior temporal       11                      56, 6, 6        7.48

Fig. 4. A comparison of posterior temporal lobe activation (left image) and ventral premotor activation (right image) when imagining grasping tools (white) and shapes (gray). Activation extended to the fusiform and middle temporal gyri when imagining grasping tools but not shapes. In the on-line version, white = yellow and gray = green.

3.2. Imagining grasping objects

When given the explicit intention to imagine grasping objects, both tools and shapes elicited activation in temporal, parietal, and premotor regions that have been associated with visually guided movement (see Figs. 2 and 3 and Tables 2 and 3). The Tool > ScrTool contrast showed frontal and parietal activation in the left posterior parietal cortex (including the intraparietal sulcus), in dorsal (bilateral superior frontal cortex) and ventral (left frontal operculum) premotor cortex, and in bilateral medial frontal cortex (caudal pre-SMA). Temporal lobe activation centered on bilateral regions of the middle fusiform, inferior temporal, and middle temporal gyri. The Shape > ScrShape contrast showed relatively more activation in the frontal and parietal cortex than in the temporal cortex. Bilateral dorsal premotor cortex (superior frontal) and left ventral precentral activation was found, as well as activation in the left cingulate, medial frontal, and left inferior and superior parietal cortex. Temporal cortex activation was seen bilaterally in the inferior temporal gyrus.
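At the group level, comparing two conditions (e.g., Tool > ScrTool versus Shape > ScrShape) with a random-effects paired t test amounts to testing, at each voxel, whether subjects' contrast estimates differ reliably between conditions. The sketch below shows the computation at a single voxel; the subject count and per-subject values are invented for illustration, not data from this study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject contrast estimates at one voxel (one value per subject).
tool_contrast = np.array([1.2, 0.9, 1.5, 1.1, 0.8, 1.3, 1.0, 1.4])
shape_contrast = np.array([0.7, 0.6, 1.0, 0.8, 0.5, 0.9, 0.6, 1.0])

# Paired t test across subjects: subjects are the random factor,
# so the inference generalizes beyond this particular sample.
t_stat, p_val = stats.ttest_rel(tool_contrast, shape_contrast)
```

Repeating this test at every voxel and thresholding the resulting t map yields between-condition contrasts of the kind reported in Table 4.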
Although both the tools and shapes tasks showed premotor, parietal, and inferior temporal cortex activation, differences in these regions emerged. For the premotor cortex, the shapes task did not show activation in the ventral operculum, as was found in the tools task. The temporal lobe activation did not fall in the middle temporal or fusiform gyri in the shapes task, as it did in the tools task (see Fig. 4). The locus of activation in the pre-SMA was more rostral for shapes than for tools. In addition, the posterior parietal activation in the tools task extended over a larger region, including the superior parietal lobule (749 versus 104 voxels in the cluster for tools and shapes, respectively). Furthermore, the paired t test between the tools and shapes tasks revealed notable neural distinctions between the two object types (see Table 4). We found left hemisphere activation for tools versus shapes at dorsal and ventral locations of the middle temporal gyrus, as well as in the left angular and fusiform gyri. Significant deactivations for imagined grasping were found in posterior visual areas in both the tools and shapes tasks, and in the middle and superior temporal regions in the shapes task (see Tables 2 and 3).

Table 4
Activation in the imagined grasping task for Tools > Shapes (p < 0.0001, uncorrected)

Contrast: Tool > Shape
Region                 Cluster size (voxels)   MNI (x, y, z)   t value
Left middle temporal   67                      54, 66, 20      6.74
Right precuneus        37                      20, 52, 28      7.87
Left middle temporal   27                      60, 44, 4       6.74
Left angular           15                      42, 68, 26      6.24
Left fusiform          17                      38, 46, 20      6.08

There were no significant clusters of activation for Shape > Tool.

3.3. Finger clenching

Real finger clenching with both hands led to the predicted bilateral activation along the precentral gyrus at the primary motor cortex, in the rostral SMA, the cerebellum, and the putamen (see Table 5). The activation loci around the central sulcus and in SMA proper found in this task were not seen in the imagined grasping task.

Table 5
Clusters of activation for real finger Clenching > Rest (p < 0.0001, uncorrected)

Contrast: Clench > Rest
Region                               Cluster size (voxels)   MNI (x, y, z)   t value
Right cerebellum                     2030                    12, 56, 16      13.36
Left cerebellum                                              8, 62, 18       11.12
Left inferior parietal               349                     52, 28, 46      11.17
Left precentral                                              36, 30, 60      7.68
Right precentral                     327                     36, 24, 64      10.42
                                                             44, 18, 52      8.73
Left/right supplementary motor area  79                      4, 10, 54       8.90
                                                             6, 6, 48        7.11
Left supramarginal                   23                      60, 32, 24      7.51
Right thalamus/putamen               33                      22, 20, 4       6.49

4. Discussion

A number of recent neuroimaging studies involving tools and simulated actions have provided a basis for identifying ventral and dorsal visual processing regions involved in the visual processing of manipulable objects and their associated actions. The posterior middle temporal gyrus and the middle fusiform gyrus have been associated with perceiving and naming tools [1,7,34]. The dorsal and ventral premotor cortex and posterior parietal cortex have been implicated in tasks involving perception for action and motor imagery [6,19,21,23,37]. Overall, our present tasks demonstrated activation consistent with previous functional neuroimaging studies of viewing tools and imagined actions. Our novel contribution was to examine the nature of motor representations associated with tools by comparing simple viewing and grasping tasks using different categories of graspable objects. We asked whether these common areas of activation were influenced by information about an object's function. The results suggest that the functional identity of tools influences the extent of motor representations associated with them.

4.1. Viewing graspable objects

Consider first the dorsal stream, or visuomotor, representations involved with the visual presentation of graspable objects. Our first task, passive viewing, allowed us to examine whether simple viewing of visually graspable objects is associated with potential action. We found that posterior parietal and premotor regions were active when passively viewing tools but not when viewing shapes. The ventral premotor cortex activation in the tool task is consistent with that found in Chao and Martin's [6] similar task of viewing tools.
Thus, an explicit intention to act, in the form of an instruction to imagine acting, appears unnecessary to elicit motor representations, but the visual presentation of an object must strongly suggest potential action. For humans, familiar tools accomplish this; neutral graspable shapes do not. In contrast to these results, studies with monkeys indicate that canonical neurons in premotor region F5 respond to the simple visual presentation of many graspable objects [45]. One explanation for this distinction may be that the characteristics that constitute a "tool" differ for humans and nonhuman primates. Another is that the images of the shapes were not implicitly interpreted as graspable. The parietal activation in the viewing-tools task was located at the postcentral sulcus, extending to the inferior parietal lobule. Some previous studies have found more posterior regions of parietal cortex active given the visual presentation of tools [6,23,37]; however, recent real grasping studies have also defined the postcentral sulcus as specialized for grasp representations [10]. Second, we asked whether viewing both tools and shapes as graspable objects would activate ventral regions previously associated with the processing of tools. One region, the posterior middle temporal gyrus, was clearly active when viewing tools but not shapes. This region has been associated with representations of tools and nonbiological motion [1,7]. Specifically, Beauchamp et al. [1] examined the roles of regions of the posterior temporal cortex in processing human and tool motion. They found that, whereas the motion area MT showed no preference for human or tool motion, a greater response for human motion was found in the superior temporal sulcus, and a greater response for tool motion was found in the middle temporal gyrus. We also found foci of activation bilaterally in the middle fusiform gyrus when viewing tools but not shapes.
The middle fusiform gyrus has been associated with semantic processing of different categories of objects, including faces, manufactured objects, and natural objects [7,31,34,35]. Although left hemisphere activation has been shown to be associated with visually presented words [50], multiple tasks involving pictures, such as viewing, matching, and naming, have found bilateral activation in the fusiform regions [36,40]. Activation in this region suggests that, even without an explicit object recognition task, tools were processed for their meaningful identity as objects.

4.2. Imagining grasping objects

Our second task, imagined grasping, examined the specific intention to act with respect to the visual presentation of graspable objects. A number of early PET studies demonstrated that imagined movement tasks were associated with lateral and medial premotor areas and parts of inferior and superior parietal cortex [12,16,19,55,61]. The specific task of imagined grasping has been studied with PET and fMRI using real graspable objects [19], virtual neutral objects [12,33], and photographs of real graspable objects [23]. In all, the present imagined grasping task found results consistent with other imagined movement tasks, namely, activation in premotor cortex and posterior parietal cortex. Both the tools and shapes stimuli recruited a temporal-parietal-frontal network. In the tools task, we found posterior parietal activation at the left intraparietal sulcus extending to the superior parietal cortex, consistent with other studies involving visual representations of manipulable objects as well as with imagined [19,23] and real grasping studies [10,11,20]. In real grasping, Culham et al. have systematically defined an anterior region of the intraparietal sulcus (AIP) in humans, analogous to the "grasping" area AIP in the monkey. In the monkey, neurons in AIP respond when the animal views and manipulates different objects [57]. In humans, Culham et al. identified a similar region by isolating the activation found in grasping

compared to reaching. Furthermore, activation in this region accords with studies of neuropsychological patients with left posterior parietal lesions who show specific deficits in grasping [49]. The shapes task showed a smaller, overlapping region in the left posterior parietal cortex. The greater region of activation in the tools task is consistent with the claim that the left posterior parietal cortex has a specific role in goal-directed actions, more likely associated with functional tools. Gerardin et al. [16] suggested that the parietal cortex may have a special role in the generation of actions guided by internal representations, consistent with their finding of additional posterior parietal activation in imagined hand movements compared to motor execution. This conjecture is supported by evidence from patients with left parietal lesions who showed discrepancies between actual and imagined movement durations [60] and by the deficits seen in apraxic patients performing symbolic gestures [14,59]. Imagining grasping both tools and shapes led to several clusters of activation in the premotor cortex, supporting the parietal-premotor connections involved in representing intended movements seen in studies with humans and nonhuman primates [2,65]. The tools task showed loci of activation bilaterally in the dorsal lateral premotor cortex (superior frontal cortex) and medial frontal cortex (pre-SMA), as well as in the ventral opercular premotor cortex. The shapes task showed similar large bilateral clusters of activation in the dorsal premotor cortex, as well as in the left ventral premotor cortex (precentral gyrus), but not in the more ventral operculum. Furthermore, the pre-SMA activation in the shapes task was more rostral than the location found in the tools task. Thus, both similarities and differences in premotor activation emerged given different objects to imagine grasping.
The similarities in activation in dorsal premotor cortex suggest that this region is involved in the mental simulation of grasping regardless of the functional specificity of the object. The bilateral activation in this region is consistent with other studies of imagined hand movements. For example, Johnson et al. [33] found bilateral dorsal premotor activation when isolating an imagined grip selection task from the planning component of an imagined movement task for both the left and right hands. In the ventral premotor cortex, the location of activation may be determined by certain characteristics of visually presented graspable objects. It may be that the inferior frontal operculum is associated with more specific representations for grasping tied to an object's meaningful function, consistent with its proximity to the insular cortex and Broca's area. Furthermore, activation in pre-SMA has been found in some imagined grasping tasks [19] but not others [12]. It is possible that an object's perceived graspability influences the extent and location of SMA involvement in imagined movement [19] and that images of shapes were perceived as less graspable than images of tools because they are not semantically associated with a functional grasp. Notably, the parietal and ventral premotor activation in both of our imagined grasping tasks was lateralized to the left hemisphere. Left hemisphere lateralization has been found in many tasks involving viewing tools, action decisions, and imagined actions [16,22] and is consistent with neuropsychological grasping deficits found in bilateral apraxia [14] and optic ataxia [49]. However, some tasks isolating hand rotation and grip selection to specific hands have found activation in the contralateral hemisphere [33,47]. The present tasks instructed all right-handed participants to imagine grasping with their right hands. It remains to be seen whether imagined grasping with the left hand would influence lateralization in these regions.
A comparison of the motor activation seen in real finger clenching and imagined grasping also revealed differences in the recruitment of motor areas. The real hand movement led to activation in SMA proper and bilateral primary motor cortex, as well as the cerebellum, areas distinct from those found in the present imagined grasping task. Results from motor imagery tasks have varied in their conclusions about the role of primary motor cortex. Some have found activation focused in this region given explicit strategies to mentally simulate hand rotation [15,39,41], although others have not [12,47]. These differences may arise from the specific type of task and strategy used (e.g., imagined hand/object rotation versus imagined grasping) and from variations in methodological technique [40]. It is possible that primary motor cortex is recruited more often when the biomechanical constraints of hand movement must be represented to perform the task. As in the viewing task, regions of activation in the posterior temporal cortex differed between imagining grasping tools and imagining grasping shapes. Both tasks consistently showed activity bilaterally in the inferior temporal cortex. However, only the tools activation extended dorsally to the middle temporal gyrus and ventrally to the fusiform gyrus. In addition, the paired t test between grasping tools and shapes indicated activation in the left middle temporal and angular gyri. Activation in these regions suggests an additional component in the processes involved in guiding actions towards tools. A number of studies have associated the posterior middle temporal gyrus with storing visual motion information about object use, consistent with its proximity to MT [1,7,36]. Furthermore, activation in the middle fusiform gyrus is consistent with that region's role in semantic object identification and the perception of stimuli for meaning [7,34,35].

4.3. Conclusions and ongoing questions

There are important limitations in the comparison of simulated actions directed towards familiar tools and neutral shapes. The tools were more visually complex than the shapes, which could potentially lead to more activation in occipital/temporal regions associated with object recognition. Another factor to consider is the differing amount of

previous motor experience associated with tools and shapes. As the tools were common objects with handles, it is likely that our observers had previous experience manipulating similar tools, which could have increased their perceived graspability. Findings from a recent study may address potential questions about the impact of differences in visual complexity, familiarity, and motor experience between the two groups of objects. Creem-Regehr et al. [9] created novel graspable objects and presented them to observers in a behavioral paradigm before a scanning session. Participants were given experience holding and manipulating all of the objects, but they learned specific functions for only one-half of the objects. After three training sessions involving manipulation of the novel objects, participants were scanned while viewing images of the novel tools and shapes and their scrambled counterparts. They performed three tasks while in the scanner: viewing, imagined grasping, and imagined using. A total of 24 participants were tested, and random-effects analyses similar to those of the present study were used, thresholded at a corrected p value of 0.05. The results for viewing and imagining grasping novel objects were generally consistent with the present results, with some differences that are congruent with the differences in object characteristics between the two studies. In the viewing task, one of the important findings of the present study was the ventral premotor cortex activation found with tools but not shapes, suggesting a representation for action with functional tools even when the task did not explicitly require a goal for movement. This finding was replicated in the novel objects study. However, posterior parietal cortex activation was found for viewing both tools and shapes in Creem-Regehr et al.'s study, differing from the present results.
This difference is consistent with the notion that the novel shapes were more likely represented for their graspability because of the motor experience with the objects given to participants before the scan. In Creem-Regehr et al.'s [9] imagined grasping task, clusters of activation were found in posterior parietal cortex, dorsal and ventral premotor cortex, and ventral temporal cortex for both types of objects, as in the present study. Creem-Regehr et al.'s new task requiring imagined using of novel objects showed a distinction between novel tools and novel shapes, with greater activation for tools versus shapes in the dorsal and ventral premotor cortex, SMA, insula, cerebellum, and posterior parietal cortex, including more inferior regions of the supramarginal gyrus. The results from this task, which specifically required subjects to imagine grasping and using the objects, suggest that, even when objects are similar in visual characteristics, familiarity, and graspability, representations for action differ based on the known motor patterns associated with the use of the object. In conclusion, the study of tools as graspable objects is useful for examining hypotheses of separable but interacting systems for visual object recognition and visually guided action. Tools have a distinct identity that is associated with an action. The present studies examined the nature of a tool representation within human perception and action systems. The results suggest that the functional identity of objects influences the extent of motor representations associated with them. We found that visually presented graspable tools and shapes elicited both similarities and differences in neural representations. Given a passive viewing task, shapes on their own were not associated with dorsal (parietal and premotor) or ventral (middle temporal) motor processing regions.
However, given the explicit intention to imagine grasping, similarities in the motor representations for grasping tools and shapes were apparent. Despite similar premotor and parietal regions of activation, differences emerged in ventral premotor cortex, SMA, the extent of posterior parietal activation, and the presence or absence of fusiform and middle temporal gyrus activation. The results of Creem-Regehr et al. [9] suggest that some of the apparent differences between tools and shapes may have resulted from different visual or motor experiences with the objects, but they support the present claim that knowledge of the use associated with a graspable object influences its motor representation. We suggest that tools elicit a more function-specific representation for action than other graspable objects, revealed through distributed systems in both the dorsal and ventral streams of visual processing.

Acknowledgments

We thank Natalie Sargent, Shawn Yeh, and Jayson Neil for help in image processing. This work was supported by a University Funding Incentive Seed Grant, University of Utah, to the first author.

Appendix A. Tools

Flashlight
Frypan
Hairbrush
Hairdryer
Knife
Nailclipper
Pliers
Razor
Scissors
Screwdriver

References

[1] M.S. Beauchamp, K.E. Lee, J.V. Haxby, A. Martin, Parallel visual motion processing streams for manipulable objects and human movements, Neuron 34 (2002) 149–159.
[2] F. Binkofski, G. Buccino, K.M. Stephan, G. Rizzolatti, R.J. Seitz, H.J. Freund, A parieto-premotor network for object manipulation: evidence from neuroimaging, Exp. Brain Res. 128 (1999) 210–213.

[3] B. Bridgeman, M. Kirch, A. Sperling, Segregation of cognitive and motor aspects of visual function using induced motion, Percept. Psychophys. 29 (1981) 336–342.
[4] G. Buccino, F. Binkofski, G.R. Fink, L. Fadiga, L. Fogassi, V. Gallese, R.J. Seitz, K. Zilles, G. Rizzolatti, H.J. Freund, Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study, Eur. J. Neurosci. 13 (2001) 400–404.
[5] D.P. Carey, Do action systems resist visual illusions? Trends Cogn. Sci. 5 (2001) 109–113.
[6] L.L. Chao, A. Martin, Representation of manipulable man-made objects in the dorsal stream, Neuroimage 12 (2000) 478–484.
[7] L.L. Chao, J.V. Haxby, A. Martin, Attribute-based neural substrates in temporal cortex for perceiving and knowing about objects, Nat. Neurosci. 2 (1999) 913–919.
[8] S.H. Creem, D.R. Proffitt, Grasping objects by their handles: a necessary interaction between cognition and action, J. Exp. Psychol. Hum. Percept. Perform. 27 (2001) 218–228.
[9] S.H. Creem-Regehr, V. Dilda, A.E. Gold, B.W. Lee, J.N. Lee, Grasping novel objects: functional identity influences representations for real and imagined actions, Poster presented at the 11th Annual Cognitive Neuroscience Society Meeting, 2004.
[10] J. Culham, Human brain imaging reveals a parietal area specialized for grasping, in: J. Duncan (Ed.), Attention and Performance XX: Functional Brain Imaging of Visual Cognition, Oxford University Press, Oxford, 2004, pp. 417–438.
[11] J.C. Culham, S.L. Danckert, J.F.X. De Souza, J.S. Gati, R.S. Menon, M.A. Goodale, Visually guided grasping produces fMRI activation in dorsal but not ventral stream brain areas, Exp. Brain Res. 153 (2003) 158–170.
[12] J. Decety, D. Perani, M. Jeannerod, V. Bettinardi, B. Tadary, R. Woods, J.C. Mazziotta, F. Fazio, Mapping motor representations with positron emission tomography, Nature 371 (1994) 600–602.
[13] J. Decety, J. Grezes, D. Perani, M. Jeannerod, E. Procyk, F. Grassi, F. Fazio, Brain activity during observation of actions: influence of action content and subject's strategy, Brain 120 (1997) 1763–1777.
[14] E. De Renzi, P. Faglioni, P. Sorgato, Modality-specific and supramodal mechanisms of apraxia, Brain 105 (1982) 301–312.
[15] G. Ganis, J.P. Keenan, S.M. Kosslyn, A. Pascual-Leone, Transcranial magnetic stimulation of primary motor cortex affects mental rotation, Cereb. Cortex 10 (2000) 175–180.
[16] E. Gerardin, A. Sirigu, S. Lehericy, J. Poline, B. Gaymard, C. Marsault, Y. Agid, D. Le Bihan, Partially overlapping neural networks for real and imagined hand movements, Cereb. Cortex 10 (2000) 1093–1104.
[17] J.J. Gibson, The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, 1979.
[18] M.A. Goodale, A.D. Milner, L.S. Jakobson, D.P. Carey, A neurological dissociation between perceiving objects and grasping them, Nature 349 (1991) 154–156.
[19] S.T. Grafton, M.A. Arbib, L. Fadiga, G. Rizzolatti, Localization of grasp representation in humans by positron emission tomography, Exp. Brain Res. 112 (1996) 103–111.
[20] S.T. Grafton, A.H. Fagg, R.P. Woods, M.A. Arbib, Functional anatomy of pointing and grasping in humans, Cereb. Cortex 6 (1996) 226–237.
[21] S.T. Grafton, L. Fadiga, M.A. Arbib, G. Rizzolatti, Premotor cortex activation during observation and naming of familiar tools, Neuroimage 6 (1997) 231–236.
[22] J. Grezes, J. Decety, Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis, Hum. Brain Mapp. 12 (2001) 1–19.
[23] J. Grezes, J. Decety, Does visual perception of object afford action? Evidence from a neuroimaging study, Neuropsychologia 40 (2002) 212–222.
[24] J. Grezes, J.L. Armony, J. Rowe, R.E. Passingham, Activations related to "mirror" and "canonical" neurons in the human brain: an fMRI study, Neuroimage 18 (2003) 928–937.
[25] J. Grezes, M. Tucker, J.L. Armony, R. Ellis, R.E. Passingham, Objects automatically potentiate action: an fMRI study of implicit processing, Eur. J. Neurosci. 17 (2003) 2735–2740.
[26] D.A. Gusnard, M.E. Raichle, Searching for a baseline: functional imaging and the resting human brain, Nat. Rev. Neurosci. 2 (2001) 685–694.
[27] A.M. Haffenden, M.A. Goodale, The effect of pictorial illusion on prehension and perception, J. Cogn. Neurosci. 10 (1998) 122–136.
[28] F. Hamzei, M. Rijntjes, C. Dettmers, V. Glauche, C. Weiller, C. Buchel, The human action recognition system and its relationship to Broca's area: an fMRI study, Neuroimage 19 (2003) 637–644.
[29] T.C. Handy, S.T. Grafton, N.M. Shroff, S. Ketay, M.S. Gazzaniga, Graspable objects grab attention when the potential for action is recognized, Nat. Neurosci. 6 (2003) 421–427.
[30] M. Iacoboni, R.P. Woods, M. Brass, H. Bekkering, J.C. Mazziotta, G. Rizzolatti, Cortical mechanisms of human imitation, Science 286 (1999) 2526–2528.
[31] A. Ishai, L.G. Ungerleider, A. Martin, J.L. Schouten, J.V. Haxby, Distributed representation of objects in the human ventral visual pathway, Proc. Natl. Acad. Sci. U. S. A. 96 (1999) 9379–9384.
[32] L.S. Jakobson, Y.M. Archibald, D.P. Carey, M.A. Goodale, A kinematic analysis of reaching and grasping movements in a patient recovering from optic ataxia, Neuropsychologia 29 (1991) 803–809.
[33] S.H. Johnson, M. Rotte, S.T. Grafton, H. Hinrichs, M.S. Gazzaniga, H.-J. Heinze, Selective activation of a parietofrontal circuit during implicitly imagined prehension, Neuroimage 17 (2002) 1693–1704.
[34] J.E. Joseph, Functional neuroimaging studies of category specificity in object recognition: a critical review and meta-analysis, Cogn. Affect. Behav. Neurosci. 1 (2001) 119–136.
[35] N. Kanwisher, J. McDermott, M.M. Chun, The fusiform face area: a module in human extrastriate cortex specialized for face perception, J. Neurosci. 17 (1997) 4302–4311.
[36] H.O. Karnath, Neural encoding of space in egocentric coordinates? Evidence for and limits of a hypothesis derived from patients with parietal lesions and neglect, in: P. Thier, H.O. Karnath (Eds.), Parietal Lobe Contributions to Orientation in 3D Space, Springer-Verlag, Heidelberg, 1997, pp. 497–520.
[37] M.L. Kellenbach, M. Brett, K. Patterson, Actions speak louder than functions: the importance of manipulability and action in tool representation, J. Cogn. Neurosci. 15 (2003) 30–46.
[38] L. Koski, M. Iacoboni, M. Dubeau, R.P. Woods, J.C. Mazziotta, Modulation of cortical activity during different imitative behaviors, J. Neurophysiol. 89 (2003) 460–471.
[39] S.M. Kosslyn, G.J. Digirolamo, W.L. Thompson, N.M. Alpert, Mental rotation of objects versus hands: neural mechanisms revealed by positron emission tomography, Psychophysiology 35 (1998) 151–161.
[40] S.M. Kosslyn, G. Ganis, W.L. Thompson, Neural foundations of imagery, Nat. Rev. Neurosci. 2 (2001) 635–642.
[41] S.M. Kosslyn, W.L. Thompson, M. Wraga, N.M. Alpert, Imagining rotation by endogenous versus exogenous forces: distinct neural mechanisms, NeuroReport 12 (2001) 2519–2525.
[42] Z. Kourtzi, N. Kanwisher, Cortical regions involved in perceiving object shape, J. Neurosci. 20 (2000) 3310–3318.
[43] A. Martin, C.L. Wiggs, L.G. Ungerleider, J.V. Haxby, Neural correlates of category-specific knowledge, Nature 379 (1996) 649–652.
[44] A.D. Milner, M.A. Goodale, The Visual Brain in Action, Oxford University Press, Oxford, 1995.
[45] A. Murata, L. Fadiga, L. Fogassi, V. Gallese, V. Raos, G. Rizzolatti, Object representation in the ventral premotor cortex (area F5) of the monkey, J. Neurophysiol. 78 (1997) 2226–2230.
[46] A. Murata, V. Gallese, G. Luppino, M. Kaseda, H. Sakata, Selectivity for the shape, size, and orientation of objects for grasping in neurons of monkey parietal area AIP, J. Neurophysiol. 83 (2000) 2580–2601.
[47] L.M. Parsons, P.T. Fox, J.H. Downs, T. Glass, T.B. Hirsch, C.G. Martin, P.A. Jerabek, J.L. Lancaster, Use of implicit motor imagery for visual shape discrimination as revealed by PET, Nature 375 (1995) 54–58.