UNDERSTANDING THE RECOGNITION OF FACIAL IDENTITY AND FACIAL EXPRESSION


Andrew J. Calder* and Andrew W. Young

Abstract
Faces convey a wealth of social signals. A dominant view in face-perception research has been that the recognition of facial identity and facial expression involves separable visual pathways at the functional and neural levels, and data from experimental, neuropsychological, functional imaging and cell-recording studies are commonly interpreted within this framework. However, the existing evidence supports this model less strongly than is often assumed. Alongside this two-pathway framework, other possible models of facial identity and expression recognition, including one that has emerged from principal component analysis techniques, should be considered.

FUNCTIONAL ACCOUNT
A theoretical framework that is based on the information-processing characteristics of a set of cognitive subsystems rather than their underlying neural mechanisms.

*Medical Research Council Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK. Department of Psychology and York Neuroimaging Centre, University of York, York YO10 5DD, UK. Correspondence to A.J.C. andy.calder@mrccbu.cam.ac.uk doi:10.1038/nrn1724

Nearly 20 years ago, Bruce and Young 1 presented a model of face recognition that posited separate functional routes for the recognition of facial identity and facial expression (BOX 1). This framework has remained the dominant account of face perception; few papers have challenged it and none has offered a widely accepted alternative. Here we discuss the relevant evidence, and show why a different conception from that offered by Bruce and Young 1 should be considered. As a FUNCTIONAL ACCOUNT, the Bruce and Young 1 model did not incorporate a neural topography of its separate components. However, the recent neurological account of face perception proposed by Haxby and colleagues 2 (BOX 1) is compatible with the general conception offered by Bruce and Young.
The core system of Haxby and colleagues' model 2 contains two functionally and neurologically distinct pathways for the visual analysis of faces (BOX 1): one codes changeable facial properties (such as expression, lip speech and eye gaze) and involves the inferior occipital gyri and superior temporal sulcus (STS), whereas the other codes invariant facial properties (such as identity) and involves the inferior occipital gyri and lateral fusiform gyrus. The models proposed by Haxby and colleagues 2, and Bruce and Young 1, share the idea of distinct pathways for the visual analysis of facial identity and expression, but differ in terms of whether the perceptual coding of expression is carried out by a dedicated system for expressions 1 or by a system that codes expression alongside other changeable facial characteristics 2. At the heart of both models is the idea that facial identity and expression are recognized by functionally and (by implication for Bruce and Young 1, and explicitly for Haxby and colleagues 2) neurologically independent systems. This idea is supported by many psychological studies. For example, the familiarity of a face does not affect the ability of a healthy participant to identify its expression, and vice versa 3–6. Brain injury in humans can produce selective impairments in the recognition of facial identity or facial expression 7–11; in nonhuman primates, different cell populations respond to facial identity and expression 12; and functional imaging studies show that the perception of facial identity and facial expression have different neural correlates. Therefore, the central idea of some form of dissociation between these two facial cues is not at issue. Rather, we focus on how this dissociation should be interpreted. In particular, we ask whether the concept of distinct parallel visual routes that process

NATURE REVIEWS NEUROSCIENCE VOLUME 6 AUGUST 2005

Box 1 Cognitive models of face perception

[Figure: panel a shows Bruce and Young's functional model, with structural encoding feeding view-centred descriptions and expression-independent descriptions, and separate routes for expression analysis, facial speech analysis, directed visual processing, and face recognition units leading to person identity nodes, name retrieval and the cognitive system. Panel b shows Haxby and colleagues' model: a core system for visual analysis (inferior occipital gyri, early perception of facial features; superior temporal sulcus, changeable aspects of faces, including perception of eye gaze, expression and lip movement; lateral fusiform gyrus, invariant aspects of faces and perception of unique identity) and an extended system for further processing with other neural systems (intraparietal sulcus, spatially directed attention; auditory cortex, prelexical speech perception; amygdala, insula and limbic system, emotion; anterior temporal cortex, personal identity, name and biographical information).]

PROSOPAGNOSIA
A visual agnosia that is largely restricted to face recognition, but leaves intact recognition of personal identity from other identifying cues, such as voices and names. Prosopagnosia is observed after bilateral and, less frequently, unilateral lesions of the inferior occipitotemporal cortex.

Bruce and Young's 1 functional model of face processing (panel a) contains separate parallel routes for recognizing facial identity, facial expression and lip speech. The route labelled 'directed visual processing' is involved in the direction of attention to a particular face or facial feature. It is generally considered that the idea of separate routes for the recognition of facial identity and expression is supported by studies in cognitive psychology 4,5, cognitive neuropsychology 7–10, single-cell recording in nonhuman primates 12 and functional imaging 13,14. Panel a modified, with permission, from REF. 1 (1986) British Psychological Society. The Bruce and Young 1 framework is compatible with the distributed human neural system for face perception proposed by Haxby and colleagues 2 (panel b).
This identifies the neural structures that are involved in recognizing two types of facial information: changeable (dynamic) characteristics, such as expression, gaze and lip speech; and invariant (relatively nonchangeable) characteristics, such as identity. Panel b reproduced, with permission, from REF. 2 (2000) Elsevier Science. This model is divided into a core system for the visual analysis of faces, which comprises three occipital/temporal regions, and an extended system that includes neural systems that are involved in other cognitive functions. Visuoperceptual representations of changeable characteristics are associated with the superior temporal sulcus (STS), whereas invariant characteristics are coded by the lateral fusiform gyrus (including the fusiform face area or FFA). The extended systems act together with the core system to facilitate the recognition of different facial cues.

facial identity and facial expression offers the best fit to current findings. This view is endorsed by both models of face perception 1,2 and has dominated previous research. We review the evidence and conclude that, although there is some separation between the coding of facial identity and expression, the dominant view of independent visual pathways is not strongly supported; the present data are consistent with other potential frameworks that deserve to be more fully explored. This review is structured around four questions that we regard as central to interpreting dissociations between facial identity and expression. The first asks at what level of analysis the facial identity route bifurcates from the facial expression route. The remaining three questions relate specifically to the recognition of facial expressions: are all expressions processed by a single system, do the mechanisms for recognizing expression incorporate a multimodal level of analysis, and does the facial expression system deal with anything other than expression?

Where do the two routes separate?
Although each face is a single object, it conveys many socially important characteristics (such as identity, age, sex, expression, lip speech and gaze), at least some of which (for example, identity and expression) show considerable functional independence. Face processing therefore requires a different conceptual framework from object recognition, and any plausible model requires a front-end system that can both extract and separate different facial cues. Both Bruce and Young 1 and Haxby and colleagues 2 propose that functional (and neural) separation of the facial identity and expression routes occurs immediately after the front-end component, which is involved in the initial structural and visual analysis of faces, with each route incorporating distinct visuoperceptual representations of the relevant facial characteristic. So, what evidence is there that facial identity and facial expression are coded in distinct visual representational systems? In cognitive neuropsychology, support for this framework requires the identification of patients with impairments in the visual recognition of facial identity or facial expression alone. Cases of PROSOPAGNOSIA without impaired facial expression recognition would support the independence of the two processes; however, remarkably few prosopagnosics show well-preserved facial expression recognition. In fact, the idea that prosopagnosics can recognize facial expression is usually supported only by anecdote; on formal testing, most such patients show impairments of facial expression recognition. These difficulties are usually less severe than the problems with recognizing facial identity, although this might reflect the many procedural differences between standard tests of facial identity and expression. Instead, much of the evidence for impaired facial identity recognition with intact facial expression recognition comes from studies in which the cause of the impairments has not been established 7,8,16,17.
Such data can provide evidence of a dissociation between the recognition of identity and expression, but they do not prove that this dissociation has a visuoperceptual origin. So, a crucial issue that is often overlooked is that impaired recognition of facial identity, but not facial expression, can occur in neuropsychological conditions other than selective damage (or disrupted access) to the perceptual representation of human faces (prosopagnosia), and these causes of the dissociation do not necessarily support separate visuoperceptual codes for facial identity and expression. The alternative

ACQUIRED NEUROPSYCHOLOGICAL DISORDERS
Cognitive impairments that follow neurological damage to previously healthy individuals who have no known genetic or developmental disorders.

PRINCIPAL COMPONENT ANALYSIS
(PCA). A statistical technique that has been applied to the analysis of faces. Facial images, which are originally described in terms of a large number of variables (for example, the greyscale values of individual pixels), are recoded in relation to a smaller set of basis vectors (principal components) by identifying correlations between sets of pixels across the entire set of facial images.

causes include the following: impaired learning (and, therefore, recognition) of faces that are encountered after, but not before, neurological damage (prosopamnesia) 10,18,19; impaired access to knowledge of familiar people 20–22, which affects recognition of identity from not only faces but also names, voices and so on; and other cognitive impairments, including amnesia or general semantic deficits, which some earlier studies did not eliminate 7,16. The observation that impaired matching of unfamiliar faces can occur in the absence of impaired recognition of familiar faces, and vice versa 8,23, adds a further complication. Therefore, impaired unfamiliar face matching with preserved facial expression recognition (or matching) cannot be considered to be equivalent to impaired recognition of familiar faces with preserved expression recognition 8. If we restrict ourselves to studies that have used documented testing procedures and have excluded most of these alternative explanations, it is notable that only two reports offer evidence of prosopagnosia with preserved facial expression recognition 9,10, and there are some questions even with these.
For example, Tranel and colleagues 10 described a prosopagnosic (subject 1) who scored 17 out of 24 (healthy controls scored 20.6) when asked to categorize facial expressions 24 as one of six options (happiness, sadness, anger, fear, disgust or surprise). However, the authors assigned two labels as correct for 5 of the 24 stimuli, because the controls were divided in their responses. Given that participants showing impaired facial expression recognition might select the next most likely option (for example, confusing disgust with anger) 25,26, this unusual method of scoring might have overestimated the ability of subject 1 to recognize expression. This report included two further cases showing impaired facial identity recognition with intact expression recognition, but neither was prosopagnosic: one patient had general amnesia and the other had prosopamnesia 10. A second frequently cited case is that of Mr W 9. Most facial expression tests in this study used a two-alternative choice format (such as happy versus sad). Healthy participants tend to have little difficulty with this form of task, and near-ceiling performance makes the results difficult to interpret. The most demanding facial expression task used with Mr W consisted of finding four further examples of a target expression among an array of 16 faces that each displayed one of four expressions 9. The control data in this study were from individuals with damage to either the left or the right hemisphere. The performance of Mr W fell between these two groups, but no statistical comparisons were reported. In light of work showing that both right and left hemisphere damage (including nonspecific damage 27) can impair facial expression recognition 8,27,28, the absence of healthy control data makes it difficult to conclude that Mr W's facial expression recognition was fully preserved.
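The worry about dual-keyed scoring can be made concrete with a little arithmetic. The sketch below uses only the test format described above (24 items, six response options, 5 items with two labels scored as correct); the computed values are illustrative chance baselines, not data from the study, and a patient who systematically picks the next most likely emotion would benefit from dual keying even more than a random guesser.

```python
# Illustrative arithmetic: how dual-keyed items raise the score expected
# from purely random responding on a 24-item, six-alternative
# expression-labelling test.
N_ITEMS, N_OPTIONS = 24, 6
DUAL_KEYED = 5  # items on which two of the six labels were scored as correct

# Expected score for a participant guessing uniformly at random
chance_standard = N_ITEMS * (1 / N_OPTIONS)
chance_dual = ((N_ITEMS - DUAL_KEYED) * (1 / N_OPTIONS)
               + DUAL_KEYED * (2 / N_OPTIONS))

print(f"chance score, one key per item: {chance_standard:.2f} / 24")
print(f"chance score, with dual keys:   {chance_dual:.2f} / 24")
```

Random guessing alone gains almost a full point from dual keying (4.83 versus 4.00 out of 24); systematic near-miss errors of the disgust-for-anger kind would inflate the score further.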
Although one or both of these patients with prosopagnosia might have preserved facial expression recognition, this was not conclusively demonstrated 9,10. The data could equally reflect a trend dissociation 29 in which facial identity and expression are both disrupted, to greater and lesser extents, respectively. This would be consistent with most other reported cases of prosopagnosia. The most direct interpretation of this overall pattern involves the impairment of a system that codes visual representations of both identity and expression, with the identity deficit being exacerbated because facial identity recognition tasks are generally more difficult (BOX 2). So, if we are to accept that facial identity and facial expression are coded by distinct visuoperceptual representational systems, it will be necessary to identify and verify further cases of preserved facial expression recognition in prosopagnosia. An example of the experimental rigour that is required can be found in a recent report of a developmental prosopagnosic who showed a marked discrepancy between her impaired facial identity perception and intact facial expression perception across several experiments 30. However, there are reasons why investigations of developmental disorders should not be considered as directly equivalent to studies of ACQUIRED NEUROPSYCHOLOGICAL DISORDERS 31,32. The main argument is that developmental cases violate a fundamental assumption of the dissociation logic: namely, that the brain injury affected a normally organized system. For this reason, evidence from cases of acquired neuropsychological disorders is vital. In summary, the idea that the system for processing facial identity bifurcates from the facial expression route before the stage that codes visuoperceptual representations of facial identity and expression is less well supported by patient-based research than is widely assumed.
It is therefore worth considering an alternative framework in which the perceptual representations of both facial identity and expression are coded by a single representational system.

A PCA framework for face perception. An understanding of how different characteristics can be extracted from a single facial image is central to achieving an accurate conceptual framework for all aspects of face perception. However, the underlying computations of this system have received comparatively little attention. It is therefore of interest that insight has emerged from image-based analysis techniques, such as PRINCIPAL COMPONENT ANALYSIS (PCA) and similar statistical procedures (BOX 3). PCA-based systems can reliably extract and categorize facial cues to identity 33–36, sex 37,38, race 39 and expression 40–42, and simulate several face-perception phenomena, including distinctiveness effects 34,35,39, caricature effects 43,44, the other-race effect 39 and the composite effect 41. More recent work has shown that a PCA of facial expressions posed by different identities 24 generated distinct sets of principal components (PCs) coding expression and identity 42, and others that coded both of these facial cues (see also REF. 41) (BOX 3); expression and sex, but not identity and sex, showed a similar degree of separation. Moreover, this partial independence of the PCs was sufficient

Box 2 Level of difficulty

A neuropsychological dissociation is defined as impaired performance on one task (task I) and (relatively) spared performance on another (task II) after brain injury. One interpretation is that tasks I and II exploit different cognitive subsystems. Alternatively, this pattern could reflect damage to a single subsystem if spared performance on task II requires less of the system's resources than spared performance on task I (that is, if task II is easier). This is a genuine problem for face research. Facial expression tasks tend to use alternative-choice labelling or matching procedures that involve a small set of expressions, whereas typical facial identity tasks require the participant to provide identifying information for a series of various celebrities' faces. It is generally assumed that a double dissociation, which is defined as two complementary dissociations, is sufficient to eliminate the possibility of a resource (level of difficulty) artefact. However, it is important to be cautious of double dissociations that involve a so-called trend dissociation, where a patient is impaired on both tasks, but shows significantly better performance on one task 29 (patient A in the figure). The figure plots performance (low to high) against the resource available to the system, and marks the resource levels of two patients, A and B. Here, patient A does better on task I than task II, whereas the pattern is reversed for patient B. As illustrated, a double dissociation could arise because the performance-resource curve of one task is steeper than the other, which means that one cognitive subsystem could be responsible for both tasks. Consequently, Shallice 29 suggested that a double dissociation has its maximal interpretive value when patient A performs better than patient B on task I and vice versa for task II.
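The resource argument above can be sketched numerically. The curves and patient values below are invented purely for illustration (they are not fitted to any patient data); the point is only that one shared resource, fed through two performance-resource curves of different steepness, reproduces a trend double dissociation.

```python
import math

# Illustrative numbers only: two logistic performance-resource curves served
# by ONE shared cognitive subsystem. The curves differ in steepness, so they
# cross; every patient is described by a single residual-resource value.
def perf_task_i(r):   # harder task (e.g. identity naming): shallow curve
    return 100 / (1 + math.exp(-4 * (r - 0.5)))

def perf_task_ii(r):  # easier task (e.g. expression labelling): steep curve
    return 100 / (1 + math.exp(-20 * (r - 0.5)))

for name, r in [("A", 0.3), ("B", 0.7)]:
    print(f"patient {name}: task I = {perf_task_i(r):5.1f}%, "
          f"task II = {perf_task_ii(r):5.1f}%")
# Patient A is relatively better on task I and patient B on task II (a
# 'trend' double dissociation), yet B outperforms A on BOTH tasks. Shallice's
# crossover criterion (A beating B on one task) is not met, and a single
# damaged subsystem explains the whole pattern.
```

Running this gives patient A roughly 31% on task I against 2% on task II, and patient B roughly 69% against 98%, which is exactly the within-patient reversal that can masquerade as evidence for two subsystems.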
However, even if the criterion proposed by Shallice is satisfied, a double dissociation between facial identity and expression can be correctly interpreted as evidence for separate neural systems underlying the visual representation of facial identity and expression only if each respective dissociation is specific to the facial domain. As discussed, there is little evidence for a face-specific double dissociation between facial identity and expression. Figure reproduced, with permission, from REF. 29 (1988) Cambridge University Press.

to model the independent perception of facial identity and expression 6,41,42 (BOX 3). In addition, the partial overlap in the PCs for facial identity and expression offers a potential account of previously puzzling and unusual incidences in which facial identity and expression produced interference effects 45. Therefore, independent coding of facial identity and facial expression can be achieved by a single multidimensional system in which the independence is partial (statistical) rather than absolute. Moreover, because PCA is an unsupervised algorithm with no prior knowledge of the physical structure of these facial cues, it is the different visual properties of facial identity and expression that create this separate coding. Therefore, the dissociation between identity and expression that is seen in healthy participants might be driven, at least in part, by the different visual cues that are optimal for conveying each type of information. In this sense, image-based techniques offer a new approach to understanding facial identity and expression perception, because they show that the dissociation is present in the statistical properties of facial images. Note that we are not claiming that there is necessarily anything special about PCA per se. Further research might show that similar algorithms work equally well. However, at present, we use the term 'PCA framework' to refer to this image-based approach.
We have introduced the PCA framework at this stage to provide an alternative conception of face perception that can be evaluated along with those of Bruce and Young 1, and Haxby and colleagues 2, in relation to the further research we discuss. So, there are two plausible levels at which the facial identity route might bifurcate from the facial expression route: immediately after the structural encoding stage (Bruce and Young 1, and Haxby and colleagues 2) or after a common representational system that codes both facial identity and expression (PCA framework).

Neurophysiology and functional imaging. The idea that facial identity and facial expression are processed by different neural systems is supported by data from patient studies, functional imaging and cell recording in monkeys. However, although functional imaging and cell-recording techniques allow more precise localization of brain regions that are sensitive to facial identity and expression, an important limiting factor is that they identify neural correlates of experimental procedures, and the underlying functional contributions are difficult to derive from correlations. Therefore, the implications of these data for issues such as the point at which the facial identity route bifurcates from the facial expression route are not straightforward. Relatively few functional imaging and cell-recording studies have investigated the processing of facial identity and expression in a single experiment 12–15,46,47, and their results have been inconsistent. Of interest is the extent to which these studies concur with the wider distinction made by Haxby and colleagues 2 between the involvement of inferotemporal areas (including the fusiform face area or FFA) in coding facial identity and that of the STS in coding changeable facial cues (such as expression, lip speech and gaze).

Box 3 Modelling face perception with principal component analysis

Principal component analysis (PCA) is a form of linearized compact coding that seeks to explain the relationships among many variables in terms of a smaller set of principal components (PCs). As applied to faces, the pixel intensities of a standardized set of images are submitted to a PCA. Correlations among pixels are identified, and their coefficients (PCs) are extracted 33. The PCs can be thought of as dimensions that code facial information and can be used to code further faces. The particular advantage of techniques such as PCA is that they can reveal the statistical regularities that are inherent in the input with minimal assumptions.

[Figure: panel a shows prototype and composite faces for three conditions (different expression/same identity, same expression/different identity, and different expression/different identity); panel b plots human reaction times (ms) and the model's average correct output for the 'categorize identity' and 'categorize expression' tasks in each condition.]

Advocates of PCA-based models of face perception do not claim that the details of PCA are implemented in the human brain. Rather, PCA is viewed as a statistical analogue and not a literal account 34. A PCA of facial expressions posed by different identities generates separate sets of PCs that are important for coding facial identity, facial expression, or both facial identity and facial expression 41,42. This partial separation can model the independent coding of configural cues for facial identity and expression in healthy subjects 6,41, as illustrated by the composite effect 41,42. Panel a shows composite faces that were prepared by combining the top and bottom halves of two faces with different expressions posed by the same identity, the same expression posed by different identities, or different expressions posed by different identities.
Reaction times for reporting the expression in one face half were slowed when the two halves showed different expressions (that is, the different expression/same identity and different expression/different identity conditions) relative to when the same expressions (posed by different identities) were used (that is, same expression/different identity); however, no further cost was found when the two halves contained different expressions and identities compared with when they contained different expressions and the same identity (top graph in panel b). A corresponding effect was found when subjects were asked to report the identity of one face half. The bottom graph in panel b shows a simulation of this facial identity and expression dissociation using a PCA-based model 41. (ns, not significant; all other comparisons among the 'categorize identity' or 'categorize expression' levels were statistically reliable.) Panels a and b modified, with permission, from REF. 6 (2000) American Psychological Association. Panel b also modified from REF. 41.

Functional imaging investigations into the recognition of both facial identity and expression have consistently identified occipitotemporal regions as being activated by identity 13–15,46. However, the results have been less consistent as regards the brain areas that are involved in expression, and only two studies 15,46 found that the STS was involved. One of these investigations used an adaptation procedure whereby repetition of the identity or expression of a face was manipulated 15. The FFA and posterior STS were more sensitive to repetition of facial identity, whereas a more anterior section of the STS was more sensitive to repetition of expression. This study provides evidence that representations of facial identity and expression are coded separately; however, as further research shows that facial and vocal expressions engage a similar area of the superior temporal cortex 48, it is possible that this region is not specific to facial signals of emotion per se (see below).
The second study showed STS activation for facial expression, but equivalent activity in the FFA for facial identity and expression 46. However, the poor temporal resolution of functional MRI (fMRI) makes it difficult to determine whether feedback from areas such as the amygdala, which can modulate extrastriate responses 49, contributes to the FFA response to facial expression. Therefore, it is of interest

POPULATION CODING
The idea that objects, such as faces, are coded as distributed patterns of activity across neuronal populations, so that cells show broadly graded responses to a single stimulus, as opposed to an all-or-none-type response pattern.

GRANDMOTHER CELL
A hypothetical cell that responds specifically to a single face (such as one's grandmother). Although no such highly specialized cell has been found so far, the temporal cortex contains cells that respond preferentially to faces or hands, and some are maximally (although not exclusively) responsive to a particular person's face. These cells are sometimes referred to as grandmother cells.

that a recent magnetoencephalography study found that the amplitude of a relatively early signal (onset ~144 ms) in the fusiform gyrus was modulated by different facial expressions in the following rank order: happiness > disgust > neutral 50. The remaining functional imaging evidence that the FFA is involved in coding facial identity, and the STS is involved in coding expression, comes from studies that did not include both identity and expression conditions 48. So, it would be helpful to conduct further investigations of the neural correlates of facial identity and expression in a single experiment, and to use varying experimental tasks to identify the conditions under which the STS/FFA distinction is optimized. Clearer evidence of the distinction between facial identity (inferotemporal) and facial expression (STS) has come from an investigation of face-responsive cells in macaques 12. However, in relation to the PCA framework, it is of interest that this dissociation was not complete and some cells were sensitive to both facial dimensions 12; similarly, it should be remembered that the PCs found by PCA were tuned to identity, expression or both 42. Other research indicates that the idea of a complete neurological dissociation might be too rigid. For example, one study found that the STS contains cells that are sensitive to facial identity and facial expression 55.
Some of the identity-responsive cells generalized across different views of faces, in contrast to inferotemporal face-sensitive cells, which tend to be view specific 56. This led the authors to propose that the STS might pool the outputs of inferotemporal cells 56. A further study found that face-responsive cells, which were recorded mainly in the anterior inferotemporal cortex but also in the STS, were sensitive to various stimulus dimensions: namely, their global category (monkey face, human face or simple shape) and membership of each of four fine categories (monkey identity, monkey expression, human identity or human expression) 47. Global information was coded by the earlier part of a cell's response, which led the authors to postulate that this might enhance subsequent processing by switching the cell's processing mode. A different perspective comes from Young and Yamane 57, who showed that the response profiles of face-selective cells in the anterior inferotemporal cortex (AIT) can be predicted by the physical (structural) properties of faces. Moreover, this information is distributed across a network of AIT cells in a POPULATION CODING format, as opposed to a highly localized or GRANDMOTHER CELL format (see also REFS 58,59). By contrast, the responses of STS neurons were not related to physical properties, but rather to how familiar the faces were or possibly to the social status of the bearer. Therefore, we should be cautious of assuming that these regions have analogous roles in coding representations of facial identity and expression. This is underlined by the fact that AIT and STS cells differ in several respects: the former are primarily visual, whereas regions of the STS are multimodal 60 and receive inputs from other polysensory brain regions that are involved in social/emotional processing 61. As we discuss below, the polysensory properties of the STS might help to explain the greater association of this region with facial expression and other changeable facial cues.
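The contrast between population coding and grandmother-cell coding can be made concrete with a toy decoder. The sketch below is not a model of real AIT or STS data: the cell counts, tuning widths and noise level are invented, and a simple population-vector readout stands in for whatever computation downstream areas actually perform. It shows only that broadly tuned, individually ambiguous cells can jointly localize a stimulus value accurately.

```python
import numpy as np

# Toy population code (illustrative parameters): many cells with broad,
# overlapping Gaussian tuning to a single stimulus dimension, standing in
# for some structural property of a face. No single cell is diagnostic,
# but the population as a whole pins the stimulus down well.
rng = np.random.default_rng(1)
n_cells = 60
preferred = np.linspace(0.0, 1.0, n_cells)   # each cell's preferred value
width = 0.15                                 # broad tuning: graded responses

def population_response(stimulus):
    rates = np.exp(-0.5 * ((stimulus - preferred) / width) ** 2)
    return rates + rng.normal(scale=0.05, size=n_cells)  # response noise

def decode(rates):
    # Population-vector-style readout: response-weighted average of the
    # cells' preferred values.
    w = np.clip(rates, 0.0, None)
    return float((w * preferred).sum() / w.sum())

stimulus = 0.63
estimate = decode(population_response(stimulus))
print(f"true value {stimulus:.2f}, population estimate {estimate:.2f}")
```

Because many cells respond above half their maximum to any one stimulus, the code is distributed in exactly the sense defined in the glossary entry above, yet the weighted readout recovers the stimulus to within a few per cent.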
Finally, the concept of population coding has similarities with the PCA framework, in which face representations are distributed across a series of PCs. The analogy is further strengthened by the fact that most inferotemporal cells that respond to a particular face are tuned to a specific facial view 56. Given the idea of a single PCA framework that codes facial identity and expression, it would be of interest to discover whether population coding can account for cell responses to the physical properties of both facial identity and expression, and whether cells that respond to each form part of the same, or different, neuronal populations. In summary, both cell recording and functional imaging support the idea of separable mechanisms that are involved in facial identity and expression recognition, and provide evidence that the former is associated with inferotemporal regions and the latter with the STS. However, this distinction seems to reflect a bias rather than a categorical dissociation, so it would be helpful to explore under what circumstances this bias is optimized or minimized. These studies provide little evidence that an exclusive facial identity route bifurcates from an exclusive facial expression route after a structural encoding stage, such that the visual representations of each are coded separately. As it stands, evidence from neuropsychology, functional imaging and cell recordings fits more easily with the account implied by PCA, in terms of relative segregation of facial identity and expression, rather than fully independent coding.

Is there one system for expressions?

As discussed above, a potential problem for interpreting neuropsychological cases in which facial identity recognition is impaired but expression recognition is relatively spared is that facial identity tasks are often harder than tests of expression recognition (BOX 2).
It is generally assumed that any problems that arise from an asymmetry of task difficulty are minimized if the converse dissociation can be found (that is, impaired facial expression without impaired facial identity). However, in relation to whether the visuoperceptual representations of facial identity and expression are coded by analogous distinct systems, this logic only applies if the two dissociations affect functionally comparable mechanisms, leading to dissociable face-specific deficits, one affecting facial identity and the other affecting facial expression. There are good reasons, though, to believe that this might not be the case, and that selective impairments in the recognition of facial identity or facial expression are not simple mirror images of one another. For example, impairments relating to facial identity affect the recognition of all familiar faces equivalently. By contrast, some facial expression impairments disproportionately affect one emotional category, such as fear 62-65, disgust 25,26,66 or anger 67,68. Similarly, brain-imaging research has shown that some neural regions are particularly involved in coding certain expressions (such as fear or disgust) 69-72, and comparative research has shown that the same systems are involved in behavioural responses that are associated with these emotions. Detailed discussions of emotion-specific neuropsychological deficits can be found elsewhere 65,73.

These emotion-specific impairments are often not restricted to the recognition of emotion from faces alone; vocal expressions are generally also affected 25,26,63-65,67,74. Where exceptions to this pattern have been observed, the facial-vocal dissociation was expressed as a disproportionate deficit in recognizing fear from facial, but not prosodic, signals 75,76, and all but one of these cases 75 showed abnormal prosody recognition for emotions other than fear. Moreover, spared performance on emotion prosody tasks could reflect a supporting role of intact language systems. The disproportionate role of certain brain regions in recognizing particular facial expressions shows that all expressions are not processed by a single system. Moreover, evidence is accumulating that these brain regions are not specialized for interpreting emotion from facial signals per se, but have a more general role in processing emotion from several sensory inputs. Indeed, recent research on disgust processing has shown that the insula might be involved in the perception of this emotion in others and its experience by the self 26,69,73. In relation to the brain regions associated with emotion-specific impairments (amygdala, insula and ventral striatum), these deficits are best accounted for by damage to components of the extended system proposed by Haxby and colleagues.

HEIDER-SIMMEL-LIKE ANIMATIONS
Short animations that depict the movements of geometric shapes in a manner that is normally perceived as suggestive of social interactions among humanlike characters. For example, one shape might be perceived as aggressive and intimidating towards another.

646 AUGUST 2005 VOLUME 6
Is multimodal analysis involved?
The co-occurrence of impairments in the recognition of facial and vocal expression is not restricted to cases that show disproportionate impairments for one emotion; in fact, the relationship is if anything more striking in patients who have general problems in recognizing facial expressions 25. Moreover, other emotional impairments (affecting memory for emotional material, fear conditioning or subjective emotional experience) have also been identified in patients with specific and general facial expression impairments 11,83-89, whereas recognition of facial identity is generally preserved. Therefore, it seems likely that at least some facial expression impairments reflect damage to emotion systems rather than to face-specific mechanisms. However, because not all patients have completed facial, vocal and general emotional tasks, and details of neurology are not always available, the overall picture is patchy.

Impaired facial and vocal expression recognition could result from damage to a mechanism that is involved in processing both of these cues, such as bi- or multimodal representations of emotional signals, or from damage to a system for integrating emotional cues from different modalities. There is evidence that STS cells in macaques are sensitive to both visual and auditory components of animated biologically relevant stimuli, including facial signals 90. There is little functional imaging evidence relating to multimodal emotional signals 91. However, fMRI research has shown that the STS is important for processing other types of changeable facial cue in combination with cues from other modalities, such as lipspeech and speech 92,93 (see REFS 94,95 for STS sensitivity to unimodal lipspeech and vocal cues). These observations concur with research showing that regions of the STS constitute a point of multisensory convergence 60,61,96,97.
Therefore, the prominent role of the STS in coding facial expressions and other changeable facial signals might relate to the fact that, in everyday life, these signals are associated with more than one perceptual channel (that is, they have facial, vocal and dynamic components), which must be integrated to optimize communication. Consistent with this view, emotion and speech can be identified from facial, vocal or dynamic (point-light display) cues, combinations of which can interact 74. Similarly, the contribution of the STS to the perception of social attention 56,102,103 might relate to the need to integrate gaze, head direction, body posture and associated dynamic information to accurately determine the focus of attention of another individual 104,105. This is illustrated by human cognitive studies showing that the direction of attention that is signalled by gaze interacts with that signalled by head orientation 105, and by research in monkeys showing that many STS cells that respond to a particular gaze direction also respond to the same direction when it is signalled by head orientation or body posture 56. Furthermore, there is evidence that interactions between the different dimensions that are associated with changeable cues occur both within and between cues associated with emotion, social attention, lipspeech and gesture 98,99,106,107; for example, gaze can modulate perceived emotional state from facial expressions 107. By contrast, identity is signalled primarily by the face 108,109 and there is little evidence that facial identity interacts with other facial cues 4 (although for an alternative view see REF. 45). An integration hypothesis might also explain the involvement of the STS in the perception of other biological cues, such as hand actions 110, body movement 111 and even HEIDER-SIMMEL-LIKE ANIMATIONS 112 (for a review, see REF. 113); all of these would benefit from the integration of different stimulus dimensions and, in particular, form and motion.
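One computational reason why integrating cues such as gaze and head orientation pays off can be shown with a toy reliability-weighted averaging model (our illustration, not a model from the studies cited above; the noise levels of the two cues are assumed):

```python
import numpy as np

# Toy sketch of cue integration (our illustration, not from the cited work):
# two noisy cues to another person's direction of attention -- gaze and head
# orientation -- are combined by inverse-variance weighting. The combined
# estimate is more reliable than either cue alone.
rng = np.random.default_rng(2)
true_direction = 10.0                # degrees; the attended location
gaze_sd, head_sd = 2.0, 4.0          # assumed noise level of each cue

gaze = true_direction + rng.normal(0, gaze_sd, 10_000)
head = true_direction + rng.normal(0, head_sd, 10_000)

# Inverse-variance weights: the more reliable cue counts for more.
w = (1 / gaze_sd**2) / (1 / gaze_sd**2 + 1 / head_sd**2)
combined = w * gaze + (1 - w) * head

# combined.std() falls below both gaze.std() and head.std(): an integration
# site that pools the two cues gains precision over either cue alone.
```

On this view, a region that pools several channels, as the STS appears to do, can estimate another person's focus of attention more precisely than any unimodal area could.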
In fact, there is evidence that integration of the form and motion of biological motion stimuli takes place in the STS 97,114. However, most research into the neural correlates of changeable facial cues has been conducted with unimodal static photographs of faces, and these are sufficient to engage STS mechanisms 2. This observation can be explained by several factors. First, STS cells that respond to combinations of stimulus dimensions will also often respond to a single dimension alone 90,93,97. Second, a proportion of the STS response to unimodal static faces might arise because these images imply other perceptual dimensions that are not explicitly represented, by virtue of their strong association in everyday life; for example, lipspeech in the absence of auditory information engages the auditory cortex and STS 94,115. Third, cells that are sensitive to both unimodal (single dimension) and multimodal (dimension combinations) stimuli are found in the STS 56,90,93,97. In relation to the latter, an interesting issue is whether STS cell networks that respond to unimodal or multimodal versions of these stimuli constitute stored unimodal and multimodal representations, or whether the unimodal cells reflect the first stage of an integration system in which unimodal projections from elsewhere (such as the inferotemporal cortex for visual form) undergo a preliminary analysis before being combined.

NATURE REVIEWS NEUROSCIENCE VOLUME 6 AUGUST

Does the expression system do anything else?
Our final question concerns whether the perceptual representations of facial expression are coded separately from other changeable facial cues (such as gaze and lipspeech) 1, or whether all changeable facial properties are coded by a single representational system, with separate routes only emerging in the extended system 2 (BOX 1). Alternatively, it is worth considering whether the visual form of all facial characteristics (identity, expression, lipspeech, gaze and so on) might be coded in a single multidimensional framework: that is, an extension of the PCA framework. Few previous studies are relevant to this question. There are reports that brain injury can cause impaired facial expression recognition with preserved gaze perception 25 or lipspeech 116, and impaired lipspeech perception with preserved facial expression recognition 116. However, as we have already discussed, dissociations can occur for different reasons and, when appropriate tests were used, these impairments were not found to be face specific 25,116. This indicates that they are best accounted for by damage (or impaired access) to components of the extended system of Haxby and colleagues' model 2, whereby lipspeech impairments are associated with impaired access to language mechanisms, facial expression impairments with impaired access to emotion systems, and so on.
Similarly, functional imaging research supports the involvement of language, emotion and attention systems in the perception of lipspeech, facial expression and gaze, respectively 2. Studies using healthy volunteers are also potentially consistent with the idea that facial expressions are coded alongside other facial cues. Perception of emotion from faces can be modulated by the gaze direction of the model 107,117, which leads to changes in the amygdalar response to facial expression that can be detected with fMRI 118. Another fMRI study found that both mouth and eye movements engaged a similar region of the posterior STS 102, which again implies that a common neural region is associated with changeable cues. In summary, although there is some evidence for neuropsychological dissociations among the recognition of changeable cues, these dissociations do not seem to be face specific. At present, there is no clear evidence from neuropsychology that distinct representational systems are used for different changeable facial cues. That said, the paucity of relevant studies makes it difficult to draw firm conclusions at this time.

Discussion
We began by pointing out that the prevalent view in face-perception research is that facial identity and facial expression are processed by separate functional routes, from their visual representations through to mechanisms for their interpretation 1,2. As we explained, clear support for this theoretical position would arise from a double dissociation between facial identity and expression recognition that is restricted to the facial domain. One side of such a dissociation requires evidence for prosopagnosia without impaired facial expression recognition. However, contrary to common perception, the evidence for this pattern is limited. The opposite dissociation requires preserved recognition of facial identity with impaired recognition of emotion from the face, but no other emotional impairments.
Evidence for this pattern is weaker still, and it seems increasingly likely that selective disruption of facial expression recognition does not reflect damage (or impaired access) to visual representations of facial expression per se, but rather to more general emotion systems. In short, the idea that prosopagnosia compromises the recognition of facial identity but not facial expression, whereas prosopo-affective agnosias compromise the recognition of facial expression but not facial identity, is an oversimplification. We have become sceptical about the view that facial expression recognition is simply the converse of facial identity recognition.

We also considered the contribution of cell recording and functional imaging to this debate. Although these techniques support the idea that there is a degree of neural separation between the mechanisms that are involved in the recognition of identity and expression from the face, they have contributed little to issues such as the level of analysis at which the facial identity route bifurcates from the facial expression route. In effect, most of us working in this field have been trying to fit data to a strongly held a priori hypothesis of separate modules for facial identity and expression analysis. This model comes from a long tradition that has emphasized the importance of facial expressions as communicative signals and placed less emphasis on the need to combine them with other sources of information. It has been bolstered by the intuitive appeal of the idea that facial expression recognition deficits would show an opposite pattern to prosopagnosia, as well as the belief that separate perceptual representations for facial identity and expression might be expected, because changes in expression would otherwise interfere with identifying the individual.
This logical argument for the independent coding of facial identity and expression is supported by experimental data from healthy volunteers 3-6, but the term 'independent' in this context has gradually acquired a neurological connotation that lacks persuasive empirical support. The PCA approach offers a different perspective, in that it shows that the independent perception of facial identity and expression 3-6 arises from an image-based analysis of faces with no explicit mechanism for routing identity- or expression-relevant cues to different systems. This produces a single multidimensional framework in which facial identity and expression are coded by largely (although not completely) different sets of dimensions (PCs) 42. Therefore, independent perception does not need to rely on totally separate visual codes for these facial cues. Image-based analysis techniques, such as PCA, also offer a ready solution to an important problem for implemented models of face perception: an operational front-end mechanism that can both extract and separate the visual codes of different facial characteristics from a single facial image. Moreover, the fact that this separation falls out of an objective (unsupervised) statistical analysis underlines the potential value of combining an analysis of the physical stimulus with an exploration of the functional and neural mechanisms that are involved in face perception.

Taking all these factors into account, the PCA framework should be given serious consideration. At this stage, too many pieces of the puzzle are missing, from studies of brain-injured participants as well as from nonhuman primate and functional imaging research, to promote it as the definitive answer. Nonetheless, it has a strong theoretical grounding, can mop up numerous observations (including previously unexplained ones) and serves a useful function in encouraging researchers to view face perception from an image-based perspective. We hope that it provides an alternative approach that will facilitate new research.

Unpacking the expression distinction
At the heart of the model proposed by Haxby and colleagues 2 is a dissociation, supported by several empirical studies, between the involvement of the STS in coding facial expression, lipspeech and gaze, and the involvement of the inferotemporal cortex (including the FFA) in coding facial identity 2.
However, this distinction raises more fundamental questions; for example, why are facial characteristics divided in this manner, and why is the STS more interested in facial expression, lipspeech and gaze? Haxby and colleagues 2 drew attention to the fact that facial expressions, gaze and lipspeech are dynamic and constantly changing (changeable cues), whereas facial identity is invariant. Therefore, projections to the STS from motion-sensitive regions (the human homologues of the middle temporal area (MT) or medial superior temporal area (MST)) might help to explain the role of the STS in the processing of changeable cues 119. However, it might also be useful to consider other ways in which facial identity and changeable cues differ.

We have emphasized that the STS is sensitive not only to changeable facial characteristics, but also to other perceptual dimensions that are inherently linked with them (such as their associated vocalizations and dynamic information). Moreover, concurrent with research showing that the STS receives inputs from various sensory modalities 60, there is evidence that the STS underlies the integration of these different channels 56,90,92,97. Consequently, we have proposed that the more prominent role of the STS in coding facial expressions and other changeable characteristics, relative to facial identity, might reflect the increased reliance of changeable cues on integrative mechanisms. The integration hypothesis can also account for the involvement of the STS in the perception of other biological cues that require integration of form and motion 97. However, there are other ways in which facial identity and changeable cues differ, which might prove important in interpreting the STS/inferotemporal distinction. One relates to the manner in which these cues are monitored during a social encounter.
Although changeable cues require constant online monitoring during social interaction, the same is not true for facial identity: after registering a person's identity at the beginning of a social encounter, there is little need to monitor it further. Indeed, consistent with the latter point, one study showed that a remarkable 60% of participants failed to realize that a stranger they began a conversation with had been switched with another person after a brief staged separation during the social encounter 120. Another potentially relevant way in which changeable facial cues and facial identity differ is that only the former can be simulated by perceivers. For example, viewing the facial expression or lipspeech of another person produces activation in corresponding facial muscles or facial motor brain regions 121,122, and facial expression tasks engage brain areas that underlie the experience of emotion 77, which has led some researchers to posit the idea of facial expression perception by simulation 78,79. Similarly, seeing a face with leftward or rightward gaze induces an attentional shift in the same direction in the observer and engages brain systems that are involved in attention 2,102,103. Conversely, there is no obvious sense in which we simulate another person's identity. Discovering how these different points relate to the neurological pathways that are associated with facial identity and expression might be important in understanding their psychological basis.

In conclusion
We have pointed out the value of an approach to face perception that emphasizes the different physical properties and information-processing demands (for example, reliance on integrative mechanisms) of different facial characteristics. This differs from the classic approach, which has tended to emphasize distinctions based mainly on informational content (identity versus expression).
At the heart of our discussion is the issue of whether differences between neural mechanisms that are involved in the perception of facial identity and expression reflect a relatively straightforward bifurcation of visual pathways that are dedicated to different types of analysis, or a more complex arrangement in which the separation of identity and expression is relative rather than absolute. Linked to this issue is the question of whether the perception of facial expressions depends more on an intrinsically multimodal level of analysis (involving the STS) than facial identity.


More information

Perceptual Disorders. Agnosias

Perceptual Disorders. Agnosias Perceptual Disorders Agnosias Disorders of Object Recognition AGNOSIA : a general term for a loss of ability to recognize objects, people, sounds, shapes, or smells. Agnosias result from damage to cortical

More information

A model of parallel time estimation

A model of parallel time estimation A model of parallel time estimation Hedderik van Rijn 1 and Niels Taatgen 1,2 1 Department of Artificial Intelligence, University of Groningen Grote Kruisstraat 2/1, 9712 TS Groningen 2 Department of Psychology,

More information

From Pixels to People: A Model of Familiar Face Recognition by Burton, Bruce and Hancock. Presented by Tuneesh K Lella

From Pixels to People: A Model of Familiar Face Recognition by Burton, Bruce and Hancock. Presented by Tuneesh K Lella From Pixels to People: A Model of Familiar Face Recognition by Burton, Bruce and Hancock Presented by Tuneesh K Lella Agenda Motivation IAC model Front-End to the IAC model Combination model Testing the

More information

FRONTAL LOBE. Central Sulcus. Ascending ramus of the Cingulate Sulcus. Cingulate Sulcus. Lateral Sulcus

FRONTAL LOBE. Central Sulcus. Ascending ramus of the Cingulate Sulcus. Cingulate Sulcus. Lateral Sulcus FRONTAL LOBE Central Ascending ramus of the Cingulate Cingulate Lateral Lateral View Medial View Motor execution and higher cognitive functions (e.g., language production, impulse inhibition, reasoning

More information

COGS 101A: Sensation and Perception

COGS 101A: Sensation and Perception COGS 101A: Sensation and Perception 1 Virginia R. de Sa Department of Cognitive Science UCSD Lecture 6: Beyond V1 - Extrastriate cortex Chapter 4 Course Information 2 Class web page: http://cogsci.ucsd.edu/

More information

Learning Objectives.

Learning Objectives. Emilie O Neill, class of 2016 Learning Objectives 1. Describe the types of deficits that occur with lesions in association areas including: prosopagnosia, neglect, aphasias, agnosia, apraxia 2. Discuss

More information

The Frontal Lobes. Anatomy of the Frontal Lobes. Anatomy of the Frontal Lobes 3/2/2011. Portrait: Losing Frontal-Lobe Functions. Readings: KW Ch.

The Frontal Lobes. Anatomy of the Frontal Lobes. Anatomy of the Frontal Lobes 3/2/2011. Portrait: Losing Frontal-Lobe Functions. Readings: KW Ch. The Frontal Lobes Readings: KW Ch. 16 Portrait: Losing Frontal-Lobe Functions E.L. Highly organized college professor Became disorganized, showed little emotion, and began to miss deadlines Scores on intelligence

More information

Hemispheric Specialization (lateralization) Each lobe of the brain has specialized functions (Have to be careful with this one.)

Hemispheric Specialization (lateralization) Each lobe of the brain has specialized functions (Have to be careful with this one.) Cerebral Cortex Principles contralaterality the right half of your brain controls the left half of your body and vice versa. (contralateral control.) Localization of function Specific mental processes

More information

How do individuals with congenital blindness form a conscious representation of a world they have never seen? brain. deprived of sight?

How do individuals with congenital blindness form a conscious representation of a world they have never seen? brain. deprived of sight? How do individuals with congenital blindness form a conscious representation of a world they have never seen? What happens to visual-devoted brain structure in individuals who are born deprived of sight?

More information

Hierarchically Organized Mirroring Processes in Social Cognition: The Functional Neuroanatomy of Empathy

Hierarchically Organized Mirroring Processes in Social Cognition: The Functional Neuroanatomy of Empathy Hierarchically Organized Mirroring Processes in Social Cognition: The Functional Neuroanatomy of Empathy Jaime A. Pineda, A. Roxanne Moore, Hanie Elfenbeinand, and Roy Cox Motivation Review the complex

More information

Clinical and Experimental Neuropsychology. Lecture 3: Disorders of Perception

Clinical and Experimental Neuropsychology. Lecture 3: Disorders of Perception Clinical and Experimental Neuropsychology Lecture 3: Disorders of Perception Sensation vs Perception Senses capture physical energy from environment that are converted into neural signals and elaborated/interpreted

More information

Visual Selection and Attention

Visual Selection and Attention Visual Selection and Attention Retrieve Information Select what to observe No time to focus on every object Overt Selections Performed by eye movements Covert Selections Performed by visual attention 2

More information

The significance of sensory motor functions as indicators of brain dysfunction in children

The significance of sensory motor functions as indicators of brain dysfunction in children Archives of Clinical Neuropsychology 18 (2003) 11 18 The significance of sensory motor functions as indicators of brain dysfunction in children Abstract Ralph M. Reitan, Deborah Wolfson Reitan Neuropsychology

More information

(Visual) Attention. October 3, PSY Visual Attention 1

(Visual) Attention. October 3, PSY Visual Attention 1 (Visual) Attention Perception and awareness of a visual object seems to involve attending to the object. Do we have to attend to an object to perceive it? Some tasks seem to proceed with little or no attention

More information

Dynamic functional integration of distinct neural empathy systems

Dynamic functional integration of distinct neural empathy systems Social Cognitive and Affective Neuroscience Advance Access published August 16, 2013 Dynamic functional integration of distinct neural empathy systems Shamay-Tsoory, Simone G. Department of Psychology,

More information

Models of Information Retrieval

Models of Information Retrieval Models of Information Retrieval Introduction By information behaviour is meant those activities a person may engage in when identifying their own needs for information, searching for such information in

More information

Vision and Action. 10/3/12 Percep,on Ac,on 1

Vision and Action. 10/3/12 Percep,on Ac,on 1 Vision and Action Our ability to move thru our environment is closely tied to visual perception. Simple examples include standing one one foot. It is easier to maintain balance with the eyes open than

More information

Neural Correlates of Human Cognitive Function:

Neural Correlates of Human Cognitive Function: Neural Correlates of Human Cognitive Function: A Comparison of Electrophysiological and Other Neuroimaging Approaches Leun J. Otten Institute of Cognitive Neuroscience & Department of Psychology University

More information

Introduction to Physiological Psychology Review

Introduction to Physiological Psychology Review Introduction to Physiological Psychology Review ksweeney@cogsci.ucsd.edu www.cogsci.ucsd.edu/~ksweeney/psy260.html n Learning and Memory n Human Communication n Emotion 1 What is memory? n Working Memory:

More information

The concept of neural network in neuropsychology

The concept of neural network in neuropsychology Cognitive Neuroscience History of Neural Networks in Neuropsychology The concept of neural network in neuropsychology Neuroscience has been very successful at explaining the neural basis of low-level sensory

More information

Topic 11 - Parietal Association Cortex. 1. Sensory-to-motor transformations. 2. Activity in parietal association cortex and the effects of damage

Topic 11 - Parietal Association Cortex. 1. Sensory-to-motor transformations. 2. Activity in parietal association cortex and the effects of damage Topic 11 - Parietal Association Cortex 1. Sensory-to-motor transformations 2. Activity in parietal association cortex and the effects of damage Sensory to Motor Transformation Sensory information (visual,

More information

CPSC81 Final Paper: Facial Expression Recognition Using CNNs

CPSC81 Final Paper: Facial Expression Recognition Using CNNs CPSC81 Final Paper: Facial Expression Recognition Using CNNs Luis Ceballos Swarthmore College, 500 College Ave., Swarthmore, PA 19081 USA Sarah Wallace Swarthmore College, 500 College Ave., Swarthmore,

More information

A CONVERSATION ABOUT NEURODEVELOPMENT: LOST IN TRANSLATION

A CONVERSATION ABOUT NEURODEVELOPMENT: LOST IN TRANSLATION A CONVERSATION ABOUT NEURODEVELOPMENT: LOST IN TRANSLATION Roberto Tuchman, M.D. Chief, Department of Neurology Nicklaus Children s Hospital Miami Children s Health System 1 1 in 6 children with developmental

More information

to Cues Present at Test

to Cues Present at Test 1st: Matching Cues Present at Study to Cues Present at Test 2nd: Introduction to Consolidation Psychology 355: Cognitive Psychology Instructor: John Miyamoto 05/03/2018: Lecture 06-4 Note: This Powerpoint

More information

Cerebral Cortex: Association Areas and Memory Tutis Vilis

Cerebral Cortex: Association Areas and Memory Tutis Vilis 97 Cerebral Cortex: Association Areas and Memory Tutis Vilis a) Name the 5 main subdivisions of the cerebral cortex. Frontal, temporal, occipital, parietal, and limbic (on the medial side) b) Locate the

More information

NEURAL MECHANISMS FOR THE RECOGNITION OF BIOLOGICAL MOVEMENTS

NEURAL MECHANISMS FOR THE RECOGNITION OF BIOLOGICAL MOVEMENTS NEURAL MECHANISMS FOR THE RECOGNITION OF BIOLOGICAL MOVEMENTS Martin A. Giese* and Tomaso Poggio The visual recognition of complex movements and actions is crucial for the survival of many species. It

More information

Chapter 5. Summary and Conclusions! 131

Chapter 5. Summary and Conclusions! 131 ! Chapter 5 Summary and Conclusions! 131 Chapter 5!!!! Summary of the main findings The present thesis investigated the sensory representation of natural sounds in the human auditory cortex. Specifically,

More information

Prof. Greg Francis 5/23/08

Prof. Greg Francis 5/23/08 Brain parts The brain IIE 269: Cognitive Psychology Greg Francis Lecture 02 The source of cognition (consider transplant!) Weighs about 3 pounds Damage to some parts result in immediate death or disability

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - - Electrophysiological Measurements Psychophysical Measurements Three Approaches to Researching Audition physiology

More information

Neuroscience Tutorial

Neuroscience Tutorial Neuroscience Tutorial Brain Organization : cortex, basal ganglia, limbic lobe : thalamus, hypothal., pituitary gland : medulla oblongata, midbrain, pons, cerebellum Cortical Organization Cortical Organization

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 11: Attention & Decision making Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis

More information

Psych3BN3 Topic 4 Emotion. Bilateral amygdala pathology: Case of S.M. (fig 9.1) S.M. s ratings of emotional intensity of faces (fig 9.

Psych3BN3 Topic 4 Emotion. Bilateral amygdala pathology: Case of S.M. (fig 9.1) S.M. s ratings of emotional intensity of faces (fig 9. Psych3BN3 Topic 4 Emotion Readings: Gazzaniga Chapter 9 Bilateral amygdala pathology: Case of S.M. (fig 9.1) SM began experiencing seizures at age 20 CT, MRI revealed amygdala atrophy, result of genetic

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - Electrophysiological Measurements - Psychophysical Measurements 1 Three Approaches to Researching Audition physiology

More information

Chapter 2 Knowledge Production in Cognitive Neuroscience: Tests of Association, Necessity, and Sufficiency

Chapter 2 Knowledge Production in Cognitive Neuroscience: Tests of Association, Necessity, and Sufficiency Chapter 2 Knowledge Production in Cognitive Neuroscience: Tests of Association, Necessity, and Sufficiency While all domains in neuroscience might be relevant for NeuroIS research to some degree, the field

More information

The Central Nervous System

The Central Nervous System The Central Nervous System Cellular Basis. Neural Communication. Major Structures. Principles & Methods. Principles of Neural Organization Big Question #1: Representation. How is the external world coded

More information

Carnegie Mellon University Annual Progress Report: 2011 Formula Grant

Carnegie Mellon University Annual Progress Report: 2011 Formula Grant Carnegie Mellon University Annual Progress Report: 2011 Formula Grant Reporting Period January 1, 2012 June 30, 2012 Formula Grant Overview The Carnegie Mellon University received $943,032 in formula funds

More information

DISORDERS OF PATTERN RECOGNITION

DISORDERS OF PATTERN RECOGNITION DISORDERS OF PATTERN RECOGNITION A. Visual agnosia inability to identify objects by sight Types (1) apperceptive agnosia unable to form stable [presemantic] representations of objects (2) associative agnosia

More information

Does Wernicke's Aphasia necessitate pure word deafness? Or the other way around? Or can they be independent? Or is that completely uncertain yet?

Does Wernicke's Aphasia necessitate pure word deafness? Or the other way around? Or can they be independent? Or is that completely uncertain yet? Does Wernicke's Aphasia necessitate pure word deafness? Or the other way around? Or can they be independent? Or is that completely uncertain yet? Two types of AVA: 1. Deficit at the prephonemic level and

More information

Is it possible to gain new knowledge by deduction?

Is it possible to gain new knowledge by deduction? Is it possible to gain new knowledge by deduction? Abstract In this paper I will try to defend the hypothesis that it is possible to gain new knowledge through deduction. In order to achieve that goal,

More information

Chapter 8: Visual Imagery & Spatial Cognition

Chapter 8: Visual Imagery & Spatial Cognition 1 Chapter 8: Visual Imagery & Spatial Cognition Intro Memory Empirical Studies Interf MR Scan LTM Codes DCT Imagery & Spatial Cognition Rel Org Principles ImplEnc SpatEq Neuro Imaging Critique StruEq Prop

More information

Introduction to Cognitive Neuroscience fmri results in context. Doug Schultz

Introduction to Cognitive Neuroscience fmri results in context. Doug Schultz Introduction to Cognitive Neuroscience fmri results in context Doug Schultz 3-2-2017 Overview In-depth look at some examples of fmri results Fusiform face area (Kanwisher et al., 1997) Subsequent memory

More information

VERDIN MANUSCRIPT REVIEW HISTORY REVISION NOTES FROM AUTHORS (ROUND 2)

VERDIN MANUSCRIPT REVIEW HISTORY REVISION NOTES FROM AUTHORS (ROUND 2) 1 VERDIN MANUSCRIPT REVIEW HISTORY REVISION NOTES FROM AUTHORS (ROUND 2) Thank you for providing us with the opportunity to revise our paper. We have revised the manuscript according to the editors and

More information

Selective Attention. Inattentional blindness [demo] Cocktail party phenomenon William James definition

Selective Attention. Inattentional blindness [demo] Cocktail party phenomenon William James definition Selective Attention Inattentional blindness [demo] Cocktail party phenomenon William James definition Everyone knows what attention is. It is the taking possession of the mind, in clear and vivid form,

More information

Vision Research 49 (2009) Contents lists available at ScienceDirect. Vision Research. journal homepage:

Vision Research 49 (2009) Contents lists available at ScienceDirect. Vision Research. journal homepage: Vision Research 49 (2009) 368 373 Contents lists available at ScienceDirect Vision Research journal homepage: www.elsevier.com/locate/visres Transfer between pose and expression training in face recognition

More information

This is a repository copy of Differences in holistic processing do not explain cultural differences in the recognition of facial expression.

This is a repository copy of Differences in holistic processing do not explain cultural differences in the recognition of facial expression. This is a repository copy of Differences in holistic processing do not explain cultural differences in the recognition of facial expression. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/107341/

More information

MSc Neuroimaging for Clinical & Cognitive Neuroscience

MSc Neuroimaging for Clinical & Cognitive Neuroscience MSc Neuroimaging for Clinical & Cognitive Neuroscience School of Psychological Sciences Faculty of Medical & Human Sciences Module Information *Please note that this is a sample guide to modules. The exact

More information

Thomas J. Palmeri* and Isabel Gauthier*

Thomas J. Palmeri* and Isabel Gauthier* VISUAL OBJECT UNDERSTANDING Thomas J. Palmeri* and Isabel Gauthier* Visual object understanding includes processes at the nexus of visual perception and visual cognition. A traditional approach separates

More information

Higher Cortical Function

Higher Cortical Function Emilie O Neill, class of 2016 Higher Cortical Function Objectives Describe the association cortical areas processing sensory, motor, executive, language, and emotion/memory information (know general location

More information

Social Cognition and the Mirror Neuron System of the Brain

Social Cognition and the Mirror Neuron System of the Brain Motivating Questions Social Cognition and the Mirror Neuron System of the Brain Jaime A. Pineda, Ph.D. Cognitive Neuroscience Laboratory COGS1 class How do our brains perceive the mental states of others

More information

Lecture 2.1 What is Perception?

Lecture 2.1 What is Perception? Lecture 2.1 What is Perception? A Central Ideas in Perception: Perception is more than the sum of sensory inputs. It involves active bottom-up and topdown processing. Perception is not a veridical representation

More information

Implicit memory. Schendan, HE. University of Plymouth https://pearl.plymouth.ac.uk

Implicit memory. Schendan, HE. University of Plymouth https://pearl.plymouth.ac.uk University of Plymouth PEARL https://pearl.plymouth.ac.uk 14 Medicine, Dentistry and Health Sciences 2012-03-02 Implicit memory Schendan, HE http://hdl.handle.net/10026.1/1207 Academic Press All content

More information

PS3019 Cognitive and Clinical Neuropsychology

PS3019 Cognitive and Clinical Neuropsychology PS3019 Cognitive and Clinical Neuropsychology Carlo De Lillo HWB Room P05 Consultation time: Tue & Thu 2.30-3.30 E-mail: CDL2@le.ac.uk Overview Neuropsychology of visual processing Neuropsychology of spatial

More information

Dr. Mark Ashton Smith, Department of Psychology, Bilkent University

Dr. Mark Ashton Smith, Department of Psychology, Bilkent University UMAN CONSCIOUSNESS some leads based on findings in neuropsychology Dr. Mark Ashton Smith, Department of Psychology, Bilkent University nattentional Blindness Simons and Levin, 1998 Not Detected Detected

More information

Cerebral Cortex 1. Sarah Heilbronner

Cerebral Cortex 1. Sarah Heilbronner Cerebral Cortex 1 Sarah Heilbronner heilb028@umn.edu Want to meet? Coffee hour 10-11am Tuesday 11/27 Surdyk s Overview and organization of the cerebral cortex What is the cerebral cortex? Where is each

More information

Interaction Between Social Categories in the Composite Face Paradigm. Wenfeng Chen and Naixin Ren. Chinese Academy of Sciences. Andrew W.

Interaction Between Social Categories in the Composite Face Paradigm. Wenfeng Chen and Naixin Ren. Chinese Academy of Sciences. Andrew W. Interaction Between Social Categories in the Composite Face Paradigm Wenfeng Chen and Naixin Ren Chinese Academy of Sciences Andrew W. Young University of York Chang Hong Liu Bournemouth University Author

More information

It Doesn t Take A Lot of Brains to Understand the Brain: Functional Neuroanatomy Made Ridiculously Simple

It Doesn t Take A Lot of Brains to Understand the Brain: Functional Neuroanatomy Made Ridiculously Simple It Doesn t Take A Lot of Brains to Understand the Brain: Functional Neuroanatomy Made Ridiculously Simple 6 th Annual Northern Kentucky TBI Conference March 23, 2012 www.bridgesnky.org James F. Phifer,

More information

Developmental prosopagnosia: A review

Developmental prosopagnosia: A review Behavioural Neurology 14 (2003) 109 121 109 IOS Press Developmental prosopagnosia: A review Thomas Kress and Irene Daum Institute of Cognitive Neuroscience, Department of Neuropsychology Ruhr-University

More information

Visual Memory Any neural or behavioural phenomenon implying storage of a past visual experience. E n c o d i n g. Individual exemplars:

Visual Memory Any neural or behavioural phenomenon implying storage of a past visual experience. E n c o d i n g. Individual exemplars: Long-term Memory Short-term Memory Unconscious / Procedural Conscious / Declarative Working Memory Iconic Memory Visual Memory Any neural or behavioural phenomenon implying storage of a past visual experience.

More information

Multimodal interactions: visual-auditory

Multimodal interactions: visual-auditory 1 Multimodal interactions: visual-auditory Imagine that you are watching a game of tennis on television and someone accidentally mutes the sound. You will probably notice that following the game becomes

More information

Geography of the Forehead

Geography of the Forehead 5. Brain Areas Geography of the Forehead Everyone thinks the brain is so complicated, but let s look at the facts. The frontal lobe, for example, is located in the front! And the temporal lobe is where

More information

Myers Psychology for AP*

Myers Psychology for AP* Myers Psychology for AP* David G. Myers PowerPoint Presentation Slides by Kent Korek Germantown High School Worth Publishers, 2010 *AP is a trademark registered and/or owned by the College Board, which

More information

Motivation represents the reasons for people's actions, desires, and needs. Typically, this unit is described as a goal

Motivation represents the reasons for people's actions, desires, and needs. Typically, this unit is described as a goal Motivation What is motivation? Motivation represents the reasons for people's actions, desires, and needs. Reasons here implies some sort of desired end state Typically, this unit is described as a goal

More information