Dynamics of visual feature analysis and object-level processing in face versus letter-string perception

A. Tarkiainen,1 P. L. Cornelissen2 and R. Salmelin1

Brain (2002), 125, 1125-

1Brain Research Unit, Low Temperature Laboratory, Helsinki University of Technology, Finland and 2Psychology Department, Newcastle University, Newcastle upon Tyne, UK

Correspondence to: Antti Tarkiainen, Brain Research Unit, Low Temperature Laboratory, Helsinki University of Technology, P.O. Box 2200, FIN HUT, Finland. E-mail: antti.tarkiainen@hut.fi

Summary
Neurones in the human inferior occipitotemporal cortex respond to specific categories of images, such as numbers, letters and faces, within 150-200 ms. Here we identify the locus in time when stimulus-specific analysis emerges by comparing the dynamics of face and letter-string perception in the same 10 individuals. An ideal paradigm was provided by our previous study on letter-strings, in which noise-masking of stimuli revealed putative visual feature processing at 100 ms around the occipital midline, followed by letter-string-specific activation at 150 ms in the left inferior occipitotemporal cortex. In the present study, noise-masking of cartoon-like faces revealed that the response at 100 ms increased linearly with the visual complexity of the images, a result that was similar for faces and letter-strings. By 150 ms, faces and letter-strings had entered their own stimulus-specific processing routes in the inferior occipitotemporal cortex, with identical timing and large spatial overlap. However, letter-string analysis lateralized to the left hemisphere, whereas face processing occurred more bilaterally or with right-hemisphere preponderance. The inferior occipitotemporal activations at ~150 ms, which take place after the visual feature analysis at ~100 ms, are likely to represent a general object-level analysis stage that acts as a rapid gateway to higher cognitive processing.

Keywords: MEG; extrastriate; facial expression; occipitotemporal; noise masking

Abbreviations: LH = left hemisphere; MEG = magnetoencephalography; RH = right hemisphere

Introduction
Recognition of faces and facial expressions is important for successful communication and social behaviour. Not surprisingly, certain cortical areas are thought to deal specifically with face processing. Haemodynamic functional imaging methods have shown that faces activate certain parts of the occipitotemporal cortex, typically in the fusiform gyrus, more than other types of image (e.g. Sergent et al., 1992; Haxby et al., 1994; Clark et al., 1996; Puce et al., 1996; Kanwisher et al., 1997; McCarthy et al., 1997; Gorno-Tempini et al., 1998). Electromagnetic functional recordings have timed this activation to ~150-200 ms after image onset (e.g. Lu et al., 1991; Allison et al., 1994, 1999; Nobre et al., 1994; Sams et al., 1997; Swithenby et al., 1998; McCarthy et al., 1999; Halgren et al., 2000). Occipitotemporal face-specific activation can be seen bilaterally, but generally with at least slight right-hemisphere (RH) dominance (e.g. Sergent et al., 1992; Haxby et al., 1994; Bentin et al., 1996; Puce et al., 1996; Kanwisher et al., 1997; Sams et al., 1997; Gorno-Tempini et al., 1998; Swithenby et al., 1998; Halgren et al., 2000).
The importance of occipitotemporal face activations is confirmed by observations showing that lesions in the basal occipitotemporal cortex are associated with the inability to recognize familiar faces (prosopagnosia) (for reviews, see Damasio et al., 1990; De Renzi et al., 1994). Neuronal populations in the inferior occipitotemporal cortex have also been shown to respond preferentially to other behaviourally relevant image categories, such as letter-strings and numbers (Allison et al., 1994), in a time window very similar to that for face processing (Allison et al., 1994; Nobre et al., 1994). However, while the timing seems to be comparable for faces and letter-strings, letter-string processing is rather strongly lateralized to the left hemisphere (LH) (Puce et al., 1996; Salmelin et al., 1996; Kuriki et al., 1998; Tarkiainen et al., 1999), unlike face processing. The similarity of the face- and letter-string-specific occipitotemporal brain activations in timing and location (Allison et al., 1994; Puce et al., 1996) suggests that these stimulus-specific signals may represent a more general phase of object-level analysis (Malach et al., 1995).

Fig. 1 Examples of stimuli used in the earlier study on letter-string reading (A) and in the present study on face processing (B). (A) The letter-string stimuli contained 19 categories. See Material and methods in Tarkiainen et al. (1999) for more details. (B) The face stimuli contained eight categories, which are demonstrated here by two examples from each category. The numbers incorporated in the names of the FACES categories refer to the noise level and the abbreviation PH denotes photographed images. The names shown above the images are used in the text to refer to the different stimulus types.

Our aim in the present study was to identify the locus in time and space at which the processing streams start to differ for faces and letter-strings. An appropriate paradigm was provided by our previous work on the visual processing of letter-strings (Tarkiainen et al., 1999). Using letter- and symbol-strings of different length and variable levels of masking visual noise, we had succeeded in separating low-level visual analysis at 100 ms, which apparently reacts only to the visual complexity of the images, from stimulus-specific processing at 150 ms after stimulus onset. In the present study, the same masking algorithm was employed to investigate the processing of simple, drawn, cartoon-like faces in 10 individuals who had also participated in the previous study on letter-strings. The results of the present study on faces were compared with our earlier findings on letter-string processing (Tarkiainen et al., 1999) to reveal similarities and differences in brain activations during early face and letter-string processing.

Two hypotheses were tested in this study. (i) If the early response at 100 ms after stimulus onset reflects stimulus-nonspecific visual feature analysis, it should be essentially identical for faces and letter-strings masked with noise. (ii) If stimulus-specific object-level processing takes place in the inferior occipitotemporal cortex, we would expect dissociation into face- and letter-string-specific processing routes by the subsequent response at ~150 ms. Testing these hypotheses should allow us to establish the spatiotemporal limits of visual feature versus object-level analysis.

Material and methods

Subjects
Ten subjects took part in the present study. They had all participated in our earlier study of letter-string reading (Tarkiainen et al., 1999). They were healthy, right-handed, Finnish adults (four females, six males, aged 23-44 years, mean age 31 years) and they gave their informed consent to participation in this study. The subjects were all university students or graduates, and their visual acuity was normal or corrected to normal.

Stimuli
The letter-string stimuli in Tarkiainen et al. (1999) contained 19 stimulus categories (Fig. 1A): letter-strings of three lengths (single letters, legitimate two-letter Finnish syllables, legitimate four-letter Finnish words), each with four levels of
masking noise (levels 0, 8, 16, 24; see below); symbol-strings of the same lengths (always with noise level 0, i.e. without noise); and empty noise patches (four levels, no letters or symbols). The different noise levels used in the masking varied the visibility of the letter-strings. Noise was added to the originally noiseless images by changing the grey level of each pixel randomly. The amount of change was picked from a Gaussian distribution with zero mean and a standard deviation corresponding to the level of noise. If the new grey level was not within the possible range (0-63, from black to white), the procedure was repeated for that pixel. The addition of the Gaussian noise increased the local luminance contrast in the images, thus giving them a more complex appearance.

The face-object stimuli contained eight image categories (Fig. 1B). We constructed the stimuli with the intention of making the measurement as similar as possible to that used in the letter-string study. The first stimulus category (FACES_0) was a collection of simple, cartoon-like, drawn faces with nine different expressions. The task during the measurement was to identify the expressions and name them if asked, which forced the subject to analyse the images in a manner similar to reading the letter-strings, as described by Tarkiainen et al. (1999). The second (FACES_8), third (FACES_16) and fourth (FACES_24) image categories consisted of the same drawn face images as those in the first category, but the drawings were masked by an increasing level (levels 8, 16 and 24, respectively) of random Gaussian noise, making the recognition of the expressions harder. The grey levels of the drawn images and the noise levels masking the faces matched the values used in the letter-string study.

In the measurement of letter-strings, strings of geometrical symbols had served as control stimuli for letter-strings. We now included two control categories: the fifth stimulus category (OBJECTS) consisted of simple drawn images of common household objects (20 objects, e.g. a chair, a book, a hat, a mug) and the sixth category (MIXED) comprised eight images in which parts of the drawn faces were mixed (shifted and rotated) in a random manner and placed within geometrical objects, to yield images that were as complex as, but not as readily recognized as, those in FACES_0. Contrasting the FACES_0, MIXED and OBJECTS conditions should provide information about the image attributes required by the face-specific neurones.

An additional comparison was enabled by the use of photographs. The seventh category (PH_FACES) consisted of black-and-white photographs of faces with expressions similar to those in the first category. The same male volunteer was viewed from the front in all face photographs; he was unknown to all our subjects. The last (eighth) category (PH_OBJECTS) was a collection of black-and-white photographs of objects similar to those in the fifth category.

Magnetoencephalography (MEG)
Brain activity was recorded with a 306-channel Vectorview magnetometer (Neuromag, Helsinki, Finland), which detects the weak magnetic fields created by the synchronous activation of thousands of neurones. From the magnetic field patterns, we estimated the underlying cortical activations and their time courses with equivalent current dipoles, as described by Tarkiainen et al. (1999). For a comprehensive review of MEG, see Hämäläinen et al. (1993).
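The noise-masking rule described above under Stimuli reduces to a few lines of code. The following is a minimal sketch, assuming greyscale images stored as NumPy integer arrays with levels 0-63; it illustrates the published rule and is not the authors' original implementation:

```python
import numpy as np

def add_mask_noise(image, noise_level, rng=None):
    """Mask a greyscale image (levels 0-63) with Gaussian noise of zero mean and
    standard deviation 'noise_level', redrawing any pixel change that would leave
    the valid range - the rule described above for the letter-string and face stimuli."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            while True:
                value = image[i, j] + rng.normal(0.0, noise_level)
                if 0.0 <= value <= 63.0:       # accept only grey levels inside 0-63
                    noisy[i, j] = value
                    break                      # otherwise redraw for this pixel
    return np.round(noisy).astype(np.uint8)

# Example: mask a synthetic placeholder 'drawing' with noise level 16
rng = np.random.default_rng(1)
drawing = rng.integers(0, 64, size=(60, 80))
masked = add_mask_noise(drawing, noise_level=16, rng=rng)
```

With noise_level taken from {0, 8, 16, 24}, the same function covers all masking levels used in both studies.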
Procedure
To reach an acceptable signal-to-noise ratio, the subject's brain responses must be averaged over several presentations of images belonging to each stimulus category. In the present study, we prepared one stimulus sequence in which all the different stimulus types (eight categories) appeared in pseudorandomized order but with equal probability. The only restriction was that the same image was not allowed to appear twice in a row. The sequence was presented to the subject in four shorter parts, each lasting ~8 min, with short breaks of 1-2 min between the parts to allow the subject to rest.

The subject's task during the MEG measurement was to pay attention to the stimuli and, when prompted by the appearance of a question mark, to read out loud the letter-string [this was done only in the previous study (Tarkiainen et al., 1999)] or to say the name of the facial expression (present study) shown immediately before the question mark. No correct or desired names for the different expressions were given, and it was emphasized that the subject should say the name that first comes to his or her mind. The purpose of showing the question mark was to ensure that the subject stayed alert. The question mark appeared with a probability of 1.5% in both the letter-string and the face study.

The MEG measurements took place in a magnetically shielded room (Euroshield Oy, Eura, Finland), where the subject sat with his or her head resting against the measurement helmet. The room was dimly lit and the images were presented on a rear-projection screen with a data projector (Vista Pro; Electrohome, Kitchener, Ontario, Canada) controlled by a Macintosh Quadra 800 computer. The projection screen was placed in front of the subject at a distance of ~90 cm. All the images were presented at the same location on the screen, at a comfortable central viewing position. The letter-string stimuli used by Tarkiainen et al. (1999) occupied a visual angle of ~5 x 2 and the face images in the present study a visual angle of ~ . Subjects were asked to fixate the central part of the screen, where the images appeared. The images were shown on a grey background. The grey level matched the mean grey level of the stimulus images and was used to keep the luminance relatively constant and to reduce the strain on the eyes caused by viewing the stimuli. With our visual stimulus presentation hardware, the actual stimulus
image appeared 33 ms later than the computer-generated trigger that marked the onset of the stimulus. This delay was taken into account in the results, and the latencies refer to the appearance of the stimuli on the screen.

The anatomical information given by the subjects' MRIs was aligned with the coordinate system of the MEG measurement by defining a head coordinate system with three anatomical landmarks (the nasion and points immediately anterior to the ear canals). Four small coils attached to the subject's head allowed measurement of the position of the head with respect to the MEG helmet. Active brain areas were localized by means of the head coordinate system, in which the x-axis runs from left to right through the points anterior to the ear canals, the y-axis towards the nasion and the z-axis towards the top of the head.

During a recording session in the present study, the stimulus images appeared for 100 ms with a 2-s interstimulus interval. The only exception was the question mark, which was shown for 2 s to allow the subject to name the facial expression. The 60-ms stimulus presentation time used in the letter-string study (Tarkiainen et al., 1999) was also tested, but we found that it was not always long enough to allow recognition of the expressions, especially from the photographs.

MEG signals were band-pass filtered at 0.1-200 Hz and sampled at 600 Hz. Signals were averaged over an interval starting 0.2 s before and ending 0.8 s after the onset of the image. The vertical and horizontal electro-oculograms were monitored continuously, and epochs contaminated by eye movements and blinks were excluded from the averages. The smallest number of averages collected in one category was 87; on average, 102 epochs were averaged for each stimulus category. After the MEG sessions, to check for intersubject consistency in naming the expressions, the subjects were asked to name the different expressions in writing.

Data analysis
Averaged MEG responses were digitally low-pass filtered at 40 Hz. The baseline for the signal value was calculated from a time interval of 200 ms before image onset. Vectorview employs 204 planar gradiometers and 102 magnetometers, with two orthogonally oriented gradiometers and one magnetometer at each of the 102 measurement locations. Magnetometers may detect activity from deeper brain structures than gradiometers, but they are also more sensitive to external noise sources. Since we were interested in cortical activations, we based our analysis on the gradiometer data only.

Activated brain areas and their time courses were determined using equivalent current dipoles. For this purpose, each subject's brain was modelled as a sphere matching the local curvature of the occipital and temporal regions. In individual subjects, the MEG signals obtained within 700 ms after image onset were analysed in all stimulus conditions, and the separately determined current dipoles were combined into a single multidipole model of nine to 14 dipoles (mean 11), which accounted for the activation patterns in all stimulus conditions. By applying the individual multidipole models to the different stimulus conditions, we obtained the amplitude waveforms of the different source areas. On the basis of these amplitude waveforms, we identified the sources that showed systematic stimulus-dependent behaviour. This procedure is explained in detail in the Results section.
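The averaging procedure described above (epochs from 0.2 s before to 0.8 s after image onset, rejection of EOG-contaminated trials, a 200-ms pre-stimulus baseline and a 40-Hz digital low-pass filter on the average) can be sketched for a single channel as follows. The rejection threshold and the synthetic data are assumptions used only for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 600.0                       # sampling rate (Hz)
PRE, POST = 0.2, 0.8             # epoch limits around image onset (s)
EOG_LIMIT = 150e-6               # rejection threshold (V); an assumed value - the paper
                                 # states only that contaminated epochs were excluded

def average_epochs(meg, eog, onsets):
    """Average stimulus-locked epochs for one channel: reject epochs with large
    EOG deflections, subtract the 200-ms pre-stimulus baseline, and low-pass
    filter the average at 40 Hz (a single-channel sketch of the published steps)."""
    n_pre, n_post = int(PRE * FS), int(POST * FS)
    kept = []
    for onset in onsets:                              # onset: sample index of image appearance
        seg = meg[onset - n_pre: onset + n_post].astype(float)
        if np.ptp(eog[onset - n_pre: onset + n_post]) > EOG_LIMIT:
            continue                                  # blink or eye movement: drop this epoch
        kept.append(seg - seg[:n_pre].mean())         # baseline from the 200 ms before onset
    average = np.mean(kept, axis=0)
    b, a = butter(4, 40.0 / (FS / 2))                 # 40-Hz digital low-pass for the average
    return filtfilt(b, a, average), len(kept)

# Synthetic demo: 100 s of fake single-channel data, stimuli every 2 s, onsets already
# shifted by the 33-ms projector delay (as was done for the real latencies).
rng = np.random.default_rng(2)
meg = rng.normal(0.0, 1.0, int(100 * FS))
eog = rng.normal(0.0, 10e-6, int(100 * FS))
onsets = np.arange(int(2 * FS), int(98 * FS), int(2 * FS)) + int(0.033 * FS)
evoked, n_epochs = average_epochs(meg, eog, onsets)
```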
Statistical tests on the amplitude and latency behaviour of the selected source groups were carried out with ANOVA (analysis of variance) and the t-test. To avoid any bias that might result from sources that were not statistically independent, only one source per subject was accepted for these tests. If a subject had multiple sources belonging to the same source group, their mean value was used for the amplitude tests. The latency tests were performed with the source showing the shortest latency, but only if it had a clear activation peak in all stimulus conditions.

For visualization purposes, all individual source coordinates were transformed into standard brain coordinates. This alignment was based on a 12-parameter affine transformation (Woods et al., 1998) followed by refinement with a non-linear elastic transformation (Schormann et al., 1996), in which each individual brain was matched to the standard brain by comparing the greyscale values of the MRIs. All source location parameters reported in the Results section were calculated from the transformed coordinates.

Results

Summary of activation patterns observed in the letter-string study
The main findings of our earlier study of letter-string processing (Tarkiainen et al., 1999) are illustrated in Fig. 2, which shows the responses to one-item (single letters or symbols) and four-item (four-letter words or symbols) strings for the 10 individuals who also participated in the present study of face processing (data on empty noise patches and two-item strings are not shown). Two distinct patterns of activation were identified within 200 ms after stimulus onset.

The first pattern, named Type I, took place ~100 ms after image onset. These responses originated in the occipital cortex close to the midline (15 sources from eight subjects). They were not specific to the type of string, as letter-strings and symbol-strings evoked equally strong responses. However, Type I activation reacted strongly to the visual complexity of the images, and the strongest responses were those to the images with the largest amount of visual features (noise, number of items).

The second pattern of activation, named Type II, was seen ~150 ms after image onset. These responses originated in the inferior occipitotemporal region with LH dominance (10 LH and three RH sources from 10 subjects; LH sources were found in nine subjects and RH sources in three subjects). They were letter-string-specific in the sense that responses
were stronger for letter-strings than for symbol-strings and they collapsed at the highest level of noise-masking (in contrast to Type I activation).

Fig. 2 The main results of the earlier letter-string study (for a full account, see Tarkiainen et al., 1999) were recalculated for the 10 subjects who participated in the present study. Mean (+ standard error of the mean) amplitude behaviour is shown for the single letter/symbol (a to e) and four-letter/symbol (f to j) conditions. (A) Occipital Type I responses reached their maximum ~100 ms after image onset and increased with the level of noise and string length. Individual source amplitudes were scaled with respect to the noisiest word condition (j) and were averaged across all sources (15 sources from eight subjects). (B) Occipitotemporal Type II responses showed specificity for letter-strings ~150 ms after image onset. Individual source amplitudes were scaled with respect to the visible words condition (g) and averaged across all sources (13 sources from 10 subjects). Differences in the activation strengths in (A) and (B) were tested with paired t-tests (calculated from absolute amplitudes) for the following pairs: (i) noiseless letter- versus symbol-strings (a versus b and f versus g); (ii) noiseless versus noisy letter-strings (b versus c, d, e, and g versus h, i, j); and (iii) all corresponding one-item versus four-item strings (a versus f, b versus g, c versus h, d versus i, and e versus j). Significant differences are marked in the figures: *P < 0.05, **P < 0.01 and ***P < 0.001. The centre points of the source areas are indicated as dots on an MRI surface rendering of the standard brain geometry. The brain is viewed from the back and the sources are projected on the brain surface for easy visualization.

Type I activation for faces
In our study of letter-strings, we classified all early (<130 ms) sources showing a systematic increase in amplitude with the level of noise as the noise-sensitive Type I activation group. In the present study, the stimuli did not include the empty noise patches that were used in the selection procedure in the letter-string study (Tarkiainen et al., 1999), but the same noise levels were used to mask the drawn faces (categories 1-4, representing noise levels 0, 8, 16 and 24, respectively). Thus, the selection was now based on the comparison of noiseless drawn faces (FACES_0) versus faces masked with the highest level of noise (FACES_24). As only one comparison was used, we set a strict requirement for a significant difference: only those sources were selected for which the peak activation in response to heavy noise (FACES_24) was stronger than that to noiseless drawn faces (FACES_0) by at least 3.29 times the baseline standard deviation (corresponding to P < 0.001). Exactly as in the letter-string study, the upper time limit for activation peaks was set to 130 ms (in the FACES_24 condition). According to these criteria, we accepted 24 sources from nine subjects. The only subject who did not show any Type I sources did not show them in the earlier study (Tarkiainen et al., 1999) either. One subject, who had very strong and widespread occipital activity, had as many as seven Type I sources; the other subjects had on average two Type I sources.

The Type I source locations collected from all subjects are shown in Fig. 3A. Sources were located bilaterally in the occipital region, at a distance (mean ± SEM) of mm from the occipital midline. The behaviour of all Type I source areas in the different stimulus conditions is summarized in Fig. 3B and C. Figure 3B shows the Type I peak amplitudes, which were first normalized with respect to the FACES_24 condition [equal to 1; strength (mean ± SEM) nAm] and then averaged over all nine subjects (24 sources in total). If no clear peak was found for a condition (usually for some of the noiseless conditions), the baseline standard deviation was used as the amplitude and no peak latency was obtained for that condition. Similar to the findings in Tarkiainen et al. (1999), the activation strengths of these sources increased systematically with the level of noise. The effect of noise on Type I amplitudes was significant for drawn faces [repeated measures ANOVA with image type (FACES_0/8/16/24) as a within-subjects factor, F(3,24) = 15.0, P = , calculated from absolute amplitudes]. Type I activation strengths also differed among the noiseless image categories [repeated measures ANOVA with image type (FACES_0, PH_FACES, OBJECTS, PH_OBJECTS, MIXED) as a within-subjects factor, F(4,32) = 6.5, P < 0.001]. Pairwise comparisons showed that the amplitude difference between photographed faces (PH_FACES) and objects (PH_OBJECTS) was significant (P < 0.01, paired two-tailed t-test), but the differences between FACES_0 and PH_FACES and between FACES_0 and OBJECTS were not. The effect of noise on Type I amplitudes was very clear, as even the difference between noise level 8 (FACES_8) and all noiseless image types was significant (P < 0.05 for all paired two-tailed t-tests).

In the peak latencies (Fig. 3C), only one phenomenon was evident: the latencies were shorter for noiseless than for noisy images [repeated measures ANOVA with image type, all categories, as a within-subjects factor, F(7,42) = 3.5, P < 0.01; only the earliest source in the FACES_24 condition was included]. However, the onset latencies showed no difference between FACES_0 and FACES_24 (paired two-tailed t-test). The apparent difference in peak latencies therefore arises from the fact that the responses to noiseless stimuli were smaller in amplitude and thus reached their maximum earlier than the responses to noisy stimuli. The peak latency of all Type I sources for FACES_24 (mean ± SEM) was ms.

Fig. 3 (A) Locations of all Type I_f sources (Type I sources found in the present study) on an MRI surface rendering of the standard brain geometry. The brain is viewed from the back. (B) Mean (+ standard error of the mean) Type I_f source amplitudes, shown relative to the FACES_24 condition (set equal to 1). (C) Mean (+ standard error of the mean) Type I_f source peak latencies. The values are calculated across all Type I_f sources (24 sources from nine subjects).
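The statistical comparisons reported above follow the scheme described under Data analysis: one amplitude value per subject and condition, a repeated-measures ANOVA with image type as the within-subjects factor, and paired two-tailed t-tests for the pairwise comparisons. A schematic setup with placeholder numbers (pandas, SciPy and statsmodels assumed available; this is not the authors' analysis code):

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# One absolute peak amplitude per subject and condition (long format); the values
# below are random placeholders, not the measured amplitudes.
rng = np.random.default_rng(3)
subjects = [f"s{i}" for i in range(1, 10)]                     # the nine subjects with Type I sources
conditions = ["FACES_0", "FACES_8", "FACES_16", "FACES_24"]
rows = [{"subject": s, "image_type": c, "amplitude": 10 + 3 * j + rng.normal(0, 2)}
        for s in subjects for j, c in enumerate(conditions)]   # amplitude grows with noise level
data = pd.DataFrame(rows)

# Repeated-measures ANOVA with image type as the within-subjects factor
res = AnovaRM(data=data, depvar="amplitude", subject="subject", within=["image_type"]).fit()
print(res.anova_table)                                         # F(3, 24) for 9 subjects x 4 conditions

# Pairwise comparison with a paired two-tailed t-test, e.g. FACES_8 versus FACES_0
a = data.query("image_type == 'FACES_8'").sort_values("subject")["amplitude"].to_numpy()
b = data.query("image_type == 'FACES_0'").sort_values("subject")["amplitude"].to_numpy()
print(ttest_rel(a, b))
```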
Comparison of Type I activity between word and face processing
The behaviour of the Type I sources in the present study on faces (henceforth referred to as Type I_f sources) was very similar to that found in our earlier study of letter-strings (Tarkiainen et al., 1999) (henceforth referred to as Type I_ls sources). The peak latencies measured for faces were identical to those measured in the letter-string reading task, namely ms for the four-letter words masked with the highest level of noise and ms for the drawn faces masked with the highest level of noise (FACES_24). The onset latencies were also indistinguishable ( and ms for the words with noise level 24 and FACES_24, respectively). However, the amplitudes differed clearly: the mean peak amplitude for FACES_24 was nAm, whereas for four-letter words with noise level 24 the mean value was only nAm. The source locations were, on average, similar between the two studies. The sources were located mainly in the visual cortices surrounding V1 and were distributed along the ventral visual stream. When only the earliest Type I source was selected for each subject, the locations could be compared at the individual level. The earliest Type I_f and Type I_ls sources were typically not located at exactly the same position in the same subject, but were separated on average by mm (averaged over all subjects showing Type I behaviour). However, paired t-tests showed no systematic differences between Type I_f and Type I_ls source locations along any coordinate axis.

The total number of Type I sources was higher in the present study (24 sources) than in our earlier study (15 sources for these 10 subjects). The differences in the number and strength of Type I sources between letter-string and face processing are probably caused mainly by differences in stimulus size and in the visual presentation hardware, which affected the luminance of the images. In addition, the calibration of the Vectorview MEG system used in the present study differs from that of the Neuromag-122 system used in the earlier, letter-string experiment. The measurement array of the Vectorview system also covers the lower occipital areas better, which may have enabled us to detect source areas not as readily accessible with the Neuromag-122 device. The slightly modified selection criteria for Type I activation may also have generated small differences in the results. Therefore, we do not consider the differences in source number or strength important and they will not be discussed further.

Relationship of Type I activity to visual complexity of images
Type I_ls activity increased with the level of noise as well as with the length of the string. A clear increase with the amount of masking noise was also evident in Type I_f activity, whereas only small differences were seen between the noiseless image types. The most important factor affecting Type I activation may thus be the visual complexity of the image. To test this hypothesis, we defined the complexity of our stimulus images in the following way. Each image was represented by an m × n matrix, where m is the height and n the width of the image in pixels, and each matrix element gives the greyscale value of the corresponding pixel. For all image matrices belonging to the same stimulus category, we calculated the column-wise (it could equally have been row-wise) standard deviations of the greyscale values and used their mean value to represent that stimulus category. The mean standard deviations calculated in this way for the face and object stimuli are shown in Fig. 4A. The result closely resembled the source strength behaviour seen in Fig. 3B. As illustrated in Fig. 4B, we obtained a strong correlation (r = 0.97, P < ) between the mean peak amplitudes of the Type I sources (averaged over all subjects) and the mean standard deviations of the corresponding stimulus images (averaged over all images belonging to the same category) when the results from the present study (eight stimulus categories) and the letter-string study (19 stimulus categories; Tarkiainen et al., 1999) were combined.

Fig. 4 (A) Mean standard deviation (for details, see Results) of the greyscale values (0-255) of each stimulus category. Note the similarity to Fig. 3B. (B) Correlation between the mean standard deviation of the image categories and the Type I mean relative amplitudes, calculated for all 19 stimulus categories of the letter-string study (triangles) and for the eight stimulus categories of the present study (squares).
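The complexity index defined above is straightforward to compute. A minimal sketch, assuming each stimulus is available as a 2-D NumPy array of greyscale values (synthetic placeholder images are used below, not the real stimuli):

```python
import numpy as np

def category_complexity(images):
    """Mean column-wise standard deviation of the greyscale values, averaged
    over all images belonging to one stimulus category (the index defined above)."""
    per_image = []
    for img in images:                              # img: 2-D array, m rows x n columns
        col_sd = img.astype(float).std(axis=0)      # standard deviation of each pixel column
        per_image.append(col_sd.mean())             # mean over the n columns of this image
    return float(np.mean(per_image))                # mean over the images of the category

# Demo with placeholder images; clipping is used here only to keep the demo in range -
# the actual stimuli were masked with the redraw rule given under Material and methods.
rng = np.random.default_rng(0)
blank = np.full((120, 160), 32.0)                   # uniform mid-grey 'drawing' background
faces_0 = [blank.copy() for _ in range(9)]
faces_24 = [np.clip(f + rng.normal(0, 24, f.shape), 0, 63) for f in faces_0]
print(category_complexity(faces_0), category_complexity(faces_24))
# The heavily masked category yields a much larger index, mirroring its larger Type I response.
```

With one such index per category and the corresponding mean relative Type I amplitudes, the reported correlation corresponds to np.corrcoef(indices, amplitudes)[0, 1].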
Type II activation for faces
In our study of reading, the letter-string-specific Type II sources were selected by comparing the activations between four-item letter- and symbol-strings (Tarkiainen et al., 1999). In the present study, we made a similar comparison between the face and object categories. Sources were included in the face-specific Type II category when the amplitude waveforms peaked after the Type I activation of the same subject but before 200 ms (in FACES_0), and when the peak amplitudes for faces exceeded those for objects (both in drawn and photographed form) by at least 1.96 times the baseline standard deviation (corresponding to P < 0.05). Thus, our definition of 'face-specificity' does not mean activation only for face images, but activation that is clearly stronger for faces than for objects. Sources that fulfilled these criteria were found in all 10 of our subjects, and a total of 19 sources (one to three per subject) were classified as Type II. Eleven of these sources were located in the right inferior occipitotemporal area, one in the RH but close to the occipital midline, and seven in the left inferior occipitotemporal cortex (Fig. 5A). All 10 subjects had at least one RH Type II source and seven subjects also had one additional LH source.
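The two selection rules used in this and the preceding section (at least 3.29 times the baseline standard deviation before 130 ms for Type I; at least 1.96 times the baseline standard deviation, later than the subject's Type I peak but before 200 ms, for Type II) can be restated compactly in code. The waveform container and the peak picking below are illustrative assumptions, not the actual source-modelling pipeline:

```python
import numpy as np

def peak_in_window(waveform, times, t_max):
    """Largest source amplitude between image onset and t_max seconds, and its latency."""
    mask = (times >= 0.0) & (times <= t_max)
    idx = int(np.argmax(waveform[mask]))
    return waveform[mask][idx], times[mask][idx]

def is_type1(waveforms, times, baseline_sd):
    """Noise-sensitive Type I source: the FACES_24 peak (before 130 ms) exceeds the
    FACES_0 peak by at least 3.29 x the baseline standard deviation (P < 0.001)."""
    p24, _ = peak_in_window(waveforms["FACES_24"], times, 0.130)
    p0, _ = peak_in_window(waveforms["FACES_0"], times, 0.130)
    return (p24 - p0) >= 3.29 * baseline_sd

def is_type2(waveforms, times, baseline_sd, type1_peak_latency):
    """Face-specific Type II source: the FACES_0 peak falls after the subject's Type I
    peak but before 200 ms, and exceeds the peaks for drawn and photographed objects
    by at least 1.96 x the baseline standard deviation (P < 0.05)."""
    p_face, lat = peak_in_window(waveforms["FACES_0"], times, 0.200)
    if lat <= type1_peak_latency:
        return False
    object_peaks = [peak_in_window(waveforms[c], times, 0.200)[0]
                    for c in ("OBJECTS", "PH_OBJECTS")]
    return all(p_face - p >= 1.96 * baseline_sd for p in object_peaks)
```

Here 'waveforms' is assumed to be a dictionary mapping condition names to the source amplitude waveform of one modelled source, sampled at the same time points as 'times'.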

Figure 5B shows the average amplitude behaviour of all Type II source areas, scaled with respect to FACES_0 (mean ± SEM amplitude nAm). As expected, the differences between faces and the other noiseless image types were clear [repeated measures ANOVA with image type (FACES_0, PH_FACES, OBJECTS, PH_OBJECTS, MIXED) as a within-subjects factor, F(4,36) = 27.4, P < , calculated from absolute amplitudes]. The effect of noise was also significant [repeated measures ANOVA with image type (FACES_0/8/16/24) as a within-subjects factor, F(3,27) = 19.0, P < ]. The activation strength of the Type II sources increased slightly with low noise (level 8), but collapsed at the highest noise level. Interestingly, MIXED faces evoked stronger activation than OBJECTS (P < 0.01, paired two-tailed t-test) but weaker activation than FACES_0 (P < 0.01). The activation strengths of FACES_0 and PH_FACES did not differ.

The mean peak latencies (Fig. 5C) were not significantly different among the conditions [repeated measures ANOVA with stimulus type, all categories, as a within-subjects factor, F(7,35) = 2.0, P = 0.08; only the earliest source in the FACES_0 condition was included], but pairwise comparisons revealed some differences. The apparently longer latencies for objects than for faces reached significance only for the photographic images (PH_FACES versus PH_OBJECTS, P < 0.05, paired two-tailed t-test). Responses to MIXED faces were also significantly delayed with respect to those to FACES_0 (P = 0.001). The peak latency of all Type II sources for FACES_0 (mean ± SEM) was ms.

In Fig. 5, the results are pooled over the LH and RH Type II sources. Hemispheric comparison was only possible for the seven subjects who had a Type II source in both the left and the right occipitotemporal cortex. The main effect of hemisphere was not significant [repeated measures ANOVA with hemisphere (RH, LH) and image type, all categories, as within-subjects factors, F(1,6) = 1.1, P = 0.3], but the two-way interaction of hemisphere × image type reached significance [F(7,42) = 2.6, P < 0.05]. Pairwise comparisons revealed a significant difference between LH and RH activation only in the FACES_24 condition (P < 0.01, paired two-tailed t-test), in which LH activation ( nAm) was stronger than RH activation (8 ± 3 nAm). The mean onset and peak latencies in the FACES_0 condition were and ms, respectively, in the right occipitotemporal cortex and and ms in the left occipitotemporal cortex. The mean distance of all Type II sources from the midline was mm. The LH and RH sources were located symmetrically with respect to the head midline.

Fig. 5 (A) Locations of all Type II_f sources presented on an MRI surface rendering of the standard brain geometry. The brain is viewed from the back but rotated slightly to the left and right. (B) Mean (+ SEM) Type II_f source amplitudes, shown relative to the FACES_0 condition (set equal to 1). (C) Mean (+ SEM) Type II_f source peak latencies. The values are calculated across all Type II_f sources (19 sources from 10 subjects).

Comparison of Type II activity between word and face processing
The hemispheric distribution of face-specific (12 RH and seven LH) and letter-string-specific (three RH and 10 LH) sources was different (P < 0.05, Fisher's exact probability test). The letter-string-specific Type II_ls sources showed LH dominance (P < 0.05, binomial test), whereas the face-specific Type II_f sources were found more bilaterally, their number suggesting only a slight preference for the RH.
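These hemispheric counts can be checked with standard tests; for example (SciPy assumed; binomtest requires SciPy 1.7 or later, and the one-sided alternative is an assumption about how the reported binomial test was set up):

```python
from scipy.stats import fisher_exact, binomtest

# Rows: face-specific and letter-string-specific Type II sources; columns: RH, LH
table = [[12, 7],
         [3, 10]]
odds_ratio, p_distribution = fisher_exact(table)            # do the hemispheric distributions differ?
p_lh_dominance = binomtest(10, n=13, p=0.5,                  # 10 of 13 letter-string sources in the LH
                           alternative="greater").pvalue     # one-sided test for LH preponderance (assumed)

print(f"Fisher exact P = {p_distribution:.3f}, binomial P = {p_lh_dominance:.3f}")
```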

When the hemispheric distribution was ignored, the locations of the source areas in the inferior occipitotemporal cortex were very similar. The mean distance from the midline was mm for both Type II_ls and Type II_f sources (the one midline RH Type II_f source was excluded). In the inferior-superior (z-coordinate) direction, the mean value for Type II_ls and Type II_f sources was and mm, respectively. The only small difference was seen in the anterior-posterior (y-coordinate) direction, where the mean values for Type II_ls and Type II_f sources were ± and ± mm, respectively, indicating that the centre of activation of the Type II_f sources was located on average 6 mm anterior to that of the Type II_ls sources. This difference reached significance in a two-tailed t-test when all Type II_ls and Type II_f source locations were considered (P < 0.05), but not when only the LH source locations were considered (in five out of six individuals the Type II_f source was anterior to the Type II_ls source). The mean locations of the Type II sources in Talairach coordinates (Talairach and Tournoux, 1988) were (x, y, z) -37, -70, -12 and 35, -73, -10 mm for the LH and RH Type II_ls sources, respectively, and -37, -61, -13 and 33, -68, -10 mm for the LH and RH Type II_f sources. Fig. 6 shows the mean locations of the Type II_ls and Type II_f sources in a standard brain.

The timing of Type II activity in face and letter-string processing was identical. The mean peak latency of all Type II_ls sources for four-letter words without noise was ms and that of the Type II_f sources in the FACES_0 condition was ms. No difference was seen in the onset latencies ( ms and ms, respectively) either. The activation was again stronger in the present data set: the mean peak amplitude for the Type II_ls sources for four-letter words without noise was nAm, versus nAm for the Type II_f sources in the FACES_0 condition.

Fig. 6 Mean locations of the left- and right-hemisphere Type II sources presented on (from left to right) coronal, horizontal and sagittal (left hemisphere only) slices of the standard brain MRIs. White lines denote the relative locations of the slices. The mean locations of the face-specific Type II_f sources are marked with white ellipses and the mean locations of the letter-string-specific Type II_ls sources with black rectangles. The dimensions (axes) of the ellipses and rectangles are equal to twice the standard error of the mean. The only clearly deviant (close to midline) Type II_f source was excluded from the calculation of the right-hemisphere Type II_f mean location. In the coronal and horizontal slices, left is on the left.

Discussion
Our results show that the early visual processing of faces and letter-strings (summarized in Fig. 7) consists of at least two distinguishable processes taking place in the occipital and occipitotemporal cortices within 200 ms after stimulus presentation. The first process, which we have named Type I, took place ~100 ms after image onset in areas surrounding the V1 cortex. This activity was not sensitive to the specific content of the stimulus and was common to the processing of both letter-strings and faces. Type I sources showed a monotonic increase in signal strength as a function of the visual complexity of the images. Some 30-50 ms after Type I activation, and ~150 ms after stimulus onset, stimulus-specific activation (Type II) emerged in the inferior occipitotemporal cortices. Although both letter-strings and faces activated largely overlapping areas in the inferior occipitotemporal cortex, the hemispheric distribution of these areas was different: letter-string processing was concentrated in the LH, whereas face processing occurred more bilaterally, apparently with slight RH dominance.

Visual feature analysis
The combined letter-string and face data revealed that Type I activation strength correlates well with a simple image parameter, the mean standard deviation of the greyscale values, providing strong support for the interpretation that Type I activation is related to low-level visual feature analysis, such as the extraction of oriented contrast borders. Spatially and temporally similar increased activation for scrambled images has also been reported, e.g. by Allison et al. (1994, 1999), Bentin et al. (1996) and Halgren et al. (2000). Both for faces and letter-strings, Type I sources were located in the occipital cortex close to the midline. At the individual level, some differences were seen in the Type I source locations. This, however, was not surprising considering the differences in the stimulus presentation and measurement hardware between the two studies. The mean location parameters matched well between the face and letter-string measurements, and the timing of Type I activation was identical in the two measurements. Type I activation elicited by faces is thus essentially similar to the activation evoked by letter-strings.

Fig. 7 Early (<200 ms) processing of face and letter-string information consists of at least two distinct stages. The middle part of the figure illustrates the first stage (Type I), which takes place in the occipital cortex ~100 ms after image onset. This activation does not differ between face and letter-string processing. The locations of Type I sources evoked by letter-strings are marked with black squares and the locations of Type I sources activated by faces with white circles. The second activation pattern, ~150 ms after image onset, is specific to the stimulus type (top part of the figure), with strong lateralization to the left hemisphere for letter-strings (black squares) and slight right-hemisphere preponderance for faces (white circles). See Discussion for details. Sources are gathered from all 10 subjects and their locations are shown on MRI surface renderings of the standard brain geometry.

Object-level processing
Face-specific Type II sources were located in the inferior occipitotemporal cortices bilaterally, even though slightly more sources were located in the RH. These results are in good accordance with those of Sergent et al. (1992), Haxby et al. (1994), Kanwisher et al. (1997), Gorno-Tempini et al. (1998), Wojciulik et al. (1998), Allison et al. (1999) and Halgren et al. (2000). If the hemispheric distribution is ignored, the locations of the face- and letter-string-specific Type II sources were very similar. The mean location parameters differed only in the anterior-posterior direction, as the centres of the face-specific source areas were on average 6 mm more anterior than the centres of the letter-string-specific source areas. This difference is small and, taking into account the changes in the stimulus and measurement hardware, perhaps unimportant. However, we cannot exclude the possibility that the differences in Type II source locations reflect the spatial separation of functionally distinct neural systems in the occipitotemporal cortex. Interestingly, on the basis of intracranial event-related potential (ERP) studies, Puce et al. (1999) reported that face-specific sites were typically anterior to letter-specific sites in the occipitotemporal cortex.

More than anything else, we want to stress the extreme similarity of the letter-string- and face-specific occipitotemporal activations. Even though the hemispheric balance was different, the activated areas within the inferior occipitotemporal cortex were very close to each other, and the timing of the two activations was practically identical. This is noteworthy, since the visual properties of faces are quite different from those of letter-strings and, importantly, since reading and face processing also differ on an evolutionary scale. Still, it seems that both skills use highly similar cortical systems within the visual domain. We conjecture that the inferior occipitotemporal activations at ~150 ms after stimulus onset represent a more general object-level processing stage that takes place after the common low-level analysis of visual features and acts as a gateway to higher processing areas. In our studies, this activation was seen with two different classes of visual stimuli united by their importance to the modern-day human, namely faces and letter-strings. As the ability to recognize letter-strings accurately and quickly develops through practice, it is possible that similar abilities, if needed, can also be developed for other classes of objects and their signature detected at the cortical level. This is demonstrated by Allison et al. (1994), who identified responses specific to Arabic numbers in areas close to those responding specifically to faces and letter-strings, and by Gauthier et al. (1999, 2000), who showed that the face-specific fusiform area can be activated (in a manner similar to activation by faces) by other classes of objects (a novel group called 'greebles', birds and cars) in individuals who have had a lot of practice in recognizing the objects in their field of expertise.

One striking feature of our results is the high level of occipitotemporal activation evoked by the very simple drawn images. Halgren et al. (2000) reported that schematic face sketches evoked ~30% less face-specific occipitotemporal activation than face photographs. We did not observe such a general difference (Fig. 5B), although our drawn images were simpler than those used by Halgren et al. (2000). A likely reason for this discrepancy is the short stimulus presentation time we used, which may not always allow full recognition and categorization of complex face photographs.

The MIXED face images, which contained the same face components as the drawn faces but in randomized positions and orientations, placed inside different geometrical shapes, evoked weaker and somewhat delayed activation compared with the normal face images. However, they still activated the face-specific occipitotemporal areas more than fully meaningful images of familiar objects; this shows that even a few drawn lines that resemble parts of faces are enough to activate these areas, and it supports the notion of structural encoding of different face components (Eimer, 1998). All in all, our results once again demonstrate the remarkable ability of the human brain to recognize faces that are very unnatural in their appearance.

One might be tempted to question the face-specificity of our Type II responses and to explain it only as a consequence of our task, which directed the subjects to pay more attention to the faces than to the other image categories. Wojciulik et al. (1998), using functional MRI, showed that fusiform face activation can be modulated by voluntary attention. On the other hand, Puce et al. (1999) demonstrated that top-down influences (semantic priming and face-name learning and identification) did not affect the ventral face-specificity within 200 ms of image onset, but did affect the responses measured from the same locations later in time. Even if attention plays a major role in these early object-specific responses, dissociation of the processing pathways clearly occurred ~150 ms after stimulus onset, with lateralization to the left inferior occipitotemporal cortex for attended letter-strings and a more bilateral response pattern for attended faces. Whether attention amplifies the lateralization remains to be answered by further studies.

In conclusion, the strong correlation of the noise-sensitive occipital activation at 100 ms with stimulus complexity, which was similar for letter-strings and faces, confirms the role of this response in low-level visual feature processing. The subsequent inferior occipitotemporal activation at 150 ms, similar in timing and location but different in hemispheric distribution for letter-strings and faces, apparently reflects the earliest stage of stimulus-specific object-level processing.

Acknowledgements
We wish to thank Mika Seppä for help in transforming the individual subject data to standard brain coordinates, Martin Tovee for volunteering to serve as our photographic model, Päivi Helenius for help with the statistical analysis and for comments on the manuscript, and Minna Vihla for comments on the manuscript. This work was supported by the Academy of Finland (grant 32731), the Ministry of Education of Finland, the Human Frontier Science Program (grant RG82/1997-B), the Wellcome Trust and the European Union's Large-Scale Facility Neuro-BIRCH II at the Low Temperature Laboratory, Helsinki University of Technology.

References
Allison T, McCarthy G, Nobre A, Puce A, Belger A. Human extrastriate visual cortex and the perception of faces, words, numbers, and colors. [Review]. Cereb Cortex 1994; 4: 544-54.

Allison T, Puce A, Spencer DD, McCarthy G. Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cereb Cortex 1999; 9: 415-30.

Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. J Cogn Neurosci 1996; 8: 551-65.

Clark VP, Keil K, Maisog JM, Courtney S, Ungerleider LG, Haxby JV. Functional magnetic resonance imaging of human visual cortex during face matching: a comparison with positron emission tomography. Neuroimage 1996; 4: 1-15.

Damasio AR, Tranel D, Damasio H. Face agnosia and the neural substrates of memory. [Review]. Annu Rev Neurosci 1990; 13: 89-109.

De Renzi E, Perani D, Carlesimo GA, Silveri MC, Fazio F. Prosopagnosia can be associated with damage confined to the right hemisphere - an MRI and PET study and a review of the literature. [Review]. Neuropsychologia 1994; 32: 893-902.

Eimer M. Does the face-specific N170 component reflect the activity of a specialized eye processor? Neuroreport 1998; 9: 2945-8.

Gauthier I, Tarr MJ, Anderson AW, Skudlarski P, Gore JC. Activation of the middle fusiform 'face area' increases with expertise in recognizing novel objects. Nat Neurosci 1999; 2: 568-73.

Gauthier I, Skudlarski P, Gore JC, Anderson AW. Expertise for cars and birds recruits brain areas involved in face recognition. Nat Neurosci 2000; 3: 191-7.

Gorno-Tempini ML, Price CJ, Josephs O, Vandenberghe R, Cappa SF, Kapur N, et al. The neural systems sustaining face and proper-name processing. Brain 1998; 121: 2103-18.

Halgren E, Raij T, Marinkovic K, Jousmäki V, Hari R. Cognitive response profile of the human fusiform face area as determined by MEG. Cereb Cortex 2000; 10: 69-81.

Hämäläinen M, Hari R, Ilmoniemi RJ, Knuutila J, Lounasmaa OV. Magnetoencephalography - theory, instrumentation, and applications to noninvasive studies of the working human brain. [Review]. Rev Mod Phys 1993; 65: 413-97.

Haxby JV, Horwitz B, Ungerleider LG, Maisog JM, Pietrini P, Grady CL. The functional organization of human extrastriate cortex: a PET-rCBF study of selective attention to faces and locations. J Neurosci 1994; 14: 6336-53.

Kanwisher N, McDermott J, Chun MM. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J Neurosci 1997; 17: 4302-11.

Kuriki S, Takeuchi F, Hirata Y. Neural processing of words in the human extrastriate visual cortex. Brain Res Cogn Brain Res 1998; 6: 193-203.

Lu ST, Hämäläinen MS, Hari R, Ilmoniemi RJ, Lounasmaa OV, Sams M, et al. Seeing faces activates three separate areas outside the occipital visual cortex in man. Neuroscience 1991; 43: 287-90.

Malach R, Reppas JB, Benson RR, Kwong KK, Jiang H, Kennedy WA, et al. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci USA 1995; 92: 8135-9.


More information

Supplementary materials for: Executive control processes underlying multi- item working memory

Supplementary materials for: Executive control processes underlying multi- item working memory Supplementary materials for: Executive control processes underlying multi- item working memory Antonio H. Lara & Jonathan D. Wallis Supplementary Figure 1 Supplementary Figure 1. Behavioral measures of

More information

The neural code for interaural time difference in human auditory cortex

The neural code for interaural time difference in human auditory cortex The neural code for interaural time difference in human auditory cortex Nelli H. Salminen and Hannu Tiitinen Department of Biomedical Engineering and Computational Science, Helsinki University of Technology,

More information

A defense of the subordinate-level expertise account for the N170 component

A defense of the subordinate-level expertise account for the N170 component B. Rossion et al. / Cognition 85 (2002) 189 196 189 COGNITION Cognition 85 (2002) 189 196 www.elsevier.com/locate/cognit Discussion A defense of the subordinate-level expertise account for the N170 component

More information

Main Study: Summer Methods. Design

Main Study: Summer Methods. Design Main Study: Summer 2000 Methods Design The experimental design is within-subject each participant experiences five different trials for each of the ten levels of Display Condition and for each of the three

More information

PSYCHOLOGICAL SCIENCE. Research Report

PSYCHOLOGICAL SCIENCE. Research Report Research Report SEMANTIC CATEGORIZATION IN THE HUMAN BRAIN: Spatiotemporal Dynamics Revealed by Magnetoencephalography Andreas Löw, 1 Shlomo Bentin, 2 Brigitte Rockstroh, 1 Yaron Silberman, 2 Annette Gomolla,

More information

Seeing faces in the noise: Stochastic activity in perceptual regions of the brain may influence the perception of ambiguous stimuli

Seeing faces in the noise: Stochastic activity in perceptual regions of the brain may influence the perception of ambiguous stimuli R175B BC JP To printer - need negs for pgs, 3,, and Psychonomic Bulletin & Review,?? (?),???-??? Seeing faces in the noise: Stochastic activity in perceptual regions of the brain may influence the perception

More information

The time-course of intermodal binding between seeing and hearing affective information

The time-course of intermodal binding between seeing and hearing affective information COGNITIVE NEUROSCIENCE The time-course of intermodal binding between seeing and hearing affective information Gilles Pourtois, 1, Beatrice de Gelder, 1,,CA Jean Vroomen, 1 Bruno Rossion and Marc Crommelinck

More information

FFA: a flexible fusiform area for subordinate-level visual processing automatized by expertise

FFA: a flexible fusiform area for subordinate-level visual processing automatized by expertise commentary FFA: a flexible fusiform area for subordinate-level visual processing automatized by expertise Michael J. Tarr and Isabel Gauthier Much evidence suggests that the fusiform face area is involved

More information

(for Trends in Cognitive Science) Cindy Bukach 1,3. Isabel Gauthier 1,3. Michael J. Tarr 2,3. Research Center, Vanderbilt University, Nashville, TN

(for Trends in Cognitive Science) Cindy Bukach 1,3. Isabel Gauthier 1,3. Michael J. Tarr 2,3. Research Center, Vanderbilt University, Nashville, TN BEYOND FACES AND MODULARITY (for Trends in Cognitive Science) Cindy Bukach 1,3 Isabel Gauthier 1,3 Michael J. Tarr 2,3 1 Department of Psychology, Center for Integrative and Cognitive Neuroscience, Vanderbilt

More information

Accounts for the N170 face-effect: a reply to Rossion, Curran, & Gauthier

Accounts for the N170 face-effect: a reply to Rossion, Curran, & Gauthier S. Bentin, D. Carmel / Cognition 85 (2002) 197 202 197 COGNITION Cognition 85 (2002) 197 202 www.elsevier.com/locate/cognit Discussion Accounts for the N170 face-effect: a reply to Rossion, Curran, & Gauthier

More information

Brain Activation during Face Perception: Evidence of a Developmental Change

Brain Activation during Face Perception: Evidence of a Developmental Change Brain Activation during Face Perception: Evidence of a Developmental Change E. H. Aylward 1, J. E. Park 1, K. M. Field 1, A. C. Parsons 1, T. L. Richards 1, S. C. Cramer 2, and A. N. Meltzoff 1 Abstract

More information

Virtual Brain Reading: A Connectionist Approach to Understanding fmri

Virtual Brain Reading: A Connectionist Approach to Understanding fmri Virtual Brain Reading: A Connectionist Approach to Understanding fmri Rosemary A. Cowell (r.a.cowell@kent.ac.uk) Computing Laboratory, University of Kent, Canterbury, CT2 7NF, UK David E. Huber (dhuber@psy.ucsd.edu)

More information

R.J. Dolan, a, * H.J. Heinze, b R. Hurlemann, b and H. Hinrichs b

R.J. Dolan, a, * H.J. Heinze, b R. Hurlemann, b and H. Hinrichs b www.elsevier.com/locate/ynimg NeuroImage 32 (2006) 778 789 Magnetoencephalography (MEG) determined temporal modulation of visual and auditory sensory processing in the context of classical conditioning

More information

Face activated neurodynamic cortical networks

Face activated neurodynamic cortical networks Med Biol Eng Comput (11) 49:531 543 DOI.7/s11517-11-74-4 SPECIAL ISSUE - ORIGINAL ARTICLE Face activated neurodynamic cortical networks Ana Susac Risto J. Ilmoniemi Doug Ranken Selma Supek Received: 22

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 10: Brain-Computer Interfaces Ilya Kuzovkin So Far Stimulus So Far So Far Stimulus What are the neuroimaging techniques you know about? Stimulus So Far

More information

Does stimulus quality affect the physiologic MRI responses to brief visual activation?

Does stimulus quality affect the physiologic MRI responses to brief visual activation? Brain Imaging 0, 277±28 (999) WE studied the effect of stimulus quality on the basic physiological response characteristics of oxygenationsensitive MRI signals. Paradigms comprised a contrastreversing

More information

Word Length Processing via Region-to-Region Connectivity

Word Length Processing via Region-to-Region Connectivity Word Length Processing via Region-to-Region Connectivity Mariya Toneva Machine Learning Department, Neural Computation Carnegie Mellon University Data Analysis Project DAP committee: Tom Mitchell, Robert

More information

COGS 101A: Sensation and Perception

COGS 101A: Sensation and Perception COGS 101A: Sensation and Perception 1 Virginia R. de Sa Department of Cognitive Science UCSD Lecture 6: Beyond V1 - Extrastriate cortex Chapter 4 Course Information 2 Class web page: http://cogsci.ucsd.edu/

More information

A Biologically Plausible Approach to Cat and Dog Discrimination

A Biologically Plausible Approach to Cat and Dog Discrimination A Biologically Plausible Approach to Cat and Dog Discrimination Bruce A. Draper, Kyungim Baek, Jeff Boody Department of Computer Science Colorado State University Fort Collins, CO 80523-1873 U.S.A. draper,beak,boody@cs.colostate.edu

More information

Frank Tong. Department of Psychology Green Hall Princeton University Princeton, NJ 08544

Frank Tong. Department of Psychology Green Hall Princeton University Princeton, NJ 08544 Frank Tong Department of Psychology Green Hall Princeton University Princeton, NJ 08544 Office: Room 3-N-2B Telephone: 609-258-2652 Fax: 609-258-1113 Email: ftong@princeton.edu Graduate School Applicants

More information

Event-Related fmri and the Hemodynamic Response

Event-Related fmri and the Hemodynamic Response Human Brain Mapping 6:373 377(1998) Event-Related fmri and the Hemodynamic Response Randy L. Buckner 1,2,3 * 1 Departments of Psychology, Anatomy and Neurobiology, and Radiology, Washington University,

More information

The effects of single-trial averaging upon the spatial extent of fmri activation

The effects of single-trial averaging upon the spatial extent of fmri activation BRAIN IMAGING NEUROREPORT The effects of single-trial averaging upon the spatial extent of fmri activation Scott A. Huettel,CA and Gregory McCarthy, Brain Imaging and Analysis Center, Duke University Medical

More information

Supplemental Information

Supplemental Information Current Biology, Volume 22 Supplemental Information The Neural Correlates of Crowding-Induced Changes in Appearance Elaine J. Anderson, Steven C. Dakin, D. Samuel Schwarzkopf, Geraint Rees, and John Greenwood

More information

Task modulation of brain activity related to familiar and unfamiliar face processing: an ERP study

Task modulation of brain activity related to familiar and unfamiliar face processing: an ERP study Clinical Neurophysiology 110 (1999) 449±462 Task modulation of brain activity related to familiar and unfamiliar face processing: an ERP study B. Rossion a, b, *, S. Campanella a, C.M. Gomez d, A. Delinte

More information

Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study

Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study Cerebral Cortex March 2006;16:415-424 doi:10.1093/cercor/bhi121 Advance Access publication June 15, 2005 Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study Mika

More information

fmr-adaptation reveals a distributed representation of inanimate objects and places in human visual cortex

fmr-adaptation reveals a distributed representation of inanimate objects and places in human visual cortex www.elsevier.com/locate/ynimg NeuroImage 28 (2005) 268 279 fmr-adaptation reveals a distributed representation of inanimate objects and places in human visual cortex Michael P. Ewbank, a Denis Schluppeck,

More information

Bodies capture attention when nothing is expected

Bodies capture attention when nothing is expected Cognition 93 (2004) B27 B38 www.elsevier.com/locate/cognit Brief article Bodies capture attention when nothing is expected Paul E. Downing*, David Bray, Jack Rogers, Claire Childs School of Psychology,

More information

Inversion and contrast-reversal effects on face processing assessed by MEG

Inversion and contrast-reversal effects on face processing assessed by MEG available at www.sciencedirect.com www.elsevier.com/locate/brainres Research Report Inversion and contrast-reversal effects on face processing assessed by MEG Roxane J. Itier a,, Anthony T. Herdman b,

More information

Selectivity for the Human Body in the Fusiform Gyrus

Selectivity for the Human Body in the Fusiform Gyrus J Neurophysiol 93: 603 608, 2005. First published August 4, 2004; doi:10.1152/jn.00513.2004. Selectivity for the Human Body in the Fusiform Gyrus Marius V. Peelen and Paul E. Downing School of Psychology,

More information

Mental Representation of Number in Different Numerical Forms

Mental Representation of Number in Different Numerical Forms Current Biology, Vol. 13, 2045 2050, December 2, 2003, 2003 Elsevier Science Ltd. All rights reserved. DOI 10.1016/j.cub.2003.11.023 Mental Representation of Number in Different Numerical Forms Anna Plodowski,

More information

Familiar-face recognition and comparison: source analysis of scalp-recorded event-related potentials

Familiar-face recognition and comparison: source analysis of scalp-recorded event-related potentials Clinical Neurophysiology 115 (2004) 880 886 www.elsevier.com/locate/clinph Familiar-face recognition and comparison: source analysis of scalp-recorded event-related potentials Elena V. Mnatsakanian a,b,

More information

Domain specificity versus expertise: factors influencing distinct processing of faces

Domain specificity versus expertise: factors influencing distinct processing of faces D. Carmel, S. Bentin / Cognition 83 (2002) 1 29 1 COGNITION Cognition 83 (2002) 1 29 www.elsevier.com/locate/cognit Domain specificity versus expertise: factors influencing distinct processing of faces

More information

Processing Faces and Facial Expressions

Processing Faces and Facial Expressions in press 2003. Neuropsychology Review, 13(3), *** *** Processing Faces and Facial Expressions Mette T. Posamentier The University of Texas of Dallas Hervé Abdi The University of Texas of Dallas This paper

More information

Gum Chewing Maintains Working Memory Acquisition

Gum Chewing Maintains Working Memory Acquisition International Journal of Bioelectromagnetism Vol. 11, No. 3, pp.130-134, 2009 www.ijbem.org Gum Chewing Maintains Working Memory Acquisition Yumie Ono a, Kanako Dowaki b, Atsushi Ishiyama b, Minoru Onozuka

More information

Lateral Geniculate Nucleus (LGN)

Lateral Geniculate Nucleus (LGN) Lateral Geniculate Nucleus (LGN) What happens beyond the retina? What happens in Lateral Geniculate Nucleus (LGN)- 90% flow Visual cortex Information Flow Superior colliculus 10% flow Slide 2 Information

More information

EEG Analysis on Brain.fm (Focus)

EEG Analysis on Brain.fm (Focus) EEG Analysis on Brain.fm (Focus) Introduction 17 subjects were tested to measure effects of a Brain.fm focus session on cognition. With 4 additional subjects, we recorded EEG data during baseline and while

More information

The Central Nervous System

The Central Nervous System The Central Nervous System Cellular Basis. Neural Communication. Major Structures. Principles & Methods. Principles of Neural Organization Big Question #1: Representation. How is the external world coded

More information

The functional organization of the ventral visual pathway and its relationship to object recognition

The functional organization of the ventral visual pathway and its relationship to object recognition Kanwisher-08 9/16/03 9:27 AM Page 169 Chapter 8 The functional organization of the ventral visual pathway and its relationship to object recognition Kalanit Grill-Spector Abstract Humans recognize objects

More information

A dissociation between spatial and identity matching in callosotomy patients

A dissociation between spatial and identity matching in callosotomy patients Cognitive Neuroscience, 8±87 (999) ALTHOUGH they are structurally similar, the two hemispheres of the human brain have many functional asymmetries. Some of these, such as language and motor control, have

More information

This presentation is the intellectual property of the author. Contact them for permission to reprint and/or distribute.

This presentation is the intellectual property of the author. Contact them for permission to reprint and/or distribute. Modified Combinatorial Nomenclature Montage, Review, and Analysis of High Density EEG Terrence D. Lagerlund, M.D., Ph.D. CP1208045-16 Disclosure Relevant financial relationships None Off-label/investigational

More information

Modulation of brain and behavioural responses to. cognitive visual stimuli with varying signal-to-noise ratios

Modulation of brain and behavioural responses to. cognitive visual stimuli with varying signal-to-noise ratios Modulation of brain and behavioural responses to cognitive visual stimuli with varying signal-to-noise ratios Alberto Sorrentino a, Lauri Parkkonen b Michele Piana c Anna Maria Massone d Livio Narici e

More information

CS/NEUR125 Brains, Minds, and Machines. Due: Friday, April 14

CS/NEUR125 Brains, Minds, and Machines. Due: Friday, April 14 CS/NEUR125 Brains, Minds, and Machines Assignment 5: Neural mechanisms of object-based attention Due: Friday, April 14 This Assignment is a guided reading of the 2014 paper, Neural Mechanisms of Object-Based

More information

Chapter 10 Importance of Visual Cues in Hearing Restoration by Auditory Prosthesis

Chapter 10 Importance of Visual Cues in Hearing Restoration by Auditory Prosthesis Chapter 1 Importance of Visual Cues in Hearing Restoration by uditory Prosthesis Tetsuaki Kawase, Yoko Hori, Takenori Ogawa, Shuichi Sakamoto, Yôiti Suzuki, and Yukio Katori bstract uditory prostheses,

More information

Comparing event-related and epoch analysis in blocked design fmri

Comparing event-related and epoch analysis in blocked design fmri Available online at www.sciencedirect.com R NeuroImage 18 (2003) 806 810 www.elsevier.com/locate/ynimg Technical Note Comparing event-related and epoch analysis in blocked design fmri Andrea Mechelli,

More information

Sum of Neurally Distinct Stimulus- and Task-Related Components.

Sum of Neurally Distinct Stimulus- and Task-Related Components. SUPPLEMENTARY MATERIAL for Cardoso et al. 22 The Neuroimaging Signal is a Linear Sum of Neurally Distinct Stimulus- and Task-Related Components. : Appendix: Homogeneous Linear ( Null ) and Modified Linear

More information

The neural basis of object perception Kalanit Grill-Spector

The neural basis of object perception Kalanit Grill-Spector 59 The neural basis of object perception Kalanit Grill-Spector Humans can recognize an object within a fraction of a second, even if there are no clues about what kind of object it might be. Recent findings

More information

A Biologically Plausible Approach to Cat and Dog Discrimination

A Biologically Plausible Approach to Cat and Dog Discrimination A Biologically Plausible Approach to Cat and Dog Discrimination Bruce A. Draper, Kyungim Baek, Jeff Boody Department of Computer Science Colorado State University Fort Collins, CO 80523-1873 U.S.A. draper,beak,boody@cs.colostate.edu

More information

Early posterior ERP components do not reflect the control of attentional shifts toward expected peripheral events

Early posterior ERP components do not reflect the control of attentional shifts toward expected peripheral events Psychophysiology, 40 (2003), 827 831. Blackwell Publishing Inc. Printed in the USA. Copyright r 2003 Society for Psychophysiological Research BRIEF REPT Early posterior ERP components do not reflect the

More information

Supplemental Material

Supplemental Material 1 Supplemental Material Golomb, J.D, and Kanwisher, N. (2012). Higher-level visual cortex represents retinotopic, not spatiotopic, object location. Cerebral Cortex. Contents: - Supplemental Figures S1-S3

More information

Remembering the Past to Imagine the Future: A Cognitive Neuroscience Perspective

Remembering the Past to Imagine the Future: A Cognitive Neuroscience Perspective MILITARY PSYCHOLOGY, 21:(Suppl. 1)S108 S112, 2009 Copyright Taylor & Francis Group, LLC ISSN: 0899-5605 print / 1532-7876 online DOI: 10.1080/08995600802554748 Remembering the Past to Imagine the Future:

More information

Non-conscious recognition of affect in the absence of striate cortex

Non-conscious recognition of affect in the absence of striate cortex Vision, Central 10, 3759±3763 (1999) FUNCTIONAL neuroimaging experiments have shown that recognition of emotional expressions does not depend on awareness of visual stimuli and that unseen fear stimuli

More information

Attention modulates the processing of emotional expression triggered by foveal faces

Attention modulates the processing of emotional expression triggered by foveal faces Neuroscience Letters xxx (2005) xxx xxx Attention modulates the processing of emotional expression triggered by foveal faces Amanda Holmes a,, Monika Kiss b, Martin Eimer b a School of Human and Life Sciences,

More information

Taking control of reflexive social attention

Taking control of reflexive social attention Cognition 94 (2005) B55 B65 www.elsevier.com/locate/cognit Brief article Taking control of reflexive social attention Jelena Ristic*, Alan Kingstone Department of Psychology, University of British Columbia,

More information

fmri Study of Face Perception and Memory Using Random Stimulus Sequences

fmri Study of Face Perception and Memory Using Random Stimulus Sequences RAPID COMMUNICATION fmri Study of Face Perception and Memory Using Random Stimulus Sequences VINCENT P. CLARK, JOSE M. MAISOG, AND JAMES V. HAXBY Section on Functional Brain Imaging, Laboratory of Brain

More information

Identify these objects

Identify these objects Pattern Recognition The Amazing Flexibility of Human PR. What is PR and What Problems does it Solve? Three Heuristic Distinctions for Understanding PR. Top-down vs. Bottom-up Processing. Semantic Priming.

More information

Electrophysiological Correlates of Recollecting Faces of Known and Unknown Individuals

Electrophysiological Correlates of Recollecting Faces of Known and Unknown Individuals NeuroImage 11, 98 110 (2000) doi:10.1006/nimg.1999.0521, available online at http://www.idealibrary.com on Electrophysiological Correlates of Recollecting Faces of Known and Unknown Individuals Ken A.

More information

Layout Geometry in Encoding and Retrieval of Spatial Memory

Layout Geometry in Encoding and Retrieval of Spatial Memory Journal of Experimental Psychology: Human Perception and Performance 2009, Vol. 35, No. 1, 83 93 2009 American Psychological Association 0096-1523/09/$12.00 DOI: 10.1037/0096-1523.35.1.83 Layout Geometry

More information

Neurophysiological evidence for visual perceptual categorization of words and faces within 150 ms

Neurophysiological evidence for visual perceptual categorization of words and faces within 150 ms Psychophysiology, 35 ~1998!, 240 251. Cambridge University Press. Printed in the USA. Copyright 1998 Society for Psychophysiological Research Neurophysiological evidence for visual perceptual categorization

More information

Supplementary Note Psychophysics:

Supplementary Note Psychophysics: Supplementary Note More detailed description of MM s subjective experiences can be found on Mike May s Perceptions Home Page, http://www.senderogroup.com/perception.htm Psychophysics: The spatial CSF was

More information

OPTO 5320 VISION SCIENCE I

OPTO 5320 VISION SCIENCE I OPTO 5320 VISION SCIENCE I Monocular Sensory Processes of Vision: Color Vision Mechanisms of Color Processing . Neural Mechanisms of Color Processing A. Parallel processing - M- & P- pathways B. Second

More information

THE EFFECT OF DIFFERENT TRAINING EXPERIENCES ON OBJECT RECOGNITION IN THE VISUAL SYSTEM. Alan Chun-Nang Wong

THE EFFECT OF DIFFERENT TRAINING EXPERIENCES ON OBJECT RECOGNITION IN THE VISUAL SYSTEM. Alan Chun-Nang Wong THE EFFECT OF DIFFERENT TRAINING EXPERIENCES ON OBJECT RECOGNITION IN THE VISUAL SYSTEM By Alan Chun-Nang Wong Dissertation Submitted to the Faculty of the Graduate School of Vanderbilt University In partial

More information

Event-related fmri analysis of the cerebral circuit for number comparison

Event-related fmri analysis of the cerebral circuit for number comparison Brain Imaging 10, 1473±1479 (1999) CEREBRAL activity during number comparison was studied with functional magnetic resonance imaging using an event-related design. We identi ed an extended network of task-related

More information

Featural and con gural face processing strategies: evidence from a functional magnetic resonance imaging study

Featural and con gural face processing strategies: evidence from a functional magnetic resonance imaging study BRAIN IMAGING Featural and con gural face processing strategies: evidence from a functional magnetic resonance imaging study Janek S. Lobmaier a, Peter Klaver a,b, Thomas Loenneker b, Ernst Martin b and

More information

Unraveling Mechanisms for Expert Object Recognition: Bridging Brain Activity and Behavior

Unraveling Mechanisms for Expert Object Recognition: Bridging Brain Activity and Behavior Journal of Experimental Psychology: Human Perception and Performance 2002, Vol. 28, No. 2, 431 446 Copyright 2002 by the American Psychological Association, Inc. 0096-1523/02/$5.00 DOI: 10.1037//0096-1523.28.2.431

More information

Neural Correlates of Human Cognitive Function:

Neural Correlates of Human Cognitive Function: Neural Correlates of Human Cognitive Function: A Comparison of Electrophysiological and Other Neuroimaging Approaches Leun J. Otten Institute of Cognitive Neuroscience & Department of Psychology University

More information

Neuroimaging methods vs. lesion studies FOCUSING ON LANGUAGE

Neuroimaging methods vs. lesion studies FOCUSING ON LANGUAGE Neuroimaging methods vs. lesion studies FOCUSING ON LANGUAGE Pioneers in lesion studies Their postmortem examination provided the basis for the linkage of the left hemisphere with language C. Wernicke

More information

Mental Imagery. What is Imagery? What We Can Imagine 3/3/10. What is nature of the images? What is the nature of imagery for the other senses?

Mental Imagery. What is Imagery? What We Can Imagine 3/3/10. What is nature of the images? What is the nature of imagery for the other senses? Mental Imagery What is Imagery? What is nature of the images? Exact copy of original images? Represented in terms of meaning? If so, then how does the subjective sensation of an image arise? What is the

More information

Figure 1. Source localization results for the No Go N2 component. (a) Dipole modeling

Figure 1. Source localization results for the No Go N2 component. (a) Dipole modeling Supplementary materials 1 Figure 1. Source localization results for the No Go N2 component. (a) Dipole modeling analyses placed the source of the No Go N2 component in the dorsal ACC, near the ACC source

More information

Visual Context Dan O Shea Prof. Fei Fei Li, COS 598B

Visual Context Dan O Shea Prof. Fei Fei Li, COS 598B Visual Context Dan O Shea Prof. Fei Fei Li, COS 598B Cortical Analysis of Visual Context Moshe Bar, Elissa Aminoff. 2003. Neuron, Volume 38, Issue 2, Pages 347 358. Visual objects in context Moshe Bar.

More information

Experimental design of fmri studies

Experimental design of fmri studies Experimental design of fmri studies Kerstin Preuschoff Computational Neuroscience Lab, EPFL LREN SPM Course Lausanne April 10, 2013 With many thanks for slides & images to: Rik Henson Christian Ruff Sandra

More information

Beyond Blind Averaging: Analyzing Event-Related Brain Dynamics. Scott Makeig. sccn.ucsd.edu

Beyond Blind Averaging: Analyzing Event-Related Brain Dynamics. Scott Makeig. sccn.ucsd.edu Beyond Blind Averaging: Analyzing Event-Related Brain Dynamics Scott Makeig Institute for Neural Computation University of California San Diego La Jolla CA sccn.ucsd.edu Talk given at the EEG/MEG course

More information

Neural correlates of non-familiar face processing

Neural correlates of non-familiar face processing Budapest University of Technology and Economics PhD School in Psychology Cognitive Science Krisztina Nagy PhD thesis booklet Supervisor: Prof. Gyula Kovács Budapest, 2013 Contents Abbreviations 2 1 Introduction

More information

Event-related potentials and time course of the other-race face classi cation advantage

Event-related potentials and time course of the other-race face classi cation advantage COGNITIVE NEUROSCIENCE AND NEUROPSYCHOLOGY NEUROREPORT Event-related potentials and time course of the other-race face classi cation advantage Roberto Caldara, CA Bruno Rossion, Pierre Bovet and Claude-Alain

More information

The Meaning of the Mask Matters

The Meaning of the Mask Matters PSYCHOLOGICAL SCIENCE Research Report The Meaning of the Mask Matters Evidence of Conceptual Interference in the Attentional Blink Paul E. Dux and Veronika Coltheart Macquarie Centre for Cognitive Science,

More information