Left-hemisphere dominance for processing of vowels: a whole-scalp neuromagnetic study


Liselotte Gootjes (corresponding author), Tommi Raij, Riitta Salmelin and Riitta Hari
Brain Research Unit, Low Temperature Laboratory, Helsinki University of Technology, P.O. Box 2200, FIN-02015 HUT, Espoo, Finland

Auditory and Vestibular Systems. NeuroReport 10, 2987–2991 (1999).

Brain activation of 11 healthy right-handed subjects was studied with magnetoencephalography to estimate individual hemispheric dominance for speech sounds. The auditory stimuli comprised binaurally presented Finnish vowels, tones, and piano notes in groups of two or four stimuli. The subjects were required to detect whether the first and the last item in a group were the same. In the left hemisphere, vowels evoked significantly stronger (37–79%) responses than piano notes and tones, whereas in the right hemisphere the responses to the different stimuli did not differ significantly. Specifically, in the two-stimulus task, all 11 subjects showed left-hemisphere dominance in the vowel vs tone comparison. This simple paradigm may be helpful in noninvasive evaluation of language lateralization. NeuroReport 10:2987–2991 © 1999 Lippincott Williams & Wilkins.

Key words: Healthy subjects; Hemispheric dominance; MEG; Speech sounds; Vowels

Introduction

The left and right hemispheres of the brain are not symmetrically involved in language processing. Clinical observations and neuropsychological studies suggest that while the great majority of right-handed individuals show a left-hemisphere dominance for language, a minority of them are right-hemisphere dominant [1,2]. Knowledge about language lateralization is of special value in the evaluation of patients who are candidates for brain surgery (e.g. removal of an epileptic focus or a brain tumor from the temporal lobe). Since resection of brain regions that contribute to basic language skills is likely to result in language impairments, it is essential to determine preoperatively which hemisphere is dominant for language.

Traditionally, language dominance is determined with the Wada test [3]. This technique involves intracarotid application of amobarbital, allowing selective inactivation of one hemisphere at a time. Because of the health risks involved, attempts have been made to find non-invasive methods. In recent years, developments in neuroimaging techniques have provided new tools for studying language dominance. One of these techniques is magnetoencephalography (MEG), the measurement of the neuromagnetic field outside the head to probe the neural electric activity of the brain. Previous work has indicated that MEG can be used to examine hemispheric differences in language processing at the individual level [4,5].

The purpose of the present study was to develop a simple task that might be useful for presurgical evaluation of language lateralization. As the first step, we examined neuromagnetic responses associated with attentive processing of vowels, tones, and piano notes in both hemispheres of healthy subjects. The results imply left-hemisphere dominance for the processing of vowels. In the future, this approach might provide a new tool for evaluation of individual hemispheric dominance for language.

Materials and Methods

Subjects: Eleven healthy subjects (23–30 years, mean 27.1; four females and seven males) were studied. All were native Finnish speakers. Hand preference was evaluated with the Edinburgh Inventory [6]. Possible outcomes of this inventory range from +1.0 (strongly right-handed) to −1.0 (strongly left-handed).
In the present study, all subjects were right-handed and had a minimum score of 0.7.

Stimuli and task: Within one session, the subjects listened to three types of digitally recorded auditory stimuli: Finnish vowels, tones, and piano notes (see Fig. 1). The frequencies of the tones and the fundamental frequencies of the piano notes were 394, 442, 496, 525 and 589 Hz, which equal G, A, B, C, and D on a tempered scale. A native speaker of Finnish pronounced the vowels (A, E, U, I, O). All sounds were of equal duration (250 ms); the tones and piano notes also included 20 ms rise and fall times. Stimuli were presented binaurally with the stimulus intensity adjusted to a comfortable listening level.

FIG. 1. Examples of the three types of auditory stimuli (vowels, tones, and piano notes) used in the four-sound task, with target and non-target trains.

In the two-sound task, pairs of vowels, tones, and piano notes were presented. The silent interval between the two stimuli of a pair was 70 ms, and the interpair interval was 1590 ms. The subject had to respond by lifting the left index finger when the two sounds of the pair were identical (target). No response was required when the two sounds differed (non-target).

The four-sound task consisted of trains of four vowels, tones, or piano notes. Again, the sounds within a trial were separated by 70 ms and the intertrain interval was 1590 ms. In target trains, the first and the fourth stimuli were identical, whereas they differed from each other in the non-target trains. In both target and non-target trains, the second and third items were different from each other and from all other items.

In both tasks, targets (20%) and non-targets (80%) of all three types were presented in a pseudo-random order: two targets and two similar pairs or trains in a row were not allowed. To ascertain that the subjects understood the instructions completely, feedback about individual performance was given during a short practice trial. The subjects were instructed to avoid head movements and eye blinks.
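As an illustration of the trial structure just described, here is a minimal Python sketch of how such a pseudo-random sequence could be generated. It is our own reconstruction, not the authors' software: only the 20%/80% target split, the 70 ms/1590 ms timing, and the no-two-targets, no-two-identical-trials rule come from the paper; all names are hypothetical.

```python
import random

STIM_TYPES = ["vowel", "tone", "piano"]   # the three stimulus categories
GAP_WITHIN = 0.070                        # 70 ms silent gap within a pair/train
GAP_BETWEEN = 1.590                       # 1590 ms between pairs/trains

def make_trial(stim_type, n_sounds=4, target=False):
    """One pair/train: in targets the first and last sound are identical;
    all remaining sounds differ from each other and from the first."""
    tokens = [f"{stim_type}-{i}" for i in range(5)]   # 5 items per category
    seq = random.sample(tokens, n_sounds)             # all distinct
    if target:
        seq[-1] = seq[0]                              # first == last
    return {"type": stim_type, "sounds": seq, "target": target}

def make_sequence(n_trials=300, n_sounds=4, p_target=0.20):
    """Pseudo-random sequence: ~20% targets, and neither two targets
    nor two identical pairs/trains may occur in a row."""
    trials, prev = [], None
    while len(trials) < n_trials:
        trial = make_trial(random.choice(STIM_TYPES), n_sounds,
                           target=random.random() < p_target)
        if prev and prev["target"] and trial["target"]:
            continue                                  # no consecutive targets
        if prev and prev["sounds"] == trial["sounds"]:
            continue                                  # no repeated trials
        trials.append(trial)
        prev = trial
    return trials
```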

Recording: MEG signals were recorded in a magnetically shielded room, using a 306-channel whole-scalp neuromagnetometer (Vectorview, Neuromag Ltd). The instrument has 102 recording sites, each containing two planar gradiometers, which measure the longitudinal and latitudinal derivatives of the magnetic field normal to the magnetometer helmet, $\partial B_z/\partial x$ and $\partial B_z/\partial y$ (204 in total), and one magnetometer (102 in total). The signals were bandpass filtered at 0.03–200 Hz and digitized at 600 Hz. Responses to the different stimulus categories were averaged online over a 2300 ms interval in the two-sound task and a 2500 ms interval in the four-sound task, both including a 200 ms prestimulus baseline. About 100 responses to the non-target pairs/trains were collected for each of the three stimulus categories. Epochs contaminated by eye movements and blinks (monitored with the electro-oculogram) were discarded from averaging. For subsequent analysis, the averaged responses were low-pass filtered at 40 Hz (roll-off width 10 Hz). The 200 ms prestimulus baseline was used for amplitude measurements.

Analysis: Only responses to non-target stimuli were analysed in detail. In the signal analysis we took into account only the planar gradiometers. By calculating the vector sums of the longitudinal and latitudinal derivatives of the responses, $\sqrt{(\partial B_z/\partial x)^2 + (\partial B_z/\partial y)^2}$, information about the direction of the neural currents was discarded in favour of information about the strength of the response. Two time intervals were set for further analysis: interval A, ±25 ms around the peak of the 100 ms response (N100m) to the onset of the first sound, and interval B, from the onset of the second sound until 320 ms after the onset of the last sound (320–640 ms in the two-sound task and 320–1280 ms in the four-sound task). For each individual and each time interval, we selected over the temporal lobe of each hemisphere the gradiometer pair that showed the highest peak amplitude to vowel stimuli. We then calculated the mean activation at this channel pair for both time windows.
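The amplitude measure just described can be made concrete with a short numpy sketch. This is a minimal reconstruction under our own assumptions about the array layout (averaged responses stored as (channel pair, derivative, time)); the filter is a generic stand-in and the function names are hypothetical, not taken from the authors' analysis code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 600.0  # sampling rate in Hz, as in the recording

def lowpass_40hz(data, fs=FS, order=4):
    """Zero-phase 40 Hz low-pass; a stand-in for the paper's filter
    (the exact 10 Hz roll-off width is not reproduced here)."""
    b, a = butter(order, 40.0 / (fs / 2.0), btype="low")
    return filtfilt(b, a, data, axis=-1)

def vector_sum(dbz_dx, dbz_dy):
    """Planar-gradiometer vector sum sqrt((dBz/dx)^2 + (dBz/dy)^2):
    discards current direction, keeps response strength."""
    return np.sqrt(dbz_dx ** 2 + dbz_dy ** 2)

def interval_mean(trace, t, t_start, t_stop):
    """Mean amplitude of a trace within a time window (in seconds)."""
    mask = (t >= t_start) & (t <= t_stop)
    return trace[..., mask].mean(axis=-1)

def best_pair_amplitude(avg, t, window=(0.320, 0.640)):
    """Pick the gradiometer pair with the highest peak amplitude and
    return its mean activation in the given window.
    `avg`: assumed array (n_pairs, 2, n_times) of averaged responses."""
    vs = vector_sum(lowpass_40hz(avg[:, 0]), lowpass_40hz(avg[:, 1]))
    best = vs.max(axis=-1).argmax()      # pair with the highest peak
    return best, interval_mean(vs[best], t, *window)
```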
The amplitudes were normalized to each individual's left-hemisphere response to vowels. A two-tailed t-test was used in the statistical evaluation of the group data. We also calculated a laterality index (LI) for each subject according to the formula $\mathrm{LI} = (L - R)/(L + R)$, where $L$ and $R$ refer to the ratios of the response amplitudes to vowels vs tones or piano notes in the left and right hemispheres, respectively.
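As a worked example of this index (with made-up numbers, not the study's data; the function name is ours):

```python
def laterality_index(vowel_left, other_left, vowel_right, other_right):
    """LI = (L - R) / (L + R), where L and R are the vowel-to-comparison
    amplitude ratios in the left and right hemispheres; LI > 0 means
    the vowel advantage is stronger on the left."""
    L = vowel_left / other_left
    R = vowel_right / other_right
    return (L - R) / (L + R)

# Illustrative amplitudes only: a 70% left-hemisphere vowel advantage
# (cf. the group means reported below) against a flat right hemisphere.
print(laterality_index(170.0, 100.0, 105.0, 100.0))  # ~0.24
```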

Results

Behavioural measures: In the two-sound task, subjects performed equally well in trials involving vowels, tones, and piano notes. However, in the four-sound task, subjects tended to perform worse for tones and piano notes than for vowels (binomial test, p = 0.06).

Responses to different stimuli: All stimuli evoked strong N100m responses over the temporal areas about 100 ms after pair/train onset. Subsequent peaks were observed about 100 ms after the onset of each sound in the sequence: around 420 ms in both tasks, and additionally around 740 ms and 1060 ms in the four-sound task. Figure 2 displays the vector sum responses of one subject over both temporal regions in the four-sound task. The responses are clearly stronger to vowels than to tones and piano notes over the left hemisphere, whereas no such difference is seen over the right hemisphere. A similar pattern was observed in most subjects. Four individuals showed stronger responses to vowels than to tones and piano notes over the right hemisphere as well, but the relative increase was smaller than over the left hemisphere. In the two-sound task, similar differences were found between the responses (data not shown).

FIG. 2. Vector sums of one subject from the sensors over the left (L) and right (R) temporal regions that showed the highest amplitude to vowels between 320 and 1280 ms in the four-sound task; stars indicate the peaks (scale 100 fT/cm, 0–2 s).

Mean activation following different stimuli: Figure 3 shows the mean amplitudes for the three stimulus categories over both hemispheres, separately for the two tasks and time intervals. During interval A (±25 ms around the first N100m peak) in the two-sound task, vowels evoked in the left hemisphere on average 70% and 72% stronger responses than piano notes and tones, respectively (p < 0.001). During interval A in the four-sound task, responses were on average 85% and 87% stronger to vowels than to piano notes and tones, respectively (p < 0.001). In the right hemisphere, however, in both tasks, the responses to tones, piano notes and vowels did not differ statistically significantly.

During interval B in the two-sound task (320–640 ms), the left-hemisphere responses to vowels were on average 61% and 39% stronger than the responses to piano notes and tones, respectively (p < 0.001). During interval B in the four-sound task (320–1280 ms), the left-hemisphere responses were on average 79% and 54% stronger to vowels than to piano notes and tones, respectively (p < 0.001). Again, in the right hemisphere, the responses to the different stimuli did not differ statistically significantly in either task.

FIG. 3. Mean (±s.e.m.) amplitudes for the 11 subjects during both tasks (two-sound, four-sound) and both time intervals (A, B), for the left (LH) and right (RH) hemispheres. Amplitudes were normalized to the individual left-hemisphere response to vowels.
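A sketch of the group-level statistics (two-tailed paired t-tests per hemisphere and the count of positive LIs, cf. Table 1 below). The numbers are random placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder per-subject mean amplitudes (n = 11), normalized as in the
# paper; real values would come from the interval means computed above.
vowel_L = 100 + 10 * rng.standard_normal(11)
tone_L = 60 + 10 * rng.standard_normal(11)
vowel_R = 80 + 10 * rng.standard_normal(11)
tone_R = 78 + 10 * rng.standard_normal(11)

# Two-tailed paired t-tests within each hemisphere.
t_L, p_L = stats.ttest_rel(vowel_L, tone_L)   # expected: significant
t_R, p_R = stats.ttest_rel(vowel_R, tone_R)   # expected: non-significant

# Per-subject laterality indices and the count of positive ones.
L, R = vowel_L / tone_L, vowel_R / tone_R
li = (L - R) / (L + R)
print(f"left p = {p_L:.3g}, right p = {p_R:.3g}, "
      f"positive LI: {(li > 0).sum()}/11")
```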

Hemispheric laterality in individual subjects: Figure 4 shows that, during interval A in the two-sound task, 10 out of the 11 subjects (with the exception of S8) had a positive LI, indicating a stronger difference between the responses to vowels vs tones and piano notes in the left than in the right hemisphere. The individual LIs were approximately similar during interval A in the four-sound task (data not shown). During interval B in the two-sound task, all 11 subjects showed a positive LI for the vowel vs tone comparison but only nine (with the exception of subjects S6 and S10) for the vowel vs piano comparison. During interval B in the four-sound task, nine subjects (with the exception of S8 and S11) showed a positive LI for the vowel vs tone comparison (data not shown) and 10 for the vowel vs piano comparison (all except S9). Table 1 summarizes these results. No relationship was observed between the degree of handedness and the degree of hemispheric dominance as determined by the laterality index (see Fig. 4).

FIG. 4. Individual laterality indices during the two-sound task plotted against right-handedness (0.7–1.0) as measured by the Edinburgh Inventory, separately for the vowel vs piano-note and vowel vs tone comparisons in intervals A and B.

Table 1. Percentage of subjects showing a positive laterality index (LI); the subjects with negative LIs are indicated in brackets. Data are given separately for the vowel vs tone and vowel vs piano-note comparisons during intervals A and B in both tasks.

Task         Interval   Vowel vs tones    Vowel vs piano notes
Two-sound    A          91% (S8)          91% (S8)
             B          100%              82% (S6, S10)
Four-sound   A          91% (S8)          91% (S8)
             B          82% (S8, S11)     91% (S9)

Discussion

In the mean data across all 11 subjects, the responses were significantly stronger to vowels than to tones or piano notes over the left but not the right hemisphere, suggesting left-hemisphere dominance for the processing of vowels. Similar differences were seen both during the N100m response and during the later time window (320–640 ms in the two-sound task and 320–1280 ms in the four-sound task). According to the individual laterality indices, all subjects were left-hemisphere dominant for the vowel vs tone comparison during 320–640 ms in the two-sound task. In the corresponding vowel vs piano comparison, and during the other intervals in both tasks, left-hemisphere dominance was obtained in nine or 10 of the 11 subjects. In total, five subjects showed inconsistent hemispheric dominance when both the vowel vs tone and vowel vs piano comparisons in the later time intervals of both tasks were taken into account. Around the N100m peak, hemispheric dominance for speech sounds was consistent for individual subjects over both the vowel vs piano and vowel vs tone comparisons in both tasks.

In most experimental conditions, 10 (91%) of the 11 subjects showed indices favouring the left hemisphere. This number is consistent with results from Wada tests, which indicate left-hemisphere language dominance in 92–99% of right-handed subjects [1,3]. In previous MEG [4], PET [7], fMRI [7] and EEG [8,9] studies, 78–93% of right-handed subjects (sample sizes varying from nine to 28 subjects) have been considered left-hemisphere dominant. Some subjects showed a vowel vs tone difference also in the right hemisphere, suggesting bilateral representation of speech-related processing, although with left-hemisphere dominance.

Our aim was to develop a straightforward clinical test for language dominance. Thus, we did not attempt to model the N100m responses, which are known to originate in the region of the supratemporal auditory cortex [10]. We rather compared the response strengths between stimuli within each hemisphere and the resulting ratios across hemispheres. Such a comparison was feasible with our planar gradiometers, which detect the maximum signal immediately above the active cortical area [11].
The absolute signal amplitudes depend not only on the source strength but also on the distance of the detectors from the source; however, as we used relative values, we did not need to take this issue into account. The differences were clearer between vowels vs tones than between vowels vs piano notes, which will further simplify future clinical tests.

Our task, in which the subjects had to judge whether the first and the last sound in a series were the same, kept the subject's vigilance stable but probably also involved short-term memory mechanisms. However, it is highly unlikely that the differences between the responses to vowels vs other sounds were memory-related, because the effect was observed already around the N100m response to the first sound of the sequence.

It is somewhat surprising that a consistent hemispheric asymmetry for speech sound processing was reflected in the N100m response, which can be elicited by any abrupt change in the auditory environment [10]. However, in line with our data, Poeppel et al. [5] have reported an effect on the N100m specific to attentive discrimination of syllables. It is thus possible that the N100m response reflects not only acoustic but also phonetic aspects of the stimuli. Eulitz et al. [12] found that responses to vowels, relative to tones, were stronger over the left than the right hemisphere when the subjects made covert judgements about stimulus duration (short or long); similarly to our study, the difference between the responses to vowels vs tones was larger over the left than the right hemisphere. However, those hemispheric asymmetries were significant only at 400–600 ms, not during the N100m response.

Conclusion

Our results imply left-hemisphere dominance for speech sound processing in right-handed subjects, both during the N100m response and at later latencies. The present task, simplified to the vowel vs tone comparison only, might be helpful in the noninvasive presurgical evaluation of language lateralization. A necessary future step is to compare this task with the results of the Wada test at the individual level.

References

1. Wada J and Rasmussen T. J Neurosurg 17, 266–282 (1960).
2. Annett M. Cortex 11, 305–328 (1975).
3. Loring D, Meador K, Lee G et al. Neuropsychologia 28, 831–838 (1990).
4. Simos PG, Breier JI, Zouridakis G et al. J Clin Neurophysiol 15, 364–372 (1998).
5. Poeppel D, Yellin E, Phillips C et al. Cogn Brain Res 4, 231–242 (1996).
6. Oldfield RC. Neuropsychologia 9, 97–113 (1971).
7. Xiong J, Rao S, Gao J-H et al. Hum Brain Mapp 6, 42–58 (1998).
8. Gerschlager W, Lalouschek W, Lehrner J et al. Electroencephalogr Clin Neurophysiol 108, 274–282 (1998).
9. Altenmüller E, Kriechbaum W, Helber U et al. Acta Neurochir Suppl 56, 22–33 (1993).
10. Hari R. Adv Audiol 6, 222–282 (1990).
11. Hämäläinen M, Hari R, Ilmoniemi RJ et al. Rev Mod Phys 65, 413–497 (1993).
12. Eulitz C, Diesch E, Pantev C et al. J Neurosci 15, 2748–2755 (1995).

ACKNOWLEDGEMENTS: This study was supported by the Academy of Finland, the Sigrid Jusélius Foundation, and by the National Epilepsy Fund and the Brain Foundation of the Netherlands.

Received 15 July 1999; accepted 5 August 1999