Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex
Neuroscience Letters 396 (2006)

Hannu Tiitinen a,b, Nelli H. Salminen a, Kalle J. Palomäki a,c, Ville T. Mäkinen a,b, Paavo Alku c, Patrick J.C. May a,b

a Apperception & Cortical Dynamics (ACD), Cognitive Science, Department of Psychology, P.O. Box 9, FIN University of Helsinki, Finland
b BioMag Laboratory, Engineering Centre, Helsinki University Central Hospital, Helsinki, Finland
c Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology, Finland

Received 1 July 2005; received in revised form 1 November 2005; accepted 4 November 2005

Abstract

In an attempt to delineate the assumed what and where processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of the sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield, and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D as for the ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition.
This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.
© 2005 Elsevier Ireland Ltd. All rights reserved.

Keywords: Magnetoencephalography (MEG); Auditory; Spatial; P1m; N1m; P2m; Attention

Corresponding author: hannu.tiitinen@helsinki.fi (H. Tiitinen)

In natural auditory environments, sounds are endowed with spatial cues because they are emitted from sources situated in the three-dimensional (3D) space surrounding the listener. Recently, the processing of auditory stimuli with spatial properties has been studied using event-related potentials (ERPs) and fields (ERFs) measured with electro- (EEG) and magnetoencephalography (MEG), respectively, which provide temporally precise measures of cortical activity [1,6,12,13,14]. These efforts have focused on the auditory N1m, a prominent cortical response peaking some 100 ms after stimulus onset. Its amplitude in both the left and right hemisphere has been found to reflect sound source direction, displaying maxima and minima for contra- and ipsilateral sound sources, respectively [12–14]. Also, both the overall amplitude of the N1m and its variation according to sound source direction are larger in the right hemisphere. A right-hemispheric preponderance is also indicated by hemodynamic studies revealing activation in the right-hemispheric planum temporale both with stationary [22] and moving [2,15] spatial sound. Complementing these findings, behavioral studies have shown that localization is more accurate for stimuli in the left hemifield [5] and when listening with the left ear only [7].
The study of auditory spatial processing in humans has largely neglected the temporal dynamics of the underlying brain activity. This has been due to the limited temporal resolution of hemodynamic methods, whereas researchers using EEG and MEG usually restrict their studies to a particular deflection of the ERP/ERF and thus have not taken full advantage of the high temporal resolution of these methods. The ERP/ERF comprises a series of deflections which together could be used to study the temporal dynamics of auditory spatial processing. For transient
stimuli, many spatial cues occur only after the first 100 ms post-stimulus [4]. Therefore, the processing of these cues cannot be reflected by the N1(m) but might, rather, be captured by the P2(m) response occurring at 200 ms post-stimulus. The P2m has been found to reflect the processing of complex sound features [8,18]. As it is generated partly in the planum temporale [8], an area which hemodynamic studies have identified as taking part in auditory spatial processing [2,15,20,22], it might turn out to be useful for studying the processing of spatial cues in auditory stimulation.

Hemodynamic studies have reported activation in the superior temporal gyrus and in posterior parietal areas during auditory spatial tasks [10,15,16,20,22], although the role of the parietal areas in these tasks has remained unclear [3,11]. One interpretation is that parietal areas are part of a where processing stream analyzing sound location, as opposed to a what stream dedicated to the processing of auditory object identity [1,10,16]. A recent PET experiment, however, suggests that the parietal activation occurs only when active responding is required, whereas auditory spatial stimulation in itself activates only auditory areas in the temporal cortex [22]. Parietal activation during an auditory spatial task was also detected in an EEG study [1], which suggests that temporally precise EEG and MEG methods might be useful in elucidating the functional role of the parietal cortex.

The ability to localize sounds is based on the utilization of multiple localization cues embedded in the sound signal: the interaural time and level differences (ITD and ILD, respectively) and the spectral distortions produced by the pinnae, the head, and the body [4].
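To make the ITD cue concrete: it can be estimated from a binaural signal pair as the lag that maximizes the cross-correlation between the two ear signals. A minimal Python/NumPy sketch, with illustrative signals and function names not taken from the study:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) as the
    cross-correlation lag between the two ear signals. Positive values
    mean the sound reached the left ear first."""
    n = len(left)
    corr = np.correlate(right, left, mode="full")   # lags -(n-1)..(n-1)
    lag = int(np.argmax(corr)) - (n - 1)            # right-ear delay, samples
    return lag / fs

# Toy binaural pair: a noise burst reaching the right ear 10 samples later.
fs = 48000
burst = np.random.default_rng(0).standard_normal(1024)
left = burst
right = np.concatenate([np.zeros(10), burst[:-10]])
print(estimate_itd(left, right, fs) * 1e6)  # ≈ 208 µs
```

Lags of a few hundred microseconds, as here, are within the physiological ITD range set by the width of the human head.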
Although often used as stimulus material in cognitive neuroscience [2,10], the ITD and ILD, when utilized alone, represent only a subset of spatial sound cues and lead to the subject perceiving the sound as originating inside the head rather than in the surrounding space. For creating genuine 3D environments, loudspeakers can be used in certain conditions [6,22], but this results in an auditory environment which is specific to the measurement room and cannot be replicated elsewhere. Additionally, loudspeakers cannot be used in MEG due to their magnetic interference. Recently, head-related transfer functions (HRTFs) [21], which replicate the spectral distortions produced by the pinnae, the head, and the body, have provided realistic spatial sound stimuli for brain research purposes [12–14]. An even higher level of realism can be achieved by presenting stimuli through loudspeakers and recording them with miniature microphones placed inside the ear canals of each individual subject, and thereafter using these recordings as subject-specific stimulus material in earphone stimulation. In sum, the study of auditory processing in human cortex has utilized stimulus material whose spectral detail, unfortunately, does not correspond to natural, spatial environments.

Although research on the processing of auditory spatial information has gained momentum, we still lack an adequate description of the temporal and spatial evolution of the underlying neural activity (e.g., in terms of the putative what and where processing streams). To address these issues, we designed three interlinked experiments utilizing subject-specific, realistic spatial sound stimulation and high-resolution MEG recordings. For an improved signal-to-noise ratio, the number of repetitions of each stimulus was elevated at the expense of the number of subjects. The aim of the first experiment was to find out how the direction of the source of a realistic 3D sound is reflected in the P1m, N1m, and P2m responses.
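HRTF-based spatialization, mentioned above, amounts to filtering a mono source with a direction-specific impulse-response pair, one filter per ear. The sketch below uses a crude delay-and-attenuate stand-in for measured head-related impulse responses (HRIRs); all names and values are illustrative, and real HRIRs additionally carry the pinna and torso spectral cues the text emphasizes:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual direction by convolving it with
    a head-related impulse-response pair for that direction."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Placeholder HRIRs: the near ear gets the sound early and loud, the far
# ear late and attenuated (a pure ITD+ILD approximation of a real HRIR).
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[30] = 0.6
mono = np.random.default_rng(1).standard_normal(2400)
stereo = spatialize(mono, hrir_l, hrir_r)
print(stereo.shape)  # (2, 2463)
```

With measured, subject-specific HRIRs in place of the placeholders, the same convolution reproduces the individualized earphone stimulation described in the text.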
In the second experiment, we aimed to reveal processing specific to the spectral detail characteristic of a realistic spatial sound by comparing brain responses elicited by spatially impoverished ITD stimuli and realistic 3D sounds. In the third experiment, we aimed to clarify how temporal and parietal areas take part in auditory spatial processing during passive and active recording conditions, with the hypothesis that parietal activation only occurs in conditions where attentional engagement is required [1,22].

The stimulus set comprised noise bursts with a bandwidth of 10 kHz presented at an onset-to-onset interstimulus interval of 1000 ms. Realistic spatial sound stimuli (3D sounds) were obtained by individual recordings for each subject. Microphones were placed in the subject's ear canals blocked with silicone paste. 50-ms noise bursts were sequentially presented from loudspeakers placed at eight locations around the subject at a distance of 1.5 m (Fig. 1A). The recordings of 200-ms epochs took place in a slightly reverberant room adhering to the IEC standard for listening rooms. Lateralized sounds (perceived as originating inside the head) from the right and left hemifields, corresponding to direction angles 90° and 270° (Fig. 1A), were produced by applying ITDs. In order to ensure their correspondence with the 3D stimuli, the ITD sounds were filtered to match the smoothed grand-averaged spectrum of the 3D sound originating from the 0° direction.

Five volunteers (all male, right-handed, aged 30–42) took part in the study with informed consent and with the approval of the Ethical Committee of Helsinki University Central Hospital. The stimuli were delivered through a custom-made tubephone system (frequency range 100 Hz–10 kHz). Magnetic responses were recorded with a 306-channel whole-head magnetometer (Vectorview 4D, Neuromag Oy, Finland) using a sampling rate of 600 Hz (passband Hz).
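The spectral matching of the ITD stimuli to the 3D sounds can be illustrated with a simple magnitude-spectrum substitution: keep the signal's phase and impose a target magnitude spectrum. This is one plausible way to perform such matching, not necessarily the authors' exact procedure:

```python
import numpy as np

def match_spectrum(signal, target_mag):
    """Impose a target magnitude spectrum on a signal while keeping the
    signal's own phase spectrum."""
    spec = np.fft.rfft(signal)
    phase = np.angle(spec)
    return np.fft.irfft(target_mag * np.exp(1j * phase), n=len(signal))

rng = np.random.default_rng(2)
noise = rng.standard_normal(4800)                        # flat-spectrum noise
target = np.abs(np.fft.rfft(rng.standard_normal(4800)))  # stand-in target
shaped = match_spectrum(noise, target)
# The shaped noise now has the target magnitude spectrum (up to rounding).
print(np.allclose(np.abs(np.fft.rfft(shaped)), target))  # True
```

In the study, the target would be the smoothed grand-averaged magnitude spectrum of the 0° 3D sound, so that any ERF difference between stimulus types cannot be attributed to gross spectral mismatch.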
Horizontal and vertical eye movement sensors were used for removing epochs with eye-movement-related artefacts, defined as absolute displacements exceeding 150 μV. The subject sat in a reclining chair and, in Experiments I and II, was under instruction not to pay attention to the auditory stimuli and to concentrate on watching a self-selected silent film. In Experiment I, 3D sounds were presented from eight equidistantly spaced direction angles separated by 45° steps (Fig. 1A). In Experiment II, the stimuli consisted of 3D and ITD sounds presented from the right and the left hemifields (i.e., from angles 90° and 270°). In Experiment III, the stimuli of Experiment II were presented to the subject, who was under instruction to attend to a target stimulus and to press a button with the right index finger when he detected the target. The target was either the 3D sound from the right (90°) or the left (270°) hemifield. Only brain responses accompanied by a correct behavioral response were included in the analyses. The stimuli were presented in a random order, counterbalanced across subjects. For each stimulus, >500 responses in Experiments I and II and >166 responses in Experiment III were collected.
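The eye-movement artifact rejection described above can be sketched as follows; the 150 μV threshold is the criterion given in the text, while the simulated data and function name are illustrative:

```python
import numpy as np

def reject_artifacts(epochs, eog, threshold=150e-6):
    """Drop epochs whose EOG deviation from its epoch mean exceeds the
    threshold (150 µV, the criterion given in the text)."""
    dev = np.abs(eog - eog.mean(axis=1, keepdims=True)).max(axis=1)
    return epochs[dev < threshold]

rng = np.random.default_rng(4)
eog = rng.normal(0.0, 20e-6, size=(100, 360))   # quiet EOG, 100 trials (V)
eog[::10, 100:150] += 300e-6                    # every 10th trial: a blink
epochs = rng.standard_normal((100, 360))        # corresponding MEG epochs
clean = reject_artifacts(epochs, eog)
print(len(clean))  # 90
```

Rejecting contaminated epochs before averaging prevents large ocular potentials from leaking into the averaged ERF.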
Fig. 1. The P1m, N1m, and P2m responses as indicators of the cortical processing of sound source direction. In Experiment I, realistic spatial sound was presented from eight equally spaced sound source directions in the azimuthal plane. (A) Grand-averaged MEG responses. Sounds from each source direction elicited a response complex comprising the P1m, N1m, and P2m. The right-hemispheric P1m and N1m peaked earlier for sound sources contralateral to the hemisphere. The amplitudes of the P1m and N1m in both hemispheres and the right-hemispheric P2m varied according to the sound source direction. Overall, sound sources in the contralateral hemifield resulted in larger peak amplitudes, and the right-hemispheric responses were larger in amplitude than the left-hemispheric ones. (B) Grand-averaged MCEs obtained at the latencies of the P1m, N1m, and P2m to the 3D sound from the direction angle 90°. Activation restricted to the vicinity of the auditory cortex was observed at each latency.

The MEG responses were averaged over a 500-ms post-stimulus period (filter passband 2–30 Hz, baseline-correction with respect to a 100-ms pre-stimulus period). For each temporal lobe, data from the sensor displaying the largest N1m amplitude were selected for further analyses. The peak amplitudes and latencies of the P1m, N1m, and P2m were identified. Response sources were localized using the minimum current estimation (MCE) technique [19] which, unlike the ECD method, has the advantage that it can describe several local or distributed sources. The activations were mapped to a realistic head model which was divided into a grid of 1425 points. Six regions of interest (ROIs) were defined in the temporal, parietal, and frontal cortices of the two hemispheres. From the summed activity within each ROI, the amplitude and latency of maximal activity were identified.
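The averaging, baseline correction, and peak identification steps described here can be sketched on simulated single-channel data; the 600-Hz sampling rate and 100-ms baseline follow the text, everything else is illustrative:

```python
import numpy as np

def average_erf(epochs, fs, baseline_ms=100):
    """Average single-trial epochs (trials x samples) and subtract the
    mean of the pre-stimulus baseline."""
    nb = int(round(baseline_ms * fs / 1000))
    avg = epochs.mean(axis=0)
    return avg - avg[:nb].mean()

def peak(avg, fs, t0_ms, t1_ms, baseline_ms=100):
    """Peak amplitude and latency (ms post-stimulus) within a search window."""
    off = int(round(baseline_ms * fs / 1000))
    i0, i1 = (off + int(round(t * fs / 1000)) for t in (t0_ms, t1_ms))
    i = i0 + int(np.argmax(np.abs(avg[i0:i1])))
    return avg[i], (i - off) * 1000 / fs

# Simulated data: 50 trials at fs = 600 Hz, -100..+500 ms epochs, with a
# deflection peaking 100 ms post-stimulus buried in noise.
fs, rng = 600, np.random.default_rng(5)
t = np.arange(360) / fs - 0.1
epochs = np.exp(-((t - 0.1) / 0.01) ** 2) + rng.normal(0, 0.5, (50, 360))
avg = average_erf(epochs, fs)
amp, lat = peak(avg, fs, 50, 150)
print(round(lat))  # ≈ 100 (ms)
```

In the study, this logic runs per sensor, and the temporal-lobe sensor with the largest N1m is selected before the P1m, N1m, and P2m peaks are read off.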
Statistical significances of the differences in response peak latencies and amplitudes were tested with a multivariate analysis of variance (MANOVA). Newman–Keuls post hoc tests were performed when appropriate. For statistical analyses, direction angles 225°, 270°, and 315° represented left-hemifield stimulation, and angles 45°, 90°, and 135° right-hemifield stimulation. When analyzing the effect of attention, the MEG responses to the 3D target sound (attended) were compared to those elicited by the same sound in the listening condition where the 3D sound in the opposite hemifield was the target (unattended).

Table 1. Mean amplitudes (fT/cm) and latencies (ms) of the P1m, N1m, and P2m in the left and right hemispheres for Experiments I–III.

The 3D sounds, presented from eight sound source directions in the azimuthal plane, elicited a prominent response complex consisting of the P1m, N1m, and P2m in both hemispheres (see Fig. 1A and Table 1). Response amplitudes and latencies to sound sources in the left and right hemifields are summarized in Table 2. In general, the right-hemispheric responses had larger amplitudes than the left-hemispheric ones (F[3,18] = 4.90, p < 0.01). Response peak latencies were affected by sound source direction in the right (F[3,18] = 21.14, p < 0.001) but not in the left hemisphere (F[3,18] = 2.09, p = n.s.). In the right hemisphere, sound sources in the left hemifield elicited an earlier P1m and N1m than did right-hemifield ones. In both hemispheres, contralateral sound sources elicited larger response amplitudes than ipsilateral ones (F[3,18] = 11.9, F[3,18] = 38.61, for the left and right hemisphere, respectively, p < for both). With the exception of the left-hemispheric P2m (whose amplitude
was unaffected by sound source direction), maximal responses were consistently elicited by sound sources in the contralateral hemifield.

Table 2. Mean amplitudes (fT/cm) and latencies (ms) for left- vs. right-hemifield sound sources (Experiment I). In the left hemisphere, the hemifield difference was significant for the P1m and N1m amplitudes only; in the right hemisphere, it was significant for the P1m, N1m, and P2m amplitudes and for the P1m and N1m latencies.

MCE localization (Fig. 1B) revealed large areas of activation in the vicinity of the auditory cortex at the peak latencies of the P1m, N1m, and P2m in both hemispheres. This activation was more pronounced in the right hemisphere, especially at the peak latencies of the N1m and P2m.

In Experiment II, the subjects were presented with realistic 3D sounds and spatially impoverished ITD sounds from either the left or the right hemifield. As shown in Fig. 2A and Table 3, both the 3D and ITD sounds elicited prominent P1m, N1m, and P2m responses. While there were no differences in response latencies (F[3,8] = 1.56, p = n.s., and F[3,8] = 1.81, p = n.s., for the left and right hemisphere, respectively), the 3D sounds resulted in 115% larger peak amplitudes of the right-hemispheric P2m than did the ITD sounds (F[3,8] = 1.93, p = n.s., and F[3,8] = 36.04, p < for the left and right hemisphere, respectively; see Table 3). MCEs obtained at the peak latency of the P2m are shown in Fig. 2B, revealing how the 3D sounds resulted in more pronounced activation than did the ITD sounds, especially in the right hemisphere.

In Experiment III, the stimulus set consisted of 3D and ITD sounds originating from the left and right, and the subject's task was to attend to the stimuli and respond to a target (3D) sound. The P1m, N1m, and P2m elicited by the attended and unattended 3D sounds are shown in Fig. 3A (see also Table 1).
The amplitude of the N1m in the left and right hemisphere was 23% and 30% larger, respectively, for the attended than for the unattended sounds, although these effects did not reach statistical significance. As revealed by the MCE analyses, however, attention strongly affected the activations taking place in the parietal and frontal areas: while unattended stimuli activated temporal brain areas only, attended stimuli led to activity spreading from temporal to parietal and frontal areas. Specifically, as shown in Fig. 3B, the activity elicited by unattended sound reached a maximum in the temporal areas at 105 ms (11.9 and 16.4 nAm for the left and right hemisphere, respectively), in the latency range of the N1m. A secondary peak was observed in the P2m latency range,

Table 3. Mean amplitudes (fT/cm) of the P1m, N1m, and P2m to 3D and ITD sounds (Experiment II). The difference between the 3D and ITD sounds was significant only for the right-hemispheric P2m.

Fig. 2. Brain responses to 3D and ITD sounds. In Experiment II, the subjects were presented with realistic spatial 3D sounds and spatially impoverished ITD sounds in the left and right hemifield (angles 90° and 270°). (A) Grand-averaged MEG responses to 3D (solid line) and ITD (dashed line) sounds from the left and right hemifield. While the P1m and N1m to the 3D and ITD sounds did not differ, the right-hemispheric P2m elicited by the 3D sound was considerably larger in amplitude than that elicited by the ITD sound. (B) Grand-averaged MCEs obtained at the peak latency of the P2m to 3D and ITD sounds originating from the right hemifield. Interhemispheric differences were visible in the MCEs as more pronounced activation in the right hemisphere. The results support a right-hemispheric preponderance in auditory spatial processing and indicate that the P2m might reflect processing specific to the spectral cues of spatial sounds.
Fig. 3. The effect of an active task on auditory spatial processing. In Experiment III, the stimulus set was identical to that of Experiment II and the subject's task was to respond by pressing a response key when a target stimulus was detected. The target sound was the 3D sound originating from either the left or the right hemifield. (A) Grand-averaged MEG responses to attended (solid line) and unattended (dashed line) 3D sounds. (B) Grand-averaged MCEs derived from the responses to the attended and unattended left-hemifield 3D sound. Attention had a marked effect on cortical activation: at around the 80-ms latency of the N1m response, the activation is similar to that observed in the passive recording condition of Experiment I. However, following this initial activity, at around 160 ms, parietal activation specific to attentional engagement and motor responding emerges. Further, at around the 180-ms peak latency of the P2m, the attended stimulus activates a frontal area. These results suggest that the parietal activation reflects active responding rather than auditory spatial processing in itself.

at around 180 ms. The activity in the parietal (4.3 and 4.7 nAm) and frontal (5.2 and 6.3 nAm) brain areas remained low during the processing of the unattended sound. Importantly, only when the stimulus was attended was a more extensive network of cortical areas activated: in the left hemisphere, the initial activation in the temporal areas at 107 ms was followed by activity in the parietal and frontal areas at 181 and 203 ms, respectively (F[2,8] = 9.86, p < 0.01). The parietal (11.2 nAm, p < 0.01) and frontal (9.9 nAm, p < 0.01) activity was significantly stronger for the attended than for the unattended sounds (F[2,8] = 5.6, p < 0.05).
The increase in parietal and frontal activation was observed only in the left hemisphere, possibly due to the subjects carrying out the motor response with their right-hand index finger.

Here, we studied the temporal dynamics of auditory spatial processing in the human cortex and found that each element of the auditory response complex comprising the P1m, N1m, and P2m reflected the location of the sound source in the azimuthal plane. Of these three, the P2m appeared to be especially sensitive to the presence of spatial cues in the auditory stimuli, displaying amplitudes twice as large for the natural 3D sounds as for their spatially impoverished ITD counterparts, even though the two sound types were carefully matched for their spectrotemporal complexity. This, of course, does not exclude the possibility that the P2m is also sensitive to variations in spectrotemporal detail unrelated to the spatial quality of stimulation [8,18]. In general, the 3D sounds resulted in more prominent brain activity in the right than in the left hemisphere, indicating that the spatial cues embedded in auditory stimuli might predominantly be processed in the right hemisphere. The engagement of attentional processes had a major effect on the global dynamics of the cortex, with parietal and frontal activation emerging only in conditions where stimuli were attended and responded to. In contrast, the brain activity in the passive recording condition and that observed in the processing of unattended stimuli was restricted to temporal areas only. Thus, the spatial and temporal evolution of neural activity associated with the processing of spatial cues can be accessed with MEG.

Corroborating previous results [12–14], the amplitude of the N1m varied according to sound source direction. However, expanding the analysis to include brain responses before and after the N1m revealed that the processing of spatial cues is also reflected by the P1m and P2m responses.
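The population-level account of the contralateral amplitude maxima (most neurons in each hemisphere preferring contralateral sources [9,17], with MEG amplitude taken to scale with the number of responding neurons) can be illustrated with a toy simulation; the 70/30 split and the tuning width are arbitrary assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
# Assumed preference distribution: 70% of neurons prefer the contralateral
# hemifield (here modelled as azimuths 180°-360°), 30% prefer elsewhere.
pref = np.where(rng.random(n) < 0.7,
                rng.uniform(180, 360, n), rng.uniform(0, 180, n))

def population_response(azimuth, pref, width=60.0):
    """Summed response of broadly tuned neurons to a source at `azimuth`,
    taken as a proxy for MEG response amplitude."""
    d = np.abs((pref - azimuth + 180) % 360 - 180)   # circular distance, deg
    return float(np.exp(-(d / width) ** 2).sum())

# A contralateral (270°) source drives more neurons than an ipsilateral
# (90°) one, reproducing the contralateral amplitude maximum.
print(population_response(270, pref) > population_response(90, pref))  # True
```

Under this assumption, any deflection whose amplitude tracks the number of active neurons, not just the N1m, should show the contralateral preference, which is what was observed for the P1m, N1m, and P2m.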
The current results indicate that the N1m and P1m responses are similarly sensitive to spatial cues, and that differences in the way realistic and spatially impoverished sounds are processed become especially apparent in the right hemisphere at the latency of the P2m response. These amplitude variations might be explained by intracortical results showing that neurons responding most strongly to contralateral sound sources are in the majority in both hemispheres, with only a few neurons preferring sound sources in the front or rear [9,17]. This distribution of preferential responding could account for the amplitude variation of the P1m, N1m, and P2m responses if one assumes that MEG response amplitude reflects the number of responding neurons.

In this study, MCEs were utilized in describing how the temporal succession and spatial distribution of cortical activation depends on attentional engagement. Corroborating previous findings utilizing PET [22], our analyses revealed parietal activation only during an active auditory spatial task but not during passive stimulation. Further, during the active stimulation condition, the parietal activity only emerged when the subject detected
and responded to a target stimulus, roughly 100 ms after the onset of activation in temporal areas. Thus, parietal areas might not participate in auditory spatial processing per se but, rather, might only take part in active responding to spatial sound stimuli. These results would then place significant parietal parts of the claimed where processing stream outside the processing of the spatial sound itself, in a cortical area and processing stage associated with active motor responding.

Acknowledgement

This study was supported by the Academy of Finland (Proj. no , ).

References

[1] C. Alain, S.R. Arnott, S. Hevenor, S. Graham, C.L. Grady, What and where in the human auditory system, PNAS 98 (2001).
[2] F. Baumgart, B. Gaschler-Markefski, M.G. Woldorff, H.J. Heinze, H. Scheich, A movement-sensitive area in auditory cortex, Nature 400 (1999).
[3] P. Belin, R.J. Zatorre, What, where and how in auditory cortex, Nat. Neurosci. 3 (2000).
[4] J. Blauert, Spatial Hearing, MIT Press, Cambridge.
[5] K.A. Burke, A. Letsos, R.A. Butler, Asymmetric performances in binaural localization of sound in space, Neuropsychologia 32 (1994).
[6] R.A. Butler, The influence of spatial separation of sound sources on the auditory evoked response, Neuropsychologia 10 (1972).
[7] R.A. Butler, Asymmetric performances in monaural localization of sound in space, Neuropsychologia 32 (1994).
[8] K.E. Crowley, I.M. Colrain, A review of the evidence for P2 being an independent component process: age, sleep and modality, Clin. Neurophysiol. 115 (2004).
[9] S. Furukawa, L. Xu, J.C. Middlebrooks, Coding of sound-source location by ensembles of cortical neurons, J. Neurosci. 20 (2000).
[10] P.P. Maeder, R.A. Meuli, M. Adriani, A. Bellmann, E. Fornari, J.P. Thiran, A. Pittet, S. Clarke, Distinct pathways involved in sound recognition and localization: a human fMRI study, NeuroImage 14 (2001).
[11] J.C. Middlebrooks, Auditory space processing: here, there or everywhere? Nat. Neurosci. 5 (2002).
[12] K. Palomäki, P. Alku, V. Mäkinen, P. May, H. Tiitinen, Sound localization in the human brain: neuromagnetic observations, NeuroReport 11 (2000).
[13] K. Palomäki, H. Tiitinen, V. Mäkinen, P. May, P. Alku, Cortical processing of speech sounds and their analogues in a spatial auditory environment, Cogn. Brain Res. 14 (2002).
[14] K.J. Palomäki, H. Tiitinen, V. Mäkinen, P.J.C. May, P. Alku, Spatial processing in human auditory cortex: the effects of 3D, ITD, and ILD stimulation techniques, Cogn. Brain Res. 24 (2005).
[15] F. Pavani, E. Macaluso, J.D. Warren, J. Driver, T.D. Griffiths, A common cortical substrate activated by horizontal and vertical movement in the human brain, Curr. Biol. 12 (2002).
[16] J.P. Rauschecker, B. Tian, Mechanisms and streams for processing of what and where in auditory cortex, PNAS 97 (2000).
[17] G.C. Stecker, I.A. Harrington, J.C. Middlebrooks, Location coding by opponent neural populations in the auditory cortex, PLoS Biol. 3 (2005) e78.
[18] H. Tiitinen, P. Sivonen, P. Alku, J. Virtanen, R. Näätänen, Electromagnetic recordings reveal latency differences in speech and tone processing in humans, Cogn. Brain Res. 8 (1999).
[19] K. Uutela, M. Hämäläinen, E. Somersalo, Visualization of magnetoencephalographic data using minimum current estimates, NeuroImage 10 (1999).
[20] J.D. Warren, B.A. Zielinski, G.G.R. Green, J.P. Rauschecker, T.D. Griffiths, Perception of sound-source motion by the human brain, Neuron 34 (2002).
[21] F.L. Wightman, D.J. Kistler, Headphone simulation of free-field listening. I: Stimulus synthesis, II: Psychophysical validation, J. Acoust. Soc. Am. 85 (1989).
[22] R.J. Zatorre, M. Bouffard, P. Ahad, P. Belin, Where is where in the human auditory cortex? Nat. Neurosci. 5 (2002).
Asymmetrical representation of auditory space in human cortex
available at www.sciencedirect.com www.elsevier.com/locate/brainres Research Report Asymmetrical representation of auditory space in human cortex Nelli H. Salminen a,c,, Hannu Tiitinen a,c, Ismo Miettinen
More informationThe neural code for interaural time difference in human auditory cortex
The neural code for interaural time difference in human auditory cortex Nelli H. Salminen and Hannu Tiitinen Department of Biomedical Engineering and Computational Science, Helsinki University of Technology,
More informationSpatial processing in human auditory cortex: The effects of 3D, ITD, and ILD stimulation techniques
Cognitive Brain Research 24 (2005) 364 379 Research Report Spatial processing in human auditory cortex: The effects of 3D, ITD, and ILD stimulation techniques Kalle J. Palom7ki a,b, T, Hannu Tiitinen b,c,
More informationCortical processing of speech sounds and their analogues in a spatial auditory environment
Cognitive Brain Research 14 (2002) 294 299 www.elsevier.com/ locate/ bres Research report Cortical processing of speech sounds and their analogues in a spatial auditory environment Kalle J. Palomaki *,
More information3-D Sound and Spatial Audio. What do these terms mean?
3-D Sound and Spatial Audio What do these terms mean? Both terms are very general. 3-D sound usually implies the perception of point sources in 3-D space (could also be 2-D plane) whether the audio reproduction
More informationBinaural Hearing. Why two ears? Definitions
Binaural Hearing Why two ears? Locating sounds in space: acuity is poorer than in vision by up to two orders of magnitude, but extends in all directions. Role in alerting and orienting? Separating sound
More informationNeural correlates of the perception of sound source separation
Neural correlates of the perception of sound source separation Mitchell L. Day 1,2 * and Bertrand Delgutte 1,2,3 1 Department of Otology and Laryngology, Harvard Medical School, Boston, MA 02115, USA.
More information3-D SOUND IMAGE LOCALIZATION BY INTERAURAL DIFFERENCES AND THE MEDIAN PLANE HRTF. Masayuki Morimoto Motokuni Itoh Kazuhiro Iida
3-D SOUND IMAGE LOCALIZATION BY INTERAURAL DIFFERENCES AND THE MEDIAN PLANE HRTF Masayuki Morimoto Motokuni Itoh Kazuhiro Iida Kobe University Environmental Acoustics Laboratory Rokko, Nada, Kobe, 657-8501,
More informationSound localization psychophysics
Sound localization psychophysics Eric Young A good reference: B.C.J. Moore An Introduction to the Psychology of Hearing Chapter 7, Space Perception. Elsevier, Amsterdam, pp. 233-267 (2004). Sound localization:
More informationA NOVEL HEAD-RELATED TRANSFER FUNCTION MODEL BASED ON SPECTRAL AND INTERAURAL DIFFERENCE CUES
A NOVEL HEAD-RELATED TRANSFER FUNCTION MODEL BASED ON SPECTRAL AND INTERAURAL DIFFERENCE CUES Kazuhiro IIDA, Motokuni ITOH AV Core Technology Development Center, Matsushita Electric Industrial Co., Ltd.
More informationThe role of low frequency components in median plane localization
Acoust. Sci. & Tech. 24, 2 (23) PAPER The role of low components in median plane localization Masayuki Morimoto 1;, Motoki Yairi 1, Kazuhiro Iida 2 and Motokuni Itoh 1 1 Environmental Acoustics Laboratory,
More informationProcessing of auditory spatial cues in human cortex: An fmri study
Neuropsychologia 44 (2006) 454 461 Processing of auditory spatial cues in human cortex: An fmri study Ulrike Zimmer a,jörg Lewald b,c, Michael Erb d, Hans-Otto Karnath a, a Section Neuropsychology, Department
More informationLeft-hemisphere dominance for processing of vowels: a whole-scalp neuromagnetic study
Auditory and Vestibular Systems 10, 2987±2991 (1999) BRAIN activation of 11 healthy right-handed subjects was studied with magnetoencephalography to estimate individual hemispheric dominance for speech
More informationWe are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors
We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,800 116,000 120M Open access books available International authors and editors Downloads Our
Rhythm and Rate: Perception and Physiology HST November Jennifer Melcher
Rhythm and Rate: Perception and Physiology HST 722 - November 27 Jennifer Melcher Forward suppression of unit activity in auditory cortex Brosch and Schreiner (1997) J Neurophysiol 77: 923-943. Forward
Chapter 5. Summary and Conclusions
Chapter 5 Summary and Conclusions. Summary of the main findings: The present thesis investigated the sensory representation of natural sounds in the human auditory cortex. Specifically,
An fMRI Study of the Ventriloquism Effect
Cerebral Cortex, November 2015;25: 4248 4258 doi: 10.1093/cercor/bhu306 Advance Access Publication Date: 9 January 2015 Original Article ORIGINAL ARTICLE An fmri Study of the Ventriloquism Effect Akiko
Trading Directional Accuracy for Realism in a Virtual Auditory Display
Trading Directional Accuracy for Realism in a Virtual Auditory Display Barbara G. Shinn-Cunningham, I-Fan Lin, and Tim Streeter Hearing Research Center, Boston University 677 Beacon St., Boston, MA 02215
Wernicke's area is commonly considered to be the neural locus of speech perception
Supplementary Material 1. Where is Wernicke's area? Wernicke's area is commonly considered to be the neural locus of speech perception. It is named after the German neurologist Carl Wernicke, who first
Auditory System & Hearing
Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different
HST.723J, Spring 2005 Theme 3 Report
HST.723J, Spring 2005 Theme 3 Report Madhu Shashanka shashanka@cns.bu.edu Introduction The theme of this report is binaural interactions. Binaural interactions of sound stimuli enable humans (and other
Hearing in the Environment
Chapter 10 Hearing in the Environment: Sound Localization, Complex Sounds, Auditory Scene Analysis, Continuity and Restoration Effects, Auditory
Temporal integration of vowel periodicity in the auditory cortex
Temporal integration of vowel periodicity in the auditory cortex Santeri Yrttiaho a Department of Signal Processing and Acoustics, Aalto University School of Science and Technology, P.O. Box 13000, Aalto
Binaural Hearing. Steve Colburn Boston University
Binaural Hearing Steve Colburn Boston University Outline Why do we (and many other animals) have two ears? What are the major advantages? What is the observed behavior? How do we accomplish this physiologically?
AUDL GS08/GAV1 Signals, systems, acoustics and the ear. Pitch & Binaural listening
AUDL GS08/GAV1 Signals, systems, acoustics and the ear Pitch & Binaural listening Review [figure omitted] Part I: Auditory frequency selectivity Tuning
IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer
IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION Griffin D. Romigh, Brian D. Simpson, Nandini Iyer 711th Human Performance Wing Air Force Research Laboratory
Language Speech. Speech is the preferred modality for language.
Language Speech Speech is the preferred modality for language. Outer ear Collects sound waves. The configuration of the outer ear serves to amplify sound, particularly at 2000-5000 Hz, a frequency range
Hearing II Perceptual Aspects
Hearing II Perceptual Aspects Overview of Topics Chapter 6 in Chaudhuri Intensity & Loudness Frequency & Pitch Auditory Space Perception 1 2 Intensity & Loudness Loudness is the subjective perceptual quality
Effect of spectral content and learning on auditory distance perception
Effect of spectral content and learning on auditory distance perception Norbert Kopčo 1,2, Dávid Čeljuska 1, Miroslav Puszta 1, Michal Raček 1 and Martin Sarnovský 1 1 Department of Cybernetics and AI, Technical
Spatial hearing and sound localization mechanisms in the brain. Henri Pöntynen February 9, 2016
Spatial hearing and sound localization mechanisms in the brain Henri Pöntynen February 9, 2016 Outline Auditory periphery: from acoustics to neural signals - Basilar membrane - Organ of Corti Spatial
19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS
19th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS PACS: 43.66.Pn Seeber, Bernhard U. Auditory Perception Lab, Dept.
Systems Neuroscience Oct. 16, 2018. Auditory system
Systems Neuroscience Oct. 16, 2018 Auditory system http://www.ini.unizh.ch/~kiper/system_neurosci.html The physics of sound Measuring sound intensity We are sensitive to an enormous range of intensities,
Competing Streams at the Cocktail Party
Competing Streams at the Cocktail Party A Neural and Behavioral Study of Auditory Attention Jonathan Z. Simon Neuroscience and Cognitive Sciences / Biology / Electrical & Computer Engineering University
Neural System Model of Human Sound Localization
in Advances in Neural Information Processing Systems 13 S.A. Solla, T.K. Leen, K.-R. Müller (eds.), 761 767 MIT Press (2000) Neural System Model of Human Sound Localization Craig T. Jin Department of Physiology
21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source
Binaural Phenomena Aim To understand binaural hearing Objectives Understand the cues used to determine the location of a sound source Understand sensitivity to binaural spatial cues, including interaural
An Auditory System Modeling in Sound Source Localization
An Auditory System Modeling in Sound Source Localization Yul Young Park The University of Texas at Austin EE381K Multidimensional Signal Processing May 18, 2005 Abstract Sound localization of the auditory
USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES
USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332
Neural Representations of the Cocktail Party in Human Auditory Cortex
Neural Representations of the Cocktail Party in Human Auditory Cortex Jonathan Z. Simon Department of Biology Department of Electrical & Computer Engineering Institute for Systems Research University of
Binaural Hearing for Robots Introduction to Robot Hearing
Binaural Hearing for Robots Introduction to Robot Hearing Radu Horaud Binaural Hearing for Robots 1. Introduction to Robot Hearing 2. Methodological Foundations 3. Sound-Source Localization 4. Machine
Processing Interaural Cues in Sound Segregation by Young and Middle-Aged Brains DOI: 10.3766/jaaa.20.7.6
J Am Acad Audiol 20:453 458 (2009) Processing Interaural Cues in Sound Segregation by Young and Middle-Aged Brains DOI: 10.3766/jaaa.20.7.6 Ilse J.A. Wambacq * Janet Koehnke * Joan Besing * Laurie L. Romei
Over-representation of speech in older adults originates from early response in higher order auditory cortex
Over-representation of speech in older adults originates from early response in higher order auditory cortex Christian Brodbeck, Alessandro Presacco, Samira Anderson & Jonathan Z. Simon Overview 2 Puzzle
Auditory fMRI correlates of loudness perception for monaural and diotic stimulation
PROCEEDINGS of the 22 nd International Congress on Acoustics Psychological and Physiological Acoustics (others): Paper ICA2016-435 Auditory fmri correlates of loudness perception for monaural and diotic
Title of Thesis. Study on Audiovisual Integration in Young and Elderly Adults by Event-Related Potential
Title of Thesis Study on Audiovisual Integration in Young and Elderly Adults by Event-Related Potential 2014 September Yang Weiping The Graduate School of Natural Science and Technology (Doctor s Course)
Cortical Encoding of Auditory Objects at the Cocktail Party. Jonathan Z. Simon University of Maryland
Cortical Encoding of Auditory Objects at the Cocktail Party Jonathan Z. Simon University of Maryland ARO Presidential Symposium, February 2013 Introduction Auditory Objects Magnetoencephalography (MEG)
Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data
942 955 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data Jonas Braasch, Klaus Hartung Institut für Kommunikationsakustik, Ruhr-Universität
Auditory event-related responses are generated independently of ongoing brain activity
www.elsevier.com/locate/ynimg NeuroImage 24 (2005) 961-968 Auditory event-related responses are generated independently of ongoing brain activity Ville Mäkinen,* Hannu Tiitinen, and Patrick May Apperception
J Jeffress model, 3, 66ff
Index A Absolute pitch, 102 Afferent projections, inferior colliculus, 131 132 Amplitude modulation, coincidence detector, 152ff inferior colliculus, 152ff inhibition models, 156ff models, 152ff Anatomy,
MULTI-CHANNEL COMMUNICATION
INTRODUCTION Research on the Deaf Brain is beginning to provide a new evidence base for policy and practice in relation to intervention with deaf children. This talk outlines the multi-channel nature of
RAPID COMMUNICATION Scalp-Recorded Optical Signals Make Sound Processing in the Auditory Cortex Visible
NeuroImage 10, 620 624 (1999) Article ID nimg.1999.0495, available online at http://www.idealibrary.com on RAPID COMMUNICATION Scalp-Recorded Optical Signals Make Sound Processing in the Auditory Cortex
Sound Localization PSY 310 Greg Francis. Lecture 31. Audition
Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information
Cortical sound processing in children with autism disorder: an MEG investigation
AUDITORYAND VESTIBULAR SYSTEMS Cortical sound processing in children with autism disorder: an MEG investigation Nicole M. Gage CA, Bryna Siegel, 1 Melanie Callen 1 and Timothy P. L. Roberts Biomagnetic
The neurolinguistic toolbox Jonathan R. Brennan. Introduction to Neurolinguistics, LSA2017
The neurolinguistic toolbox Jonathan R. Brennan Introduction to Neurolinguistics, LSA2017 1 Psycholinguistics / Neurolinguistics Happy Hour!!! Tuesdays 7/11, 7/18, 7/25 5:30-6:30 PM @ the Boone Center
Neural Correlates of Complex Tone Processing and Hemispheric Asymmetry
International Journal of Undergraduate Research and Creative Activities Volume 5 Article 3 June 2013 Neural Correlates of Complex Tone Processing and Hemispheric Asymmetry Whitney R. Arthur Central Washington
EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS
Proceedings of the 14th International Conference on Auditory Display, Paris, France, June 24-27, 2008 EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE
Robust Neural Encoding of Speech in Human Auditory Cortex
Robust Neural Encoding of Speech in Human Auditory Cortex Nai Ding, Jonathan Z. Simon Electrical Engineering / Biology University of Maryland, College Park Auditory Processing in Natural Scenes How is
A Microphone-Array-Based System for Restoring Sound Localization with Occluded Ears
Restoring Sound Localization with Occluded Ears Adelbert W. Bronkhorst TNO Human Factors P.O. Box 23, 3769 ZG Soesterberg The Netherlands adelbert.bronkhorst@tno.nl Jan A. Verhave TNO Human Factors P.O.
The basic mechanisms of directional sound localization are well documented. In the horizontal plane, interaural difference INTRODUCTION I.
Auditory localization of nearby sources. II. Localization of a broadband source Douglas S. Brungart, a) Nathaniel I. Durlach, and William M. Rabinowitz b) Research Laboratory of Electronics, Massachusetts
Sensitivity to Auditory Object Features in Human Temporal Neocortex
The Journal of Neuroscience, April 7, 2004 24(14):3637 3642 3637 Behavioral/Systems/Cognitive Sensitivity to Auditory Object Features in Human Temporal Neocortex Robert J. Zatorre, 1 Marc Bouffard, 1 and
J. Acoust. Soc. Am. 114 (2), August 2003. Acoustical Society of America
Auditory spatial resolution in horizontal, vertical, and diagonal planes a) D. Wesley Grantham, b) Benjamin W. Y. Hornsby, and Eric A. Erpenbeck Vanderbilt Bill Wilkerson Center for Otolaryngology and
Adaptation of neuromagnetic N1 without shifts in dipolar orientation
https://helda.helsinki.fi Adaptation of neuromagnetic N1 without shifts in dipolar orientation Campbell, Tom 2007 Campbell, T & Neuvonen, T 2007, ' Adaptation of neuromagnetic N1 without shifts in dipolar
Neural Correlates of Auditory Perceptual Awareness under Informational Masking
Neural Correlates of Auditory Perceptual Awareness under Informational Masking Alexander Gutschalk 1*, Christophe Micheyl 2, Andrew J. Oxenham 2 PLoS BIOLOGY 1 Department of Neurology, Ruprecht-Karls-Universität
Outline: Neural representation of speech sounds
Outline: Neural representation of speech sounds: basic intro; sounds and categories; how do we perceive sounds?; are speech sounds special? What is a phoneme? It's the basic linguistic unit of speech.
Neuroimaging Correlates of Human Auditory Behavior
Harvard-MIT Division of Health Sciences and Technology HST.722J: Brain Mechanisms for Hearing and Speech Course Instructor: Jennifer R. Melcher HST 722 Brain Mechanisms for Hearing and Speech October 27,
Cerebral Responses to Change in Spatial Location of Unattended Sounds
Article Cerebral Responses to Change in Spatial Location of Unattended Sounds Leon Y. Deouell, 1,2, * Aaron S. Heller, 2 Rafael Malach, 3 Mark D Esposito, 2 and Robert T. Knight 2 1 Department of Psychology
Report. Sound Categories Are Represented as Distributed Patterns in the Human Auditory Cortex
Current Biology 19, 498-502, March 24, 2009 ©2009 Elsevier Ltd All rights reserved DOI 10.1016/j.cub.2009.01.066 Sound Categories Are Represented as Distributed Patterns in the Human Auditory Cortex Report
Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms
956 969 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms Jonas Braasch Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany
Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus.
Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus. R.Y. Litovsky 1,3, C. C. Lane 1,2, C. Atencio 1 and B. Delgutte 1,2 1 Massachusetts Eye and
Human cortical functions in auditory change detection evaluated with multiple brain research methods
Human cortical functions in auditory change detection evaluated with multiple brain research methods Teemu Rinne Cognitive Brain Research Unit Department of Psychology University of Helsinki Finland Academic
Event-related brain activity associated with auditory pattern processing
Cognitive Neuroscience. Website publication November, NeuroReport. ONE of the basic properties of the auditory system is the ability to analyse complex temporal patterns. Here, we investigated
Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study
Cerebral Cortex March 2006;16:415-424 doi:10.1093/cercor/bhi121 Advance Access publication June 15, 2005 Independence of Visual Awareness from the Scope of Attention: an Electrophysiological Study Mika
Introduction to Computational Neuroscience
Introduction to Computational Neuroscience Lecture 10: Brain-Computer Interfaces Ilya Kuzovkin So Far Stimulus So Far So Far Stimulus What are the neuroimaging techniques you know about? Stimulus So Far
Electrophysiological Substrates of Auditory Temporal Assimilation Between Two Neighboring Time Intervals
Electrophysiological Substrates of Auditory Temporal Assimilation Between Two Neighboring Time Intervals Takako Mitsudo *1, Yoshitaka Nakajima 2, Gerard B. Remijn 3, Hiroshige Takeichi 4, Yoshinobu Goto
Processing of spatial sounds in human auditory cortex during visual, discrimination and 2-back tasks
https://helda.helsinki.fi Processing of spatial sounds in human auditory cortex during visual, discrimination and 2-back tasks Rinne, Teemu 2014-07-28 Rinne, T, Ala-Salomaki, H, Stecker, G C, Pätynen,
Accepted to MICCAI Sources of Variability in MEG
Accepted to MICCAI 2007 Sources of Variability in MEG Wanmei Ou, Polina Golland, and Matti Hämäläinen 2 Department of Computer Science and Artificial Intelligence Laboratory, MIT, USA 2 Athinoula A. Martinos
Sound localization under conditions of covered ears on the horizontal plane
Acoust. Sci. & Tech. 28, 5 (2007) TECHNICAL REPORT ©2007 The Acoustical Society of Japan Sound localization under conditions of covered ears on the horizontal plane Madoka Takimoto 1, Takanori Nishino 2, Katunobu
Discrimination and identification of azimuth using spectral shape a)
Discrimination and identification of azimuth using spectral shape a) Daniel E. Shub b Speech and Hearing Bioscience and Technology Program, Division of Health Sciences and Technology, Massachusetts Institute
HCS 7367 Speech Perception
HCS 7367 Speech Perception, Dr. Peter Assmann, Fall 2012. [slides: long-term spectrum of speech (males, females); connected speech; absolute threshold; vowels]
Early posterior ERP components do not reflect the control of attentional shifts toward expected peripheral events
Psychophysiology, 40 (2003), 827-831. Blackwell Publishing Inc. Printed in the USA. Copyright © 2003 Society for Psychophysiological Research BRIEF REPORT Early posterior ERP components do not reflect the
What Is the Difference between dB HL and dB SPL?
1 Psychoacoustics What Is the Difference between dB HL and dB SPL? The decibel (dB) is a logarithmic unit of measurement used to express the magnitude of a sound relative to some reference level. Decibels
Lecture 7 Hearing 2. Raghav Rajan Bio 354 Neurobiology 2 February 04th All lecture material from the following links unless otherwise mentioned:
Lecture 7 Hearing 2 All lecture material from the following links unless otherwise mentioned: 1. http://wws.weizmann.ac.il/neurobiology/labs/ulanovsky/sites/neurobiology.labs.ulanovsky/files/uploads/purves_ch12_ch13_hearing
Neural Recording Methods
Neural Recording Methods Types of neural recording 1. evoked potentials 2. extracellular, one neuron at a time 3. extracellular, many neurons at a time 4. intracellular (sharp or patch), one neuron at
Est-ce que l'EEG a toujours sa place en 2019? (Does EEG still play a role in 2019?)
Est-ce que l'EEG a toujours sa place en 2019? Thomas Bast, Epilepsy Center Kork, Germany. Does EEG still play a role in 2019? What a question! 7T-MRI, fMRI, DTI, MEG, SISCOM, iEEG/HFO, Genetics... Of course!
Human posterior auditory cortex gates novel sounds to consciousness
Human posterior auditory cortex gates novel sounds to consciousness Iiro P. Jääskeläinen*, Jyrki Ahveninen* **, Giorgio Bonmassar*, Anders M. Dale*, Risto J. Ilmoniemi,, Sari Levänen*, Fa-Hsuan Lin*, Patrick
Lecture 3: Perception
ELEN E4896 MUSIC SIGNAL PROCESSING Lecture 3: Perception 1. Ear Physiology 2. Auditory Psychophysics 3. Pitch Perception 4. Music Perception Dan Ellis Dept. Electrical Engineering, Columbia University
Sound recognition and localization in man: specialized cortical networks and effects of acute circumscribed lesions
Exp Brain Res (2003) 153: 591 604 DOI 10.1007/s00221-003-1616-0 RESEARCH ARTICLE Michela Adriani. Philippe Maeder. Reto Meuli. Anne Bellmann Thiran. Rolf Frischknecht. Jean-Guy Villemure. James Mayer.
HEARING AND PSYCHOACOUSTICS
CHAPTER 2 HEARING AND PSYCHOACOUSTICS WITH LIDIA LEE I would like to lead off the specific audio discussions with a description of the audio receptor the ear. I believe it is always a good idea to understand
Neuroimaging methods vs. lesion studies FOCUSING ON LANGUAGE
Neuroimaging methods vs. lesion studies FOCUSING ON LANGUAGE Pioneers in lesion studies Their postmortem examination provided the basis for the linkage of the left hemisphere with language C. Wernicke
Auditory scene analysis in humans: Implications for computational implementations.
Auditory scene analysis in humans: Implications for computational implementations. Albert S. Bregman McGill University Introduction. The scene analysis problem. Two dimensions of grouping. Recognition
Neural Correlates of Human Cognitive Function:
Neural Correlates of Human Cognitive Function: A Comparison of Electrophysiological and Other Neuroimaging Approaches Leun J. Otten Institute of Cognitive Neuroscience & Department of Psychology University
Perceptual Plasticity in Spatial Auditory Displays
Perceptual Plasticity in Spatial Auditory Displays BARBARA G. SHINN-CUNNINGHAM, TIMOTHY STREETER, and JEAN-FRANÇOIS GYSS Hearing Research Center, Boston University Often, virtual acoustic environments
EEG changes accompanying learned regulation of 12-Hz EEG activity
TNSRE-2002-BCI015 1 EEG changes accompanying learned regulation of 12-Hz EEG activity Arnaud Delorme and Scott Makeig Abstract We analyzed 15 sessions of 64-channel EEG data recorded from a highly trained
EEG Analysis on Brain.fm (Focus)
EEG Analysis on Brain.fm (Focus) Introduction 17 subjects were tested to measure effects of a Brain.fm focus session on cognition. With 4 additional subjects, we recorded EEG data during baseline and while
Neural Representations of the Cocktail Party in Human Auditory Cortex
Neural Representations of the Cocktail Party in Human Auditory Cortex Jonathan Z. Simon Department of Electrical & Computer Engineering Department of Biology Institute for Systems Research University of
Effect of intensity increment on P300 amplitude
University of South Florida Scholar Commons Graduate Theses and Dissertations Graduate School 2004 Effect of intensity increment on P300 amplitude Tim Skinner University of South Florida Follow this and
Cortical codes for sound localization
TUTORIAL Cortical codes for sound localization Shigeto Furukawa and John C. Middlebrooks Kresge Hearing Research Institute, University of Michigan, 1301 E. Ann St., Ann Arbor, MI 48109-0506 USA e-mail:
Beyond Blind Averaging: Analyzing Event-Related Brain Dynamics. Scott Makeig. sccn.ucsd.edu
Beyond Blind Averaging: Analyzing Event-Related Brain Dynamics Scott Makeig Institute for Neural Computation University of California San Diego La Jolla CA sccn.ucsd.edu Talk given at the EEG/MEG course
Brain Imaging of auditory function
Who am I? Brain Imaging of auditory function Maria Chait I do (mostly) MEG functional brain imaging of auditory processing. Now, I am at the EAR Institute, University College London & Equipe Audition,