Asymmetrical representation of auditory space in human cortex
Research Report

Nelli H. Salminen a,c, Hannu Tiitinen a,c, Ismo Miettinen a,c, Paavo Alku b, Patrick J.C. May a,c

a Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, Finland
b Department of Signal Processing and Acoustics, Helsinki University of Technology, Finland
c BioMag Laboratory, Hospital District of Helsinki and Uusimaa HUSLAB, Helsinki University Central Hospital, Finland

ARTICLE INFO

Article history: Accepted 24 September 2009. Available online 30 September 2009.
Keywords: Sound source localization; Human; MEG; N1m; Stimulus-specific adaptation

ABSTRACT

Recent single-neuron recordings in monkeys and magnetoencephalography (MEG) data on humans suggest that auditory space is represented in cortex as a population rate code whereby spatial receptive fields are wide and centered at locations to the far left or right of the subject. To explore the details of this code in the human brain, we conducted an MEG study utilizing realistic spatial sound stimuli presented in a stimulus-specific adaptation paradigm. In this paradigm, the spatial selectivity of cortical neurons is measured as the effect the location of a preceding adaptor has on the response to a subsequent probe sound. Two types of stimuli were used: a wideband noise sound and a speech sound. The cortical hemispheres differed in the effects the adaptors had on the response to a probe sound presented in front of the subject. The right-hemispheric responses were attenuated more by an adaptor to the left than by an adaptor to the right of the subject. In contrast, the left-hemispheric responses were similarly affected by adaptors in these two locations.
When interpreted in terms of single-neuron spatial receptive fields, these results support a population rate code model where neurons in the right hemisphere are more often tuned to the left than to the right of the perceiver, while in the left hemisphere these two neuronal populations are of equal size.

© 2009 Elsevier B.V. All rights reserved.

Corresponding author: N.H. Salminen, Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, P.O. Box 2200, FI TKK, Finland. E-mail: nelli.salminen@tkk.fi

1. Introduction

Auditory localization has a unique role in orienting in the environment. While vision is limited to locations in front of the perceiver, audition allows the detection of objects in all directions, even when they are obscured by visual obstacles. Consequently, the auditory system provides crucial spatial information for directing other senses towards interesting objects and events. Recent studies suggest that sound source location in the horizontal plane is represented in the cortex by a population rate code. According to this model, auditory spatial receptive fields are centered either to the left or right of the subject and span the whole hemifield. This neural code has been studied in detail in the monkey with single-neuron recordings (Woods et al., 2006; Werner-Reiss and Groh, 2008). Human psychophysical (Boehnke and Phillips, 1999) and neuroimaging (Salminen et al., in press) results are also consistent with the population rate code, but very little is
BRAIN RESEARCH 1306 (2010)

still known about the details of its neuronal implementation. Here, we conducted magnetoencephalography (MEG) measurements to study the differences between the representations of auditory space in the two cortical hemispheres.

Single-unit recordings in the monkey show that the implementation of the population rate code is asymmetrical in the two cortical hemispheres (Benson et al., 1981; Ahissar et al., 1992; Woods et al., 2006). Neurons are more often tuned to the hemifield contralateral to the measurement site than to the ipsilateral side. Indications of contralateral preference have also been found in the human cortex. Stronger activity is often found in the hemisphere contralateral to the sound source location (Ungan et al., 2001; Palomäki et al., 2005; Krumbholz et al., 2005, 2007), and this could potentially reflect a larger number of neurons being tuned to the contralateral than to the ipsilateral hemifield. The right and the left hemisphere seem to differ in the magnitude of the contralateral preference, although this issue is not entirely clear: some studies show a stronger contralateral preference in the right than in the left hemisphere (Palomäki et al., 2002, 2005; Tiitinen et al., 2006) while others suggest that the asymmetry is larger in the left hemisphere (Ungan et al., 2001; Krumbholz et al., 2005, 2007). Thus, it remains unresolved what the relative sizes of the contralaterally and ipsilaterally tuned populations are in the human cortex.

Psychophysical studies have revealed several sources of location information embedded in sound (Middlebrooks and Green, 1991). At low sound frequencies, the most prominent cue is the interaural time difference (ITD), which results from the sound reaching one ear before the other and is present both at the onset and throughout the duration of the sound. At higher frequencies, the head of the listener casts an acoustic shadow and thus creates an interaural level difference (ILD).
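As a concrete illustration of the ITD cue (our sketch, not part of the original article), the classic Woodworth spherical-head approximation predicts the ITD from source azimuth; the head radius below is an assumed textbook value, not a measured one.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural
    time difference: ITD = (a / c) * (theta + sin(theta)),
    where a is the head radius and c the speed of sound in air."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# The ITD grows from zero straight ahead to several hundred
# microseconds at lateral azimuths such as the 45 deg used here.
for az in (0, 45, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:6.1f} us")
```

Under these assumed parameters, a source at 45° yields an ITD of roughly 380 µs, well within the range binaural neurons encode.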
Also, at higher frequencies, the pinnae, the head, and the body of the listener alter the frequency spectrum of the sound in a direction-specific manner. These cues do not occur as independent entities in the sound waveform. Instead, they are imposed on sound signals that already have a complex spectrotemporal structure and, consequently, the availability and usefulness of the different localization cues vary from one sound source to another. For instance, while a wide-band noise sound includes all of the localization cues, a speech sound has less energy at high frequencies and thus carries weaker spectral cues and a smaller ILD. How this is reflected in the cortical representations of sound source location is not known.

The N1m response, occurring in the event-related field at around 100 ms after sound onset, and its electrical counterpart N1 have proven to be useful measures of spatial sensitivity in the human auditory cortex. The N1m is generated in multiple secondary (belt and parabelt) areas of auditory cortex (for a review, see May and Tiitinen, in press) and it varies in amplitude according to sound source location: it is maximal for sources contralateral to the hemisphere from which it is measured and minimal for ipsilateral sources (Palomäki et al., 2005). The N1 and N1m responses, when obtained in a stimulus-specific adaptation paradigm, can provide information on the spatial selectivity of auditory cortical neurons (Butler, 1972; Salminen et al., in press). In this paradigm, sounds are presented in adaptor-probe pairs and the effect of the preceding adaptor on the response to the following probe is measured (Fig. 1). When the adaptor is presented at the same location as the probe, the two sounds activate the same population of spatially selective neurons and, consequently, the attenuation of the amplitude of the N1m response is maximal.
However, when the adaptor is presented from a different location than the probe, the neurons activated by the probe but not by the adaptor are presumably not influenced by the preceding adaptor presentation and thus respond strongly to the probe. This selectivity, then, leads to an increase in the amplitude of the N1m response.

The aim of the present study was to compare the spatial selectivity between the cortical hemispheres and to estimate the relative sizes of the contralaterally and ipsilaterally tuned neuronal populations in each hemisphere. To this end, we presented spatial sound stimuli, individually prepared for each subject, in a stimulus-specific adaptation paradigm and measured the attenuation of the N1m response. To facilitate comparisons between the hemispheres, the sound sources were situated symmetrically with respect to the midline. The probe sound was always directly in front of the subject and adaptors were presented in the left and right hemifields.

Fig. 1. Illustration of the stimulus-specific adaptation paradigm. (A) Two sounds, an adaptor and a probe, are sequentially presented and the brain responses to the probe are measured. When the adaptor and the probe are presented from the same location directly in front of the subject, adaptation is maximal and responses are small. (B) The adaptor is then presented from a location 45° to the right of the probe. Assuming that the neuronal population giving rise to the response is selective to sound source location, this leads to a less attenuated response to the probe. (C) The largest responses occur when no adaptor sound is presented. (D) The difference between the response amplitudes measured in these conditions can be used as a measure of neuronal selectivity to sound source location.

The
experiment was designed to measure the relative numbers of contralaterally and ipsilaterally tuned neurons. Assuming that the two populations are of equal size, the adaptors in the left and right should be equally effective in attenuating the response to the probe. However, if one of the populations is larger, asymmetrical attenuation of the N1m should occur. We also aimed to study how the neuronal representation of sound source location is formed for different types of sounds. Therefore, spatial selectivity was tested with both a wideband noise sound and a vowel sound.

2. Results

The N1m responses were measured from the left and right hemisphere to a probe sound (either a noise or a vowel stimulus) presented directly in front of the subject, in the context of adaptors at three different locations or without an intervening adaptor (Fig. 2). In both hemispheres, the vowel sound led to responses with approximately twice the amplitude of those elicited by the noise sound (main effect of stimulus type: F[1,10]=53.3, p<0.001). In the right hemisphere, the spatial sound stimuli elicited N1m responses with average amplitudes of 30.1 fT/cm and 57.1 fT/cm for the noise and the vowel sound, respectively. For the left-hemispheric N1m, these amplitudes were 26.3 fT/cm and 55.5 fT/cm. Additionally, the N1m peak amplitudes were slightly larger in the right than in the left hemisphere, although this difference did not reach statistical significance (main effect of hemisphere: F[1,10]=0.15, p=n.s.). The amplitude of the N1m depended on the adaptor condition (main effect of adaptor: F[3,30]=51.3, p<0.001). In both hemispheres and for both stimulus types, the smallest responses to the probe were measured when the adaptor was at the same location as the probe, and the largest responses occurred when no adaptors were presented (Fig. 2). The amplitude of the N1m response measured in the other conditions fell between these two extremes.
The effect of the adaptor on the N1m response amplitude depended on the hemisphere and the stimulus type (three-way interaction: F[3,30]=2.9, p=0.05). The response amplitudes in the no-adaptor condition were larger for the speech stimulus in the left than in the right hemisphere (83.0 and 76.1 fT/cm for the left and right hemisphere, respectively, p<0.01). For the noise sound, no such asymmetry was found (41.0 and 43.5 fT/cm, p=n.s.). Thus, the overall dynamic range of the N1m response amplitude varied across hemispheres and stimulus types. To analyze the location-specific adaptation of the N1m response independently of variations in the overall level of activity, the reduction in attenuation caused by the adaptors being to the left and to the right instead of directly in front was expressed relative to the overall dynamic range of the N1m responses to the same stimulus type in the same hemisphere (see Fig. 1). All further analyses were performed on these relative measures of the location-specificity of the response. When the N1m amplitude was analyzed as a measure relative to the dynamic range of the response, a significant interaction between the hemisphere and the adaptor location was found (Fig. 3; F[1,10]=11.2, p<0.01). In the right hemisphere,

Fig. 2. Sound source locations and averaged event-related fields from a representative measurement channel. (A) Stimulus-specific adaptation of the N1m response was measured in four conditions. The probe sound, presented always directly in front (0°), was coupled with adaptors at the same location (0°), to the left (−45°), or to the right (+45°) of the subject, or no adaptor was presented. (B) Event-related fields to the probe presented in the four adaptor conditions and for the two stimulus types (noise and vowel) were measured and averaged over eleven subjects.
In both hemispheres and for both stimulus types, the smallest response amplitudes were measured when the adaptor was at the same location as the probe and the largest when no adaptor was presented. With the adaptor to the left or right of the subject, the response amplitudes fell between these two extremes. Thus, location-specific adaptation was found in both hemispheres and for both noise and vowel stimuli.

Fig. 3. The N1m amplitude expressed relative to the overall dynamic range. The error bars depict the standard error of the mean. In the left hemisphere, the adaptors to the left and to the right of the subject led to similar increases in the N1m amplitude. However, in the right hemisphere, the increase was much larger when the adaptor was to the right than when it was to the left.
the adaptor in the left hemifield caused a larger attenuation of the N1m response than did the adaptor in the right hemifield (p<0.01). This occurred for both noise and vowel stimuli, but the effect was larger for the noise stimulus. Thus, in the right hemisphere, the adaptor in the contralateral hemifield caused stronger attenuation of the N1m response than did the adaptor in the ipsilateral hemifield. For the left-hemispheric N1m responses, the effect the adaptor had on the response to the probe did not depend on whether the adaptor was to the left or to the right (p=n.s.). The differences found between the left and right adaptor conditions were small and inconsistent across the stimulus types.

The N1m response peaked, on average, at a latency of 104 ms from stimulus onset. No effect of hemisphere (F[1,10]=0.10, p=n.s.) or stimulus type (F[1,10]=1.62, p=n.s.) was found. The peak latency, however, depended on the presence of the adaptor stimulus (F[1,10]=6.18, p<0.01). In the no-adaptor condition the N1m peaked at 100 ms, while in the other conditions the peak latency was 105 ms (p<0.05).

3. Discussion

We conducted an MEG experiment utilizing realistic, individually prepared spatial sound stimuli in a stimulus-specific adaptation paradigm to study the cortical encoding of auditory space. Our aim was, first, to compare the spatial selectivity of the two cortical hemispheres and, second, to elucidate whether the spatial selectivity to speech sounds differs from that to noise sounds. We measured N1m responses elicited by a probe sound presented directly in front of the subject in the context of adaptors either to the left or to the right. A notable difference between the hemispheres was found in the effect the adaptor location had on the response to the probe. In the right hemisphere, the attenuation of the N1m response was much stronger when the adaptor was to the left than when it was to the right.
In the left hemisphere, no such asymmetry was found. In general, the response amplitudes were larger for the vowels than for the noise sounds, with the left hemisphere generating especially prominent responses to the vowels. However, the right-hemispheric measures of spatial selectivity obtained with noise stimulation were clearly more prominent than those gained with the vowel stimuli. Thus, the right hemisphere appears to be especially selective to spatial information, and this selectivity is greater for noise sounds than for speech sounds.

The N1m response elicited by the probe in front probably reflects the combined activity of left- and right-tuned populations similar to those found in monkey auditory cortex (Benson et al., 1981; Ahissar et al., 1992; Woods et al., 2006; Werner-Reiss and Groh, 2008). When the adaptor is presented in the left or in the right hemifield, it attenuates mainly the activity of the population tuned to that hemifield, while the population tuned to the opposite hemifield remains largely unaffected (Fig. 4). If the two populations are of equal size, the N1m amplitude is affected similarly by the adaptors in the left and right hemifields (Fig. 4B, top). However, when more neurons are tuned to locations in the left than in the right hemifield, the adaptor to the left attenuates a larger number of neurons than the adaptor to the right (Fig. 4B, bottom). This would be reflected as small responses when the adaptor is in the left and large ones when the adaptor is in the right hemifield. This pattern was, in fact, realized by the right-hemispheric N1m responses measured here. Therefore, the present results suggest that, in the right hemisphere, more neurons are tuned to the left than to the right hemifield. In the left hemisphere, however, there seems to be little difference between the sizes of these two populations.

Fig. 4. Interpretation of the results based on single-neuron spatial receptive fields.
(A) The majority of auditory spatial receptive fields found in the cortex are wide and centered at locations either to the left or to the right of the subject. The top figure represents a case where the same number of neurons is tuned to each direction. Often, however, the majority of the neurons are tuned to contralateral locations. The bottom figure represents a hypothetical right hemisphere where neurons tuned to the left outnumber those tuned to the right. (B) The N1m response reflects the compound activity of the left- and right-tuned populations. When the two populations are of equal size, they contribute similarly to the N1m response elicited by a probe sound in front (0°). Thus, the effects of the adaptors to the left (−45°) and right (+45°) of the subject are similar (top). This corresponds to the left-hemispheric results obtained in this study. However, when more neurons are tuned to the left than to the right of the subject, the left-tuned neurons contribute more to the N1m response than the right-tuned neurons. Consequently, the attenuation caused by the adaptor to the left is stronger than that caused by the adaptor to the right (bottom). This is consistent with our right-hemispheric data.
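The population rate code sketched in Fig. 4 can be made concrete with a toy simulation (ours, not the authors'): two subpopulations with broad, hemifield-wide sigmoidal tuning respond to a probe at 0° after an adaptor has suppressed each subpopulation in proportion to how strongly it was driven. The tuning slope, population sizes, and adaptation factor below are all arbitrary illustrative choices.

```python
import math

def hemifield_tuning(azimuth_deg, preferred_side):
    """Broad sigmoidal receptive field spanning one hemifield;
    preferred_side is -1 (left-tuned) or +1 (right-tuned).
    The 20-deg slope constant is an arbitrary choice."""
    return 1.0 / (1.0 + math.exp(-preferred_side * azimuth_deg / 20.0))

def probe_response(n_left, n_right, adaptor_deg, adaptation=0.8):
    """Compound response to a probe at 0 deg after an adaptor:
    each subpopulation is suppressed in proportion to how strongly
    the adaptor's location drove it."""
    response = 0.0
    for n, side in ((n_left, -1), (n_right, +1)):
        gain = 1.0 - adaptation * hemifield_tuning(adaptor_deg, side)
        response += n * gain * hemifield_tuning(0.0, side)
    return response

# Equal populations (the "left hemisphere" case): left and right
# adaptors attenuate the probe response equally.
print(probe_response(100, 100, -45), probe_response(100, 100, +45))

# Left-tuned majority (the hypothetical "right hemisphere"): the
# adaptor on the left suppresses more of the probe response.
print(probe_response(150, 50, -45), probe_response(150, 50, +45))
```

With equal populations, the left and right adaptors leave identical probe responses; a left-tuned majority yields a smaller response after a left adaptor, mirroring the right-hemispheric pattern reported above.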
The above interpretation in terms of single-neuron spatial receptive fields is consistent with earlier MEG observations (Palomäki et al., 2002, 2005; Tiitinen et al., 2006). Previous studies have shown that the auditory cortices respond more strongly to sounds originating from contralateral directions than to those located ipsilaterally. For instance, when sounds are presented from various horizontal locations, the N1m response of each hemisphere has the largest amplitude for contralateral and the smallest amplitude for ipsilateral sound sources. Considering that the N1m represents the compound activity of the neurons of auditory cortex (May and Tiitinen, in press), this amplitude variation would be consistent with the activity of contralaterally tuned neurons, which therefore probably outnumber ipsilaterally tuned neurons. Further, the variation of the N1m response found in previous studies is much weaker in the left than in the right hemisphere (Palomäki et al., 2005). This could be due to the contralaterally and ipsilaterally tuned populations being of nearly equal size in the left hemisphere. In this case, the location-dependent variations of the two populations cancel each other out, leading to less variation in the amplitude of the N1m response. Alternatively, the weaker variation of the N1m response with sound source location could be explained by a smaller number of spatially selective neurons in the left than in the right hemisphere. The current results, however, clearly demonstrate that the left-hemispheric responses are selective to sound source location. This shows that even though the right hemisphere is specialized in spatial processing (Griffiths et al., 1998; Baumgart et al., 1999; Zatorre and Penhune, 2001; Zatorre et al., 2002; Warren and Griffiths, 2003; Deouell et al., 2007), both cortical hemispheres represent auditory space with location-selective neurons.
While the current results suggest a stronger contralateral preference in the right hemisphere, a number of previous studies have demonstrated a similar preference in the two hemispheres or even a larger asymmetry in the left hemisphere (Woldorff et al., 1999; Ungan et al., 2001; Jäncke et al., 2002; Krumbholz et al., 2005, 2007; Schönwiesner et al., 2007; Getzmann, 2009). This discrepancy may arise from differences in the sound stimulation. First, a larger contralateral preference in the left than in the right hemisphere has been found in studies utilizing the ILD or ITD cue alone while, in the present study, realistic spatial sounds were used. This, however, is unlikely to fully account for the diverging results, as the stimuli in the present study also contained prominent ITD and ILD cues. Second, in many of the studies showing a strong left-hemispheric contralateral preference (Ungan et al., 2001; Krumbholz et al., 2005, 2007; Getzmann, 2009), the sound stimuli contained changes in ITD or ILD in an ongoing sound. In contrast, the spatial cues of the current study were static. Thus, the discrepancy between the past and present results could reflect differences between sensitivity to stationary and to moving sound sources. In the mammalian subcortical structures, the spatial receptive fields are strongly influenced by sound source movement (Spitzer and Semple, 1991; McAlpine et al., 2000), and this could be the case for the human auditory cortex also. Finally, in studies where a left-hemispheric contralateral preference has been found with stationary stimuli, the sounds were presented monaurally (Pantev et al., 1986; Woldorff et al., 1999; Jäncke et al., 2002; Schönwiesner et al., 2007). These findings are probably related to the distribution of monaural neurons activated exclusively by left- or right-ear stimulation, those preferring the contralateral ear being in the majority in each cortical hemisphere.
Our findings, in contrast, were obtained with binaural stimulation and are therefore likely to reflect the activity of binaural neurons sensitive to the interaural difference cues. The distribution of these neurons seems to be more balanced in the left than in the right hemisphere.

There were several differences between the N1m responses to the vowel and the noise sounds. Most conspicuously, the responses measured for the vowel sounds were of larger amplitude, and this difference was especially clear in the left hemisphere. This may reflect a left-hemispheric specialization for speech processing which is apparent already at the level of isolated vowel sounds. Then again, the location-dependent effects (observed in the right hemisphere) were more prominent for the noise sounds than for the vowel sounds. This difference could be explained by the spatial cues that the sounds carry. The wide-band noise stimulus contains all the localization cues, while in the case of the vowel sound the ILD and spectral cues are weaker and less useful (Middlebrooks and Green, 1991). The location-specific adaptation measured with speech and noise stimuli nonetheless followed the same pattern of variation. This suggests that a similar neuronal representation of sound location was arrived at independently of the bandwidth of the sound stimulus. However, it is also possible that top-down effects contributed to the stronger lateralization of location processing for the noise sounds than for the speech sounds. Such top-down effects in the processing of location information are indicated by the results of Schönwiesner et al. (2007), who found that hemispheric lateralization of responses to monaural sounds is not stimulus-specific but, rather, can be modulated in a top-down manner by the context that the stimuli are presented in.
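The claim that a vowel carries weaker high-frequency cues than wide-band noise can be illustrated numerically. The sketch below is ours, with an arbitrary sampling rate, cutoff, and a crude harmonic vowel model (real vowels also have formants); it compares the fraction of signal energy above 3 kHz, the region where ILD and pinna cues are strongest.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                       # sampling rate (Hz), assumed
t = np.arange(0, 0.2, 1 / fs)    # 200-ms signals, as in the experiment

# Wide-band noise: roughly flat spectrum up to the Nyquist frequency.
noise = rng.standard_normal(t.size)

# Crude vowel-like signal: harmonics of a 100-Hz fundamental whose
# amplitudes roll off with harmonic number.
vowel = sum(np.sin(2 * np.pi * 100 * k * t) / k for k in range(1, 40))

def high_freq_fraction(x, fs, cutoff=3000.0):
    """Fraction of signal energy above `cutoff` Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spectrum[freqs >= cutoff].sum() / spectrum.sum()

print(high_freq_fraction(noise, fs))  # large: most energy above 3 kHz
print(high_freq_fraction(vowel, fs))  # near zero
```

Under this toy model most of the noise energy, but almost none of the vowel energy, lies above 3 kHz, consistent with the weaker ILD and spectral cues of speech.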
Systematic studies on the contributions of different localization cues, in various contexts, to the cortical representation of auditory space will be needed to understand how the neural code builds on these diverse sources of acoustical information.

4. Experimental procedures

4.1. Subjects

Fourteen right-handed subjects (mean age 27 years, s.d. 6 years, 7 females) took part in the experiment. The measurements were performed with the written informed consent of the subjects and under the approval of the Ethical Committee of Helsinki University Central Hospital. During the experiment, the subjects were instructed to focus on a self-selected silent film and to ignore the auditory stimulation. The data of three subjects were discarded due to a poor signal-to-noise ratio.

4.2. Spatial sound stimuli and presentation

Binaural recordings were performed in a highly controlled manner to create individual spatial sound stimuli for each subject. The subject was seated in the center of a standardized
listening room and was surrounded by a constellation of loudspeakers. Each loudspeaker was at a distance of 1.3 m from the subject's head and at a height of 1.2 m (Fig. 2). One loudspeaker was directly in front of the subject (0°) and the second and third speakers were 45° to the left and to the right, respectively. The vertical position of the subject was adjusted so that his/her ear canals were always at the same distance of 1.2 m from the floor. White noise and vowel signals of 200-ms duration were presented sequentially from each loudspeaker and recorded with miniature microphones placed at the entrances of the ear canals of the subject. For the vowel sound, a Finnish /a/ spoken by a male speaker was used. During the MEG measurements, the stimuli were played back to the subject through a custom-made tube-phone system with a flat amplitude response up to 10 kHz. The sound level was adjusted to 75 dB SPL (A) for the noise and vowel stimuli at 0°. The level of the stimuli corresponding to sound locations at −45° and +45° was scaled accordingly. In the stimulus-specific adaptation paradigm, sounds were presented in alternating pairs of an adaptor and a probe stimulus with an interstimulus interval of 1 s. Thus, in all stimulus blocks, a probe sound originating from directly in front of the subject (0°) occurred at 2-s intervals. In each stimulus block, the adaptor sound was presented either from the same location as the probe (0°) or from the left (−45°) or right (+45°) direction. Additionally, a control condition with no intervening adaptors was included. Thus, there was a total of four adaptor conditions, resulting in altogether eight stimulus blocks whose presentation order was counterbalanced across subjects.

4.3. MEG data acquisition

Data were recorded with a 306-channel whole-head MEG device (Vectorview 4-D, Neuromag Oy, Finland) with a passband of Hz.
The event-related responses were averaged online over a time period from 100 ms before until 400 ms after stimulus onset. Eye movements were monitored with electrodes, and epochs with absolute deviations larger than 150 μV were automatically discarded. A minimum of 150 repetitions was collected for each stimulus condition. The averaged responses were bandpass filtered at 1-30 Hz and baseline-corrected with respect to a 100-ms prestimulus interval. The N1m response amplitude was quantified from a set of planar gradiometer channels above the left and right temporal lobes. For each subject and hemisphere separately, the three channel pairs showing the largest response amplitudes were chosen for further analyses. The N1m response was identified as the amplitude peak in an 80- to 150-ms poststimulus window in the vector sum averaged over these three channel pairs.

4.4. Statistical analyses

The amplitudes of the N1m responses were expressed in two ways. First, the absolute amplitude was obtained for each adaptor condition. Second, to provide a measure of location-specific adaptation relative to the entire dynamic range of the response, the N1m amplitudes measured with the adaptor at ±45° were expressed as the amplitude increase from the adaptor being at 0°, and this was divided by the dynamic range of the variation (see Fig. 1D). The dynamic range was evaluated separately for each subject, stimulus type, and hemisphere. This relative measure of location-specific adaptation was obtained to facilitate comparisons of spatial selectivity across the hemispheres and stimulus types independently of the differences in the absolute levels of activity. The absolute and relative N1m amplitudes and the N1m latencies were submitted to repeated-measures ANOVAs. In all analyses, the repeated factors were hemisphere, stimulus type, and adaptor condition. Newman-Keuls post-hoc tests were performed when appropriate.
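The two quantification steps described above, vector-sum peak picking and the relative adaptation measure of Fig. 1D, can be sketched as follows. The waveform and amplitude values below are synthetic placeholders, not data from the study.

```python
import numpy as np

def n1m_peak(gx, gy, times, window=(0.080, 0.150)):
    """Vector sum sqrt(gx^2 + gy^2) of one planar gradiometer pair,
    with the peak amplitude and latency taken in an 80- to 150-ms
    post-stimulus window."""
    vsum = np.sqrt(gx**2 + gy**2)
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmax(vsum[mask])
    return vsum[mask][i], times[mask][i]

def relative_adaptation(a_same, a_side, a_none):
    """Amplitude recovery caused by a displaced adaptor, expressed
    relative to the dynamic range of the response (cf. Fig. 1D):
    0 = fully adapted (same-location condition), 1 = no adaptation."""
    return (a_side - a_same) / (a_none - a_same)

# Synthetic single-channel-pair example: a Gaussian response
# peaking near 104 ms on a -100 to 400 ms epoch.
times = np.arange(-0.100, 0.400, 0.001)
wave = 50.0 * np.exp(-((times - 0.104) / 0.020) ** 2)
amp, lat = n1m_peak(0.8 * wave, 0.6 * wave, times)
print(amp, lat)

# Hypothetical amplitudes (fT/cm) for the three adaptor conditions.
print(relative_adaptation(a_same=20.0, a_side=32.0, a_none=44.0))
```

In the study itself the vector sums of the three strongest channel pairs were averaged per subject and hemisphere before peak picking; the sketch shows a single pair for brevity.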
Acknowledgments

This study was supported by the Academy of Finland (Project Nos , , and ).

REFERENCES

Ahissar, M.V., Ahissar, E., Bergman, H., Vaadia, E., 1992. Encoding of sound-source location and movement: activity of single neurons and interactions between adjacent neurons in the monkey auditory cortex. J. Neurophysiol. 67.
Baumgart, F., Gaschler-Markefski, B., Woldorff, M.G., Heinze, H.J., Scheich, H., 1999. A movement-sensitive area in the auditory cortex. Nature 400.
Benson, D.A., Hienz, R.D., Goldstein, M.H., 1981. Single-unit activity in the auditory cortex of monkeys actively localizing sound sources: spatial tuning and behavioral dependency. Brain Res. 219.
Boehnke, S.E., Phillips, D.P., 1999. Azimuthal tuning of human perceptual channels for sound location. J. Acoust. Soc. Am. 106.
Butler, R.A., 1972. The influence of spatial separation of sound sources on the auditory evoked response. Neuropsychologia 10.
Deouell, L.Y., Heller, A.S., Malach, R., D'Esposito, M., Knight, R.T., 2007. Cerebral responses to change in spatial location of unattended sounds. Neuron 55.
Getzmann, S., 2009. Effect of auditory motion velocity on reaction time and cortical processes. Neuropsychologia 47.
Griffiths, T.D., Rees, G., Rees, A., Green, G.G.R., Witton, C., Rowe, D., Büchel, C., Turner, R., Frackowiak, R.S.J., 1998. Right parietal cortex is involved in the perception of sound movement in humans. Nat. Neurosci. 1.
Jäncke, L., Wüstenberg, T., Schulze, K., Heinze, H.J., 2002. Asymmetric hemodynamic responses of the human auditory cortex to monaural and binaural stimulation. Hear. Res. 170.
Krumbholz, K., Schönwiesner, M., von Cramon, D.Y., Rübsamen, R., Shah, N.J., Zilles, K., Fink, G.R., 2005. Representation of interaural temporal information from left and right auditory space in the human planum temporale and inferior parietal lobule. Cereb. Cortex 15.
Krumbholz, K., Hewson-Stoate, N., Schönwiesner, M., 2007. Cortical response to auditory motion suggests an asymmetry in the reliance on inter-hemispheric connections between the left and right auditory cortices. J. Neurophysiol. 97.
May, P.J.C., Tiitinen, H., in press. Mismatch negativity (MMN), the deviance-elicited auditory deflection, explained. Psychophysiology.
McAlpine, D., Jiang, D., Shackleton, T.M., Palmer, A.R., 2000. Responses of neurons in the inferior colliculus to dynamic interaural phase cues: evidence for a mechanism of binaural adaptation. J. Neurophysiol. 83.
Middlebrooks, J.C., Green, D.M., 1991. Sound localization by human listeners. Annu. Rev. Psychol. 42.
Palomäki, K.J., Tiitinen, H., Mäkinen, V., May, P.J.C., Alku, P., 2002. Cortical processing of speech sounds and their analogues in a spatial auditory environment. Cogn. Brain Res. 14.
Palomäki, K.J., Tiitinen, H., Mäkinen, V., May, P.J.C., Alku, P., 2005. Spatial processing in human auditory cortex: the effects of 3D, ITD, and ILD stimulation techniques. Cogn. Brain Res. 24.
Pantev, C., Lütkenhöner, B., Hoke, M., Lehnertz, K., 1986. Comparison between simultaneously recorded auditory-evoked magnetic fields and potentials elicited by ipsilateral, contralateral and binaural tone burst stimulation. Audiology 25.
Salminen, N.H., May, P.J.C., Alku, P., Tiitinen, H., in press. A population rate code of auditory space in the human cortex. PLoS ONE.
Schönwiesner, M., Krumbholz, K., Rübsamen, R., Fink, G.R., von Cramon, D.Y., 2007. Hemispheric asymmetry for auditory processing in the human auditory brain stem, thalamus, and cortex. Cereb. Cortex 17.
Spitzer, M.W., Semple, M.N., 1991. Interaural phase coding in auditory midbrain: influence of dynamic stimulus features. Science 254.
Tiitinen, H., Salminen, N.H., Palomäki, K.J., Mäkinen, V.T., Alku, P., May, P.J.C., 2006. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex. Neurosci. Lett. 396.
Ungan, P., Yagcioglu, S., Goksoy, C., 2001. Differences between the N1 waves of the responses to interaural time and intensity disparities: scalp topography and dipole sources. Clin. Neurophysiol. 112.
Warren, J.D., Griffiths, T.D., 2003. Distinct mechanisms for processing spatial sequences and pitch sequences in the human auditory brain. J. Neurosci. 23.
Werner-Reiss, U., Groh, J.M., 2008. A rate code for sound azimuth in monkey auditory cortex: implications for human neuroimaging studies. J. Neurosci. 28.
Woldorff, M.G., Tempelmann, C., Fell, J., Tegeler, C., Gaschler-Markefski, B., Hinrichs, H., Heinze, H.-J., Scheich, H., 1999. Lateralized auditory spatial perception and the contralaterality of cortical processing as studied with functional magnetic resonance imaging and magnetoencephalography. Hum. Brain Mapp. 7.
Woods, T.M., Lopez, S.E., Long, J.H., Rahman, J.E., Recanzone, G.H., 2006. Effects of stimulus azimuth and intensity on the single-neuron activity in the auditory cortex of the alert macaque monkey. J. Neurophysiol. 96.
Zatorre, R.J., Penhune, V.B., 2001. Spatial localization after excision of human auditory cortex. J. Neurosci. 21.
Zatorre, R.J., Bouffard, M., Ahad, P., Belin, P., 2002. Where is where in the human auditory cortex? Nat. Neurosci. 5.