Interaural Timing Cues Do Not Contribute to the Map of Space in the Ferret Superior Colliculus: A Virtual Acoustic Space Study


J Neurophysiol 95, first published September 14, 2005.

Robert A. A. Campbell, Timothy P. Doubell, Fernando R. Nodal, Jan W. H. Schnupp, and Andrew J. King

University Laboratory of Physiology, University of Oxford, Oxford, United Kingdom

Submitted 16 June 2005; accepted in final form 12 September 2005

In this study, we used individualized virtual acoustic space (VAS) stimuli to investigate the representation of auditory space in the superior colliculus (SC) of anesthetized ferrets. The VAS stimuli were generated by convolving broadband noise bursts with each animal's own head-related transfer function and presented over earphones. Comparison of the amplitude spectra of the free-field and VAS signals, and of the spatial receptive fields of neurons recorded in the inferior colliculus with each form of stimulation, confirmed that the VAS provided an accurate simulation of sounds presented in the free field. Units recorded in the deeper layers of the SC responded predominantly to virtual sound directions within the contralateral hemifield. In most cases, increasing the sound level resulted in stronger spike discharges and broader spatial receptive fields. However, the preferred sound directions, as defined by the direction of the centroid vector, remained largely unchanged across different levels and, as observed in previous free-field studies, varied topographically in azimuth along the rostrocaudal axis of the SC.
We also examined the contribution of interaural time differences (ITDs) to map topography by digitally manipulating the VAS stimuli so that ITDs were held constant while allowing other spatial cues to vary naturally. The response properties of the majority of units, including centroid direction, remained unchanged with fixed ITDs, indicating that sensitivity to this cue is not responsible for tuning to different sound directions. These results are consistent with previous data suggesting that sensitivity to interaural level differences and spectral cues provides the basis for the map of auditory space in the mammalian SC.

INTRODUCTION

The superior colliculus (SC) is a midbrain nucleus involved in the control of reflexive orienting movements (King 2004). The superficial layers of the SC receive visual inputs, whereas the deeper layers also receive auditory and tactile inputs, which often converge on individual neurons to generate multisensory response properties (Stein et al. 2004). Visual, auditory, and tactile inputs to the SC are arranged to form topographically aligned maps of space, the registration of which tends to be maintained even when the eyes move (Hartline et al. 1995; Jay and Sparks 1984; Peck et al. 1995; Populin et al. 2004). As a consequence, multisensory signals are integrated at the single-neuron level to facilitate the control of motor commands that give rise to orienting movements of the eyes, head, and body. Whereas visual and somatosensory maps are formed by topographic projections from the retina and body surface, respectively, a topographic representation of auditory space has to be computed using acoustic localization cues generated by the head and outer ears.

Address for reprint requests and other correspondence: A. J. King, University Laboratory of Physiology, Parks Road, Oxford OX1 3PT, UK (e-mail: andrew.king@physiol.ox.ac.uk).
These cues comprise interaural time and level differences (ITDs and ILDs), together with the direction-dependent spectral filtering of sounds by the head and external ears (King et al. 2001; Wightman and Kistler 1993). Previous studies have shown that mammalian SC neurons are sensitive to a combination of ILDs (Hirsch et al. 1985; Middlebrooks 1987; Middlebrooks and Knudsen 1987; Palmer and King 1985; Wise and Irvine 1983, 1985) and spectral cues (Carlile and King 1994; King et al. 1994; Palmer and King 1985). Sensitivity to ITDs, the dominant cue for auditory localization of low-frequency sounds by humans (Wightman and Kistler 1992), has been demonstrated in the cat SC by closed-field stimulation but only to values outside the physiological range (Hirsch et al. 1985; Yin et al. 1985). The contribution of this binaural cue to the formation of the auditory space map in mammals therefore remains uncertain. The spatial receptive fields (SRFs) of auditory neurons are typically measured by presenting sounds in the free field. More recently, SRFs in the auditory nerve (Poon and Brugge 1993), lateral superior olive (Tollin and Yin 2002a,b), inferior colliculus (IC) (Behrend et al. 2004; Delgutte et al. 1999; Euston and Takahashi 2002; Sterbing et al. 2003), auditory cortex (Brugge et al. 1994, 1996; Mrsic-Flogel et al. 2001, 2005; Nelken et al. 1998; Schnupp et al. 2001), and SC (Sterbing et al. 2002) have been mapped with virtual acoustic space (VAS) stimuli. This approach involves delivering via earphones sounds that are digitally manipulated to simulate the filtering effects of the head and outer ears (King et al. 2001; Wightman and Kistler 1989). Because sounds can be rapidly presented from randomized directions without having to employ large speaker arrays or physically move a speaker to different locations, VAS stimulation enables SRFs to be measured at high spatial resolution. 
Furthermore, sound localization cues can be independently manipulated in ways that are impossible with free-field stimulation. In this study, we used individualized VAS stimuli to measure the SRFs of auditory units in the ferret midbrain. The stimuli were first validated by comparing both acoustical measurements and neuronal responses obtained with free-field and VAS stimulation. We then used VAS stimuli to obtain more detailed information about the auditory spatial response properties of SC neurons than provided by previous free-field studies, as well as to explore their dependence on ITDs.

The costs of publication of this article were defrayed in part by the payment of page charges. The article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. Copyright © 2006 The American Physiological Society.

METHODS

General

Seven adult (aged 4 mo) pigmented ferrets (Mustela putorius) with normal hearing (assessed by otoscopic examination and tympanometry) were used in this study. All surgical procedures were approved by a local ethical review committee and licensed by the UK Home Office. The animals were anesthetized by an intramuscular injection of alphaxalone/alphadolone acetate (Saffan, 2 mg/kg, Mallinckrodt Veterinary, Uxbridge, UK). During surgery, bupivacaine hydrochloride (Marcain Polyamp, AstraZeneca UK, Luton, UK) was applied topically, and supplementary doses of Saffan were given as required via a cannula implanted in the radial vein. Body temperature was monitored by a rectal probe and maintained at 38°C with a feedback-controlled electric blanket. The first stage of each experiment involved measuring the animal's head-related transfer function (HRTF) so that individualized VAS stimuli could be constructed, followed by electrophysiological recordings from either the IC or the SC.

Preparation for acoustical recordings

A damped polythene probe tube (~30 mm long, 0.86 mm ID, 1.52 mm OD) was passed through each ear canal wall in such a way as to emerge caudally behind the pinna. The tubes were secured internally with a small flange, which abutted against the canal wall, and externally with a rubber O-ring that was pressed against the skin. The animal was then placed in a stereotaxic frame and fitted with blunt ear bars, and the skull was exposed. A steel bar (7 mm diam) was attached to the skull with steel screws and dental cement (Simplex Rapid, Austenal Dental, Harrow, UK) so that the head could be supported from behind. At this stage in the SC experiments (n = 6), the stereotaxic frame was removed, the incisions in the scalp were closed, and the external ears were carefully repositioned according to measurements made prior to surgery.
Condenser microphones (miniature KE microphone capsules, Sennheiser, High Wycombe, UK) were attached to the probe tubes, and the animal was transferred to an anechoic chamber for free-field acoustic recordings, which were carried out prior to preparation for electrophysiological recording. To allow a more direct comparison between free-field and VAS SRFs, the acoustical measurements for the IC experiment were conducted immediately prior to recording, with the craniotomy performed and all recording equipment in place.

Acoustical recording

A loudspeaker (Kef T27, KEF Audio, Maidstone, UK) mounted on a computer-controlled motorized hoop (radius: 65 cm) was used to present broadband signals (512-point Golay codes) (Zhou et al. 1992) from 63 different directions, at 16° intervals in azimuth from −160° to 160° and at six vertical angles from −80° to 60° elevation. The sampled positions were arranged so that their diagonal separation was 34°. The generation of the Golay codes and the recording of the microphone signals were performed digitally using TDT system 2 A/D and D/A converters (sample rate of 80 kHz; Tucker-Davis Technologies, Alachua, FL) and 30-kHz anti-alias filters. The microphone signals were analyzed for each stimulus direction to calculate a spectral transfer function containing both the animal's HRTF and the transfer characteristics of the loudspeaker and probe microphones. The ITDs were extracted from the microphone signals by cross-correlation of the impulse responses after low-pass filtering (0–4 kHz). An in situ calibration to remove the transfer functions of the probe microphones and in-ear headphones used for presenting the VAS stimuli was then carried out. Minimum-phase filters were calculated from the equalized amplitude spectra using the Hilbert transform.
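The minimum-phase construction can be illustrated with the standard real-cepstrum form of the Hilbert-transform method. This is a sketch, not the authors' Matlab code; the function name and the smooth test spectrum are invented for the example. Given a strictly positive, two-sided target amplitude spectrum, the returned impulse response has exactly that FFT magnitude with minimum-phase delay.

```python
import numpy as np

def minimum_phase_fir(mag):
    """Minimum-phase impulse response whose FFT magnitude equals `mag`
    (a full, two-sided, strictly positive amplitude spectrum), via the
    real-cepstrum / Hilbert-transform construction."""
    N = len(mag)
    cep = np.fft.ifft(np.log(mag)).real   # real cepstrum of the log magnitude
    w = np.zeros(N)                       # folding window: fold negative
    w[0] = 1.0                            # quefrencies onto positive ones
    w[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        w[N // 2] = 1.0
    # exponentiate the causal cepstrum to obtain the minimum-phase spectrum
    return np.fft.ifft(np.exp(np.fft.fft(w * cep))).real
```

Because the minimum-phase sequence concentrates its energy at the earliest taps, the result is a compact filter whose magnitude response matches the measured spectrum.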
VAS stimuli consisted of short (100-ms) Gaussian noise bursts with 5-ms raised-cosine onset and offset ramps, which were convolved with the appropriate minimum-phase filters for each direction and delayed to generate the appropriate ITD. Although frequency-dependent ITDs are excluded by this approach, psychophysical studies in humans have shown that the minimum-phase-plus-delay method adequately approximates the HRTF phase spectrum as long as the low-frequency ITD is appropriate (Kulkarni et al. 1999). The VAS stimuli were not frozen: new Gaussian noise bursts were used for each stimulus presentation.

Electrophysiological recording

At the conclusion of the acoustical measurements, anesthesia was switched to an intravenous infusion (Perfusor Secura FT infusor, B. Braun, Melsungen, Germany) of ketamine/medetomidine (Ketaset, 5 mg·kg⁻¹·h⁻¹, Fort Dodge Animal Health, Southampton, UK; Domitor, 10 μg·kg⁻¹·h⁻¹) in Hartmann's solution. A tracheal cannula was implanted, and the animal was ventilated (7025 respirator, Ugo Basile, Milan, Italy) with oxygen-enriched air. Atropine sulfate (0.06 mg·kg⁻¹·h⁻¹, Animal Care, York, UK) and dexamethasone (0.5 mg·kg⁻¹·h⁻¹, Dexadreson, Intervet UK, Milton Keynes, UK) were administered intramuscularly to reduce mucus secretions in the airways and to minimize cerebral edema, respectively. In six ferrets, a craniotomy was performed over the occipitoparietal cortex above the right SC. In the remaining animal, the craniotomy was positioned more caudally so that recordings could be made from the IC. The dura was removed, and the exposed cortex was protected with 2% agar in saline. The agar was supported by a rim of dental acrylic. The left eye was dilated with atropine and fitted with a zero-refractive-power contact lens. To eliminate eye movements, pancuronium bromide muscle relaxant was added to the infusate (0.2 mg·kg⁻¹·h⁻¹, Pavulon, N.V. Organon, the Netherlands).
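The VAS stimulus construction described above (a fresh ramped Gaussian noise burst, convolved per ear with the direction's minimum-phase filter and then delayed to impose the low-frequency ITD) can be sketched as follows. This is an illustrative numpy sketch, not the authors' code: the function name and arguments are invented, and the ITD is applied as a whole-sample delay at the 80-kHz rate used in the paper (200 μs = 16 samples).

```python
import numpy as np

FS = 80_000  # Hz; the sample rate used in the paper

def vas_stimulus(ir_left, ir_right, itd_s, dur=0.1, ramp=0.005, rng=None):
    """Generate one left/right VAS stimulus pair: fresh Gaussian noise with
    raised-cosine ramps, filtered per ear, then delayed in one ear.
    Positive itd_s delays the LEFT ear (the sound leads at the right ear)."""
    rng = np.random.default_rng() if rng is None else rng
    n, nr = int(dur * FS), int(ramp * FS)
    env = np.ones(n)
    r = 0.5 * (1 - np.cos(np.pi * np.arange(nr) / nr))  # raised-cosine ramp
    env[:nr], env[-nr:] = r, r[::-1]
    burst = rng.standard_normal(n) * env       # noise is not frozen
    left = np.convolve(burst, ir_left)         # per-ear minimum-phase filtering
    right = np.convolve(burst, ir_right)
    d = int(round(abs(itd_s) * FS))            # ITD as a whole-sample delay
    pad = np.zeros(d)
    if itd_s > 0:
        left, right = np.concatenate([pad, left]), np.concatenate([right, pad])
    elif itd_s < 0:
        left, right = np.concatenate([left, pad]), np.concatenate([pad, right])
    return left, right
```

The same function covers both experimental conditions: in the natural condition the measured ITD for each direction is passed in, whereas in the fixed condition itd_s is held at +200 μs for every direction.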
We monitored the depth of anesthesia and the physiological condition of the animal by continuous measurement of the electrocardiogram and electroencephalogram (ECG and EEG; using either custom-built amplifiers or a Datex-Ohmeda anesthesia monitor, Hatfield, UK), end-tidal CO₂ (using either a 47210A capnometer, Hewlett Packard GmbH, Boeblingen, Germany, or the Datex-Ohmeda monitor), and arterial oxyhemoglobin saturation (Datex-Ohmeda monitor). Single-unit activity was recorded extracellularly using a tungsten-in-glass electrode lowered vertically through the cortex and into the midbrain. The electrode signals were band-pass filtered (500 Hz to 5 kHz), amplified (~15,000 times), and digitized at 25 kHz. Electrolytic lesions (~5 μA for 5 s) were made in most electrode penetrations in which acoustically responsive units were isolated, to allow for histological confirmation of recording sites.

Visual and auditory stimuli

The SC was located by presenting a diffuse flashing light positioned a few centimetres from the contralateral eye. The superficial visual layers of the SC were usually encountered at a depth of ~6 mm below the cortical surface. We determined the direction of the maximum visual response of multiunit activity recorded in these layers using an LED mounted on the motorized hoop. The electrode was then advanced into the deeper layers of the SC to search for auditory responses by presenting contralateral broadband noise bursts. These stimuli were delivered via commercial Panasonic earphone drivers (RP-HV297, Bracknell, UK) coupled to an otoscope speculum that was inserted into each ear canal. When an auditory response was identified, the threshold of the unit was determined by presenting unfiltered noise bursts to the contralateral ear (100-ms duration with an interstimulus interval of 1,000 ms) at a range of sound levels.
Threshold was taken as the lowest sound level to elicit an increase in firing rate that was significantly greater (P < 0.05) than the unit's resting level. The spatial receptive fields (SRFs) of most units were measured at two sound levels, one near threshold (typically 5–15 dB above unit threshold) and a second at a level well above threshold. SRFs were

measured by presenting VAS stimuli in a random order from the same sound directions used for measuring the HRTF. This process was repeated until responses for each virtual stimulus direction had been collected. The mean evoked spike rate at each stimulus position was then used to estimate the unit's SRF (see following text).

Two stimulus conditions were used to investigate the contribution of ITDs to the spatial tuning of SC units (Fig. 1). The natural cue condition involved presenting sounds from each position with all sound localization cues co-varying naturally, as they would with a real free-field sound source. In the fixed cue condition, the ITDs were maintained at a fixed value and did not co-vary with the ILDs and spectral cues. This was achieved by delaying the sound reaching the left ear by 200 μs, close to the maximum value produced by the separation of the ears, for all 63 virtual sound locations. The sound therefore arrived at the right ear first (ipsilateral to the SC from which the recordings were made) and had an interaural delay corresponding to a source at 90° to the animal's right. This ITD was chosen because it represented a value to which no units in the right SC would be expected to be tuned. Consequently, the ITDs were fixed at a value inappropriate for the normal location of an SRF.

FIG. 1. Schematic illustrating the natural and fixed interaural time-difference (ITD) conditions. The contribution of ITDs to the spatial response properties of superior colliculus (SC) units was investigated by presenting virtual acoustic space (VAS) stimuli in which ITDs were fixed at a single value for all sound-source directions. The natural ITD condition replicates free-field stimuli: all localization cues co-vary with sound-source direction. Sounds presented at the anterior midline will reach both ears simultaneously and be equally intense in both ears. A sound on either side of the midline will reach the near ear before the far ear and be more intense at the near ear. In the fixed condition, the ITDs had a constant value of 200 μs for all virtual sound-source directions: stimuli always arrived at the right (ipsilateral) ear before the left ear, whereas level and spectral cues co-varied naturally with stimulus direction. For clarity, the noise bursts and axes are not drawn to scale.

For the IC recordings, we searched for units and determined threshold in the same manner as in the SC. Once a unit's threshold was determined, we recorded a VAS SRF, then carefully removed the earphone drivers and recorded a free-field SRF by sequential presentation of noise bursts from corresponding stimulus locations. Where possible, we recorded the VAS SRF and then the free-field SRF again to control for response stability. IC SRF recordings were made at sound levels of 5–20 dB above unit threshold. We ensured that the amplitude of the VAS and free-field stimuli was equivalent at the entrance of the ear canal by calibrating our stimuli using the implanted probe microphones prior to each recording. Owing to the time taken to move the speaker, we sampled only three elevations (24°, 6°, and −12°), at either 10 or 20 different azimuths (i.e., 30 or 60 sound-source directions). Stimuli were presented several times at each location.

Analysis of results

Stimulus generation and data acquisition were controlled using BrainWare (Tucker-Davis Technologies). This software stored the latency and shapes of all spikes that crossed an arbitrary amplitude threshold determined by the user. These data were saved for off-line analysis. Whenever possible, we digitally isolated single units by sorting evoked spikes based on their shape. The response period for each unit was individually determined from the poststimulus time histogram (PSTH).
In all cases, firing rates had returned to spontaneous background levels by 400 ms after stimulus onset. Response magnitude was measured relative to the spontaneous activity of the neuron, which was obtained from a second window drawn between 500 and 1,000 ms after stimulus onset. The raw data were exported to Matlab R14 (MathWorks, Natick, MA), with which all further analysis was carried out.

SRFs were visualized by producing a smoothed map projection showing the average response rate for each sound direction. Smoothing was done by interpolating the averaged responses over a uniform grid of 7.5° resolution using biharmonic spline interpolation. To avoid discontinuities due to extrapolation over positions above and behind the animal (along the dateline and at the north pole of our spherical coordinate system), we extended the matrix maps to cover a −200° to 200° azimuth range by copying values across from the opposite edge, i.e., from −160° to 200° and from 160° to −200°. This ensured that the algorithm could interpolate smoothly and without discontinuities across the full ±180° azimuthal range.

For each SRF, we calculated the 50 and 75% response areas (rad²), corresponding to the total angular extent of the regions within which the response exceeded the stated percentage of the unit's maximal response (estimated from the mean response at the 5 most effective virtual stimulus positions). The 50 and 75% response areas provide a measure of the overall responsiveness of a cell across different spatial locations but do not indicate whether the SRF is focused around a single preferred stimulus direction. For example, a multi-peaked SRF could still have a small total 75% area. We therefore derived the centroid (or center of mass) of each SRF. The centroid provides a way of quantifying the preferred sound direction of each unit as well as determining the sharpness of this tuning.
The centroid was calculated by modeling the response field as a sphere of unit radius, where the mass density in each direction was given by the observed response strength in the corresponding direction (see Mrsic-Flogel et al. for the full derivation of the centroid). The direction of the centroid vector was calculated as a volume integral by approximating the model sphere as a sum of pyramids, the bases of which are at the surface and the apices of which are at the center of the sphere. The direction of the centroid vector summarizes the overall directional preference of the SRF, whereas its length gives an indication of how sharply tuned the SRF is in that direction. The theoretical maximum length of the centroid (for an SRF responding to a single sound direction) is 0.75, the distance of the center of mass of a pyramid of unit height from its apex. The interpolated map, centroid direction vector, and visual best direction recorded in the superficial layers of the same electrode penetration were then displayed using a Kavraisky 5 equal-area projection (see Fig. 7).
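The centroid computation described above can be sketched in numpy. This is an illustrative reconstruction under the paper's pyramid approximation, assuming roughly equal-area sampling of directions so that each direction is weighted only by its response; the function name and argument conventions are invented.

```python
import numpy as np

def srf_centroid(az_deg, el_deg, resp):
    """Centroid of a spatial receptive field. Each sampled direction is the
    base of a unit pyramid whose mass is the response there; the centroid of
    such a pyramid lies 0.75 from the apex, so a perfectly point-tuned SRF
    yields a centroid vector of length 0.75. Assumes at least one positive
    response. Returns (azimuth deg, elevation deg, vector length)."""
    az = np.radians(np.asarray(az_deg, float))
    el = np.radians(np.asarray(el_deg, float))
    u = np.stack([np.cos(el) * np.cos(az),        # unit vector per direction
                  np.cos(el) * np.sin(az),
                  np.sin(el)], axis=-1)
    w = np.clip(np.asarray(resp, float), 0.0, None)   # responses as masses
    c = 0.75 * (w[:, None] * u).sum(0) / w.sum()      # pyramid centroids at 0.75
    length = float(np.linalg.norm(c))
    az_c = float(np.degrees(np.arctan2(c[1], c[0])))
    el_c = float(np.degrees(np.arcsin(c[2] / max(length, 1e-12))))
    return az_c, el_c, length
```

A unit responding at a single direction gives the maximal length of 0.75, whereas a spatially uniform response gives a length near zero, capturing the sharpness interpretation used in the text.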

To quantify the effects of changing sound level or of fixing the ITD, we compared SRF features such as response area and centroid parameters (length and direction) between the two conditions, e.g., fixed and natural ITDs. Paired t-tests were used to look for a systematic effect of the stimulus property in question over the whole population of units. We also used a Monte Carlo test to analyze the results on a unit-by-unit basis, to see whether the responses of a subset of cells were influenced by sound level or ITD condition. This was done by pooling all repetitions from both conditions at each sound-source direction. Pairs of simulated SRFs were generated by random resampling of the pooled responses (with replacement) for each sound-source direction until two SRFs, based on the same number of stimulus repetitions as those used for collecting the data, were obtained. The centroid statistics were then calculated and the difference in simulated values was determined. This process was repeated 10,000 times to estimate the distribution of differences in centroid statistics that are to be expected by chance. If an observed difference fell into the highest or lowest 2.5% of this distribution, we considered it to be significant at the 5% level.

Units with low spike rates or unreliable responses often have centroids with unreliable directions. For example, a burst of spikes at a single stimulus presentation could spuriously displace the centroid direction toward that sound-source location. To eliminate units with misleading centroids, we used a Monte Carlo approach to resample (with replacement) each SRF 10,000 times and derive the centroid statistics for each. SRFs were excluded if the SE of the simulated centroid directions exceeded 3°.
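The per-unit Monte Carlo test can be sketched as follows. This is an illustrative numpy version, not the authors' code: the names are invented, a generic `stat` function stands in for the centroid statistics, and the two-tailed 2.5% cutoffs follow the text.

```python
import numpy as np

def mc_difference_test(rep_a, rep_b, stat, n_sim=10_000, alpha=0.05, rng=None):
    """Monte Carlo test for a difference in a per-SRF statistic between two
    conditions. rep_a, rep_b: (n_repetitions, n_directions) spike counts.
    Repetitions are pooled per direction; simulated SRF pairs are drawn with
    replacement, and the observed difference is compared with the tails of
    the simulated-difference distribution. Returns (difference, significant)."""
    rng = np.random.default_rng() if rng is None else rng
    obs = stat(rep_a.mean(0)) - stat(rep_b.mean(0))
    pooled = np.vstack([rep_a, rep_b])       # pool both conditions per direction
    na, nb, nd = rep_a.shape[0], rep_b.shape[0], rep_a.shape[1]
    cols = np.arange(nd)
    diffs = np.empty(n_sim)
    for i in range(n_sim):
        # resample, per direction, as many repetitions as were actually run
        sa = pooled[rng.integers(0, na + nb, size=(na, nd)), cols].mean(0)
        sb = pooled[rng.integers(0, na + nb, size=(nb, nd)), cols].mean(0)
        diffs[i] = stat(sa) - stat(sb)
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
    return obs, not (lo <= obs <= hi)
```

Because the null distribution is built from the unit's own pooled responses, the test automatically accounts for that unit's trial-to-trial variability.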
RESULTS

Validation of virtual acoustic space stimuli

To show that our VAS stimuli accurately replicate the animal's own HRTF, we have to demonstrate that the HRTF-filtered noise bursts faithfully recreate the amplitude spectra of the corresponding free-field stimuli. Furthermore, we must show that comparable neuronal spatial tuning can be recorded with VAS and free-field stimulation.

The implanted probe microphones made it possible to record our VAS stimuli at the same location within the ear canal where the HRTF was initially measured. Figure 2 shows the amplitude spectra measured from each ear for VAS and free-field stimuli from three different sound-source directions. The spectra are almost identical in each case. Figure 3 shows the differences between the VAS and free-field amplitude spectra over 30 sampled sound directions (the same directions at which the neural responses are shown in Fig. 4). At most locations, these differences were negligible (<1 dB in 80% of cases; <3 dB in 90%) and constant across frequency, indicating that the VAS stimuli faithfully replicate the spectral cues arising from real free-field stimuli.

FIG. 2. Comparisons of free-field (black) and VAS (red) amplitude spectra from 3 different sound-source directions, as indicated at the top of each column.

FIG. 3. Differences between the amplitude spectra of free-field and VAS stimuli measured in the left and right ears for 30 sound-source directions. These directions correspond to the 10 different azimuths and 3 elevations shown in Fig. 4. Azimuth is plotted along the ordinate, with the data from the 3 elevations sampled at each azimuth presented in adjacent bins. Most directions exhibited a constant amplitude difference across frequency. However, for each ear, the higher frequency components of sounds presented on the contralateral side were greatly attenuated by the head, causing the recorded amplitude spectrum to approach the noise floor of our microphones and amplifier.
For some directions, however, the amplitude spectra exhibited larger differences at higher frequencies. This is caused by the poor signal-to-noise ratio at higher frequencies for stimuli presented on the side contralateral to the microphone. Because the SC contains topographically aligned maps of visual and auditory space (e.g., King and Hutchings 1987), it is possible to make inferences about the fidelity of our VAS stimuli from a comparison of the virtual SRF centroid directions with the best direction of the overlying visual responses (see following text). In addition, we performed a direct physiological validation by comparing VAS SRFs with those obtained using free-field stimulation. This was done using a population of 11 units recorded from the central nucleus of the IC in one animal. We chose the IC rather than the SC because the more robust responses obtained in this nucleus were suited for making quantitative comparisons of data recorded over the longer periods of time required to map the SRFs using both forms of stimulation. Figures 4 and 5 show examples of three of these units where SRFs were recorded at corresponding sound levels using both free-field and VAS stimuli. The responses of the unit shown in Fig. 4, where two sets of VAS and free-field recordings were conducted around an hour apart from the same unit, clearly illustrate the stability of the SRFs. Indeed, the coefficients derived from cross-correlation of the spike rates in the response window for each mode of stimulation were almost identical for the two sets of recordings. The SRFs for the two units shown in Fig. 5 were quite different from one another, with the response illustrated in Fig. 5B being more sharply tuned than that in Fig. 5A. Once again, however, the responses obtained using free-field and VAS stimuli were extremely similar, with correlation coefficients close to one in each case.
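The free-field/VAS correspondence can be quantified by correlating the two SRFs and then comparing the observed r against a resampling null in which spike counts are drawn from the pool of all VAS responses, destroying any spatial structure (the test applied to Fig. 6 later in RESULTS). A numpy sketch, with invented names and array shapes:

```python
import numpy as np

def r_vs_chance(ff_srf, vas_reps, n_sim=10_000, rng=None):
    """Correlate a free-field SRF (n_directions,) with the mean VAS SRF from
    vas_reps (n_repetitions, n_directions), then build a null distribution of
    r by resampling spike counts with replacement from the pool of all VAS
    responses. Returns (observed r, significantly better than chance)."""
    rng = np.random.default_rng() if rng is None else rng
    n_rep, n_dir = vas_reps.shape
    pool = vas_reps.ravel()
    obs = np.corrcoef(ff_srf, vas_reps.mean(0))[0, 1]
    sims = np.empty(n_sim)
    for i in range(n_sim):
        sim_srf = rng.choice(pool, size=(n_rep, n_dir)).mean(0)
        sims[i] = np.corrcoef(ff_srf, sim_srf)[0, 1]
    # not significant if >= 5% of shuffled SRFs correlate at least as well
    return obs, np.mean(sims > obs) < 0.05
```

This follows the one-tailed 5% criterion described in the text; interpreting the resampling as a pooled shuffle across directions is our reading of the procedure, not a statement of the authors' exact implementation.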

FIG. 4. Raster plots comparing free-field (blue) and VAS (red) responses from an inferior colliculus unit (unit 3 in Fig. 6) measured at a range of sound-source directions. The recordings shown are the 1st (A) and last (B) pairs from this unit and were made about an hour apart. The similarity between plots A and B illustrates recording stability throughout this period. Each plot shows the 1st 300 ms after stimulus onset. Space was sampled in a triangular array, so stimuli at 6° elevation were presented at azimuth positions 18° to the left of (anti-clockwise from) the labeled positions on the lowest row (−12° elevation). Each sound direction was sampled 10 times in each condition. The correlation coefficients, r, between the VAS and free-field recordings are 0.82 for A and 0.91 for B. Recordings were conducted at 15 dB above unit threshold.

The SRFs measured from all 11 IC units with both VAS and free-field stimulation are illustrated in Fig. 6 by plotting the response as a function of sound azimuth at a single elevation. In some cases (units 4, 6, 7, 9, and 10), SRFs were obtained in VAS and in the free field only once, whereas in the others (units 1, 2, 3, 5, 8, and 11), at least one of these measurements was repeated to help control for changes in the response properties over time. For each unit, the VAS SRF was recorded first, followed by a free-field recording. Further recordings from the same unit were conducted when possible. As in Keller et al. (1998), we compared the SRFs by calculating correlation coefficients between the free-field and VAS data. The r values in the top right of each panel indicate the correlation coefficients of VAS and free-field recordings (based on all the sound directions sampled, including the other elevations) conducted at adjacent points in time. For example, unit 8, with two VAS and two free-field recordings, has three r values listed; from top to bottom: VAS1/free-field1, VAS2/free-field1, and VAS2/free-field2. Although all responses were predominantly contralateral, the widths and locations of these azimuth response profiles differed across units but were similar within individual units. For example, unit 3 responded reproducibly and fairly selectively to both real and virtual sound directions in front of the animal, whereas unit 6 was more broadly tuned and responded best to stimuli located well into the contralateral hemifield.

To assess the significance of the measured r values, we conducted a resampling test to estimate the likelihood of the observed value occurring. Correlation coefficients were calculated between the observed free-field data and 10,000 simulated VAS SRFs created by resampling the evoked spike counts from all VAS recordings. Cases where ≥5% of the simulated r values were greater than the observed value are indicated by * next to the correlation coefficients given in Fig. 6. The likelihood of

6 AUDITORY RESPONSES OF SUPERIOR COLLICULUS NEURONS 247 FIG. 5. Raster plots comparing free-field (blue) and VAS (red) responses from the inferior colliculus measured at a range of sound-source directions. A: unit 9 (r 0.96; recorded at 15 db above threshold). B: 1st pair of recordings from unit 11 (r 0.96; recorded at 10 db above threshold; see Fig. 6). Each plot shows the 1st 300 ms after stimulus onset. these r values occurring was considered to be no greater than chance. Units 3 and 11 showed excellent correspondence between VAS and free-field and were stable on re-recording. The r values of these recordings ranged from 0.81 to Those units in which the SRF was determined only once with each form of stimulation exhibited no evidence of response drift and had high r values in the range of those found in units 3 and 11. The response of unit 8 was less stable between the two pairs of recordings, responding more strongly during the second VAS and free-field recordings. Despite this, the correlation between the VAS and free-field responses within each recording pair was high (r 0.85). Only unit 5 showed systematic differences between VAS and free-field across recordings. Of the four r values (ranging from 0.76 to 0.79) for this unit, two were not significantly greater than chance, confirming that, in this instance, the correlation between the free-field and VAS recordings was poor. The range of r values for all 11 IC units was 0.63 to 0.96, with a mean of These values are comparable to those reported by Keller et al. (1998). Auditory receptive fields of SC neurons measured with normal localization cues A total of 48 acoustically responsive units were isolated from the SC of six ferrets. Three of these were excluded from further analysis according to the criteria described in METHODS. A total of 88 recordings were made, 79 of which passed our exclusion criteria. The SRFs of most units comprised a single peak, usually in the contralateral hemifield (Fig. 7). 
FIG. 6. Comparison of VAS and free-field responses from all 11 IC units. For clarity, each line shows an azimuth sweep from only 1 of the 3 recorded elevations (6°). Multiple recordings of the same condition are shown using different colors and symbols (e.g., the 2nd VAS recording is shown as a red line with open circles). VAS and free-field recordings were obtained in an interleaved fashion to help control for changes in the response over time. Numbers in the top right of each subplot indicate the r values for each pair of recordings. The r values were calculated using the mean evoked spike counts from the full set of sound-source directions (as shown in Figs. 4 and 5). Cases in which >5% of simulated VAS SRFs had r values greater than the observed value are indicated (*). See main text for further details.

There was considerable variation in the size of the SRFs and in the magnitude of the maximum response obtained at corresponding sound levels relative to unit threshold. In keeping with previous free-field measurements of spatial tuning in the ferret SC (King and Hutchings 1987), the narrower axis of the 75% area typically covered 30°.

Effect of altering sound level

Of the 45 remaining SC units, we recorded 13 at one sound level only and 32 at two sound levels. The difference in sound level over all units was 18 ± 7 (SD) dB. For two neurons, we re-recorded one of the sound levels to test for unit stability. Our level analyses are therefore conducted on 34 pairs of SRFs. Increasing sound level resulted in a significant increase in the maximum response of the units (paired t-test, t = 3.271, df = 33, P = 0.003; Fig. 8A) as well as a significant increase in the size of the 75% area (paired t-test, t = 3.591, df = 33, P = 0.002; Fig. 8B). Increasing the sound level also led to a small decrease in centroid vector length, although this decrease was not significant over the population as a whole (paired t-test, t = 1.138, df = 33, P = 0.263; Fig. 8C). A shorter centroid vector would have indicated a less sharply tuned response. Although there was no systematic effect of sound level on the centroid length over the population, Monte-Carlo analysis of individual units showed that 17/34 SRF pairs (16/32 units) had significantly different centroid lengths between the two conditions (P < 0.05). Of these 16 units, 12 showed a significant decrease in centroid length as the sound level was increased.
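The centroid vector used in these analyses is defined in METHODS, which this excerpt does not include. The sketch below shows one standard construction consistent with the text: a response-weighted sum of unit vectors on the sphere, whose direction gives the preferred sound direction and whose length (near 1 for sharp tuning, near 0 for no net tuning) indexes tuning sharpness. The function name and weighting scheme are assumptions for illustration.

```python
import numpy as np

def centroid_vector(az_deg, el_deg, rate):
    """Response-weighted centroid of a spatial receptive field.

    az_deg, el_deg, rate : 1-D arrays, one entry per tested direction.
    Returns the centroid azimuth and elevation (degrees) and the vector
    length (1 = all spikes from a single direction, 0 = no net tuning).
    """
    az = np.radians(np.asarray(az_deg, dtype=float))
    el = np.radians(np.asarray(el_deg, dtype=float))
    w = np.asarray(rate, dtype=float)
    w = w / w.sum()
    # Unit vector for each tested direction, weighted by the response.
    cx = np.sum(w * np.cos(el) * np.cos(az))
    cy = np.sum(w * np.cos(el) * np.sin(az))
    cz = np.sum(w * np.sin(el))
    length = np.sqrt(cx**2 + cy**2 + cz**2)
    if length == 0.0:
        return np.nan, np.nan, 0.0   # perfectly balanced response
    az_c = np.degrees(np.arctan2(cy, cx))
    el_c = np.degrees(np.arcsin(np.clip(cz / length, -1.0, 1.0)))
    return az_c, el_c, length
```

Broadening a receptive field spreads the weight over more directions, which shortens the resultant vector; this is why a drop in centroid length with sound level is read as a loss of tuning sharpness.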
Changing sound level did not result in systematic shifts in centroid direction (see Fig. 9) for either azimuth (paired t-test, t = 1.779, df = 33, P = 0.085) or elevation (paired t-test, t = 1.138, df = 33, P = 0.263) for the population as a whole. Indeed, the mean shift was only 6° in azimuth and 3° in elevation. However, like centroid length, a subset of units (13/32) exhibited a small (Fig. 9C) but significant shift in centroid direction with increasing sound level.

FIG. 9. Effect of sound level on centroid direction of SC units. A: comparison of the centroid azimuth direction for each unit at high and low sound levels. B: comparison of the centroid elevation direction for each unit at high and low sound levels. C: difference in the centroid at high and low sound intensities. Gray square, the mean shift in centroid direction. As in Fig. 8, circled points are judged to have changed significantly across sound levels, according to our Monte Carlo analysis. There was no systematic shift in centroid direction over the population as a whole.

Topography and alignment of visual and auditory maps

Previous free-field studies of the ferret SC (e.g., King and Hutchings 1987; King et al. 1998) have shown that the best azimuths of visual units recorded in the superficial layers are arranged to form a map that corresponds closely in its spatial extent, magnification, and orientation with the auditory representation in the deeper layers of the nucleus. We have explored this relationship further for two reasons. First, confirming that auditory virtual SRFs are aligned with the visual responses would provide further evidence as to the fidelity of our VAS stimuli. Second, by using the visual map as a template against which to compare the auditory SRFs, we can examine the contribution of ITDs to the topographic organization of the auditory representation.

In this study, we focused mainly on how the auditory SRFs vary with the location of the units along the rostrocaudal axis of the SC. This is the axis along which stimulus azimuth is represented. For each acoustically responsive unit, we compared the azimuth of the SRF centroid vector with the visual best azimuth recorded in the same electrode penetration (Fig. 10). The centroid directions of auditory units recorded in the rostral third of the SC spanned the anterior quadrant, with the most rostral units being tuned to ipsilateral sound directions. This same region of the SC also represents frontal visual space. As the location of the recording electrode was moved toward the caudal end of the SC, the auditory centroid vectors and visual best azimuths shifted systematically toward more posterior regions of the contralateral hemifield.

Because we did not undertake any free-field SC recordings in the present study, we have included in Fig. 10 data from a previous study (King et al. 1998) in which the preferred sound directions were derived from auditory spatial response profiles mapped at a single elevation (gray crosses in this figure). These measures of spatial selectivity are not equivalent, as the centroid vector is based on the relative response of the unit throughout the SRF and can therefore be located away from the sound location evoking the strongest response (see Fig. 7A). Nevertheless, the great majority of auditory centroid vectors fell within or very close to the distribution of best azimuths from the earlier free-field study.

FIG. 7. A–E: spatial receptive fields of 5 auditory units recorded from the right SC. Color scale indicates the mean evoked spike rate per stimulus presentation. Response windows were drawn individually for each unit. Maximum response is indicated by the red region. The solid black contour line encloses the area within which the spike rate is ≥75% of the unit's maximum response. The black cross shows the direction of the centroid vector, which indicates the unit's preferred sound direction (see METHODS). The white circle represents the visual best position of multiunit activity recorded in the superficial layers of the electrode penetration from which the auditory unit was recorded. Stimulus levels above unit threshold: A, 30 dB; B, 25 dB; C, 0 dB; D, 7.5 dB; E, 25 dB.

FIG. 8. Effects of changing sound level on the SC responses. Circled data points are units which showed a significant change in the measured parameter according to our Monte-Carlo simulation. A: maximum response (mean evoked spike rate per stimulus presentation) of each recorded unit at high and low sound levels. Suprathreshold recordings were performed at an average sound intensity of 34 dB above threshold across all units. Near-threshold recordings were performed at an average intensity of 17 dB above threshold. B: 75% response area (rad²) of each unit at high and low sound levels. C: effect of sound level on the length of the centroid vector. Higher sound levels led to a significant increase in area and maximum response (P < 0.001) but did not affect centroid length over the population as a whole (P = 0.245).

FIG. 10. Comparison of auditory best azimuth (determined by the centroid direction vector) of deep SC units with the visual best azimuth measured in the superficial layers of the same electrode penetration. Near- and suprathreshold recordings from the same neuron are distinguished using blue and red points. Also shown are free-field data from King et al. (1998) in which the best azimuth was defined as the loudspeaker location producing the maximum response within a spatial response profile obtained at a single elevation. The free-field data exclude broadly tuned or multi-peaked units. Units from the present study were only excluded if a permutation test judged the centroid direction to be too variable (see METHODS). Units where the 50% area covered more than half of space have been labeled broad and surrounded with a square. Units with four or more isolated 75% regions are termed multipeaked and surrounded with a green circle.

Effect of manipulating ITD cues

The virtual SRFs of 32 of 45 SC units were recorded using two randomly interleaved ITD conditions. The purpose of this was to investigate the possible contribution of ITDs to the generation of the map of auditory space in the SC. The SRFs recorded when all sound localization cues were allowed to vary naturally were compared with those obtained with a fixed ITD of 200 μs for all sound directions (see METHODS). This time delay would correspond to a sound originating from 90° to the right (ipsilateral to the recorded SC). Unless the virtual sound source originated from this direction, the fixed ITD cues conflicted with the other localization cues. Because most of the units were recorded at more than one sound level, we obtained a total of 54 recordings in the fixed ITD condition and 79 in which ITDs were allowed to vary naturally.

We compared the topographic alignment of the visual and auditory maps using SRFs generated with natural cues and with fixed ITDs (Fig. 11). Despite the reduction of spatial information available in the fixed ITD condition, the auditory centroid vectors continued to co-vary with the visual best positions at both suprathreshold (Fig. 11A) and near-threshold (Fig. 11B) sound levels. We also looked for an effect of ITD on centroid direction, centroid length, 50% SRF area, and maximum response strength (Fig. 12). Of these parameters, only the length of the centroid vector (Fig. 12B) showed a significant overall change, decreasing in value in the fixed ITD condition (paired t-test, t = 3.583, df = 53, P < 0.001). To examine whether fixing the ITD altered the response properties of individual units, we performed a Monte Carlo simulation in which spike counts from the two ITD conditions were pooled and used to generate simulated SRF pairs, from which we calculated differences in various descriptive statistics. This analysis showed that manipulating the ITD resulted in significant changes (P < 0.05) in 17/32 units for at least one of the four measured parameters. A smaller subset of units showed a significant effect for at least one measure of spatial tuning (Monte Carlo analysis, indicated by circled data points in Fig. 12). For example, 7/32 units displayed a significant change in SRF area, with 2/32 units showing a change in centroid direction when the ITD was fixed. Examples of individual SRFs recorded in the two ITD conditions are shown in Fig. 13. These illustrate cases where holding the ITD constant had either no effect (Fig. 13, A and B) or induced a
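The fixed-ITD stimuli are described in METHODS, which this excerpt does not include. Purely as a sketch of the manipulation, the code below estimates the natural interaural delay from the cross-correlation peak of the two ear signals and then shifts one channel by whole samples so the lag equals the 200-μs target, leaving each monaural waveform (and hence ILD and spectral cues) otherwise untouched. The sample rate, sign convention, and circular shift are all illustrative assumptions.

```python
import numpy as np

FS = 48_000           # assumed sample rate (Hz)
TARGET_ITD = 200e-6   # the fixed 200-microsecond ITD

def estimate_itd_samples(left, right):
    """Delay of the right channel relative to the left, in samples,
    taken from the peak of the full cross-correlation."""
    xc = np.correlate(left, right, mode="full")
    lag = int(np.argmax(xc)) - (len(right) - 1)
    return -lag

def impose_fixed_itd(left, right, fs=FS, itd=TARGET_ITD):
    """Force the interaural delay to `itd` seconds (the sign convention
    here is arbitrary), leaving each monaural waveform unchanged."""
    target = int(round(itd * fs))        # desired delay in samples
    shift = target - estimate_itd_samples(left, right)
    # A circular shift keeps the sketch short; real stimuli would be padded.
    return left, np.roll(right, shift)
```

At 48 kHz the 200-μs target corresponds to a 10-sample interaural lag; because only the relative timing of the two channels is altered, the conflict is confined to the ITD cue while ILDs and spectral cues continue to signal the virtual source direction.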
significant change (Fig. 13, C–F) in one of the spatial tuning measures. These results suggest that, at least in some cases, sensitivity to ITDs might contribute to the formation of the SRFs of SC neurons. The limited effect of fixing the ITD on centroid direction indicates, however, that the contribution of this cue to the map of sound azimuth must be minor. This is further highlighted in Fig. 14, in which we combined all the SRFs recorded within four different visual azimuth ranges, i.e., from different regions of the SC. In both the natural (Fig. 14A) and fixed ITD (Fig. 14B) conditions, this resulted in a single-peaked SRF at each location, which varied in azimuth from the rostral to the caudal end of the nucleus. Moreover, despite the variation in centroid vector direction for individual units, particularly in rostral SC (see Fig. 10), these average SRF plots revealed a much tighter correlation with the pooled visual best azimuth.

FIG. 11. Effect of fixing the ITD at 200 μs on the visual-auditory correlation in the SC. A: data obtained at suprathreshold sound levels. B: data obtained at near-threshold sound levels. Circled data points are those for which the centroid direction vector changed significantly as a function of ITD condition according to a Monte Carlo analysis.

FIG. 12. Effect of fixing ITDs on the centroid azimuth direction (A) and length (B) and the SRF response area (C) and maximum response (D). Circled data points exhibited a significant difference between the 2 ITD conditions according to a Monte-Carlo simulation.

FIG. 13. A–F: effect of fixing ITDs on the SRFs of 6 different units. Left: responses in the natural cue condition. Right: responses of the same units when the ITDs were fixed to 200 μs. A and B: units with no significant change in response properties on fixing ITDs. C and D: units exhibiting a significant decrease and increase, respectively, in centroid length. E and F: units showing significant decreases and increases, respectively, in their 75% areas. In all cases, P < 0.05.

DISCUSSION

In this study, we used individualized VAS stimuli, derived from acoustical measurements of each animal's own HRTF, to provide a more detailed characterization of the auditory SRFs of SC neurons than has previously been carried out with free-field stimulation. Moreover, VAS stimuli can be manipulated to assess the contribution of different acoustic cues to the SRFs. We have shown here, by holding ITDs constant while leaving ILDs and spectral cues to vary naturally, that sensitivity to timing differences between the ears is not a major determinant of the map of auditory space in the SC of the ferret.

Fidelity of the virtual acoustic space stimuli

Validation of the VAS stimuli was achieved by comparing the amplitude spectra of the free-field and VAS signals measured with probe microphones implanted into the ear canals and also by a comparison of SRFs of the same IC units recorded with each form of sound presentation. Although the central nucleus of the IC does not contain a map of auditory space, the SRFs found there are sufficiently discrete and inhomogeneous (see also Behrend et al. 2004; Sterbing et al. 2003) to justify using the stronger responses recorded in this nucleus to compare the spatial tuning obtained with free-field and VAS stimulation. We did find a discrepancy between some of the very high-frequency components of the amplitude spectra of the free-field and VAS stimuli for sounds presented on the contralateral side, which arose because these signals were attenuated by the head to the noise level of our recording system. However, the bandwidth of these differences is almost certainly too narrow to be preserved after the sound has been filtered by the cochlea. Overall, the correlation between both acoustical and physiological measures was very high and comparable to that reported in other studies (Behrend et al. 2004; Keller et al. 1998; Sterbing et al. 2003). We can therefore conclude that our VAS stimuli reliably simulated the real sound sources. A similar conclusion was reached by comparing judgements made by human listeners of the apparent locations of free-field and virtual sound sources that had been synthesized in the same way as in the present study (Wightman and Kistler 1989).

Representation of space in the SC

The fidelity of the VAS stimuli was further confirmed by the similarity between the virtual SRFs measured here and the auditory spatial tuning reported in previous free-field studies of the SC. With the exception of a few units in the ferret SC where the SRFs were mapped in more detail (King and Hutchings 1987), most free-field experiments have measured spatial response profiles along a single azimuth or elevation or attempted to estimate the location of the borders of the SRF. Nevertheless, we found that the properties of the VAS SRFs closely resemble those reported in free-field studies of the ferret (King and Hutchings 1987) and other species (King and Palmer 1983; Middlebrooks and Knudsen 1984). Increasing the sound level resulted in a systematic increase in maximum firing rate and in the response area of the units, confirming results found in previous free-field studies (King and Hutchings 1987; King and Palmer 1983; Middlebrooks and Knudsen 1984).
Around 40% of units showed a small but significant change in the direction of the centroid vector, which resulted from a non-uniform increase in SRF area with sound level. There was, however, no systematic shift in preferred sound direction across the population of neurons (Fig. 9), and the distribution of centroid directions with recording site within the SC was very similar at near-threshold and suprathreshold sound levels (Fig. 10). A similar result was noted with free-field stimulation (Carlile and King 1994; King et al. 1994; Middlebrooks and Knudsen 1984).

We did observe more scatter in the auditory map than reported in the free-field studies. In particular, the range of centroid direction vectors of units recorded in rostral SC was larger than that of the best azimuths, as defined by the peak of the spatial response profile, in our earlier free-field studies of the ferret SC (e.g., King and Hutchings 1987; King et al. 1998). Our acoustical measurements and recordings in the IC suggest that this apparent mismatch is unlikely to arise because the VAS stimulation did not adequately replicate the free-field sound source. On the other hand, Behrend et al. (2004) showed that the presence of recording equipment around the animal's head can alter virtual SRFs. Such equipment was obviously in place in the free-field experiments but not when the VAS stimuli were generated for the SC experiments in the present study. Acoustic distortions produced by the recording equipment could therefore contribute to the differences between the azimuth maps observed with VAS and free-field stimulation. Another possibility is that these differences might reflect our use of the centroid as a measure of spatial selectivity, which is based on the whole receptive field rather than the region of maximum response only. This latter possibility is likely, given that, in contrast to the free-field studies, we did not exclude units with very broad or multi-peaked SRFs.

Although the distribution of centroid direction vectors for individual units was quite scattered, the average SRFs obtained by pooling data within discrete regions of the SC revealed a clear topographic shift in spatial selectivity with recording site. The broad spatial tuning of the auditory units is consistent with the coarsely tuned movement fields of deep SC neurons (McIlwain 1991; Lee et al. 1988; Sparks et al. 1976). Thus the location of the stimulus and the vector of the orienting movements elicited by it both appear to be specified by the spatial distribution of activity across a population of SC neurons. Estimates of neuronal discrimination values from the responses of space-mapped neurons in the external nucleus of the IC in the barn owl also suggest that the ability of the animal to detect a change in sound-source direction is based on a shift in the population response of these neurons (Bala et al. 2003).

FIG. 14. Representation of sound azimuth in the SC based on population responses. Plots show normalized auditory SRF data which have been pooled over a range of visual azimuths (90° and behind, 80° to 45°, 45° to 24°, and anterior of 24°). A: data obtained with normal VAS stimuli. The region of maximum response of the pooled data follows the visual best position (the mean of which is shown by the circle). B: this alignment is not altered by fixing the ITDs to 200 μs.

Role of ITDs in location selectivity of SC neurons

Cue-trading experiments have shown that low-frequency ITDs are the primary cue for localization by humans in the horizontal plane (Wightman and Kistler 1992). Tuning to ITDs is also known to underlie the representation of sound azimuth in the optic tectum of the barn owl (Olsen et al. 1989) as well as the behavioral responses of this species (Poganiatz et al. 2001). In contrast, previous recording studies suggest that the map of space in the mammalian SC is based mainly on ILDs and spectral cues. This is supported by the broad, multi-peaked frequency tuning of SC neurons, which is dominated by high frequencies (Carlile and Pettigrew 1987; Hirsch et al. 1985; King and Carlile 1994; King and Palmer 1983; Middlebrooks 1987; Wise and Irvine 1983), and by the presence of a topographic variation in ILD sensitivity along the rostrocaudal axis of the nucleus (Hirsch et al. 1985; Wise and Irvine 1985). Moreover, the changes in spatial tuning observed following occlusion (Middlebrooks 1987; Palmer and King 1985) or passive movement of one ear (Middlebrooks and Knudsen 1987), or after removal of the external ear structures (Carlile and King 1994; Schnupp et al. 1998), are consistent with a combination of ILDs and spectral cues being the primary determinants of the spatial selectivity of mammalian SC neurons. Sensitivity to ITDs has been demonstrated in cat SC neurons (Hirsch et al. 1985), but because response latencies decrease with increasing sound level, this could provide a mechanism underlying the processing of ILDs rather than a basis for the representation of auditory space (Yin et al. 1985).

A unique advantage of VAS stimulation is the capacity to manipulate independently the acoustic cues available and therefore to assess their contribution to auditory localization (King et al. 2001). This has been done in human psychophysical (Martin et al. 2004; Wightman and Kistler 1992, 1997) and neurophysiological studies (Delgutte et al. 1999; Nelken et al. 1998; Tollin and Yin 2002a,b). By holding ITDs at a constant value while allowing ILDs and spectral cues to vary naturally, we found that the properties of the SRFs of a minority of SC neurons did change significantly, perhaps reflecting the broad ITD sensitivity demonstrated by Hirsch et al. (1985) in the cat. However, there was no overall shift in the centroid direction vectors, which continued to co-vary with the visual receptive fields mapped at the same SC locations. Although behavioral measurements have shown that ferrets can certainly localize sounds using ITDs (A. Schulz, J.W.H. Schnupp, and A. J. King, unpublished observations), our results indicate that these binaural cues make little contribution to the map of auditory space in the SC and are therefore presumably processed by other pathways within the midbrain.

ACKNOWLEDGMENTS

We are grateful to J. Bithell and R. Ripley for valuable ideas for the data analysis.

GRANTS

This work was supported by the Wellcome Trust, through a 4-year studentship to R.A.A. Campbell and a Senior Research Fellowship to A. J. King, and by Biotechnology and Biological Sciences Research Council Grant 43/S19595 to J.W.H. Schnupp.

More information

HST.723J, Spring 2005 Theme 3 Report

HST.723J, Spring 2005 Theme 3 Report HST.723J, Spring 2005 Theme 3 Report Madhu Shashanka shashanka@cns.bu.edu Introduction The theme of this report is binaural interactions. Binaural interactions of sound stimuli enable humans (and other

More information

Lecture 7 Hearing 2. Raghav Rajan Bio 354 Neurobiology 2 February 04th All lecture material from the following links unless otherwise mentioned:

Lecture 7 Hearing 2. Raghav Rajan Bio 354 Neurobiology 2 February 04th All lecture material from the following links unless otherwise mentioned: Lecture 7 Hearing 2 All lecture material from the following links unless otherwise mentioned: 1. http://wws.weizmann.ac.il/neurobiology/labs/ulanovsky/sites/neurobiology.labs.ulanovsky/files/uploads/purves_ch12_ch13_hearing

More information

Spectrograms (revisited)

Spectrograms (revisited) Spectrograms (revisited) We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a

More information

Responses of Auditory Cortical Neurons to Pairs of Sounds: Correlates of Fusion and Localization

Responses of Auditory Cortical Neurons to Pairs of Sounds: Correlates of Fusion and Localization Responses of Auditory Cortical Neurons to Pairs of Sounds: Correlates of Fusion and Localization BRIAN J. MICKEY AND JOHN C. MIDDLEBROOKS Kresge Hearing Research Institute, University of Michigan, Ann

More information

Binaural Tuning of Auditory Units in the Forebrain Archistriatal Gaze Fields of the Barn Owl: Local Organization but No Space Map

Binaural Tuning of Auditory Units in the Forebrain Archistriatal Gaze Fields of the Barn Owl: Local Organization but No Space Map The Journal of Neuroscience, July 1995, 75(7): 5152-5168 Binaural Tuning of Auditory Units in the Forebrain Archistriatal Gaze Fields of the Barn Owl: Local Organization but No Space Map Yale E. Cohen

More information

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing Categorical Speech Representation in the Human Superior Temporal Gyrus Edward F. Chang, Jochem W. Rieger, Keith D. Johnson, Mitchel S. Berger, Nicholas M. Barbaro, Robert T. Knight SUPPLEMENTARY INFORMATION

More information

A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl

A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl Andrea Haessly andrea@cs.utexas.edu Joseph Sirosh sirosh@cs.utexas.edu Risto Miikkulainen risto@cs.utexas.edu Abstract

More information

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor Signals, systems, acoustics and the ear Week 5 The peripheral auditory system: The ear as a signal processor Think of this set of organs 2 as a collection of systems, transforming sounds to be sent to

More information

Correlation Between the Activity of Single Auditory Cortical Neurons and Sound-Localization Behavior in the Macaque Monkey

Correlation Between the Activity of Single Auditory Cortical Neurons and Sound-Localization Behavior in the Macaque Monkey Correlation Between the Activity of Single Auditory Cortical Neurons and Sound-Localization Behavior in the Macaque Monkey GREGG H. RECANZONE, 1,2 DARREN C. GUARD, 1 MIMI L. PHAN, 1 AND TIEN-I K. SU 1

More information

Neural System Model of Human Sound Localization

Neural System Model of Human Sound Localization in Advances in Neural Information Processing Systems 13 S.A. Solla, T.K. Leen, K.-R. Müller (eds.), 761 767 MIT Press (2000) Neural System Model of Human Sound Localization Craig T. Jin Department of Physiology

More information

The Coding of Spatial Location by Single Units in the Lateral Superior Olive of the Cat. I. Spatial Receptive Fields in Azimuth

The Coding of Spatial Location by Single Units in the Lateral Superior Olive of the Cat. I. Spatial Receptive Fields in Azimuth The Journal of Neuroscience, February 15, 2002, 22(4):1454 1467 The Coding of Spatial Location by Single Units in the Lateral Superior Olive of the Cat. I. Spatial Receptive Fields in Azimuth Daniel J.

More information

IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer

IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION Griffin D. Romigh, Brian D. Simpson, Nandini Iyer 711th Human Performance Wing Air Force Research Laboratory

More information

Representation of sound in the auditory nerve

Representation of sound in the auditory nerve Representation of sound in the auditory nerve Eric D. Young Department of Biomedical Engineering Johns Hopkins University Young, ED. Neural representation of spectral and temporal information in speech.

More information

Hearing in the Environment

Hearing in the Environment 10 Hearing in the Environment Click Chapter to edit 10 Master Hearing title in the style Environment Sound Localization Complex Sounds Auditory Scene Analysis Continuity and Restoration Effects Auditory

More information

Neural Correlates and Mechanisms of Spatial Release From Masking: Single-Unit and Population Responses in the Inferior Colliculus

Neural Correlates and Mechanisms of Spatial Release From Masking: Single-Unit and Population Responses in the Inferior Colliculus J Neurophysiol 94: 1180 1198, 2005. First published April 27, 2005; doi:10.1152/jn.01112.2004. Neural Correlates and Mechanisms of Spatial Release From Masking: Single-Unit and Population Responses in

More information

Modeling Physiological and Psychophysical Responses to Precedence Effect Stimuli

Modeling Physiological and Psychophysical Responses to Precedence Effect Stimuli Modeling Physiological and Psychophysical Responses to Precedence Effect Stimuli Jing Xia 1, Andrew Brughera 2, H. Steven Colburn 2, and Barbara Shinn-Cunningham 1, 2 1 Department of Cognitive and Neural

More information

Linguistic Phonetics. Basic Audition. Diagram of the inner ear removed due to copyright restrictions.

Linguistic Phonetics. Basic Audition. Diagram of the inner ear removed due to copyright restrictions. 24.963 Linguistic Phonetics Basic Audition Diagram of the inner ear removed due to copyright restrictions. 1 Reading: Keating 1985 24.963 also read Flemming 2001 Assignment 1 - basic acoustics. Due 9/22.

More information

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information

More information

Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus.

Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus. Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus. R.Y. Litovsky 1,3, C. C. Lane 1,2, C.. tencio 1 and. Delgutte 1,2 1 Massachusetts Eye and

More information

Perceptual Plasticity in Spatial Auditory Displays

Perceptual Plasticity in Spatial Auditory Displays Perceptual Plasticity in Spatial Auditory Displays BARBARA G. SHINN-CUNNINGHAM, TIMOTHY STREETER, and JEAN-FRANÇOIS GYSS Hearing Research Center, Boston University Often, virtual acoustic environments

More information

Considerable plasticity exists in the neural circuits that process. Plasticity in the neural coding of auditory space in the mammalian brain

Considerable plasticity exists in the neural circuits that process. Plasticity in the neural coding of auditory space in the mammalian brain Colloquium Plasticity in the neural coding of auditory space in the mammalian brain Andrew J. King*, Carl H. Parsons, and David R. Moore University Laboratory of Physiology, Oxford University, Parks Road,

More information

The development of a modified spectral ripple test

The development of a modified spectral ripple test The development of a modified spectral ripple test Justin M. Aronoff a) and David M. Landsberger Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California

More information

Effects of Remaining Hair Cells on Cochlear Implant Function

Effects of Remaining Hair Cells on Cochlear Implant Function Effects of Remaining Hair Cells on Cochlear Implant Function 8th Quarterly Progress Report Neural Prosthesis Program Contract N01-DC-2-1005 (Quarter spanning April-June, 2004) P.J. Abbas, H. Noh, F.C.

More information

How high-frequency do children hear?

How high-frequency do children hear? How high-frequency do children hear? Mari UEDA 1 ; Kaoru ASHIHARA 2 ; Hironobu TAKAHASHI 2 1 Kyushu University, Japan 2 National Institute of Advanced Industrial Science and Technology, Japan ABSTRACT

More information

Behavioral generalization

Behavioral generalization Supplementary Figure 1 Behavioral generalization. a. Behavioral generalization curves in four Individual sessions. Shown is the conditioned response (CR, mean ± SEM), as a function of absolute (main) or

More information

Effect of spectral content and learning on auditory distance perception

Effect of spectral content and learning on auditory distance perception Effect of spectral content and learning on auditory distance perception Norbert Kopčo 1,2, Dávid Čeljuska 1, Miroslav Puszta 1, Michal Raček 1 a Martin Sarnovský 1 1 Department of Cybernetics and AI, Technical

More information

How is the stimulus represented in the nervous system?

How is the stimulus represented in the nervous system? How is the stimulus represented in the nervous system? Eric Young F Rieke et al Spikes MIT Press (1997) Especially chapter 2 I Nelken et al Encoding stimulus information by spike numbers and mean response

More information

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source Binaural Phenomena Aim To understand binaural hearing Objectives Understand the cues used to determine the location of a sound source Understand sensitivity to binaural spatial cues, including interaural

More information

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431 The effects of spatial separation in distance on the informational and energetic masking of a nearby speech signal Douglas S. Brungart a) Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson

More information

Chapter 11: Sound, The Auditory System, and Pitch Perception

Chapter 11: Sound, The Auditory System, and Pitch Perception Chapter 11: Sound, The Auditory System, and Pitch Perception Overview of Questions What is it that makes sounds high pitched or low pitched? How do sound vibrations inside the ear lead to the perception

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Processing in The Cochlear Nucleus

Processing in The Cochlear Nucleus Processing in The Cochlear Nucleus Alan R. Palmer Medical Research Council Institute of Hearing Research University Park Nottingham NG7 RD, UK The Auditory Nervous System Cortex Cortex MGB Medial Geniculate

More information

Supporting Information

Supporting Information Supporting Information ten Oever and Sack 10.1073/pnas.1517519112 SI Materials and Methods Experiment 1. Participants. A total of 20 participants (9 male; age range 18 32 y; mean age 25 y) participated

More information

J Jeffress model, 3, 66ff

J Jeffress model, 3, 66ff Index A Absolute pitch, 102 Afferent projections, inferior colliculus, 131 132 Amplitude modulation, coincidence detector, 152ff inferior colliculus, 152ff inhibition models, 156ff models, 152ff Anatomy,

More information

Plasticity of Cerebral Cortex in Development

Plasticity of Cerebral Cortex in Development Plasticity of Cerebral Cortex in Development Jessica R. Newton and Mriganka Sur Department of Brain & Cognitive Sciences Picower Center for Learning & Memory Massachusetts Institute of Technology Cambridge,

More information

Theta sequences are essential for internally generated hippocampal firing fields.

Theta sequences are essential for internally generated hippocampal firing fields. Theta sequences are essential for internally generated hippocampal firing fields. Yingxue Wang, Sandro Romani, Brian Lustig, Anthony Leonardo, Eva Pastalkova Supplementary Materials Supplementary Modeling

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Trial structure for go/no-go behavior

Nature Neuroscience: doi: /nn Supplementary Figure 1. Trial structure for go/no-go behavior Supplementary Figure 1 Trial structure for go/no-go behavior a, Overall timeline of experiments. Day 1: A1 mapping, injection of AAV1-SYN-GCAMP6s, cranial window and headpost implantation. Water restriction

More information

I. INTRODUCTION. J. Acoust. Soc. Am. 113 (3), March /2003/113(3)/1631/15/$ Acoustical Society of America

I. INTRODUCTION. J. Acoust. Soc. Am. 113 (3), March /2003/113(3)/1631/15/$ Acoustical Society of America Auditory spatial discrimination by barn owls in simulated echoic conditions a) Matthew W. Spitzer, b) Avinash D. S. Bala, and Terry T. Takahashi Institute of Neuroscience, University of Oregon, Eugene,

More information

Functional Organization of Ferret Auditory Cortex

Functional Organization of Ferret Auditory Cortex Cerebral Cortex October 2005;15:1637--1653 doi:10.1093/cercor/bhi042 Advance Access publication February 9, 2005 Functional Organization of Ferret Auditory Cortex Jennifer K. Bizley 1, Fernando R. Nodal

More information

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 SOLUTIONS Homework #3 Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 Problem 1: a) Where in the cochlea would you say the process of "fourier decomposition" of the incoming

More information

Development of Sound Localization 2. How do the neural mechanisms subserving sound localization develop?

Development of Sound Localization 2. How do the neural mechanisms subserving sound localization develop? Development of Sound Localization 2 How do the neural mechanisms subserving sound localization develop? 1 Overview of the development of sound localization Gross localization responses are observed soon

More information

The use of interaural time and level difference cues by bilateral cochlear implant users

The use of interaural time and level difference cues by bilateral cochlear implant users The use of interaural time and level difference cues by bilateral cochlear implant users Justin M. Aronoff, a) Yang-soo Yoon, and Daniel J. Freed b) Communication and Neuroscience Division, House Ear Institute,

More information

Article. Context-Specific Reweighting of Auditory Spatial Cues following Altered Experience during Development

Article. Context-Specific Reweighting of Auditory Spatial Cues following Altered Experience during Development Current Biology 23, 1291 1299, July 22, 2013 ª2013 The Authors. Open access under CC BY license. http://dx.doi.org/10.1016/j.cub.2013.05.045 Context-Specific Reweighting of Auditory Spatial Cues following

More information

Distinct Mechanisms for Top-Down Control of Neural Gain and Sensitivity in the Owl Optic Tectum

Distinct Mechanisms for Top-Down Control of Neural Gain and Sensitivity in the Owl Optic Tectum Article Distinct Mechanisms for Top-Down Control of Neural Gain and Sensitivity in the Owl Optic Tectum Daniel E. Winkowski 1,2, * and Eric I. Knudsen 1 1 Neurobiology Department, Stanford University Medical

More information

The Auditory Nervous System

The Auditory Nervous System Processing in The Superior Olivary Complex The Auditory Nervous System Cortex Cortex Alan R. Palmer MGB Excitatory GABAergic IC Glycinergic Interaural Level Differences Medial Geniculate Body Inferior

More information

Supporting Information

Supporting Information 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 Supporting Information Variances and biases of absolute distributions were larger in the 2-line

More information

Response Properties of Neighboring Neurons in the Auditory Midbrain for Pure-Tone Stimulation: A Tetrode Study

Response Properties of Neighboring Neurons in the Auditory Midbrain for Pure-Tone Stimulation: A Tetrode Study J Neurophysiol 98: 258 273, 27. First published August 1, 27; doi:1.1152/jn.1317.26. Response Properties of Neighboring Neurons in the Auditory Midbrain for Pure-Tone Stimulation: A Tetrode Study Chandran

More information

Processing in The Superior Olivary Complex

Processing in The Superior Olivary Complex Processing in The Superior Olivary Complex Alan R. Palmer Medical Research Council Institute of Hearing Research University Park Nottingham NG7 2RD, UK Binaural cues for Localising Sounds in Space time

More information

Auditory spatial tuning at the cross-roads of the midbrain and forebrain

Auditory spatial tuning at the cross-roads of the midbrain and forebrain Articles in PresS. J Neurophysiol (July 1, 2009). doi:10.1152/jn.00400.2009 1 Auditory spatial tuning at the cross-roads of the midbrain and forebrain 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Authors: M.

More information

Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway

Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway Richard Kempter* Institut fur Theoretische Physik Physik-Department der TU Munchen D-85748 Garching bei Munchen J. Leo van

More information

Neuroethology in Neuroscience or Why study an exotic animal

Neuroethology in Neuroscience or Why study an exotic animal Neuroethology in Neuroscience or Why study an exotic animal Nobel prize in Physiology and Medicine 1973 Karl von Frisch Konrad Lorenz Nikolaas Tinbergen for their discoveries concerning "organization and

More information

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014 Analysis of in-vivo extracellular recordings Ryan Morrill Bootcamp 9/10/2014 Goals for the lecture Be able to: Conceptually understand some of the analysis and jargon encountered in a typical (sensory)

More information

Encoding Stimulus Information by Spike Numbers and Mean Response Time in Primary Auditory Cortex

Encoding Stimulus Information by Spike Numbers and Mean Response Time in Primary Auditory Cortex Journal of Computational Neuroscience 19, 199 221, 2005 c 2005 Springer Science + Business Media, Inc. Manufactured in The Netherlands. Encoding Stimulus Information by Spike Numbers and Mean Response

More information

Adaptation to Stimulus Statistics in the Perception and Neural Representation of Auditory Space

Adaptation to Stimulus Statistics in the Perception and Neural Representation of Auditory Space Article Adaptation to Stimulus Statistics in the Perception and Neural Representation of Auditory Space Johannes C. Dahmen, 1, * Peter Keating, 1 Fernando R. Nodal, 1 Andreas L. Schulz, 1 and Andrew J.

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training. Supplementary Figure 1 Behavioral training. a, Mazes used for behavioral training. Asterisks indicate reward location. Only some example mazes are shown (for example, right choice and not left choice maze

More information

Effect of microphone position in hearing instruments on binaural masking level differences

Effect of microphone position in hearing instruments on binaural masking level differences Effect of microphone position in hearing instruments on binaural masking level differences Fredrik Gran, Jesper Udesen and Andrew B. Dittberner GN ReSound A/S, Research R&D, Lautrupbjerg 7, 2750 Ballerup,

More information

COM3502/4502/6502 SPEECH PROCESSING

COM3502/4502/6502 SPEECH PROCESSING COM3502/4502/6502 SPEECH PROCESSING Lecture 4 Hearing COM3502/4502/6502 Speech Processing: Lecture 4, slide 1 The Speech Chain SPEAKER Ear LISTENER Feedback Link Vocal Muscles Ear Sound Waves Taken from:

More information

CROSSMODAL PLASTICITY IN SPECIFIC AUDITORY CORTICES UNDERLIES VISUAL COMPENSATIONS IN THE DEAF "

CROSSMODAL PLASTICITY IN SPECIFIC AUDITORY CORTICES UNDERLIES VISUAL COMPENSATIONS IN THE DEAF Supplementary Online Materials To complement: CROSSMODAL PLASTICITY IN SPECIFIC AUDITORY CORTICES UNDERLIES VISUAL COMPENSATIONS IN THE DEAF " Stephen G. Lomber, M. Alex Meredith, and Andrej Kral 1 Supplementary

More information

Pharmacological Specialization of Learned Auditory Responses in the Inferior Colliculus of the Barn Owl

Pharmacological Specialization of Learned Auditory Responses in the Inferior Colliculus of the Barn Owl The Journal of Neuroscience, April 15, 1998, 18(8):3073 3087 Pharmacological Specialization of Learned Auditory Responses in the Inferior Colliculus of the Barn Owl Daniel E. Feldman and Eric I. Knudsen

More information

Two Modified IEC Ear Simulators for Extended Dynamic Range

Two Modified IEC Ear Simulators for Extended Dynamic Range Two Modified IEC 60318-4 Ear Simulators for Extended Dynamic Range Peter Wulf-Andersen & Morten Wille The international standard IEC 60318-4 specifies an occluded ear simulator, often referred to as a

More information

Deafness and hearing impairment

Deafness and hearing impairment Auditory Physiology Deafness and hearing impairment About one in every 10 Americans has some degree of hearing loss. The great majority develop hearing loss as they age. Hearing impairment in very early

More information

Nature Methods: doi: /nmeth Supplementary Figure 1. Activity in turtle dorsal cortex is sparse.

Nature Methods: doi: /nmeth Supplementary Figure 1. Activity in turtle dorsal cortex is sparse. Supplementary Figure 1 Activity in turtle dorsal cortex is sparse. a. Probability distribution of firing rates across the population (notice log scale) in our data. The range of firing rates is wide but

More information

Effect of source spectrum on sound localization in an everyday reverberant room

Effect of source spectrum on sound localization in an everyday reverberant room Effect of source spectrum on sound localization in an everyday reverberant room Antje Ihlefeld and Barbara G. Shinn-Cunningham a) Hearing Research Center, Boston University, Boston, Massachusetts 02215

More information

Systems Neuroscience Oct. 16, Auditory system. http:

Systems Neuroscience Oct. 16, Auditory system. http: Systems Neuroscience Oct. 16, 2018 Auditory system http: www.ini.unizh.ch/~kiper/system_neurosci.html The physics of sound Measuring sound intensity We are sensitive to an enormous range of intensities,

More information

Microcircuitry coordination of cortical motor information in self-initiation of voluntary movements

Microcircuitry coordination of cortical motor information in self-initiation of voluntary movements Y. Isomura et al. 1 Microcircuitry coordination of cortical motor information in self-initiation of voluntary movements Yoshikazu Isomura, Rie Harukuni, Takashi Takekawa, Hidenori Aizawa & Tomoki Fukai

More information

Supplementary Figure S1: Histological analysis of kainate-treated animals

Supplementary Figure S1: Histological analysis of kainate-treated animals Supplementary Figure S1: Histological analysis of kainate-treated animals Nissl stained coronal or horizontal sections were made from kainate injected (right) and saline injected (left) animals at different

More information

Ch 5. Perception and Encoding

Ch 5. Perception and Encoding Ch 5. Perception and Encoding Cognitive Neuroscience: The Biology of the Mind, 2 nd Ed., M. S. Gazzaniga, R. B. Ivry, and G. R. Mangun, Norton, 2002. Summarized by Y.-J. Park, M.-H. Kim, and B.-T. Zhang

More information

1- Cochlear Impedance Telemetry

1- Cochlear Impedance Telemetry INTRA-OPERATIVE COCHLEAR IMPLANT MEASURMENTS SAMIR ASAL M.D 1- Cochlear Impedance Telemetry 1 Cochlear implants used presently permit bi--directional communication between the inner and outer parts of

More information

Before we talk about the auditory system we will talk about the sound and waves

Before we talk about the auditory system we will talk about the sound and waves The Auditory System PHYSIO: #3 DR.LOAI ZAGOUL 24/3/2014 Refer to the slides for some photos. Before we talk about the auditory system we will talk about the sound and waves All waves have basic characteristics:

More information

Isolating mechanisms that influence measures of the precedence effect: Theoretical predictions and behavioral tests

Isolating mechanisms that influence measures of the precedence effect: Theoretical predictions and behavioral tests Isolating mechanisms that influence measures of the precedence effect: Theoretical predictions and behavioral tests Jing Xia and Barbara Shinn-Cunningham a) Department of Cognitive and Neural Systems,

More information

The role of high frequencies in speech localization

The role of high frequencies in speech localization The role of high frequencies in speech localization Virginia Best a and Simon Carlile Department of Physiology, University of Sydney, Sydney, NSW, 2006, Australia Craig Jin and André van Schaik School

More information