Commentary: How the owl tracks its prey II
The Journal of Experimental Biology 213, 3399. Published by The Company of Biologists Ltd. doi:10.1242/jeb

Terry T. Takahashi
Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA

Accepted 27 July 2010

Summary
Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single-source case will then be expanded to environments in which there are multiple sound sources and echoes, environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered.

Key words: owls, binaural, cocktail party effect, multimodal, precedence effect, sound localization.

Introduction
When a barn owl (Tyto alba) hears a sound of interest, it turns its head rapidly to train its gaze on the source. If it decides that the sound comes from prey, such as a vole or mouse, it plunges headlong into the darkness, all the while keeping its head centered on the sound source. Just before impact, it retracts its head, replacing it with its talons, and lands atop the prey (Konishi, 1973a). That this behavior could be elicited in complete darkness, coupled with anecdotal reports that barn owls dive into deep snow to extract prey, suggested that vision is not necessary (Payne, 1971). Payne and Drury (Payne and Drury, 1958) demonstrated the crucial role of hearing by placing a barn owl in a darkened room with a mouse towing a crumpled wad of paper. The owl struck the paper, thus ruling out vision and olfaction.
In a more controlled setting, Konishi (Konishi, 1973b) confirmed the role of audition and measured the accuracy of these strikes with an owl named Roger (after Payne), trained to strike at loudspeakers. It has been nearly 40 years since Konishi (Konishi, 1973a) published the paper with the same title as this Commentary. In the meantime, the owl has become an established model of sound localization, and the study of this animal has contributed not just to the understanding of sound localization but also to an understanding of neuroplasticity and neural computation. Below, I will attempt to review some of the more recent contributions.

Acoustical cues for the location of a sound source
In the visual system, objects in the environment cast an image on the retina, much as a camera lens casts an image on film. In the auditory system, there is no such spatially organized image on the peripheral receptor, i.e. the array of hair cells along the basilar membrane. Space in the auditory system is, instead, computed primarily from the interaural differences in the arrival times (interaural time difference or ITD) and sound-pressure levels (interaural level difference or ILD) of the acoustical waveforms. Unlike human ears, which are symmetrically located on either side of the head, the ears of the barn owl are asymmetrical in position and shape. As a result, ILD provides information on the vertical position of the sound source (the elevation), while ITD provides information on the horizontal position (the azimuth), as it does in humans. The correspondence between ITD and azimuth and between ILD and elevation, as well as their dependence on frequency, can be seen in Fig. 1 (Keller et al., 1998). Each diamond represents the space in front of the owl (down is below the owl; left is to the owl's left), and the colors represent ITD and ILD for different frequency bands.
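The two cues can be pulled from a pair of ear signals with elementary signal processing: cross-correlation for ITD and a level ratio for ILD. The sketch below is purely illustrative; the sign convention (positive ITD when the left ear leads) and the sampling rate are my own arbitrary choices, not conventions from the owl literature.

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Estimate ITD (s) and ILD (dB) from left- and right-ear signals.

    ITD: lag of the peak of the interaural cross-correlation
         (positive when the left ear leads).
    ILD: right-re-left root-mean-square level difference in decibels.
    """
    n = len(left)
    xcorr = np.correlate(right, left, mode="full")   # lags -(n-1) .. (n-1)
    itd = (np.argmax(xcorr) - (n - 1)) / fs
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    ild = 20.0 * np.log10(rms(right) / rms(left))
    return itd, ild
```

For example, a right-ear copy that is delayed by 0.5 ms and attenuated by half yields a positive ITD of 0.5 ms and an ILD of about -6 dB.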
In the plots of ITD, the color changes from left to right, indicating that ITD varies with the sound source's azimuth over the entire range of frequencies used for sound localization (3-10 kHz) (Konishi, 1973b). ITD remains relatively, but not completely, constant across frequencies (Keller et al., 1998). ILD also varies with the source's azimuth at low frequencies (e.g. 3.5 kHz), but, as frequency increases, the axis along which ILD changes becomes increasingly vertical, allowing for the representation of elevation. Thus, a location in space can be specified by a spectrum of ILDs and, to a first approximation, a single ITD value. The manner in which the ears and head alter the magnitude and phase of sounds in a location-specific manner is called the head-related transfer function (HRTF). Filtering a sound with the location-specific HRTFs of each ear and presenting the results over headphones reproduces the signals at the eardrums from a real source at that location. The sound source is said to have been placed in virtual auditory space, and the perception of space thus achieved is highly realistic. By contrast, if one simply alters the magnitude or timing of the sound in one ear by the same amount across all frequencies, i.e. ignoring the HRTFs, the experience is that of a sound inside the head. Behavioral evidence for the correspondence between cues and source position comes from studies in which an owl, trained to point its head at a sound source, had one of its ears plugged (Knudsen et al., 1979). Measurements showed that the signal in the plugged ear is not only attenuated but also delayed. Plugging the left ear, for instance, causes the right ear's signal to seem louder and to lead; as a result, the owl mis-localizes the sound, turning its head above and to the right of the loudspeaker (Knudsen et al., 1979).

Fig. 1. ILD and ITD components of the barn owl's head-related transfer functions (HRTFs). Each diamond represents the space in front of the owl for four frequency bands. Warmer colors indicate that the sound in the right ear leads or is louder (see Keller et al., 1998).

Fig. 2. Parallel neural pathways for the processing of interaural time difference (ITD, red) and interaural level difference (ILD, blue) information. Information in the ITD and ILD pathways is combined at the level of the lateral shell of the opposite central nucleus of the inferior colliculus (ICc-ls) within frequency channels. The combined information is integrated across frequencies in the external nucleus of the inferior colliculus (ICx).

Other owls with asymmetrical ears, such as Tengmalm's owl (Aegolius funereus), are presumed to use the binaural cues in a manner similar to the barn owl (Payne, 1971). Humans use ITD and ILD to localize, respectively, the low- and high-frequency components of sounds along the azimuth (Strutt, 1907). This contrasts with the barn owl, which assigns the two cues to two different spatial axes. For humans to localize sounds in elevation, monaural cues, based on the shape of the magnitude spectrum in each ear, are thought to be important (reviewed by Blauert, 1997). Recent studies suggest that such spectral-shape cues are not used for localization by owls. In a clever experiment, Egnor (Egnor, 2000) took the HRTFs of owls for certain test locations, inverted them, and applied them to the opposite ears. This inverted-reversed filter preserves the ILD spectra for each test location but dramatically alters the monaural spectral shapes.
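The virtual auditory space technique described above amounts to convolving one monaural signal with each ear's head-related impulse response (the time-domain counterpart of the HRTF). The sketch below uses toy impulse responses, a simple delay-and-attenuate pair, rather than measured owl HRIRs.

```python
import numpy as np

def render_vas(source, hrir_left, hrir_right):
    """Place a monaural source in virtual auditory space by filtering it
    with per-ear head-related impulse responses (HRIRs)."""
    return np.convolve(source, hrir_left), np.convolve(source, hrir_right)

# Toy HRIRs for one location: the right ear's copy is delayed by 24 samples
# and attenuated, standing in for real, measured, direction-specific filters.
hrir_left = np.array([1.0])
hrir_right = np.zeros(25)
hrir_right[24] = 0.6

left_ear, right_ear = render_vas(np.array([1.0, 0.0, 0.0]), hrir_left, hrir_right)
```

Real HRIRs impose a different delay and gain in every frequency band, which is what makes the resulting percept externalized rather than "inside the head".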
Her data showed that the owls' ability to localize sounds filtered through the inverted-reversed HRTFs was as accurate and precise as their ability to localize sounds filtered through normal HRTFs.

In the visual, electrosensory and somatosensory systems, stimulus space (e.g. the skin surface) is represented in the central nervous system by preserving the neighbor-relationships of axons in the primary projection. In the auditory system, space must be computed. Yet the external nucleus of the barn owl's inferior colliculus (ICx) contains such a spatiotopic representation of the frontal hemisphere (Knudsen and Konishi, 1978). This map is composed of neurons, called space-map or space-specific neurons, which have well-circumscribed spatial receptive fields (SRFs). Lesions of the barn owl's auditory space map lead to scotoma-like defects in sound localization, and microstimulation of the space map evokes a rapid head turn toward the area of space represented at the point of stimulation (du Lac and Knudsen, 1990; Masino and Knudsen, 1990; Masino and Knudsen, 1992; Masino and Knudsen, 1993; Wagner, 1993). As shown in Fig. 2, the two cues for sound localization, ILD (blue) and ITD (red), are processed in anatomically separate neural pathways starting at the cochlear nuclei, the nucleus angularis (NA) and the nucleus magnocellularis (NM), respectively. The two pathways converge in the central nucleus of the inferior colliculus (ICc), one synaptic step before the space map in the ICx (Takahashi and Konishi, 1988a; Takahashi and Konishi, 1988b; Takahashi et al., 1984). Below, the processes by which binaural cues are transformed into a spatiotopic map of auditory space are described.

Neural computation of ITD
The computation of ITD begins with the preservation and encoding of the monaural phase angles of each spectral component by phase-locking neurons of the NM (Sullivan and Konishi, 1984).
In the barn owl, phase locking extends to neurons with best frequencies as high as 9 kHz, at which strong ILDs are generated. ITD and ILD can therefore operate over the same frequency range, allowing the owl to localize sounds in two dimensions. The NM projects bilaterally to the nucleus laminaris (NL), the avian analog of the mammalian medial superior olivary nucleus. In the NL, interaural differences in the phase angles of each spectral component are computed by a binaural cross-correlation-like mechanism very similar to that originally proposed by Jeffress in 1948 (Carr and Konishi, 1990; Jeffress, 1948; Yin and Chan, 1990). As a result, neurons in the NL are selective for interaural phase difference (IPD). The NL projects directly to the core of the contralateral ICc (ICc-core), which, in turn, projects to the lateral shell of the opposite ICc (the ICc-ls) (Takahashi et al., 1989). Within the ICc-ls and the ICc-core, neurons are organized into tonotopic columns and, within each column, all cells are tuned to the same ITD. Cells in a column of the ICc-ls project convergently onto a cluster of space-map neurons in the ICx, which endows them with selectivity for the ITD preserved by the column and a sensitivity to a broad range of frequencies (Takahashi and Konishi, 1986; Wagner et al., 1987). In other words, a neuron of the space map (or of a column in the ICc) responds maximally at a particular ITD, called the characteristic delay, regardless of the frequency of the stimulus tone. The concept of the characteristic delay has been accepted since its conceptualization, and evidence for it was first uncovered in the mammalian inferior colliculus (Rose et al., 1966). More recently, however, this seemingly fundamental concept has been challenged by workers studying ITD tuning in two rodents, the gerbil and the guinea pig (Brand et al., 2002; McAlpine and Grothe, 2003; McAlpine et al., 2001).
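The Jeffress-style cross-correlation computed in the NL can be caricatured in a few lines: an array of coincidence detectors, each with a different internal delay, is most active when its internal delay compensates the external ITD. This is an illustrative sketch, not a model of NL biophysics, and the sampling rate and candidate ITD range are arbitrary choices.

```python
import numpy as np

def jeffress_itd(left, right, fs, max_itd_us=250.0):
    """Estimate ITD with a bank of delay-compensated coincidence detectors.

    Each candidate lag stands in for one 'laminaris' unit whose internal
    delay cancels that lag; the lag of the most active unit is the ITD
    estimate. Positive values mean the left ear leads.
    """
    max_lag = int(round(max_itd_us * 1e-6 * fs))
    n = len(left)
    best_act, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:   # right-ear signal delayed by `lag` samples
            act = float(np.dot(left[:n - lag], right[lag:]))
        else:          # left-ear signal delayed
            act = float(np.dot(left[-lag:], right[:n + lag]))
        if act > best_act:
            best_act, best_lag = act, lag
    return best_lag / fs
```

In the real circuit the lags are implemented by graded axonal conduction delays, and the "argmax" is read out by the downstream map rather than computed explicitly.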
For neurons that have a characteristic delay, the optimal interaural difference in the phase angle of each spectral component (IPD_best) varies as a linear function of frequency, thus:

IPD_best = ITD_best × BF, (1)

where BF and ITD_best are the frequency and ITD that maximally excite a cell (Yin and Kuwada, 1983). For a large proportion of neurons in the inferior colliculi of these rodents, however, it is IPD that is constant across cells with different best frequencies, not ITD. Specifically, the peaks of the ITD tuning functions in the rodent colliculi are shifted by roughly 0.125 cycles and, as a result, the ITD_best of a cell depends upon its BF (McAlpine et al., 2001), thus:

ITD_best = 0.125 / BF. (2)

For rodent IC neurons with low BFs, this 0.125-cycle shift in IPD puts the peaks of the ITD tuning curves of a large proportion of neurons outside of the naturally occurring range of ITDs for gerbils and guinea pigs (Grothe, 2003; McAlpine and Grothe, 2003). By contrast, in the owl, the peaks fall within the owl's range of naturally occurring ITDs [see fig. 1 of McAlpine and Grothe (McAlpine and Grothe, 2003)]. The rodents' distribution of ITD tuning, which seems perplexing in the context of a spatial map, makes sense if one considers that the IPD shift places the steepest slope of the neurons' ITD functions across the midline (Brand et al., 2002), thus optimizing the discrimination of ITD. Indeed, in another rodent, the rat, spatial discrimination is finest at the midline (Kavanagh and Kelly, 1986). It has been proposed that owls on the one hand, and mammals on the other, use, respectively, place codes and slope codes (Grothe, 2003; McAlpine and Grothe, 2003). Although this dichotomy frames the discussion effectively, the details are more complex. First, slope codes can be, and are, implemented by the owl. A single source drives a neuron in the owl's space map to the extent that the source falls within its SRF.
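Eqns 1 and 2 can be put side by side numerically. Under the constant-phase scheme of Eqn 2, the best ITD grows as BF falls, so low-BF peaks land outside a small mammal's physiological range; the ±120 μs figure used below for a gerbil-sized head is an assumed, illustrative value, not a measurement from the text.

```python
def itd_best_constant_phase(bf_hz, phase_shift_cycles=0.125):
    """Eqn 2: ITD_best = 0.125 / BF (seconds), the constant-phase scheme."""
    return phase_shift_cycles / bf_hz

def ipd_best_constant_delay(itd_best_s, bf_hz):
    """Eqn 1: IPD_best = ITD_best * BF (cycles), the characteristic-delay scheme."""
    return itd_best_s * bf_hz

physio_range_us = 120.0   # assumed physiological ITD range, gerbil-sized head
for bf in (250.0, 500.0, 1000.0, 2000.0):
    itd_us = itd_best_constant_phase(bf) * 1e6
    tag = "outside" if itd_us > physio_range_us else "inside"
    print(f"BF = {bf:6.0f} Hz: ITD_best = {itd_us:6.1f} us ({tag} physiological range)")
```

A 250-Hz cell under Eqn 2 has its peak at 500 μs, far beyond what any small head can generate, which is exactly the observation that motivated the slope-code proposal.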
A focus of neural activity, or neural image, develops across the map as shown in Fig. 3A, which shows the SRFs (blue Gaussians) of neurons (colored circles) tuned for loci along the azimuth. Neurons at the edge of the image (e.g. those at 0 deg and 20 deg) have the source on the slopes of their SRFs. If the source moves (Fig. 3B), the firing of the cells at the edge of the neural image will change maximally, as would be the case in the mammalian IC under the slope-code hypothesis. The smallest change of sound-source position that an owl can resolve can be explained by the slopes and variances of the SRFs of the space-map cells (Bala et al., 2003; Bala et al., 2007). Second, in the IC of the guinea pig, as well as in those of the cat and rabbit, there is an abundance of cells whose ITD curves peak within the naturally occurring range (reviewed by Joris and Yin, 2007). When IC neurons are studied using stimuli in virtual auditory space, more than half of the cells in the guinea pig's IC have SRFs in frontal space (Sterbing et al., 2003). These cells are not organized as a map, but they are clustered, such that cells representing similar spatial loci are closer together. A sound source at a given location will therefore excite a particular cluster of IC cells, and it is, thus, the identity of the cluster that signals location. This is a place code. What is more, a topographic map of azimuth has been described in the superior colliculus of the guinea pig and in other mammals such as rats, cats, ferrets and monkeys (Gaese and Johnen, 2000; Hirsch et al., 1985; King and Hutchings, 1987; King and Palmer, 1983; Sterbing et al., 2002; Wallace et al., 1996), and one cannot ignore its role in spatial hearing. Thus, the differences between owls and mammals might be more quantitative than qualitative and might be related to the degree of acuity afforded by each organism's collicular neurons.

Fig. 3. (A) Representation of a single sound source as a focus of neural activity, i.e. the neural image, on the space map. Each blue Gaussian curve represents a horizontal cross-section through the spatial receptive fields (SRFs) of the cells shown below (colored circles). The source is shown at an azimuth of 10 deg, thus maximally exciting the cell labeled 10 deg (warmer colors indicate higher spike rates). Adjacent cells also discharge, but at lower rates. (B) Shift of the neural image with the shift of source location. When the source is displaced to the right by an amount Δx, the focus of activity likewise shifts, as shown in the lower row of circles. The cells that were at the edges of the neural image change their firing rate the most: cells to the left of the cell that was originally maximally activated decrease their firing as the source leaves their SRFs; those to the right increase their firing as the source enters their SRFs. Some cells, like the one representing 0 deg, are driven equally from the two locations, so their firing rate shows no change ('NC') (see Bala et al., 2003; Bala et al., 2007).

Fig. 4. Formation of SRFs by multiplication of ITD and ILD information. Each diamond represents the space in front of the bird, and colors represent firing rates (warmer colors indicate higher spike rates). Each row represents data for a different space-map cell. The two columns on the left show the SRFs that a cell would have had it access to only ITD or ILD information, respectively (ITD-alone, ILD-alone). The third column from the left shows the product of the ITD- and ILD-alone SRFs. The right column shows the SRF measured with noise bursts filtered through normal HRTFs, which contain both ITD and ILD information (see Euston and Takahashi, 2002).

Neural computation of ILD
Our understanding of the processing of ILD in the barn owl is less complete than our understanding of ITD processing. As indicated above, the spatial axis along which ILD varies is horizontal at low frequencies and becomes progressively more vertical at higher frequencies. As a result, for the high-frequency neurons of the ICc, selectivity for ILD generally determines the position of the SRF along the vertical axis. The sound level in the ipsilateral ear is encoded by cells in the NA (Sullivan and Konishi, 1984), which project contralaterally to the posterior subdivision of the dorsal nucleus of the lateral lemniscus [LLDp, after Arends and Zeigler (Arends and Zeigler, 1986); formerly, the nucleus ventralis lemnisci lateralis pars posterior (VLVp) of Leibler (Leibler, 1976)].
The LLDp of the two sides are interconnected by a commissural projection that mediates a mutual inhibition. The neurons of the LLDp are therefore excited (E) by stimulation of the contralateral ear, via direct input from the NA, and inhibited (I) by stimulation of the ipsilateral ear, via the commissural input (Fig. 2) (Takahashi et al., 1995; Takahashi and Keller, 1992). LLDp neurons are thus EI cells, the discharge rates of which are sensitive, sigmoidal functions of ILD (Goldberg and Brown, 1969). The sigmoidal rate-ILD functions characteristic of the EI cells in the LLDp are transformed into peaked tuning curves in the ICc-ls by combining the ILD-response functions of the left and right sides (Adolphs, 1993). This transformation requires a bilateral projection from the LLDp to the ICc-ls, but the existence of an ipsilateral projection is controversial. The axon-tracing study of Adolphs (Adolphs, 1993) demonstrated bilateral projections from the LLDp to the ICc, but Takahashi and Keller (Takahashi and Keller, 1992; Takahashi et al., 1995), who used a different tracer, reported only a contralateral projection. Finally, it is in the ICc-ls that the ILD and ITD cues are merged (Mazer, 1995). A clear-cut topographical representation of ILD, however, has never been found in the lateral shell (Mazer, 1995).

Combining ITD, ILD and frequency
The neurons of the ICc-ls are sensitive to both binaural cues within each frequency channel. The final step in the formation of neurons that are selective for space is the combining of ITD and ILD information across frequency. The application of virtual auditory space techniques has recently made it possible to assess the contribution of ILD to the SRF of the space-specific neuron. For any given location, ILD varies as a function of frequency in a complex manner.
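The transformation from sigmoidal EI responses to peaked ILD tuning can be illustrated by combining two opposed sigmoids, as if the ICc-ls received one input from each side's LLDp. The slopes, offsets and the multiplicative combination rule below are illustrative assumptions of mine, not measured values or the demonstrated synaptic operation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def peaked_ild_tuning(ild_db, best_ild=10.0, slope=0.5):
    """Combine two opposed sigmoidal rate-ILD functions (like left and
    right LLDp EI cells) into a tuning curve that peaks near `best_ild`.
    One input rises with ILD, the mirrored input falls; their product
    is large only in between."""
    rising = sigmoid(slope * (ild_db - (best_ild - 6.0)))
    falling = sigmoid(-slope * (ild_db - (best_ild + 6.0)))
    return rising * falling
```

Because each sigmoid alone only says "louder on one side", a peak requires both inputs; this is one simple way that the open-ended LLDp code could become the closed tuning seen in the ICc-ls.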
The HRTFs were altered so that the ILD spectrum of each location was preserved but the ITD was held constant at a cell's preferred value, thus generating the SRF the cell would have had were it sensitive only to ILD (Euston and Takahashi, 2002). These altered HRTFs were used to filter the broadband noises that served as stimuli. The resulting ILD-alone receptive fields (RFs) were generally horizontal swaths of activity at the elevation of the cell's normal SRF (Fig. 4). An ITD-alone RF formed a vertical swath at the azimuth of the cell's normal SRF. The normal SRF thus lies at the intersection of the ITD- and ILD-alone SRFs (Euston and Takahashi, 2002). At this intersection, the optimal ITD and ILD spectra of the cell are present and are combined by a multiplication-like process (Peña and Konishi, 2001). The shape of the ILD-alone RF is ultimately determined by the frequency bands that innervate the space-specific neuron and by the tuning of the cell for ILD at each of these frequencies (Arthur, 2004; Delgutte et al., 1999; Fuzessery and Pollak, 1985; Gold and Knudsen, 2000a). One simple hypothesis would be that a space-specific neuron is broadly tuned to frequency and, for each frequency band, sharply tuned to the ILD value that occurs at its SRF, as it is for ITD (Brainard et al., 1992). We therefore determined whether ILD tuning, measured with tones of different frequencies, could predict the shape of the ILD-alone RF
or, conversely, whether the ILD-alone RF measured with noise could predict the frequency-specific ILD tuning (Euston and Takahashi, 2002; Spezio and Takahashi, 2003). Both of these approaches revealed, contrary to our initial hypothesis, that space-map neurons are not necessarily tuned sharply to ILD over a wide, continuous frequency range. Instead, their frequency tuning seems to be patchy and, at some frequencies, the ILD tuning functions retain the sigmoidal shape seen at lower stages, specifically in the LLDp (Euston and Takahashi, 2002; Spezio and Takahashi, 2003). Whether this seemingly pattern-less tuning has advantages over that hypothesized earlier is not known.

Transformation from binaural cues to space
Once the binaural cues are computed, they must be translated into space. In a baby bird, which has a small head, the maximal ITD, generated by a source to the extreme right or left, might be some 90 μs. A neuron tuned to an ITD of, say, 30 μs would represent a location about one-third of the distance to the extreme periphery. The adult, by contrast, has a maximal ITD of about 200 μs (Keller et al., 1998), so the same neuron would represent a smaller angle. This process of translating cues to space occurs during a critical period, lasting up to ca. 200 days, during which visual input is crucial (Brainard and Knudsen, 1998). Birds that are blind from birth form inverted and otherwise distorted maps of auditory space (Knudsen, 1988). Moreover, if a baby bird matures wearing a pair of prisms that laterally shift the visual scene, the bird, upon maturing, mislocalizes sounds by the amount that the prisms shifted the visual world (Brainard and Knudsen, 1993; Knudsen and Knudsen, 1989). Correspondingly, a space-specific neuron will shift its auditory SRF by an amount equal to the prism shift.
The plasticity is thought to occur in the ICx and to involve the inputs from the optic tectum, which contributes the visual input, and from the ICc, which contributes the spatially selective auditory input (Brainard and Knudsen, 1993; Feldman and Knudsen, 1997; Gold and Knudsen, 2000b; Hyde and Knudsen, 2000; Zheng and Knudsen, 1999). The process of audiovisual calibration in the owl is a clear example of supervised learning in the auditory system and has been thoroughly investigated and reviewed (Knudsen, 2002).

Multiple sound sources
In nature, there are typically several concomitantly active sources of sound, as well as the reflections of these sounds, or echoes, from surfaces in the environment. Under such conditions, the sound waves from each source and from the reflective surfaces add in the ears and, if the sounds have broad, overlapping spectra, the binaural cues will fluctuate over time in a complex manner (Blauert, 1997; Keller and Takahashi, 2005; Takahashi and Keller, 1994). Not only might such dynamic cues make it difficult to localize sounds, they might also compromise the ability of the auditory system to signal the temporal characteristics of the sounds that are intrinsic to each source. Thus, the tasks of identifying a sound and localizing its source are intertwined. Yet, despite this complexity, people with normal hearing are able to determine what came from where. The auditory space map of the owl allows us to analyze the representation of a natural auditory environment in which there are multiple, concomitantly emitting sources, as well as reflections. When two sounds with overlapping spectra are presented simultaneously from two sources in the free field, the frequency-specific binaural cues at a given instant become vector sums of the cues corresponding to the location of each source. The resultant vector, moreover, reflects the amplitude and phase of each source's signal.
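The two foci predicted for a pair of concurrent, uncorrelated sources can be seen in a toy binaural cross-correlogram: sum two noises at the two ears, each with its own interaural delay, and correlate. The delays, expressed directly in samples, and the sampling rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 48000, 48000
src_a = rng.standard_normal(n)   # source A, to the listener's left
src_b = rng.standard_normal(n)   # source B, to the listener's right

# Source A: right-ear copy delayed 10 samples; Source B: left-ear copy delayed.
left = src_a + np.concatenate([np.zeros(10), src_b[:-10]])
right = np.concatenate([np.zeros(10), src_a[:-10]]) + src_b

lags = np.arange(-20, 21)
cc = np.array([float(np.dot(left[:n - l], right[l:])) if l >= 0
               else float(np.dot(left[-l:], right[:n + l]))
               for l in lags])

# The two largest correlogram peaks fall at the two sources' interaural delays.
print("peak lags (samples):", sorted(lags[np.argsort(cc)[-2:]].tolist()))
```

Averaged over the whole stimulus, the correlogram shows both delays at once; at any single instant, whichever source dominates a frequency band pulls the cues toward its own values, which is why the foci on the space map are weaker and more diffuse than for a lone source.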
If the two sounds are identical in all respects, the vector summation results in a single auditory image located midway between the two sources. If the signals from the two sources are uncorrelated broadband noises then, within any given frequency band at any given instant, the amplitude is likely to be higher for one source than for the other. The ITD and ILD for that frequency band will approximate the values of the higher-amplitude source. In other bands, the situation will be reversed. With two uncorrelated noises, over the course of the stimulus, the cues will spend roughly equal amounts of time at the values corresponding to the loci of the two sources. There will thus be two separate foci of activity on the space map. However, because the binaural cues do not remain stable at either location for very long, the foci on the space map should generally be weaker and more diffuse than a focus evoked by a single source. We can confirm that this is the case by recording from the owl's auditory space map, the neurons of which are tuned to the frequency-specific ITDs and ILDs at their SRFs (Fig. 5). When two sources of uncorrelated noise, diagonally separated by about 40 deg (in virtual auditory space), are passed through the SRF of a cell, the neuron fires when each source falls in its SRF but not when the two sources flank the SRF. A plot of firing rate against the sound pair's mean position generates two foci (Fig. 5B; see figure legend), indicating that the space-map neuron resolves the two sources (Keller and Takahashi, 2005; Takahashi and Keller, 1994). If two sources, Source A and Source B, emit uncorrelated noises that are sinusoidally amplitude modulated (SAM) at, for example, 55 and 75 Hz, respectively, the binaural cues will reach Source A's ITD(f) and ILD(f) values 55 times per second and Source B's ITD(f) and ILD(f) values 75 times per second (Fig. 5C,D).
Thus, in the space map, the neurons that represent Source A and Source B will fire in synchrony at, respectively, 55 and 75 Hz. Information about amplitude-modulation (AM) rates and loci are thus both preserved. Note that, with two sources, the AM noises drive the neurons at the appropriate rates indirectly, by modulating the binaural cues. These acoustical principles have been suggested to play a role in the segregation of speech from the background (Faller and Merimaa, 2004; Roman et al., 2003).

Fig. 5. Response of a space-map neuron to two sources of amplitude-modulated broadband noise. (A) SRF of the space-map cell. (B) Spike rate of the neuron as two sources, Source A and Source B, separated diagonally by 42 deg (see inset in B), are passed through the SRF of the cell. The firing rate observed is assigned to the average position of the two sources. Thus, when Source A is in the SRF, Source B is below and to the right of the SRF, and the cell's response to Source A in its SRF is charted at the average position of the two sources, which is also below and to the right (white outline labeled A in Fig. 5A). When the firing of the cell is charted in this manner, two foci of firing are found, from which we can infer that the space map is capable of resolving the two sources. Sources A and B were sinusoidally amplitude modulated at 55 Hz and 75 Hz, respectively. (C,D) The temporal response synchronized to 75 and 55 Hz, respectively. This synchronous response is quantified as the z-score of the vector strength relative to the variance in vector strengths across all modulation components within the owl's temporal modulation transfer function (see Keller and Takahashi, 2005).

Echoes
Acoustical reflections constitute a special case of multiple sound sources in that the reflection and the sound arriving directly from the active source are related in structure. Early psychoacoustical studies demonstrated that directional information conveyed by the direct (or leading) sound dominates perception and that listeners are relatively insensitive to directional information conveyed by reflections (or lagging sources) (Wallach et al., 1949). The perceptual dominance of the leading sound and a host of related perceptions are collectively referred to as the precedence effect, which has been reviewed extensively (Blauert, 1997; Litovsky et al., 1999). We concentrate below on those aspects that are directly relevant to the owl. A subject's perceptual experiences under experimental precedence conditions depend upon the lead/lag delay. When the delay between the leading and lagging sources is in the 1-5 ms range in humans, a single, fused auditory image is localized close to the leading source. This is termed localization dominance, or the law of the first wavefront (Litovsky et al., 1999; Wallach et al., 1949). At the same time, spatial information regarding the lagging source is degraded, a phenomenon termed lag-discrimination suppression (Spitzer et al., 2003; Zurek, 1980). As the delay increases to about 10 ms, the lagging sound becomes more perceptible and localizable, and the echo threshold is said to have been crossed. Psychoacoustical evidence for the precedence effect has been reported across taxa [for a detailed listing, see Dent and Dooling (Dent and Dooling, 2004)]. The precedence effect is most often studied using clicks, which help to avoid the acoustical superposition of the leading and lagging sounds. Physiological studies, typically in the IC, show that the response of neurons to a lagging click is weak when the delay is short but increases when the delay is long (Fitzpatrick et al., 1995; Keller and Takahashi, 1996; Tollin et al., 2004; Tollin and Yin, 2003; Yin, 1994).
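The synchrony measure mentioned in the Fig. 5 legend is built on vector strength, which treats each spike time as a unit vector at the stimulus modulation phase and takes the length of the mean vector. A minimal version, without the z-scoring step across modulation components:

```python
import numpy as np

def vector_strength(spike_times_s, mod_freq_hz):
    """Vector strength of spike times relative to one modulation frequency:
    1.0 = every spike at the same modulation phase; near 0 = no locking."""
    phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times_s)
    return float(np.hypot(np.cos(phases).sum(), np.sin(phases).sum())
                 / len(spike_times_s))
```

A cell representing the 75-Hz source should yield a high vector strength at 75 Hz and a low one at 55 Hz, which is how the two sources' AM rates can be read off the map separately.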
The echo threshold is therefore thought to be related to the inter-click interval at which the strength of the response to the lagging click approaches that of the response to the leading click. Such observations have led to the view that neurons responding to the leading sound inhibit and preempt the responses of neurons to the echo (Akeroyd and Bernstein, 2001; Fitzpatrick et al., 1995; Keller and Takahashi, 1996; Tollin et al., 2004; Tollin and Yin, 2003; Yin, 1994; Zurek, 1980). The lagging sound is thought to become localizable when the delay is long enough to allow this lateral-inhibition-like process, or echo suppression, to subside before the arrival of the lagging click. Although clicks afford advantages for experimentation, sounds in nature often overlap temporally with their reflections, which arrive after short delays. As a result, there is a period of time when both the leading and lagging sounds are present, called the superposed segment, flanked by periods of time when the leading or lagging sound is present alone (the 'lead-alone' and 'lag-alone' segments; Fig. 6A). What determines the echo threshold in this case? If we apply the results obtained with clicks directly to longer sounds, one would expect the echo threshold to depend upon the delay between the onsets of the leading and lagging sounds, which is equivalent to the length of the lead-alone segment. In other words, as the lead-alone segment is lengthened beyond roughly 10 ms, echo suppression should have subsided by the time the lagging stimulus arrives, and the lagging sound should be localized more frequently. Note, however, that when the leading and lagging sounds have the same duration, as they normally do in nature, the lead-alone and lag-alone segments are equally long. Therefore, an alternative possibility is that the lagging sound becomes perceptible when the lag-alone segment is long enough.
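The three segments of Fig. 6A follow from simple bookkeeping on the two sounds' onsets and durations. A sketch, with the convention that the lag starts `delay_ms` after the lead (the parameter names are mine, not the paper's):

```python
def precedence_segments(lead_dur_ms, lag_dur_ms, delay_ms):
    """Durations (ms) of the lead-alone, superposed and lag-alone segments
    of a lead/lag pair in which the lag starts `delay_ms` after the lead."""
    lead_alone = min(delay_ms, lead_dur_ms)
    superposed = max(0.0, min(lead_dur_ms - delay_ms, lag_dur_ms))
    lag_alone = max(0.0, (delay_ms + lag_dur_ms) - max(lead_dur_ms, delay_ms))
    return lead_alone, superposed, lag_alone
```

With equal 30-ms sounds and a 6-ms delay, the lead-alone and lag-alone segments are both 6 ms, which is exactly the confound noted in the text: in the standard paradigm, the onset delay and the lag-alone duration cannot be varied independently unless the sound durations are manipulated.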
To discriminate between these hypotheses, Nelson and Takahashi (2008) varied the length of one segment while holding the other constant, and examined the owl's head turns to the leading and lagging stimuli. Stimuli consisted of identical copies of a broadband noise burst (30 ms duration), one of which was delayed to serve as a simulated echo arriving from a reflective source at a different location. The results supported the alternative hypothesis. First, the duration of the lag-alone segment was held constant at a short value of 3 or 6 ms, and the lead-alone segment was extended from 3 to 24 ms. If the lag were localized whenever it arrived after the cessation of echo suppression, about 10 ms after the lead's onset, then the owls should have localized the lagging sound more frequently when the lead-alone segment was extended beyond 10 ms. Instead, the owls continued to localize the leading sound even when echo suppression should have run its course. Second, when the lead-alone segment was held constant and the lag-alone segment was varied from 3 to 24 ms, the converse was found: the owls' tendency to localize the lag scaled with the duration of the lag-alone segment. These results suggest that the delay at the sound pair's onset does not affect the localization of the reflection in the way one might infer from extrapolating the results from clicks to longer sounds. The idea that echo threshold is determined by the delay at the onset, per se, might thus be true only to the extent that
7 How the owl tracks its prey II 345 % Max response Lag Lead A B C D E Lead-alone Target alone Target leads 3 ms delay Target lags Superposed Peri-stimulus time (ms) Lag-alone segment Behavior Q3 Median Q Lag-alone segment (ms) Lag-alone Fig. 6. Dependence of echo threshold on the lag-alone segment. (A) When a direct sound is longer than the echo delay, the direct (leading) sound and the echo (lagging sound) overlap. Three segments can be defined: the lead- and lag-alone segments when only the leading or lagging sound is present; and the superposed segment, when both are present. In the standard paradigm in which the leading and lagging sounds are of equal lengths, the lead-alone and lag-alone segments are equal in length to the delay at the onset of the two sounds. (B) Combined response of a population of space-map neurons evoked by a single sound in the SRFs of the cells. The gray bars represent the median response; the white bars represent the first and third quartiles (Q1, Q3). The sound in a cell s SRF is referred to as the target. The response to a target is vigorous at the onset, but the firing declines to a lower steady-state level. (C) Population response evoked by a leading sound (blue trapezoid) in the SRF (i.e. a leading target). The neuron responds vigorously at the onset, much as it would at the onset of a single sound (between blue arrowheads). The response falls dramatically as the lagging sound (gray trapezoid) is received, decorrelating the two sounds. (D) Population response evoked by a lagging target. After the termination of the leading sound (gray trapezoid), only the lagging sound remains, and the cells tuned at the lagging sound s location respond weakly (between red arrowheads), much as they would to the terminal, steady-state portion of a response to a single sound. (E) Population responses (red line) evoked by the lag-alone segment increase as this segment is lengthened from 1.5 to 24 ms. 
As this segment is lengthened, the proportion of saccades to the lagging sound (green line) increases in a manner similar to that of the neural response (see Nelson and Takahashi, 2008).

this delay determines the length of the lag-alone segment when lead and lag are equally long. In the natural condition, in which the lead-alone and lag-alone segments are equal in length and the delay is short (<12 ms), the bird localizes the leading sound preferentially (Nelson and Takahashi, 2008; Spitzer and Takahashi, 2006). In other words, when the segments are equal in length, the lead-alone segment seems to have a higher perceptual salience than the lag-alone segment. A possible physiological reason is illustrated in Fig. 6B, which shows the post-stimulus time histogram of responses evoked in a population of space-map cells by a single sound emitted from the cells' SRFs ('target alone'). Like cells encountered throughout the central auditory system, these cells discharge most strongly at the onset and then decline to a lower, steady-state level. The responses of the population to a lead/lag pair, when the target (the sound in the SRF) led or lagged, are shown in Fig. 6C,D. Cells representing the location of the leading stimulus respond vigorously at the start of the sound pair, as they would to the onset of a single sound, i.e. during the lead-alone segment (between blue arrowheads). When the lagging sound arrives, the leading sound is already in progress, and the firing rate drops dramatically owing to spatial decorrelation (Fig. 6D; see 'multiple sound sources'). When the leading sound terminates, only the lagging sound remains, and the cells tuned to the lagging sound's location discharge, but at a rate similar to the lower, steady-state level (between red arrowheads).
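The asymmetry just described can be illustrated with a toy onset-adapting rate function; all parameters below (peak and steady-state rates, the adaptation time constant) are illustrative assumptions, not measured values:

```python
import numpy as np

def adapting_rate(t_ms, peak=100.0, steady=30.0, tau_ms=5.0):
    """Toy population PSTH: a strong onset transient decaying to steady state."""
    return steady + (peak - steady) * np.exp(-t_ms / tau_ms)

t = np.arange(0.0, 30.0, 0.1)            # 30 ms of response, 0.1 ms steps
r = adapting_rate(t)

# Equally long (3 ms) windows: the lead-alone segment falls on the onset
# transient, while the lag-alone segment falls on the adapted tail.
lead_alone = float(r[t < 3.0].mean())
lag_alone = float(r[t >= 27.0].mean())
```

With these parameters, the mean rate in the early window is more than double that in the equally long late window, mirroring the greater salience of the lead-alone segment.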
The weaker response of the cells at the lag location can therefore be explained by the response dynamics of space-map neurons to single sources; specialized circuitry devoted to the suppression of echoes need not be invoked, at least for the stimuli used in this study (Nelson and Takahashi, 2008). Finally, Fig. 6E plots the recovery of the population response to the lagging stimulus against the duration of the lag-alone segment (red line). The proportion of saccades to the lagging stimuli is superimposed (green line), and it is clear that the two functions have similar forms, which is consistent with the idea that the neural response to the lag-alone segment determines the salience of the lagging sound.

Audiovisual integration
Although barn owls are known for their feats of sound localization, their vision is also keen. What is localization like when both auditory and visual information are available? Studies of mammalian multisensory integration have suggested that the combination of visual and auditory information increases performance beyond that afforded by either modality by itself, especially at low light levels and sound amplitudes (Stein and Meredith, 1993). This idea was examined in the barn owl by Whitchurch and Takahashi (2006). One of the simplest models of audiovisual integration is the stochastic facilitation model of Raab (1962). According to this hypothesis, also called the race model, auditory and visual information race along their pathways to reach a common neural station that triggers a reaction, such as the redirection of the head or eyes (i.e. a saccade). This trigger zone responds to whichever stream of information arrives earlier. If, on the whole, auditory and visual information have an equal chance of arriving first, the overall reaction time will be slightly faster than that evoked by either modality alone. This faster response occurs stochastically and requires no additional neural processing.
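The race model's prediction can be sketched in a few lines: draw latencies from two unisensory distributions and take the minimum. The distributions below are hypothetical placeholders, not the owl's measured latencies:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical unisensory saccade-latency distributions (ms); auditory
# saccades are modeled as the faster modality, as in the owl.
auditory = rng.normal(60.0, 10.0, n)
visual = rng.normal(90.0, 15.0, n)

# Race model: whichever stream reaches the trigger zone first wins the trial.
audiovisual = np.minimum(auditory, visual)
```

The winner-take-all minimum is, on average, faster than either unisensory distribution with no extra processing; crucially, pure statistical facilitation predicts nothing about accuracy, which is where the owl's audiovisual saccades, described next, depart from the model.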
By using auditory and visual stimuli spanning a wide range of intensities, Whitchurch and Takahashi (2006) examined the effects of stimulus strength on saccade latency and saccade error. The statistics of saccades to unisensory targets
were used to predict responses to bisensory targets, and the predictions were compared with empirical observations. Acoustically guided saccades were found to have shorter latencies than visually guided saccades, but the visually guided saccades were more precise. Interestingly, audiovisual saccades occurred no earlier than auditory saccades, nor were they more accurate than visual saccades, which is at odds with the idea that the combination of two modalities yields better performance than either modality alone. The stochastic facilitation model is based on the competition between two separate streams of sensory information. Does the modality that wins the race determine all characteristics of the ensuing saccade, or does the losing stream have an influence? In other words, do audiovisual saccades with auditory-like latencies also have auditory-like precision? Interestingly, the answer was no. Audiovisually guided saccades in the owl instead had the best features of both acoustically and visually guided saccades: they were both fast and accurate. These results support a model of sensory integration in which the faster modality initiates the saccade and the slower modality remains available to refine the saccade trajectory (Corneil et al., 2002).

Conclusions
In the roughly 40 years since Mark Konishi published his seminal review, the barn owl has not only become an established model for the neural mechanisms of sound localization, but has also contributed to fields beyond spatial hearing. Detractors have questioned the relevance of discoveries made in a specialized bird to human hearing. This criticism misses the point. The specializations of the owl, such as its ability to phase lock at high frequencies or its formation of a space map, make it easier to study fundamental processes, such as the coding of ITD or the integration of information across frequencies.
Having understood these principles in a species specialized for spatial hearing, one can then determine the modifications that must be made to the theories and models gleaned from the specialist so that they apply to more generalized animals. This is the power of the comparative approach.

Glossary

EI cells
EI cells are excited by sounds in the contralateral ear and inhibited by inputs in the ipsilateral ear. This innervation renders the cells especially sensitive to differences in the magnitudes of the sounds in the two ears (ILD).

HRTF (head-related transfer function)
A mathematical description of the way in which the magnitudes and phase angles of the spectral components of a sound are altered by the head and ears in a direction-specific manner.

IC (inferior colliculus)
The IC is a major way-station in the central auditory system. Nearly all information must pass through the IC to reach higher auditory areas, such as the medial geniculate nucleus (nucleus ovoidalis in birds).

ICc (central nucleus of the inferior colliculus)
A subdivision of the inferior colliculus found medial to the ICx, where the space map is found. It consists of at least two further subdivisions, the core and the lateral shell.

ICc-core (core of the contralateral central nucleus of the inferior colliculus)
The ICc-core, a subdivision of the ICc, consists of a tonotopic array of neurons that are sensitive to ITD. It receives a direct projection from the NL and projects directly to the contralateral ICc-ls.

ICc-ls (lateral shell of the opposite central nucleus of the inferior colliculus)
The ICc-ls, a subdivision of the ICc, consists of a tonotopic array of neurons that are selective for both ITD and ILD. It receives a direct projection from the contralateral ICc-core and projects directly to the ipsilateral ICx.

ICx (external nucleus of the inferior colliculus)
A subdivision of the IC found along the lateral margin of the IC, just deep to the tectal ventricle in birds.
In the barn owl, the neurons in the ICx are sharply tuned for the location of sound sources and are organized to form a topographic representation of auditory space.

ILD (interaural level difference)
The difference in the magnitude of the sound in the left and right ears. In owls, ILD is a cue for the location of a sound source along the horizon at low frequencies; at high frequencies, it is a cue for the elevation of a sound source. In the literature, ILD is sometimes referred to as the interaural intensity difference (IID).

ITD (interaural time difference)
The difference in the time of arrival of sounds in the left and right ears. In owls, ITD is the acoustic cue for the location of a sound source along the horizon.

Lag-discrimination suppression
A perceptual phenomenon encountered when a sound and its echo are presented. Subjects experiencing lag-discrimination suppression are unable to locate the source of the echo.

LLDp (posterior subdivision of the dorsal nucleus of the lateral lemniscus)
One of the auditory nuclei found in the rostral pontine tegmentum, with its somata interspersed amongst the fibres of the lateral lemniscus. It is thought to compute ILD. The LLDp was formerly called the VLVp.

Localization dominance
A perceptual phenomenon encountered when a sound and its echo are presented. Subjects experiencing localization dominance hear only one sound and localize it at the location of the direct sound.

NA (nucleus angularis)
One of the cochlear nuclei. Found in the caudal pons, its cells are sensitive to the amplitude of sounds.

NL (nucleus laminaris)
A nucleus of the caudal pons located just medial to the nucleus magnocellularis (NM). Innervated by the NM of both sides, it is thought to be involved in the computation of ITD.

NM (nucleus magnocellularis)
One of the cochlear nuclei. Found in the caudal pons, its cells synchronize their firing to the fine structure of sounds.
Place code
A place code describes the way in which an object in space (e.g. a loudspeaker) is represented in the brain by the activity of a cluster of neurons. Importantly, the identity of the neuron signifies the location of the object.

Saccade
A rapid turn of the eyes or head towards an object. Owls, whose eyes are nearly immobile, make only head saccades.

Slope code
Slope codes refer to the way in which a change in the location of a stimulus is signified by a change in the firing rates of neurons.

SRF (spatial receptive field)
The locations in space from which a cell can be excited.

Virtual auditory space
Sounds can be filtered with HRTFs and presented over headphones. Filtering with HRTFs imparts the frequency-specific ITDs and ILDs that are characteristic of a sound source at a particular location, giving a highly realistic auditory-spatial experience over headphones. Sounds thus presented are said to be presented in virtual auditory space.

Acknowledgements
This work was supported by the NIH (RO1 DC3925). Deposited in PMC for release after 12 months.

References
Adolphs, R. (1993). Bilateral inhibition generates neuronal responses tuned to interaural level differences in the auditory brainstem of the barn owl. J. Neurosci. 13,
Akeroyd, M. A. and Bernstein, L. R. (2001). The variation across time of sensitivity to interaural disparities: behavioral measurements and quantitative analyses. J. Acoust. Soc. Am. 110,
Arends, J. J. and Zeigler, H. P. (1986). Anatomical identification of an auditory pathway from a nucleus of the lateral lemniscal system to the frontal telencephalon (nucleus basalis) of the pigeon. Brain Res. 398,
Arthur, B. J. (2004). Sensitivity to spectral interaural intensity difference cues in space-specific neurons of the barn owl. J. Comp. Physiol. A 190,
Bala, A. D., Spitzer, M. W. and Takahashi, T. T. (2003). Prediction of auditory spatial acuity from neural images on the owl's auditory space map. Nature 424,
Bala, A. D., Spitzer, M. W. and Takahashi, T. T. (2007). Auditory spatial acuity approximates the resolving power of space-specific neurons. PLoS ONE 2, e675.
Blauert, J. (1997). Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge, MA: MIT Press.
Brainard, M. S. and Knudsen, E. I. (1993). Experience-dependent plasticity in the inferior colliculus: a site for visual calibration of the neural representation of auditory space in the barn owl. J. Neurosci. 13,
Brainard, M. S. and Knudsen, E. I. (1998). Sensitive periods for visual calibration of the auditory space map in the barn owl optic tectum. J. Neurosci. 18,
Brainard, M. S., Knudsen, E. I. and Esterly, S. D. (1992). Neural derivation of sound source location: resolution of spatial ambiguities in binaural cues. J. Acoust. Soc. Am. 91,
Brand, A., Behrend, O., Marquardt, T., McAlpine, D. and Grothe, B. (2002). Precise inhibition is essential for microsecond interaural time difference coding. Nature 417,
Carr, C. E. and Konishi, M. (1990). A circuit for detection of interaural time differences in the brain stem of the barn owl. J. Neurosci. 10,
Corneil, B. D., Van Wanrooij, M., Munoz, D. P. and Van Opstal, A. J. (2002).
Auditory-visual interactions subserving goal-directed saccades in a complex scene. J. Neurophysiol. 88,
Delgutte, B., Joris, P. X., Litovsky, R. Y. and Yin, T. C. (1999). Receptive fields and binaural interactions for virtual-space stimuli in the cat inferior colliculus. J. Neurophysiol. 81,
Dent, M. L. and Dooling, R. J. (2004). The precedence effect in three species of birds (Melopsittacus undulatus, Serinus canaria, and Taeniopygia guttata). J. Comp. Psychol. 118,
du Lac, S. and Knudsen, E. I. (1990). Neural maps of head movement vector and speed in the optic tectum of the barn owl. J. Neurophysiol. 63,
Egnor, R. (2000). The Role of Spectral Cues in Sound Localization by the Barn Owl. PhD thesis, Division of Biology, California Institute of Technology, Pasadena.
Euston, D. R. and Takahashi, T. T. (2002). From spectrum to space: the contribution of level difference cues to spatial receptive fields in the barn owl inferior colliculus. J. Neurosci. 22,
Faller, C. and Merimaa, J. (2004). Source localization in complex listening situations: selection of binaural cues based on interaural coherence. J. Acoust. Soc. Am. 116,
Feldman, D. E. and Knudsen, E. I. (1997). An anatomical basis for visual calibration of the auditory space map in the barn owl's midbrain. J. Neurosci. 17,
Fitzpatrick, D. C., Kuwada, S., Batra, R. and Trahiotis, C. (1995). Neural responses to simple simulated echoes in the auditory brain stem of the unanesthetized rabbit. J. Neurophysiol. 74,
Fuzessery, Z. M. and Pollak, G. D. (1985). Determinants of sound location selectivity in bat inferior colliculus: a combined dichotic and free-field stimulation study. J. Neurophysiol. 54,
Gaese, B. H. and Johnen, A. (2000). Coding for auditory space in the superior colliculus of the rat. Eur. J. Neurosci. 12,
Gold, J. I. and Knudsen, E. I. (2000a). Abnormal auditory experience induces frequency-specific adjustments in unit tuning for binaural localization cues in the optic tectum of juvenile owls. J. Neurosci. 20,
Gold, J. I.
and Knudsen, E. I. (2000b). A site of auditory experience-dependent plasticity in the neural representation of auditory space in the barn owl's inferior colliculus. J. Neurosci. 20,
Goldberg, J. M. and Brown, P. B. (1969). Response of binaural neurons of dog superior olivary complex to dichotic tonal stimuli: some physiological mechanisms of sound localization. J. Neurophysiol. 32,
Grothe, B. (2003). New roles for synaptic inhibition in sound localization. Nat. Rev. Neurosci. 4,
Hirsch, J. A., Chan, J. C. and Yin, T. C. (1985). Responses of neurons in the cat's superior colliculus to acoustic stimuli. I. Monaural and binaural response properties. J. Neurophysiol. 53,
Hyde, P. S. and Knudsen, E. I. (2000). Topographic projection from the optic tectum to the auditory space map in the inferior colliculus of the barn owl. J. Comp. Neurol. 421,
Jeffress, L. (1948). A place theory of sound localization. J. Comp. Physiol. Psychol. 41,
Joris, P. and Yin, T. C. (2007). A matter of time: internal delays in binaural processing. Trends Neurosci. 30,
Kavanagh, G. L. and Kelly, J. B. (1986). Midline and lateral field sound localization in the albino rat (Rattus norvegicus). Behav. Neurosci. 100,
Keller, C. H. and Takahashi, T. T. (1996). Responses to simulated echoes by neurons in the barn owl's auditory space map. J. Comp. Physiol. A 178,
Keller, C. H. and Takahashi, T. T. (2005). Localization and identification of concurrent sounds in the owl's auditory space map. J. Neurosci. 25,
Keller, C. H., Hartung, K. and Takahashi, T. T. (1998). Head-related transfer functions of the barn owl: measurement and neural responses. Hear. Res. 118,
King, A. J. and Hutchings, M. E. (1987). Spatial response properties of acoustically responsive neurons in the superior colliculus of the ferret: a map of auditory space. J. Neurophysiol. 57,
King, A. J. and Palmer, A. R. (1983). Cells responsive to free-field auditory stimuli in guinea-pig superior colliculus: distribution and response properties. J. Physiol.
342,
Knudsen, E. I. (1988). Early blindness results in a degraded auditory map of space in the optic tectum of the barn owl. Proc. Natl. Acad. Sci. USA 85,
Knudsen, E. I. (2002). Instructed learning in the auditory localization pathway of the barn owl. Nature 417,
Knudsen, E. I. and Konishi, M. (1978). Space and frequency are represented separately in auditory midbrain of the owl. J. Neurophysiol. 41,
Knudsen, E. I. and Knudsen, P. F. (1989). Vision calibrates sound localization in developing barn owls. J. Neurosci. 9,
Knudsen, E. I., Blasdel, G. G. and Konishi, M. (1979). Mechanisms of sound localization in the barn owl (Tyto alba). J. Comp. Physiol. A 133,
Konishi, M. (1973a). How the owl tracks its prey. Am. Sci. 61,
Konishi, M. (1973b). Locatable and nonlocatable acoustic signals for barn owls. Am. Nat. 107,
Leibler, L. M. (1976). Monaural and binaural pathways in the ascending auditory system of the pigeon. PhD thesis, Department of Psychology, Massachusetts Institute of Technology, Cambridge, p. 76.
Litovsky, R. Y., Colburn, H. S., Yost, W. A. and Guzman, S. J. (1999). The precedence effect. J. Acoust. Soc. Am. 106,
Masino, T. and Knudsen, E. I. (1990). Horizontal and vertical components of head movement are controlled by distinct neural circuits in the barn owl. Nature 345,
Masino, T. and Knudsen, E. I. (1992). Anatomical pathways from the optic tectum to the spinal cord subserving orienting movements in the barn owl. Exp. Brain Res. 92,
Masino, T. and Knudsen, E. I. (1993). Orienting head movements resulting from electrical microstimulation of the brainstem tegmentum in the barn owl. J. Neurosci. 13,
Mazer, J. (1995). Integration of parallel processing streams in the inferior colliculus of the owl. PhD thesis, Division of Biology, California Institute of Technology, Pasadena.
McAlpine, D. and Grothe, B. (2003). Sound localization and delay lines: do mammals fit the model? Trends Neurosci. 26,
McAlpine, D., Jiang, D. and Palmer, A. R. (2001).
A neural code for low-frequency sound localization in mammals. Nat. Neurosci. 4,
Nelson, B. S. and Takahashi, T. T. (2008). Independence of echo-threshold and echo-delay in the barn owl. PLoS ONE 3, e3598.
Payne, R. S. (1971). Acoustic location of prey by barn owls (Tyto alba). J. Exp. Biol. 54,
Payne, R. S. and Drury, W. H. (1958). Tyto alba Part II. Nat. Hist. 67,
Pena, J. L. and Konishi, M. (2001). Auditory spatial receptive fields created by multiplication. Science 292,
Raab, D. H. (1962). Statistical facilitation of simple reaction times. Trans. NY Acad. Sci. 24,
Roman, N., Wang, D. and Brown, G. J. (2003). Speech segregation based on sound localization. J. Acoust. Soc. Am. 114,
Rose, J. E., Gross, N. B., Geisler, C. D. and Hind, J. E. (1966). Some neural mechanisms in the inferior colliculus of the cat which may be relevant to localization of a sound source. J. Neurophysiol. 29,
Spezio, M. L. and Takahashi, T. T. (2003). Frequency-specific interaural level difference tuning predicts spatial response patterns of space-specific neurons in the barn owl inferior colliculus. J. Neurosci. 23,
Spitzer, M. W. and Takahashi, T. T. (2006). Sound localization by barn owls in a simulated echoic environment. J. Neurophysiol. 95,
Spitzer, M. W., Bala, A. D. and Takahashi, T. T. (2003). Auditory spatial discrimination by barn owls in simulated echoic conditions. J. Acoust. Soc. Am. 113,
Stein, B. E. and Meredith, M. A. (1993). The Merging of the Senses. In Cognitive Neuroscience Series (ed. M. S. Gazzaniga). Cambridge, MA: MIT Press.
Sterbing, S. J., Hartung, K. and Hoffmann, K. P. (2002). Representation of sound source direction in the superior colliculus of the guinea pig in a virtual auditory environment. Exp. Brain Res. 142,
Sterbing, S. J., Hartung, K. and Hoffmann, K. P. (2003). Spatial tuning to virtual sounds in the inferior colliculus of the guinea pig. J. Neurophysiol. 90,
Strutt, J. W. L. R. (1907). On our perception of sound direction. Phil. Mag. 13,
Sullivan, W. E. and Konishi, M.
(1984). Segregation of stimulus phase and intensity coding in the cochlear nucleus of the barn owl. J. Neurosci. 4,
Takahashi, T. T. and Keller, C. H. (1992). Commissural connections mediate inhibition for the computation of interaural level difference in the barn owl. J. Comp. Physiol. A 170,
Takahashi, T. T. and Keller, C. H. (1994). Representation of multiple sound sources in the owl's auditory space map. J. Neurosci. 14,
Takahashi, T. and Konishi, M. (1986). Selectivity for interaural time difference in the owl's midbrain. J. Neurosci. 6,
Takahashi, T. T. and Konishi, M. (1988a). Projections of nucleus angularis and nucleus laminaris to the lateral lemniscal nuclear complex of the barn owl. J. Comp. Neurol. 274,
Takahashi, T. T. and Konishi, M. (1988b). Projections of the cochlear nuclei and nucleus laminaris to the inferior colliculus of the barn owl. J. Comp. Neurol. 274,
More information3-D Sound and Spatial Audio. What do these terms mean?
3-D Sound and Spatial Audio What do these terms mean? Both terms are very general. 3-D sound usually implies the perception of point sources in 3-D space (could also be 2-D plane) whether the audio reproduction
More informationProcessing in The Cochlear Nucleus
Processing in The Cochlear Nucleus Alan R. Palmer Medical Research Council Institute of Hearing Research University Park Nottingham NG7 RD, UK The Auditory Nervous System Cortex Cortex MGB Medial Geniculate
More informationStructure and Function of the Auditory and Vestibular Systems (Fall 2014) Auditory Cortex (3) Prof. Xiaoqin Wang
580.626 Structure and Function of the Auditory and Vestibular Systems (Fall 2014) Auditory Cortex (3) Prof. Xiaoqin Wang Laboratory of Auditory Neurophysiology Department of Biomedical Engineering Johns
More informationBinaural processing of complex stimuli
Binaural processing of complex stimuli Outline for today Binaural detection experiments and models Speech as an important waveform Experiments on understanding speech in complex environments (Cocktail
More informationRepresentation of sound in the auditory nerve
Representation of sound in the auditory nerve Eric D. Young Department of Biomedical Engineering Johns Hopkins University Young, ED. Neural representation of spectral and temporal information in speech.
More informationAuditory System & Hearing
Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different
More informationNeural Recording Methods
Neural Recording Methods Types of neural recording 1. evoked potentials 2. extracellular, one neuron at a time 3. extracellular, many neurons at a time 4. intracellular (sharp or patch), one neuron at
More informationSignals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor
Signals, systems, acoustics and the ear Week 5 The peripheral auditory system: The ear as a signal processor Think of this set of organs 2 as a collection of systems, transforming sounds to be sent to
More informationModels of Plasticity in Spatial Auditory Processing
Auditory CNS Processing and Plasticity Audiol Neurootol 2001;6:187 191 Models of Plasticity in Spatial Auditory Processing Barbara Shinn-Cunningham Departments of Cognitive and Neural Systems and Biomedical
More informationHow is the stimulus represented in the nervous system?
How is the stimulus represented in the nervous system? Eric Young F Rieke et al Spikes MIT Press (1997) Especially chapter 2 I Nelken et al Encoding stimulus information by spike numbers and mean response
More informationRetinal Waves and Ocular Dominance Columns
Retinal Waves and Ocular Dominance Columns In the cat, at birth, inputs from both eyes are intermingled in the visual cortex. Ocular dominance columns start to appear a few weeks after birth. They can
More informationDevelopment of Sound Localization 2. How do the neural mechanisms subserving sound localization develop?
Development of Sound Localization 2 How do the neural mechanisms subserving sound localization develop? 1 Overview of the development of sound localization Gross localization responses are observed soon
More informationPlasticity of Cerebral Cortex in Development
Plasticity of Cerebral Cortex in Development Jessica R. Newton and Mriganka Sur Department of Brain & Cognitive Sciences Picower Center for Learning & Memory Massachusetts Institute of Technology Cambridge,
More informationAuditory spatial tuning at the cross-roads of the midbrain and forebrain
Articles in PresS. J Neurophysiol (July 1, 2009). doi:10.1152/jn.00400.2009 1 Auditory spatial tuning at the cross-roads of the midbrain and forebrain 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 Authors: M.
More information19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS
19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 27 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS PACS: 43.66.Pn Seeber, Bernhard U. Auditory Perception Lab, Dept.
More informationSpectro-temporal response fields in the inferior colliculus of awake monkey
3.6.QH Spectro-temporal response fields in the inferior colliculus of awake monkey Versnel, Huib; Zwiers, Marcel; Van Opstal, John Department of Biophysics University of Nijmegen Geert Grooteplein 655
More informationSpectrograms (revisited)
Spectrograms (revisited) We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a
More informationLecture 7 Hearing 2. Raghav Rajan Bio 354 Neurobiology 2 February 04th All lecture material from the following links unless otherwise mentioned:
Lecture 7 Hearing 2 All lecture material from the following links unless otherwise mentioned: 1. http://wws.weizmann.ac.il/neurobiology/labs/ulanovsky/sites/neurobiology.labs.ulanovsky/files/uploads/purves_ch12_ch13_hearing
More informationLauer et al Olivocochlear efferents. Amanda M. Lauer, Ph.D. Dept. of Otolaryngology-HNS
Lauer et al. 2012 Olivocochlear efferents Amanda M. Lauer, Ph.D. Dept. of Otolaryngology-HNS May 30, 2016 Overview Structural organization Responses Hypothesized roles in hearing Olivocochlear efferent
More informationI. INTRODUCTION. J. Acoust. Soc. Am. 113 (3), March /2003/113(3)/1631/15/$ Acoustical Society of America
Auditory spatial discrimination by barn owls in simulated echoic conditions a) Matthew W. Spitzer, b) Avinash D. S. Bala, and Terry T. Takahashi Institute of Neuroscience, University of Oregon, Eugene,
More informationRepresentation of sound localization cues in the auditory thalamus of the barn owl
Proc. Natl. Acad. Sci. USA Vol. 94, pp. 10421 10425, September 1997 Neurobiology Representation of sound localization cues in the auditory thalamus of the barn owl LARRY PROCTOR* AND MASAKAZU KONISHI Division
More informationSpiking Neural Model of Supervised Learning in the Auditory Localization Pathway of Barn Owls
In D. Reitter & F. E. Ritter (Eds.), Proceedings of the 14th International Conference on Cognitive Modeling (ICCM 2016). University Park, PA: Penn State. Spiking Neural Model of Supervised Learning in
More informationIsolating mechanisms that influence measures of the precedence effect: Theoretical predictions and behavioral tests
Isolating mechanisms that influence measures of the precedence effect: Theoretical predictions and behavioral tests Jing Xia and Barbara Shinn-Cunningham a) Department of Cognitive and Neural Systems,
More informationNeural Correlates and Mechanisms of Spatial Release From Masking: Single-Unit and Population Responses in the Inferior Colliculus
J Neurophysiol 94: 1180 1198, 2005. First published April 27, 2005; doi:10.1152/jn.01112.2004. Neural Correlates and Mechanisms of Spatial Release From Masking: Single-Unit and Population Responses in
More informationSound Localization PSY 310 Greg Francis. Lecture 31. Audition
Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information
More informationDevelopment of Sound Localization Mechanisms in the Mongolian Gerbil Is Shaped by Early Acoustic Experience
J Neurophysiol 94: 1028 1036, 2005. First published April 13, 2005; doi:10.1152/jn.01143.2004. Development of Sound Localization Mechanisms in the Mongolian Gerbil Is Shaped by Early Acoustic Experience
More informationJuha Merimaa b) Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany
Source localization in complex listening situations: Selection of binaural cues based on interaural coherence Christof Faller a) Mobile Terminals Division, Agere Systems, Allentown, Pennsylvania Juha Merimaa
More informationB. G. Shinn-Cunningham Hearing Research Center, Departments of Biomedical Engineering and Cognitive and Neural Systems, Boston, Massachusetts 02215
Investigation of the relationship among three common measures of precedence: Fusion, localization dominance, and discrimination suppression R. Y. Litovsky a) Boston University Hearing Research Center,
More informationHumans and owls may incorrectly localize a virtual sound
Colloquium Cellular mechanisms for resolving phase ambiguity in the owl s inferior colliculus José Luis Peña* and Masakazu Konishi Division of Biology 216-76, California Institute of Technology, Pasadena,
More informationComputational Perception /785. Auditory Scene Analysis
Computational Perception 15-485/785 Auditory Scene Analysis A framework for auditory scene analysis Auditory scene analysis involves low and high level cues Low level acoustic cues are often result in
More informationInformation Processing During Transient Responses in the Crayfish Visual System
Information Processing During Transient Responses in the Crayfish Visual System Christopher J. Rozell, Don. H. Johnson and Raymon M. Glantz Department of Electrical & Computer Engineering Department of
More information3-D SOUND IMAGE LOCALIZATION BY INTERAURAL DIFFERENCES AND THE MEDIAN PLANE HRTF. Masayuki Morimoto Motokuni Itoh Kazuhiro Iida
3-D SOUND IMAGE LOCALIZATION BY INTERAURAL DIFFERENCES AND THE MEDIAN PLANE HRTF Masayuki Morimoto Motokuni Itoh Kazuhiro Iida Kobe University Environmental Acoustics Laboratory Rokko, Nada, Kobe, 657-8501,
More informationAn Auditory System Modeling in Sound Source Localization
An Auditory System Modeling in Sound Source Localization Yul Young Park The University of Texas at Austin EE381K Multidimensional Signal Processing May 18, 2005 Abstract Sound localization of the auditory
More informationBinaural Tuning of Auditory Units in the Forebrain Archistriatal Gaze Fields of the Barn Owl: Local Organization but No Space Map
The Journal of Neuroscience, July 1995, 75(7): 5152-5168 Binaural Tuning of Auditory Units in the Forebrain Archistriatal Gaze Fields of the Barn Owl: Local Organization but No Space Map Yale E. Cohen
More informationEFFECT OF DECORRELATION OF BINAURAL SOUND ON BARN OWL S ABILITY TO DETECT CHANGE IN INTERAURAL TIME DIFFERENCE UNDER A BAYESIAN PROCESSING MODEL
EFFECT OF DECORRELATION OF BINAURAL SOUND ON BARN OWL S ABILITY TO DETECT CHANGE IN INTERAURAL TIME DIFFERENCE UNDER A BAYESIAN PROCESSING MODEL by ALEXANDER WORTH A THESIS Presented to the Department
More informationInformation conveyed by inferior colliculus neurons about stimuli with aligned and misaligned sound localization cues
J Neurophysiol 106: 974 985, 2011. First published June 8, 2011; doi:10.1152/jn.00384.2011. Information conveyed by inferior colliculus neurons about stimuli with aligned and misaligned sound localization
More informationBinaural Hearing for Robots Introduction to Robot Hearing
Binaural Hearing for Robots Introduction to Robot Hearing 1Radu Horaud Binaural Hearing for Robots 1. Introduction to Robot Hearing 2. Methodological Foundations 3. Sound-Source Localization 4. Machine
More informationRhythm and Rate: Perception and Physiology HST November Jennifer Melcher
Rhythm and Rate: Perception and Physiology HST 722 - November 27 Jennifer Melcher Forward suppression of unit activity in auditory cortex Brosch and Schreiner (1997) J Neurophysiol 77: 923-943. Forward
More informationAuditory Scene Analysis
1 Auditory Scene Analysis Albert S. Bregman Department of Psychology McGill University 1205 Docteur Penfield Avenue Montreal, QC Canada H3A 1B1 E-mail: bregman@hebb.psych.mcgill.ca To appear in N.J. Smelzer
More informationHunting Increases Adaptive Auditory Map Plasticity in Adult Barn Owls
9816 The Journal of Neuroscience, October 19, 2005 25(42):9816 9820 Development/Plasticity/Repair Hunting Increases Adaptive Auditory Map Plasticity in Adult Barn Owls Joseph F. Bergan, Peter Ro, Daniel
More informationRegistration of Neural Maps through Value-Dependent Learning: Modeling the Alignment of Auditory and Visual Maps in the Barn Owl s Optic Tectum
The Journal of Neuroscience, January 1, 1997, 17(1):334 352 Registration of Neural Maps through Value-Dependent Learning: Modeling the Alignment of Auditory and Visual Maps in the Barn Owl s Optic Tectum
More informationChapter 11: Sound, The Auditory System, and Pitch Perception
Chapter 11: Sound, The Auditory System, and Pitch Perception Overview of Questions What is it that makes sounds high pitched or low pitched? How do sound vibrations inside the ear lead to the perception
More informationBefore we talk about the auditory system we will talk about the sound and waves
The Auditory System PHYSIO: #3 DR.LOAI ZAGOUL 24/3/2014 Refer to the slides for some photos. Before we talk about the auditory system we will talk about the sound and waves All waves have basic characteristics:
More informationHearing. Juan P Bello
Hearing Juan P Bello The human ear The human ear Outer Ear The human ear Middle Ear The human ear Inner Ear The cochlea (1) It separates sound into its various components If uncoiled it becomes a tapering
More informationSensory Physiology Bi353 Fall Term 2016
Lectures 2-3p MWF 111 Lillis (CRN 11012) Lab/Discussion Section 1 (CRN 11013) Friday 10-110:50a Hue 129 Lab/Discussion Section 2 (CRN 11014) Friday 11a-11:50a Hue 129 Lab/Discussion Section 3 (CRN 16400)
More informationFINE-TUNING THE AUDITORY SUBCORTEX Measuring processing dynamics along the auditory hierarchy. Christopher Slugocki (Widex ORCA) WAS 5.3.
FINE-TUNING THE AUDITORY SUBCORTEX Measuring processing dynamics along the auditory hierarchy. Christopher Slugocki (Widex ORCA) WAS 5.3.2017 AUDITORY DISCRIMINATION AUDITORY DISCRIMINATION /pi//k/ /pi//t/
More informationConvergent Input from Brainstem Coincidence Detectors onto Delay-Sensitive Neurons in the Inferior Colliculus
The Journal of Neuroscience, August 1, 1998, 18(15):6026 6039 Convergent Input from Brainstem Coincidence Detectors onto Delay-Sensitive Neurons in the Inferior Colliculus David McAlpine, Dan Jiang, Trevor
More informationHEARING AND PSYCHOACOUSTICS
CHAPTER 2 HEARING AND PSYCHOACOUSTICS WITH LIDIA LEE I would like to lead off the specific audio discussions with a description of the audio receptor the ear. I believe it is always a good idea to understand
More informationSensory Systems Vision, Audition, Somatosensation, Gustation, & Olfaction
Sensory Systems Vision, Audition, Somatosensation, Gustation, & Olfaction Sarah L. Chollar University of California, Riverside sarah.chollar@gmail.com Sensory Systems How the brain allows us to see, hear,
More informationPharmacological Specialization of Learned Auditory Responses in the Inferior Colliculus of the Barn Owl
The Journal of Neuroscience, April 15, 1998, 18(8):3073 3087 Pharmacological Specialization of Learned Auditory Responses in the Inferior Colliculus of the Barn Owl Daniel E. Feldman and Eric I. Knudsen
More informationA developmental learning rule for coincidence. tuning in the barn owl auditory system. Wulfram Gerstner, Richard Kempter J.
A developmental learning rule for coincidence tuning in the barn owl auditory system Wulfram Gerstner, Richard Kempter J.Leo van Hemmen Institut fur Theoretische Physik, Physik-Department der TU Munchen
More informationOrganization. The physics of sound. Measuring sound intensity. Fourier analysis. (c) S-C. Liu, Inst of Neuroinformatics 1
Audition: Biological and Silicon cochleas; Localization Shih-Chii Liu Institute of Neuroinformatics University of Zurich/ETH Zurich www.ini.uzh.ch/~shih Organization Thursday: Biological and silicon cochleas;
More informationPublication VI. c 2007 Audio Engineering Society. Reprinted with permission.
VI Publication VI Hirvonen, T. and Pulkki, V., Predicting Binaural Masking Level Difference and Dichotic Pitch Using Instantaneous ILD Model, AES 30th Int. Conference, 2007. c 2007 Audio Engineering Society.
More informationA dendritic model of coincidence detection in the avian brainstem
Neurocomputing 26}27 (1999) 263}269 A dendritic model of coincidence detection in the avian brainstem Jonathan Z. Simon *, Catherine E. Carr, Shihab A. Shamma Institute for Systems Research, University
More informationDiscrimination of temporal fine structure by birds and mammals
Auditory Signal Processing: Physiology, Psychoacoustics, and Models. Pressnitzer, D., de Cheveigné, A., McAdams, S.,and Collet, L. (Eds). Springer Verlag, 24. Discrimination of temporal fine structure
More informationCh 5. Perception and Encoding
Ch 5. Perception and Encoding Cognitive Neuroscience: The Biology of the Mind, 2 nd Ed., M. S. Gazzaniga, R. B. Ivry, and G. R. Mangun, Norton, 2002. Summarized by Y.-J. Park, M.-H. Kim, and B.-T. Zhang
More informationAuditory and Vestibular Systems
Auditory and Vestibular Systems Objective To learn the functional organization of the auditory and vestibular systems To understand how one can use changes in auditory function following injury to localize
More informationRunning head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1
Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Hearing-aids Induce Plasticity in the Auditory System: Perspectives From Three Research Designs and Personal Speculations About the
More informationCOM3502/4502/6502 SPEECH PROCESSING
COM3502/4502/6502 SPEECH PROCESSING Lecture 4 Hearing COM3502/4502/6502 Speech Processing: Lecture 4, slide 1 The Speech Chain SPEAKER Ear LISTENER Feedback Link Vocal Muscles Ear Sound Waves Taken from:
More informationRelearning sound localization with new ears
Relearning sound localization with new ears Paul M. Hofman, Jos G.A. Van Riswick and A. John Van Opstal University of Nijmegen, Department of Medical Physics and Biophysics, Geert Grooteplein 21, NL-6525
More informationP. M. Zurek Sensimetrics Corporation, Somerville, Massachusetts and Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Failure to unlearn the precedence effect R. Y. Litovsky, a) M. L. Hawley, and B. J. Fligor Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215
More informationVISUAL CORTICAL PLASTICITY
VISUAL CORTICAL PLASTICITY OCULAR DOMINANCE AND OTHER VARIETIES REVIEW OF HIPPOCAMPAL LTP 1 when an axon of cell A is near enough to excite a cell B and repeatedly and consistently takes part in firing
More informationAuditory System. Barb Rohrer (SEI )
Auditory System Barb Rohrer (SEI614 2-5086) Sounds arise from mechanical vibration (creating zones of compression and rarefaction; which ripple outwards) Transmitted through gaseous, aqueous or solid medium
More informationCh 5. Perception and Encoding
Ch 5. Perception and Encoding Cognitive Neuroscience: The Biology of the Mind, 2 nd Ed., M. S. Gazzaniga,, R. B. Ivry,, and G. R. Mangun,, Norton, 2002. Summarized by Y.-J. Park, M.-H. Kim, and B.-T. Zhang
More informationINTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080
Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,
More informationBehavioral/Systems/Cognitive. Sasha Devore 1,2,3 and Bertrand Delgutte 1,2,3 1
7826 The Journal of Neuroscience, June 9, 2010 30(23):7826 7837 Behavioral/Systems/Cognitive Effects of Reverberation on the Directional Sensitivity of Auditory Neurons across the Tonotopic Axis: Influences
More informationIntroduction to sensory pathways. Gatsby / SWC induction week 25 September 2017
Introduction to sensory pathways Gatsby / SWC induction week 25 September 2017 Studying sensory systems: inputs and needs Stimulus Modality Robots Sensors Biological Sensors Outputs Light Vision Photodiodes
More informationAnalysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014
Analysis of in-vivo extracellular recordings Ryan Morrill Bootcamp 9/10/2014 Goals for the lecture Be able to: Conceptually understand some of the analysis and jargon encountered in a typical (sensory)
More informationReview Auditory Processing of Interaural Timing Information: New Insights
Journal of Neuroscience Research 66:1035 1046 (2001) Review Auditory Processing of Interaural Timing Information: New Insights Leslie R. Bernstein * Department of Neuroscience, University of Connecticut
More information