
CHAPTER 2

THE PHYSICAL AND PSYCHOPHYSICAL BASIS OF SOUND LOCALIZATION

Simon Carlile

1. PHYSICAL CUES TO A SOUND'S LOCATION

1.1. THE DUPLEX THEORY OF AUDITORY LOCALIZATION

Traditionally, the principal cues to a sound's location are identified as the differences between the sound field at each ear. The obvious fact that we have two ears sampling the sound field under slightly different conditions makes these binaural cues self-evident. A slightly more subtle concept underlying traditional thinking is that the differences between the ears are analyzed on a frequency-by-frequency basis. This idea has as its basis the notion that the inner ear encodes sound in terms of its spectral characteristics as opposed to its time domain characteristics. As a result, complex spectra are thought to be encoded within the nervous system as varying levels of activity across a wide range of auditory channels, each channel corresponding to a different segment of the frequency range. While there is much merit and an enormous amount of data supporting these ideas, they have tended to dominate research efforts to the exclusion of a number of other important features of processing. In contrast to these traditional views, there is a growing body of evidence that:

(i) illustrates the important role of information available at each ear alone (monaural cues to sound location);

(ii) suggests that processing across frequency is an important feature of those mechanisms analyzing cues to sound location (monaural and binaural spectral cues);

(iii) suggests that the time (rather than frequency) domain characteristics of the sound may also play an important role in sound localization processing.

The principal theoretical statement of the basis of sound localization has become known as the duplex theory of sound localization, and has its roots in the work of Lord Rayleigh at the turn of the century. It is based on the fact that the main difference between the two ears is that they are not in the same place. 1 Early formulations were based on a number of fairly rudimentary physical and psychophysical observations. Models of the behavior of sound waves around the head were made with simplifying approximations of the head as a sphere and the ears as two symmetrically placed point receivers (Fig. 2.1). 2 Despite these simplifications the resulting models had great explanatory and predictive power and have tended to dominate the research program for most of this century.

The fact that we have two ears separated by a relatively large head means that, for sounds off the midline, there are differences in the path lengths from the sound source to each ear. This results in a difference in the time of arrival of the sound at each ear, referred to as the interaural time difference (ITD). The ITD manifests as a difference in the onset of the sound at each ear and, for more continuous sounds, as an interaural difference in the phase of the sound at each ear (interaural phase difference: IPD). There are important frequency limitations to the encoding of phase information. The auditory nervous system is known to encode the phase of a pure tone stimulus at the level of the auditory receptors only for relatively low frequencies. 3 Psychophysically, we also seem to be insensitive to differences in interaural phase for frequencies above about 1.5 kHz. 4,5 For these reasons, the duplex theory holds that the encoding of interaural time differences (in the form of interaural phase differences) is restricted to low frequency sounds.

As the head is a relatively dense medium it will tend to reflect and refract sound waves. This only becomes a significant effect when the wavelengths of the sound are of the same order as or smaller than the head. For a sound located off the midline, the head casts an acoustic shadow over the far ear and generates an interaural difference in the sound level at each ear (interaural level difference: ILD). At the low frequencies of hearing this effect is negligible because of the relatively long wavelengths involved, but for frequencies above about 3 kHz the magnitude of the effect rises sharply. The amount of shadowing of the far ear depends on the location of the source (section 1.3), so this effect provides powerful cues to a sound's location. There are also changes in the level of the sound at the ear nearer to the sound source that are dependent on the location of the source.

Fig. 2.1. The coordinate system used for calculating the interaural time differences in a simple path length model and the interaural level difference model. In these models the head is approximated as a hard sphere with two point receivers (the ears). Reprinted with permission from Shaw EAG. In: Keidel WD, Neff WD, eds. Handbook of Sensory Physiology. Berlin: Springer-Verlag, 1974.

The latter variations result from two distinct effects: firstly, the so-called obstacle or baffle effect (section 1.3), and secondly, the filtering effects of the outer ear (section 1.5 and chapter 6, section 2.2). The head shadow and near-ear effects can result in interaural level differences of 40 dB or more at higher frequencies. The magnitudes of these effects and the frequencies at which they occur are dependent on the precise morphology of the head and ears and thus can show marked differences between individuals.

The duplex theory is, however, incomplete in that there are a number of observations that cannot be explained by reference to the theory and a number of observations that contradict its basic premises. For instance, there is a growing body of evidence that the human auditory system is sensitive to the interaural time differences in the envelopes of high frequency carriers (see review by Trahiotis 6). There are a number of experiments that suggest that this information is not dependent on the low frequency channels of the auditory system. 7,8 In the absence of a spectral explanation of the phenomena, this suggests a role for some form of time domain code operating at higher frequencies.

Furthermore, recent work suggests that coding of the interaural differences in both amplitude and frequency modulated signals is dependent on rapid amplitude fluctuations in individual frequency channels which are then compared binaurally. 9

The incompleteness of the duplex theory is also illustrated by the fact that listeners deafened in one ear can localize a sound with a fair degree of accuracy (chapter 1, section 2.2). This behavior must be based upon cues other than those specified by the duplex theory, which is principally focused on binaural processing of differences between the ears. A second problem with the theory is that, because of the geometrical arrangement of the ears, a single interaural difference in time or level is not associated with a single spatial location.

Fig. 2.2. The interaural time and level binaural cues to a sound's location are ambiguous if considered within frequencies, because a single interaural interval specifies more than one location in space. Because of the symmetry of the two receivers on each side of the head, a single binaural interval specifies the locations in space which can be described by the surface of a cone directed out from the ear, the so-called cone of confusion. For interaural time differences, the cone is centered on the interaural axis. The case is slightly more complicated for interaural level differences as, for some frequencies, the axis of the cone is a function of the frequency. Reprinted with permission from Moore BCJ. An Introduction to the Psychology of Hearing. London: Academic Press, 1989.

That is, a particular interaural difference will specify the surface of an imaginary cone centered on the interaural axis (Fig. 2.2). The solid angle of the cone will be associated with the magnitude of the interval; for example, the cone becomes the median plane for a zero interaural time difference and the interaural axis for the maximum interaural time difference. Therefore, interaural time differences less than the maximum possible ITD will be ambiguous cues to sound location. These have been referred to as the cones of confusion. 1 Similar arguments exist for interaural level differences although, as we shall see, the cones of confusion for these cues are slightly more complex. The kind of front-back confusion seen in a percentage of localization trials is consistent with these descriptions of the binaural cues and indicative of their utilization (chapter 1, section 2.1.3). However, the fact that front-back confusions occur in only a small fraction of localization judgments suggests that some other cues are available to resolve the ambiguity in the binaural cues.

These ambiguities in the binaural cues were recognized in the earliest statements of the duplex theory, and it was suggested that the filtering properties of the outer ear might play a role in resolving them. However, in contrast to the highly quantitative statements of the binaural characteristics and the predictive models of processing in these early formulations, the invocation of the outer ear was more of an ad hoc adjustment of the theory to accommodate a minor difficulty. It was not until the latter half of this century that more quantitative models of pinna function began to appear, 10 and indeed it has been only recently that quantitative and predictive formulations of auditory localization processing have begun to integrate the role of the outer ear 11 (but see Searle et al 12). In the following sections we will look in detail at what is known about the acoustics of the binaural cues and also the so-called monaural cues to a sound's location. We will then look at the role of different structures of the auditory periphery in generating these location cues, and at some of the more quantitative models of the functional contribution of different components of the auditory periphery such as the pinna, head, shoulder and torso.

1.2. CUES THAT ARISE AS A RESULT OF THE PATH LENGTH DIFFERENCE

The path length differences depend on the distance and the angular location of the source with respect to the head (Fig. 2.1). 1,13 Variation in the ITD with distance is really only effective for source distances between a and 3a, where a is the radius of a sphere approximating the head. At distances greater than 3a the wavefront is effectively planar. The ITDs produced by the path length differences for a plane sound wave can be calculated from

D = r(θ + sin θ)    (1)

where D = the path length difference in meters, r = the radius of the head in meters, and θ = the angle of the sound source from the median plane in radians (Fig. 2.1). 1 The timing difference produced by this path length difference is 14

t = D/c    (2)

where t = time in seconds and c = speed of sound in air (340 m s⁻¹). The interaural phase difference (IPD) produced for a relatively continuous periodic signal is then given by Kuhn 15 as

IPD = tω    (3)

where ω = radian frequency.

For a continuous sound, the differences in the phase of the sound waves at each ear will provide two phase angles: a° and (360 − a)°. If these are continuous signals there is no a priori indication of which ear is leading. This information must come from the frequency of the sound wave and the distance between the two ears. Assuming the maximum phase difference occurs on the interaural axis, the only unambiguous phase differences will occur for frequencies whose wavelengths (λ) are greater than twice the interaural distance. At these frequencies the IPD will always be less than 180° and hence the cue is unambiguous.

Physical measurements of the interaural time differences produced using click stimuli are in good agreement with predictions from the simple path length model described above. This model breaks down, however, when relatively continuous tonal stimuli are used (Fig. 2.3). 14,15,17,18 In general, the measured ITDs for continuous tones are larger than those predicted. Furthermore, the ITDs become smaller and more variable as a function of frequency and azimuth location when the frequency exceeds a limit that is related to head size. The failure of the simple models to predict the observed variations in ITDs results from the assumption that the velocity of the sound wave is independent of frequency. Three different velocities can be ascribed to a signal, namely the phase, group and signal velocities. 15,18,19 The rate of propagation of elements of the amplitude envelope is represented by the group ITD, while the phase velocity of the carrier is best ascribed to what was previously thought of as the steady state ITD. a

a The fact that a signal can have a number of different velocities is not intuitively obvious to many. Brillouin 19 likens the phase and group velocities to the ripples caused by a stone cast into a pond. He points out, for instance, that if the group velocity of the ripple is greater than the phase velocity, one sees wavelets appearing at the advancing edge of the ripple, slipping backwards through the packet of wavelets that make up the ripple and disappearing at the trailing edge.
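Equations (1) to (3) are simple enough to evaluate directly. The following sketch (Python; the 8.75 cm head radius and 340 m s⁻¹ speed of sound are the values quoted in the text) computes the path length ITD, the corresponding IPD, and the frequency below which the IPD remains unambiguous:

```python
import numpy as np

C = 340.0    # speed of sound in air (m/s), as quoted in the text
R = 0.0875   # radius of the sphere approximating the head (m)

def path_length_itd(theta_rad, r=R, c=C):
    """Eqs. (1)-(2): ITD for a distant (plane wave) source at an angle
    theta from the median plane, using the simple path length model."""
    d = r * (theta_rad + np.sin(theta_rad))  # eq. (1): path difference (m)
    return d / c                             # eq. (2): time difference (s)

def ipd_rad(theta_rad, freq_hz):
    """Eq. (3): interaural phase difference (radians) for a pure tone."""
    omega = 2.0 * np.pi * freq_hz            # radian frequency
    return path_length_itd(theta_rad) * omega

# The maximum path difference occurs on the interaural axis (theta = 90 deg);
# the IPD stays below 180 deg (unambiguous) while wavelength > 2 * d_max.
d_max = R * (np.pi / 2.0 + 1.0)
print(f"maximum ITD: {path_length_itd(np.pi / 2.0) * 1e6:.0f} microseconds")
print(f"IPD unambiguous below about {C / (2.0 * d_max):.0f} Hz")
```

For these dimensions this gives a maximum ITD of about 660 µs and an unambiguous phase cue extending to roughly 750 Hz, consistent with the low frequency restriction described in section 1.1.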

Fig. 2.3. Measurements of the interaural time differences using a dummy head reveal that the ITD is a function of both frequency and the type of sound. The points plot data obtained from the measurement of the ongoing phase of a tone at a number of angles of incidence (15°, 30°, 45°, 60°, 75° and 90°, referenced to the median plane). The solid lines to the left show the predictions based on the phase velocity of the wave (eq. 5) and can be seen to be a good match for the data only for the lowest frequencies. The boxed points show the solutions for integer ka for the complete model from which equation (5) was derived (i.e., without the simplifying assumption that ka << 1; see text). On the right y-axis, the dashed lines show the predictions of the simple path length model (eq. 2) and the arrows show measurements from the leading edge of a tone burst. Reprinted with permission from Kuhn GF, J Acoust Soc Am 1977; 62.

Over the frequency range of auditory sensitivity, the group and signal velocities are probably identical. 18 When phase velocity is constant, phase and group velocities will be equal, regardless of wavelength. However, because the phase velocity of sound waves is dependent on wavelength (particularly at high frequencies), relatively large differences can occur between the phase and group velocities. 19 In addition, as a wave encounters a solid object it is diffracted, such that the wavefront at the surface of the object is a combination of the incident and reflected waves. Under these circumstances the phase velocity at the surface of the object becomes frequency-dependent in a manner characteristic of the object. 18 The interaural phase differences based on phase velocity, for frequencies in the range 0.25 kHz to 8.0 kHz, have been calculated using a sphere approximating the human head (Fig. 2.3):

IPD ≈ 3ka sin(a_inc)    (4)

where k = the acoustic wave number b (2π/λ), a = radius of the sphere, and a_inc = angle of incidence of the plane sound wave (see Kuhn 15 for the derivation). The interaural time difference is calculated using equation (3):

ITD ≈ 3(a/c) sin(a_inc)    (5)

where c = speed of sound in air. According to equation (5), ITD is constant as a function of frequency; however, this relation 15 holds only where (ka)² << 1. The ITDs predicted from this formulation are larger than those predicted using path length models of the time differences around the human head 1 (eq. 1), and for any one stimulus location are constant as a function of frequency only for frequencies below 0.5 kHz (where a = 8.75 cm). Above this frequency, ITDs decrease as a function of frequency towards the values predicted by the path length model (eq. 1).

The steady state ITDs measured from a life-like model of the human head were dependent on the frequency of the sinusoidal stimulus 15,17 and were in good agreement with the theoretical predictions (Fig. 2.3). In summary, measured ITDs were larger than predicted by the simple path length model and relatively stable for frequencies below about 0.5 kHz. ITDs decreased to a minimum for frequencies around 1.4 kHz to 1.6 kHz and varied as a function of frequency at higher frequencies. In general there was much less variation in the measured ITDs as a function of frequency for angles closer to the median plane. Roth et al 18 measured ITDs for cats and confirmed that these changes in the ITD also occur for an animal with a smaller head and a different pinna arrangement. Moderate stability of the ITDs was demonstrated only for frequencies below about 1.5 kHz and for locations within 60° of the median plane. In addition, the functions relating onset ITD and frequency were variable, particularly at high frequencies. This variability was found to be attributable to the pinna and the surface supporting the animal. These findings indicate that it cannot be assumed that a particular ITD is associated with a single azimuthal location. Steady state ITD is a potentially stable cue for sound localization only at low frequencies (humans < 0.6 kHz; cats < 1.5 kHz), but is frequency dependent at higher frequencies.

The phase and group velocities have also been calculated for the first five acoustic modes of the creeping waves around a rigid sphere, for ka between 0.4 and 25.0.

b The acoustic wave number simply allows a more general relationship to be established between the wavelength of the sound and the dimensions of the object. In Figure 2.3 the predicted and measured ITDs for the human head are expressed in terms of both the acoustic wave number and the corresponding frequencies for a sphere with the approximate size of the human head (in this case, radius = 8.75 cm).
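The difference between the two formulations is easy to see numerically: near the median plane, equation (5) predicts ITDs about 3/2 times larger than the path length model of equations (1) and (2). A minimal comparison, again assuming a = 8.75 cm:

```python
import numpy as np

C, A = 340.0, 0.0875  # speed of sound (m/s) and sphere radius (m)

def itd_path_length(theta_rad):
    """Eqs. (1)-(2): the simple path length model."""
    return A * (theta_rad + np.sin(theta_rad)) / C

def itd_low_freq(theta_rad):
    """Eq. (5): Kuhn's low frequency limit, valid where (ka)^2 << 1."""
    return 3.0 * (A / C) * np.sin(theta_rad)

for deg in (15, 30, 45, 60, 75, 90):
    th = np.radians(deg)
    print(f"{deg:2d} deg: path length {itd_path_length(th) * 1e6:5.0f} us, "
          f"low frequency {itd_low_freq(th) * 1e6:5.0f} us")
```

At 90° this gives roughly 770 µs against 660 µs, reproducing the observation that measured low frequency ITDs are larger than the simple path length prediction.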

The creeping waves are the waves resulting from the interaction of the incident and reflected sounds close to the surface of the obstacle. The ka relates the wavelength to the radius of the sphere, so that for a sphere approximating the human head (a = 8.75 cm), ka between 0.4 and 25.0 represents a frequency range of 0.25 kHz to 16 kHz. At 1.25 kHz the group velocities for the first, second and third modes are 0.92, 0.72 and 0.63 times the ambient speed of sound. 20 These calculations suggest that there are significant differences between the group and phase velocities at frequencies that are physiologically relevant to human auditory localization. Roth et al 18 have demonstrated differences of the order of 75 µs between phase and group ITDs around an acoustically firm sphere approximating a cat's head, which are consistent with the calculations of Gaunaurd. 20 Thus, the physical description of sound wave transmission, and the acoustic measurements of the sound, suggest that two distinct types of interaural timing cues are generated in the frequency range relevant to mammalian sound localization.

1.3. THE HEAD AS AN ACOUSTIC OBSTACLE

As a consequence of the separation of the ears by the acoustically opaque mass of the head, two different acoustic effects vary the pressure at each ear for a sound source located away from the median plane. The resulting disparity in the sound level at each ear is commonly referred to as the interaural level difference (ILD). c

The first effect, occurring at the ear ipsilateral to the source of the sound, is due to the capacity of the head to act as a reflecting surface. For a plane sound wave at normal incidence, the sound pressure at the surface of a perfectly reflecting barrier will be 6 dB higher than the pressure measured in the absence of the barrier 21 (Fig. 2.4). Thus an on-axis pressure gain will be produced at the ipsilateral ear when the wavelength of the sound is much less than the interaural distance. The second effect is due to the capacity of the head to diffract the sound wave. When the wavelength is of the same order as the interaural distance, only small diffractive effects are produced. However, at relatively shorter wavelengths the head acts as an increasingly effective obstacle and produces reflective and diffractive perturbations of the sound field. Thus, for an object of fixed size such as the head, the distribution of sound pressure around the object will depend on the incident angle and the frequency of the plane sound wave.

c This is also referred to as the interaural intensity difference (IID); however, this is an inappropriate usage of the term. The differences so measured are differences in the pressure of the sound at each ear, not in the average power flux per unit area (intensity). Much of the early literature uses the term IID, although it is used in a way which is (incorrectly) synonymous with ILD.

Fig. 2.4. Calculated transformations of the sound pressure level from the free field to a point receiver located on a hard sphere (radius a), as a function of the acoustic wave number (2πa/λ) and the angle of incidence of a plane sound wave. The location of the receiver is indicated by the filled circle on the sphere and all azimuths (θ) are referenced to the median plane. On the lower x-axis the corresponding frequencies for the human are included (head radius = 8.75 cm). For sound sources located in the ipsilateral field, the SPL increases as a function of frequency to an asymptote of 6 dB. In a slightly less symmetrical model, the higher order changes in the SPL exhibited for sources in the contralateral field (θ = −45° and −135°, for ka = 2.5, 4, 6, 10 ...) are likely to be smoother. Reprinted with permission from Shaw EAG. In: Keidel WD, Neff WD, eds. Handbook of Sensory Physiology. Berlin: Springer-Verlag, 1974.

The distribution of pressure about a hard sphere was first described by Lord Rayleigh and further developed by Stewart around the turn of the century. 10,22 Figure 2.4 shows the changes in the gain in sound pressure level (SPL), relative to the SPL in the absence of the sphere, calculated as a function of frequency and the angle of incidence of a plane sound wave. 10 Note the asymptotic increase to 6 dB for waves at normal incidence due to the reflective gain.
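The transformation plotted in Fig. 2.4 can be reproduced from the classical series solution for a plane wave incident on a rigid sphere. The sketch below is one standard formulation of that solution (of the kind used in later spherical head models) and is offered as an illustration of the physics rather than a reconstruction of Shaw's calculation; theta is the angle between the receiver point on the sphere and the direction from which the wave arrives.

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def sphere_surface_gain_db(ka, theta_deg):
    """Pressure at a point on the surface of a rigid sphere relative to
    the free field pressure of the incident plane wave, from the
    classical modal series (the Wronskian identity supplies the
    1/(ka)^2 factor). theta_deg = angle between the receiver point
    and the direction of incidence."""
    n_terms = int(ka) + 30  # the series needs roughly ka terms to converge
    x = np.cos(np.radians(theta_deg))
    acc = 0.0 + 0.0j
    for m in range(n_terms):
        # derivative of the spherical Hankel function h_m(ka)
        dh = (spherical_jn(m, ka, derivative=True)
              + 1j * spherical_yn(m, ka, derivative=True))
        acc += (1j ** m) * (2 * m + 1) * eval_legendre(m, x) / dh
    p = (1j / ka ** 2) * acc
    return 20.0 * np.log10(abs(p))

# Gain at normal incidence tends to the +6 dB asymptote noted in the text.
for ka in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"ka = {ka:4.1f}: {sphere_surface_gain_db(ka, 0.0):5.2f} dB")
```

Evaluating the same series at contralateral angles reproduces the ripples for sources in the shadowed field seen in Fig. 2.4.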

In contrast to the simple characterization of the head as an acoustic obstacle producing the largest interaural level differences for sounds located on the interaural axis, the Rayleigh-Stewart model predicts that the largest interaural differences will occur for sounds located around ±45° and ±135°. This is due to the nature of the diffractive interactions of the sound traveling around the sphere from different directions and their interaction around the axis of the far ear. The effects on the sound pressure at the surface of the sphere produced by the distance of the source from the center of the sphere are likely to be significant for distances of up to 10a, particularly for low frequencies. 13 This effect is due mainly to the spherical nature of the sound wave in the vicinity of a point source. It should also be kept in mind that the ears of most mammals are not separated by an arc of 180°; for instance, the ears of Homo sapiens are separated by an arc of about 165° measured around the front of the head. This arrangement will produce slight differences between the pressure transforms for sounds located anterior and posterior to the ears. 13

In summary, the head acts as an effective acoustic obstacle which reflects and diffracts the sound field for sounds whose wavelengths are small relative to the dimensions of the head. This results in a frequency dependent transformation of the sound pressure from the free field to the acoustic receivers located on either side of the head. Differences in the sound pressure at each ear are related to the location of the sound in the free field. These interaural level differences are most significant at high frequencies.

1.4. TRANSFER FUNCTION OF THE AUDITORY PERIPHERY: SOME METHODOLOGICAL ISSUES

Measurement techniques

In the first chapter it was discussed in broad terms how the outer ear filters sound, resulting in the so-called spectral cues to a sound's location. A large proportion of the rest of this chapter will review what is known about these outer ear filter functions. This issue is of considerable importance, as these filter functions are the raw material of VAS displays. Chapter 4 contains a more detailed review of the methodology of measuring these filter functions and validating the measurements.

Directionally-dependent changes in the frequency spectra at the eardrum have been studied in humans using probe microphone measurements, 10,23-31 miniature microphones placed in the ear canal 11,12,32-34,36-38 and minimum audible field measurements 39 (see also ref. 40). Before proceeding, a number of general comments regarding these methods should be made. What characterizes these studies is the wide range of interstudy and intersubject variation (for review see, for instance, Shaw 23,41). These variations are probably due to a variety of factors.

Firstly, intersubject variation in pinna dimensions might be expected to produce intersubject variation in the measured transforms. To some extent this can be accounted for by structural averaging, 42 where the transforms are normalized along the frequency axis so that the major components of the transforms from different subjects coincide, thus preserving the fine structure of the transformation. Indeed, where averaging of the transfer functions has been done without structural averaging, the resulting functions are much shallower and smoother. 10,23

The other major source of variation results from differences in the measurement procedures. Probably the most important consideration relates to the specification of the point within the outer ear at which the transfer function from the free field should be measured. This issue is covered in more detail in chapter 4 (section 2.1) and so is only briefly considered here. Over the frequency range of interest the wave motion within the canal is principally planar 24 (see also Rabbitt 43). In this case all of the directional dependencies of the transfer function could be recorded at any position within the canal. 11,44 However, the eardrum also reflects a proportion of the incoming sound energy back into the canal, which results in standing waves within the canal. 45,46 The incoming sound interacts with the reflected sound to produce a distribution of pressure peaks and nulls along the canal that varies as a function of frequency. Simply put, a 180° difference between the phases of the incoming and outgoing waves is associated with a pressure null, and a 0° phase difference with a pressure peak. This results principally in a notch in the measured transfer function which moves up in frequency as the position of a probe microphone gets closer to the eardrum. 45,47 Therefore, to capture the HRTF faithfully it is necessary to get as close to the eardrum as possible, so that the notch resulting from the pressure null is above the frequency range of interest. HRTFs measured at more distal locations in the canal will be influenced by the standing waves at the lower frequency range, so that differences in the measurement positions between studies could be expected to produce differences in the HRTFs reported.

Other influences on the measurements result from the perturbation of the sound field by the measurement instruments themselves. Although their dimensions are small (miniature microphone diameter ≈ 5 mm, probe tube diameter ≈ 1 to 2 mm), these instruments have the capacity to perturb the sound field, particularly at higher frequencies. These effects are related to the extent of the obstruction of the canal and its cross-sectional location within the canal. 43 The insertion of miniature microphones into the meatus could also change the effective length of the ear canal, which could vary the quarter wavelength closed tube resonance of the auditory canal as well as affect the transverse mode of the concha (section 1.8.1). Blocking the canal to some extent is also likely to vary the input impedance of the ear, but this is unlikely to affect the directional responses of the human ear for frequencies below 10 kHz.
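The relation between probe position and the standing wave notch follows directly from the quarter wavelength geometry described above. A toy calculation, assuming an idealized, perfectly reflecting eardrum (the real termination is only partially reflective, so measured notches are shallower):

```python
C = 343.0  # speed of sound (m/s); an assumed room temperature value

def first_null_khz(probe_to_eardrum_m):
    """First pressure null for a probe at a given distance from a
    perfectly reflecting termination: the null sits a quarter
    wavelength from the reflector, so f = c / (4 * d)."""
    return C / (4.0 * probe_to_eardrum_m) / 1e3

for mm in (2, 4, 6, 10, 15):
    print(f"probe {mm:2d} mm from the eardrum -> "
          f"notch near {first_null_khz(mm / 1000.0):5.1f} kHz")
```

A probe within about 6 mm of the eardrum keeps the first null above roughly 14 kHz, which is why the recordings described in section 1.5 used placements of 6 mm or less.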

However, variation in impedance may well affect the relative magnitudes of the spectral components of a frequency transform for any one stimulus position. 10,48,49

Much of the intersubject variation and the variation between studies may also be due to the lack of a set of criteria for establishing reliable morphological axes to act as references for sound source position. The HRTFs are a strong function of location, and even small variations in the relative position of subjects' heads with respect to the stimulus coordinates, both within and across studies, are likely to lead to fairly large apparent variations in the HRTF across the population of subjects. One recent approach has been to use a perceptual standard for defining these axes: simply requesting a subject to point their nose towards a particular visual and/or auditory target and to hold their head upright can result in a highly reproducible head posture within individuals (Carlile and Hyams, unpublished observations). Presumably this reflects the precision with which we perceptually align ourselves with the gravitational plane on the one hand and with our sense of "directly in front" on the other.

Coordinate system

Specifying the precise location of a stimulus requires the adoption of a coordinate system that at least describes the two dimensional space about the subject. The most common form is a single pole system, the same system that is used for specifying location on the surface of the planet. With the head centered in an imaginary sphere, azimuth is specified around the lines of latitude, with directly ahead usually taken as azimuth 0° and locations to the right of the anterior midline as negative. Elevation is specified along the lines of longitude, with the audio-visual horizon as the 0° reference and inferior elevations negative (Fig. 2.5a). The biggest advantage of such a system is that it is the most intuitive or, at least, the system with which people are most familiar. One of the disadvantages is that the radial distance specified by a particular number of degrees of azimuth varies as a function of elevation. For instance, on the great circle (elevation 0°) the length of the arc specified per degree is greatest, and it becomes progressively shorter as one approaches the poles. This becomes a problem when, for instance, sampling the HRTFs at equidistant points in space: specifying a sampling distance of 5° will result in a massive oversampling of the areas of space associated with the poles. However, simple trigonometry provides the appropriate corrections to allow equal area sampling (see the sketch below).

A second coordinate system that has occasionally been used is the double pole system. This specifies elevation in the same way as the single pole system, but specifies azimuth as a series of rings parallel to the midline and centered on the poles at each interaural axis (Fig. 2.5b).
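The equal area correction mentioned above can be sketched in a few lines: scaling the number of azimuth samples on each elevation ring by the cosine of the elevation keeps the arc length between neighboring points roughly constant. This is a generic scheme, not the specific procedure used in the studies discussed here.

```python
import numpy as np

def equal_area_grid(spacing_deg=10.0):
    """(azimuth, elevation) pairs with roughly uniform density over the
    sphere: the azimuth step on each elevation ring is widened by
    1/cos(elevation), so rings near the poles carry fewer points."""
    points = []
    for el in np.arange(-90.0, 90.0 + spacing_deg, spacing_deg):
        # the circumference of an elevation ring shrinks with cos(el)
        n_az = max(1, int(round(360.0 * np.cos(np.radians(el)) / spacing_deg)))
        points.extend((i * 360.0 / n_az, el) for i in range(n_az))
    return points

# With a 10 degree spacing this yields around 400 locations, comparable
# to the ~350 measurement locations mentioned in section 1.5.
print(len(equal_area_grid(10.0)), "locations")
```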

Fig. 2.5. Two different coordinate systems commonly used to represent the location of a sound in space: (a) the single pole system of latitude and longitude; (b) the double pole system of colatitude and colongitude. Modified with permission from Middlebrooks JC et al, J Acoust Soc Am 1989; 86.

The double pole system was first introduced with the development of a free field auditory stimulus system that was built around a semicircular hoop. The hoop was attached to supports and motors located on the interaural axis, so that rotating the hoop caused it to move over the top of the animal (see Knudsen et al 50). A speaker could be moved along the hoop so that, when the hoop was rotated, the speaker moved through an arc described by the azimuth rings of the double pole system. The principal advantage of such a system is that the azimuth arc length is constant as a function of elevation. This is of course important when comparing localization errors across a number of elevations. A second advantage is that each azimuth specifies the cone of confusion for a particular interaural time difference, and thus simplifies a number of modeling and computational issues. The principal disadvantage of the double pole system is that it is not very intuitive and, from a perceptual point of view, does not map well onto localization behavior. For instance, if a sound were presented at an elevation of 55° above the audio-visual horizon on the left interaural axis, this would be classified as azimuth 90°/elevation 55° in a single pole system. To turn to face such a source the subject would rotate counter-clockwise through 90°. However, in a double pole system this location would be specified as 40° azimuth and 55° elevation, and it seems counter-intuitive to say that the subject would still have to turn 90° to the left to face this source. Regardless of these difficulties, we should be careful to scrutinize the coordinate system referred to when we consider generalized statements about localization processing.
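For reference, the relation between the two systems can be formalized as follows. This uses one common convention for the double pole azimuth (the angle between the source direction and the median plane, which is constant on each cone of confusion); published definitions vary slightly, so the exact values it returns should not be read back into the figure above.

```python
import numpy as np

def single_to_double_pole(az_deg, el_deg):
    """Convert single pole (azimuth, elevation), in degrees, to a double
    pole azimuth under one common convention: the angle between the
    source direction and the median plane. Elevation is left unchanged,
    as in the double pole system described in the text."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    lateral = np.degrees(np.arcsin(np.sin(az) * np.cos(el)))
    return lateral, el_deg

print(single_to_double_pole(90.0, 0.0))   # on the interaural axis: 90 deg
print(single_to_double_pole(90.0, 55.0))  # elevated: a smaller double pole
                                          # azimuth, as in the example above
```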

For instance, it has been claimed that monaural cues play little to no role in azimuth localization processing but are important in determining the elevation of the source. 51,52 This statement was made in the context of a double pole system and is well supported by the data. Should it be misinterpreted in the context of a single pole system, the statement is far more contentious and in fact would contradict those authors' own data.

To some extent both the single and the double pole systems map imperfectly onto the perceptual dimensions of auditory space. When we speak of a location in the vernacular we refer to the azimuth (or some analogous term), but elevation is better mapped onto the notion of the height of the source. That is, we live more in a perceptual cylinder than at the center of a sphere; or, more correctly, we tend to stand at the bottom of a perceptual cylinder. Notwithstanding this difficulty, the most convenient and most common form of design for a free field stimulus system is based on a semicircular hoop which describes a sphere. This therefore requires some form of spherical descriptor. As the single pole system is by far the more intuitive, we have chosen to use this system to describe spatial location.

1.5. HRTF MEASUREMENTS USING PROBE TUBES

Pure tones 25,41 and the Fourier analysis of impulse responses 29,30,31,42,53 have been used to study the spectral transformations of sounds in both the horizontal and the vertical planes (see Shaw 10,23,24 for extensive reviews of the early literature, and chapter 3, section 5 for signal processing methodology). We have recently recorded the HRTFs for each ear for around 350 locations in space and systematically examined the changes in the HRTF with variations in azimuth and elevation on the anterior midline and the interaural axis. 31 These recordings were obtained using probe tube microphones placed within 6 mm of the eardrum, using an acoustic technique to ensure placement accuracy. Using a dummy head equipped with an internal microphone we also calibrated the acoustic perturbations of our recording system 27 (chapter 4).

Variation in the HRTF with azimuth

The horizon transfer function determined for the left ear of one subject is shown in Figure 2.6. This has been calculated by collecting together the HRTFs recorded at 10° intervals along the audio-visual horizon. The amplitude spectrum of each HRTF was determined and a continuous surface was obtained by interpolating between measurements (see Carlile 31,44,54 for a full description of the method). This surface is plotted as a contour plot of azimuth location versus frequency, with the gain at each frequency-location conjunction indicated by the color of the contour. This horizon transfer function is representative of the large number of ears that we have so far measured.
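Operationally, each HRTF in such a data set is obtained by Fourier transforming the impulse response recorded near the eardrum and normalizing by a free field reference recorded with the head absent. The sketch below shows the basic arithmetic; the array names, sample rate and FFT length are placeholders rather than the published protocol.

```python
import numpy as np

def hrtf_magnitude_db(in_ear_ir, free_field_ir, fs=48000, n_fft=1024):
    """HRTF magnitude in dB: the spectrum measured near the eardrum
    divided by the spectrum of the same stimulus measured at the head
    position with the head absent."""
    ear = np.fft.rfft(in_ear_ir, n_fft)
    ref = np.fft.rfft(free_field_ir, n_fft)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    gain_db = 20.0 * np.log10(np.abs(ear) / (np.abs(ref) + 1e-12))
    return freqs, gain_db

# Hypothetical impulse responses (noise stand-ins, for illustration only).
rng = np.random.default_rng(0)
ear_ir = rng.standard_normal(512)
ref_ir = rng.standard_normal(512)
freqs, gain = hrtf_magnitude_db(ear_ir, ref_ir)
```

Collecting such spectra at 10° steps around the audio-visual horizon and interpolating between them yields the horizon transfer function surface of Fig. 2.6.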

The most prominent spectral features of the horizon transfer function vary asymmetrically about the interaural axis (−90° azimuth). For instance, anterior sound locations result in a gain of greater than 15 dB in transmission for frequencies of 3 kHz to 5 kHz, but this is reduced by 10 dB to 15 dB over the same frequency range for posterior locations. For this subject, a second high gain feature was evident at high frequencies for anterior but not posterior locations. This is associated with a very sharp notch between 8 kHz and 10 kHz for anterior locations which is absent at posterior locations. The gain of the high frequency feature is highly variable between subjects, but the associated notch is generally evident in all subjects. For frequencies below 2.5 kHz, the gain varied from around 0 dB at the anterior midline to around 6 dB for locations around the interaural axis. This small gain and the location dependent changes are well predicted by the obstacle effect at these frequencies (cf. Fig. 2.4). These findings are fairly typical of the results of previous studies in which the effects of the auditory canal have been accounted for. 10,29,42

Fig. 2.6. The horizon transfer function shows how the HRTF varies as a function of location for a particular horizon. In this case, the HRTFs have been plotted and interpolated for locations on the ipsilateral audio-visual horizon: 0° indicates the anterior median plane and −90° the ipsilateral interaural axis. Frequency is plotted using a logarithmic scale and the gain of the transfer function in dB is indicated by the color of the contour. Reprinted with permission from Carlile S and Pralong D, J Acoust Soc Am 1994; 95.

The differences between these studies fall within the range of the intersubject differences we have observed in our own recordings, with the exception of the generally lower transmission of higher frequencies and a deeper notch around 10 kHz to 12 kHz. This has also been reported by Wightman and Kistler, 29 who recorded HRTFs using techniques similar to those used in our laboratory.

Variation in the HRTF with elevation

The variations in the HRTFs due to the elevation of the sound source are shown in Figure 2.7 for the frontal median plane and for the vertical plane containing the interaural axis. These plots have been generated in the same way as the horizon transfer functions, with the exception that the elevation of the sound source is plotted on the y-axis. These plots are referred to as the meridian transfer functions for the various vertical planes. 31 For the anterior median plane, the meridian transfer functions show a bandpass gain of around 20 dB for frequencies between 2 kHz and 5 kHz. For the lateral vertical plane, this bandpass extends from about 1 kHz to around 7 kHz. In both cases there is a narrowing of the bandwidth at the extremes of elevation. Additionally, there is an upward shift in the high frequency roll-off with an increase in elevation from −45° to above the audio-visual horizon. This is manifest as an increase in the frequency of the notch in the HRTFs from 5 kHz to 8 kHz with increasing elevation for the lateral meridian transfer function.

These findings are in good agreement with previously published studies that have measured the transfer functions from spatial locations similar to those shown in Figure 2.7. In particular, the increase in the frequency of the high frequency roll-off with increasing elevation on the median plane has been reported previously 23,24,55 (see also Hebrank and Wright 36) and is evident in the averaged data of Mehrgardt and Mellert. 42 Compared to the asymmetrical changes in the horizon transfer functions (Fig. 2.6), there are relatively smaller asymmetries in the meridian transfer functions for locations above and below the audio-visual horizon (Fig. 2.7).

Several authors have examined the transfer functions for sounds located on the median plane. 10,33,36,39,42 As with measurements of horizontal plane transformations, the intersubject variations are quite large. However, the most consistent feature is a narrow (1/3 octave) peak at 8 kHz for sounds located over the head (90° elevation). This is evident in miniature microphone recordings, 33 in recordings taken from model ears, 36 and in probe microphone recordings. 42 This is consistent with the psychophysical data for median plane localization, where an acoustic image occurs over the head for narrow band noise centered at 8 kHz 36 (see also Blauert 40). Furthermore, frontal cues may be provided by a lowpass notch that moves from 4 kHz to 10 kHz as elevation increases from below the horizon to around 90°. 10,33,36,39,42
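Features such as the elevation-dependent notch can be tracked automatically once the HRTF magnitudes are available. A minimal sketch; the search band is an assumption based on the 4 kHz to 10 kHz figures quoted above, not a published criterion:

```python
import numpy as np

def notch_frequency_hz(freqs_hz, gain_db, band=(4e3, 11e3)):
    """Frequency of the deepest minimum of an HRTF magnitude spectrum
    within the given band: a crude estimate of the mid frequency notch
    whose position varies with source elevation."""
    freqs_hz = np.asarray(freqs_hz)
    gain_db = np.asarray(gain_db)
    mask = (freqs_hz >= band[0]) & (freqs_hz <= band[1])
    return freqs_hz[mask][np.argmin(gain_db[mask])]

# Synthetic spectrum with a Gaussian dip near 6 kHz, for illustration.
f = np.linspace(200.0, 16e3, 512)
g = -12.0 * np.exp(-0.5 * ((f - 6e3) / 500.0) ** 2)
print(f"notch at {notch_frequency_hz(f, g) / 1e3:.1f} kHz")
```

Plotting such an estimate against source elevation traces the 5 kHz to 8 kHz upward shift described for the lateral meridian transfer function.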

Fig. 2.7. The meridian transfer functions are shown for variation in the elevation of the source located on (a) the anterior midline or (b) the ipsilateral interaural axis. The audio-visual horizon corresponds to 0° elevation. Other details as for Fig. 2.6. Reprinted with permission from Carlile S and Pralong D, J Acoust Soc Am 1994; 95.

The spectral features responsible for back localization seem more complicated. Mehrgardt and Mellert 42 show peaks for frequencies between 1 kHz and 1.5 kHz. Hebrank and Wright 36 demonstrate a lowpass cutoff for frequencies above 13 kHz for sounds located behind the head, and report psychophysical data showing that signals with a peak around 12 kHz to 13 kHz tend to be localized rearward. Blauert 40 reports that the percept of back localization by narrow band noise stimuli can be produced with either 1 kHz or 10 kHz center frequencies. These studies suggest that rearward localization may be due to a high frequency (> 13 kHz) and/or a low frequency (< 1.5 kHz) peak in the median plane transformation.

1.6. CONTRIBUTION OF DIFFERENT COMPONENTS OF THE AUDITORY PERIPHERY TO THE HRTF

In considering the spectral transfer functions recorded at either end of the ear canal, it is important to keep in mind that structures other than the pinna will contribute to these functions. 10,56 Figure 2.8 shows the relative contributions of various components of the auditory periphery calculated for a sound located at 45° azimuth. These measures are very much a first approximation, calculated by Shaw, 10 but they serve to illustrate the point that the characteristics of the HRTF are dependent on a number of different physical structures. The gain due to the head, calculated from the Rayleigh-Stewart description of the sound pressure distribution around a sphere, 10,21,22 increases with increasing frequency to an asymptote of 6 dB. The rate of this increase, as a function of frequency, is determined by the radius of the sphere. In humans this corresponds to a radius of 8.75 cm, and the midpoint to asymptote occurs at 630 Hz (see Fig. 2.4). The contribution of the torso and neck is small and restricted primarily to low frequencies. These pressure changes probably result from the interactions of the scattered sound waves at the ear and are effective primarily at low frequencies. The contribution of the pinna flap is small at 45° azimuth, but it probably exerts a greater influence on the resulting total for sounds presented behind the interaural axis 48 (see also section 1.7). The largest contributions are attributable to the concha and the ear canal/eardrum complex. An important feature of these contributions is the complementarity of the conchal and ear canal components, which act together to produce a substantial gain over a broad range of frequencies. However, an important distinction between the two is that the contribution of the ear canal is insensitive to the location of the stimulus, while the gain due to the concha and the pinna flange is clearly dependent on stimulus direction. 10,24,48,57 That is to say, the HRTF is composed of both location-dependent and location-independent components.
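This decomposition can be made explicit numerically. Dividing each measured HRTF by the average across all measurement locations removes the location-independent component (dominated by the canal/eardrum complex) and leaves the directional part. The sketch below mirrors the common transfer function technique used in later HRTF work; it is an illustration of the idea rather than Shaw's method, and the data shapes are hypothetical.

```python
import numpy as np

def split_common_and_directional(hrtf_db):
    """hrtf_db: (n_locations, n_freqs) array of HRTF magnitudes in dB.
    Returns (common, directional): 'common' is the location-independent
    component (the mean across locations, e.g., the canal resonance);
    'directional' is what remains. Subtraction in dB corresponds to
    spectral division in linear units."""
    common = hrtf_db.mean(axis=0)   # identical for every direction
    return common, hrtf_db - common

# Hypothetical data set: 350 locations by 256 frequency bins.
rng = np.random.default_rng(1)
measured = rng.normal(0.0, 5.0, size=(350, 256))
common, directional = split_common_and_directional(measured)
```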

Fig. 2.8. Relative contributions of the different components of the human auditory periphery, calculated by Shaw (1974). The source is located at 45° from the median plane. At this location the transformation is clearly dominated by the gains due to the concha and the ear canal. An important distinction between these components is that the gain due to the concha is highly dependent on the location of the source in space, while the gain of the canal remains unaffected by location. Reprinted with permission from Shaw EAG. In: Keidel WD, Neff WD, eds. Handbook of Sensory Physiology. Berlin: Springer-Verlag, 1974.

1.7. MODELS OF PINNA FUNCTION

There are three main functional models of pinna function. The pinna is a structure convoluted in three dimensions (see Fig. 1.4), and all theories of pinna function refer, in some way, to the interactions of sound waves either within restricted cavities of the pinna, or as a result of the reflections or distortions of the sound field by the pinna or the pinna flap. These models, together with other numerical models of the filtering effects of the outer ear, are also considered in a later chapter.

A resonator model of pinna function

The frequency transformations of the pinna have been attributed to the filtering of the sound by a directionally-dependent multi-modal resonator. 10,24,48,57-59 This model has been realized using two similar analytical techniques. Firstly, precise probe tube measurements have been made of the sound pressures generated in different portions of life-like models of the human pinna 57 and of real human pinnae. Between five and seven basic modes have been described as contributing to the frequency transfer function of the outer ear. The first mode (M1), at around 2.9 kHz, is attributed to the resonance of the ear canal.

The canal can be modeled as a simple tube which is closed at one end. An end correction of 50% of the actual length of the canal is necessary to match the predicted resonance to the measured frequency response. This correction is attributed to the tight folding of the tragus and the crus helicis around the opening of the canal entrance 57 (see Fig. 1.4). The second resonant mode (M2), centered around 4.3 kHz, is attributed to the quarter wavelength depth resonance of the concha. Again, the match between the predicted and the measured values requires an end correction of 50%. The large opening of the concha and the uniform pressure across the opening suggest that the concha acts as a reservoir of acoustic energy which serves to maintain a high eardrum pressure across a wide bandwidth. 57 For frequencies above 7 kHz, precise probe tube measurements using the human ear model suggest that transverse wave motion within the concha begins to dominate. 24 The higher frequency modes (7.1 kHz, 9.6 kHz, 12.1 kHz, 14.4 kHz and 16.7 kHz) result from complex multipole distributions of pressure produced by this transverse wave motion within the concha. An important result from these measurements is that the gain of the higher frequency modes was found to be dependent on the incident angle of the plane sound wave.

The second approach has been to construct simple acoustic models of the ear and make precise measurements of the pressure distributions within these models. 24,48,49,58 By progressively adding components to the models, the functions of analogous morphological components of the human external ear could be inferred (Fig. 2.9). The pinna flange was found to play an important role in producing location-dependent changes in the gain of the lower conchal modes (3 kHz to 9 kHz). 48 The model flange represents the helix, anti-helix and lobule of the human pinna (Fig. 2.9). There is an improved coupling of the first conchal mode to the sound field for sound in front of the ear, but this gain is greatly reduced when the sound source is toward the rear. This has been attributed to the interference between the direct waves and the waves scattered from the edge of the pinna. 49 By varying the shape of the conchal component of the models, the match between the frequency transforms measured at higher frequencies from the simple models and those measured from life-like models of human ears can be improved. The fossa of the helix and the crus helicis both seem to be very important in producing the directional changes for the higher modes. 24,49,58

In summary, while the primary features of the transformations seem to be adequately accounted for by the acoustic models, the agreement between the theoretical predictions of the modal frequencies and the measured modes is not as good. The size of the ad hoc end corrections required for the predictions based on simple tube resonance suggests that such models provide only a reasonable first approximation.
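The quarter wavelength arithmetic behind M1 and M2 is easy to check. A sketch with illustrative dimensions (the ~2 cm canal and ~1.3 cm concha depth are assumed round figures, since individual anatomy varies considerably); the 50% end corrections are the values quoted in the text:

```python
C = 343.0  # speed of sound (m/s)

def quarter_wave_khz(length_m, end_correction=0.5):
    """Resonant frequency of a tube closed at one end, with the fitted
    end correction expressed as a fraction of the physical length."""
    effective_length = length_m * (1.0 + end_correction)
    return C / (4.0 * effective_length) / 1e3

print(f"canal  (M1): ~{quarter_wave_khz(0.020):.1f} kHz")  # near 2.9 kHz
print(f"concha (M2): ~{quarter_wave_khz(0.013):.1f} kHz")  # near 4.3 kHz
```

That plausible lengths only reproduce the measured 2.9 kHz and 4.3 kHz modes once the 50% corrections are applied is precisely the weakness of the simple tube picture noted above.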

Fig. 2.9. Pressure transformation relative to the SPL at the reflecting plane for three simple acoustic models of the outer ear. The variations in the gain of the model with variation in the elevation of the progressive wave source are indicated in each panel (0° indicates a location in front of the ear and 90° above the ear). The dimensions of the models are in mm and the small filled circle in each model illustrates the position of the recording microphone. Panels A and B indicate the blocked meatus response of the models with increasingly complex models of the concha, while panel C shows the response of the system with a tubular canal and an approximation to the normal terminating impedance of the eardrum. Note the large gain at around 2.6 kHz that appears with the introduction of the canal component of the model. Reprinted with permission from Shaw EAG. In: Studebaker GA, Hochberg I, eds. Acoustical Factors Affecting Hearing Aid Performance. Baltimore: University Park Press, 1980.


More information

What Is the Difference between db HL and db SPL?

What Is the Difference between db HL and db SPL? 1 Psychoacoustics What Is the Difference between db HL and db SPL? The decibel (db ) is a logarithmic unit of measurement used to express the magnitude of a sound relative to some reference level. Decibels

More information

Effect of spectral content and learning on auditory distance perception

Effect of spectral content and learning on auditory distance perception Effect of spectral content and learning on auditory distance perception Norbert Kopčo 1,2, Dávid Čeljuska 1, Miroslav Puszta 1, Michal Raček 1 a Martin Sarnovský 1 1 Department of Cybernetics and AI, Technical

More information

Technical Discussion HUSHCORE Acoustical Products & Systems

Technical Discussion HUSHCORE Acoustical Products & Systems What Is Noise? Noise is unwanted sound which may be hazardous to health, interfere with speech and verbal communications or is otherwise disturbing, irritating or annoying. What Is Sound? Sound is defined

More information

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 SOLUTIONS Homework #3 Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 Problem 1: a) Where in the cochlea would you say the process of "fourier decomposition" of the incoming

More information

Hearing. Juan P Bello

Hearing. Juan P Bello Hearing Juan P Bello The human ear The human ear Outer Ear The human ear Middle Ear The human ear Inner Ear The cochlea (1) It separates sound into its various components If uncoiled it becomes a tapering

More information

Human Sensitivity to Interaural Phase Difference for Very Low Frequency Sound

Human Sensitivity to Interaural Phase Difference for Very Low Frequency Sound Acoustics 28 Geelong, Victoria, Australia 24 to 26 November 28 Acoustics and Sustainability: How should acoustics adapt to meet future demands? Human Sensitivity to Interaural Phase Difference for Very

More information

Advanced otoacoustic emission detection techniques and clinical diagnostics applications

Advanced otoacoustic emission detection techniques and clinical diagnostics applications Advanced otoacoustic emission detection techniques and clinical diagnostics applications Arturo Moleti Physics Department, University of Roma Tor Vergata, Roma, ITALY Towards objective diagnostics of human

More information

Chapter 1: Introduction to digital audio

Chapter 1: Introduction to digital audio Chapter 1: Introduction to digital audio Applications: audio players (e.g. MP3), DVD-audio, digital audio broadcast, music synthesizer, digital amplifier and equalizer, 3D sound synthesis 1 Properties

More information

Two Modified IEC Ear Simulators for Extended Dynamic Range

Two Modified IEC Ear Simulators for Extended Dynamic Range Two Modified IEC 60318-4 Ear Simulators for Extended Dynamic Range Peter Wulf-Andersen & Morten Wille The international standard IEC 60318-4 specifies an occluded ear simulator, often referred to as a

More information

Neural System Model of Human Sound Localization

Neural System Model of Human Sound Localization in Advances in Neural Information Processing Systems 13 S.A. Solla, T.K. Leen, K.-R. Müller (eds.), 761 767 MIT Press (2000) Neural System Model of Human Sound Localization Craig T. Jin Department of Physiology

More information

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS Proceedings of the 14 International Conference on Auditory Display, Paris, France June 24-27, 28 EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

An Auditory System Modeling in Sound Source Localization

An Auditory System Modeling in Sound Source Localization An Auditory System Modeling in Sound Source Localization Yul Young Park The University of Texas at Austin EE381K Multidimensional Signal Processing May 18, 2005 Abstract Sound localization of the auditory

More information

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms 956 969 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms Jonas Braasch Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany

More information

Hearing Sound. The Human Auditory System. The Outer Ear. Music 170: The Ear

Hearing Sound. The Human Auditory System. The Outer Ear. Music 170: The Ear Hearing Sound Music 170: The Ear Tamara Smyth, trsmyth@ucsd.edu Department of Music, University of California, San Diego (UCSD) November 17, 2016 Sound interpretation in the auditory system is done by

More information

Music 170: The Ear. Tamara Smyth, Department of Music, University of California, San Diego (UCSD) November 17, 2016

Music 170: The Ear. Tamara Smyth, Department of Music, University of California, San Diego (UCSD) November 17, 2016 Music 170: The Ear Tamara Smyth, trsmyth@ucsd.edu Department of Music, University of California, San Diego (UCSD) November 17, 2016 1 Hearing Sound Sound interpretation in the auditory system is done by

More information

Lecture 8: Spatial sound

Lecture 8: Spatial sound EE E6820: Speech & Audio Processing & Recognition Lecture 8: Spatial sound 1 2 3 4 Spatial acoustics Binaural perception Synthesizing spatial audio Extracting spatial sounds Dan Ellis

More information

Localization: Give your patients a listening edge

Localization: Give your patients a listening edge Localization: Give your patients a listening edge For those of us with healthy auditory systems, localization skills are often taken for granted. We don t even notice them, until they are no longer working.

More information

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information

More information

Spectro-temporal response fields in the inferior colliculus of awake monkey

Spectro-temporal response fields in the inferior colliculus of awake monkey 3.6.QH Spectro-temporal response fields in the inferior colliculus of awake monkey Versnel, Huib; Zwiers, Marcel; Van Opstal, John Department of Biophysics University of Nijmegen Geert Grooteplein 655

More information

SPHSC 462 HEARING DEVELOPMENT. Overview Review of Hearing Science Introduction

SPHSC 462 HEARING DEVELOPMENT. Overview Review of Hearing Science Introduction SPHSC 462 HEARING DEVELOPMENT Overview Review of Hearing Science Introduction 1 Overview of course and requirements Lecture/discussion; lecture notes on website http://faculty.washington.edu/lawerner/sphsc462/

More information

HST.723J, Spring 2005 Theme 3 Report

HST.723J, Spring 2005 Theme 3 Report HST.723J, Spring 2005 Theme 3 Report Madhu Shashanka shashanka@cns.bu.edu Introduction The theme of this report is binaural interactions. Binaural interactions of sound stimuli enable humans (and other

More information

This will be accomplished using maximum likelihood estimation based on interaural level

This will be accomplished using maximum likelihood estimation based on interaural level Chapter 1 Problem background 1.1 Overview of the proposed work The proposed research consists of the construction and demonstration of a computational model of human spatial hearing, including long term

More information

Lecture 3: Perception

Lecture 3: Perception ELEN E4896 MUSIC SIGNAL PROCESSING Lecture 3: Perception 1. Ear Physiology 2. Auditory Psychophysics 3. Pitch Perception 4. Music Perception Dan Ellis Dept. Electrical Engineering, Columbia University

More information

HearIntelligence by HANSATON. Intelligent hearing means natural hearing.

HearIntelligence by HANSATON. Intelligent hearing means natural hearing. HearIntelligence by HANSATON. HearIntelligence by HANSATON. Intelligent hearing means natural hearing. Acoustic environments are complex. We are surrounded by a variety of different acoustic signals, speech

More information

Chapter 3. Sounds, Signals, and Studio Acoustics

Chapter 3. Sounds, Signals, and Studio Acoustics Chapter 3 Sounds, Signals, and Studio Acoustics Sound Waves Compression/Rarefaction: speaker cone Sound travels 1130 feet per second Sound waves hit receiver Sound waves tend to spread out as they travel

More information

Perceptual Plasticity in Spatial Auditory Displays

Perceptual Plasticity in Spatial Auditory Displays Perceptual Plasticity in Spatial Auditory Displays BARBARA G. SHINN-CUNNINGHAM, TIMOTHY STREETER, and JEAN-FRANÇOIS GYSS Hearing Research Center, Boston University Often, virtual acoustic environments

More information

HCS 7367 Speech Perception

HCS 7367 Speech Perception Long-term spectrum of speech HCS 7367 Speech Perception Connected speech Absolute threshold Males Dr. Peter Assmann Fall 212 Females Long-term spectrum of speech Vowels Males Females 2) Absolute threshold

More information

Tympanometry and Reflectance in the Hearing Clinic. Presenters: Dr. Robert Withnell Dr. Sheena Tatem

Tympanometry and Reflectance in the Hearing Clinic. Presenters: Dr. Robert Withnell Dr. Sheena Tatem Tympanometry and Reflectance in the Hearing Clinic Presenters: Dr. Robert Withnell Dr. Sheena Tatem Abstract Accurate assessment of middle ear function is important for appropriate management of hearing

More information

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979)

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979) Hearing The nervous system s cognitive response to sound stimuli is known as psychoacoustics: it is partly acoustics and partly psychology. Hearing is a feature resulting from our physiology that we tend

More information

Auditory Scene Analysis

Auditory Scene Analysis 1 Auditory Scene Analysis Albert S. Bregman Department of Psychology McGill University 1205 Docteur Penfield Avenue Montreal, QC Canada H3A 1B1 E-mail: bregman@hebb.psych.mcgill.ca To appear in N.J. Smelzer

More information

How high-frequency do children hear?

How high-frequency do children hear? How high-frequency do children hear? Mari UEDA 1 ; Kaoru ASHIHARA 2 ; Hironobu TAKAHASHI 2 1 Kyushu University, Japan 2 National Institute of Advanced Industrial Science and Technology, Japan ABSTRACT

More information

Congruency Effects with Dynamic Auditory Stimuli: Design Implications

Congruency Effects with Dynamic Auditory Stimuli: Design Implications Congruency Effects with Dynamic Auditory Stimuli: Design Implications Bruce N. Walker and Addie Ehrenstein Psychology Department Rice University 6100 Main Street Houston, TX 77005-1892 USA +1 (713) 527-8101

More information

The lowest level of stimulation that a person can detect. absolute threshold. Adapting one's current understandings to incorporate new information.

The lowest level of stimulation that a person can detect. absolute threshold. Adapting one's current understandings to incorporate new information. absolute threshold The lowest level of stimulation that a person can detect accommodation Adapting one's current understandings to incorporate new information. acuity Sharp perception or vision audition

More information

Hearing and Balance 1

Hearing and Balance 1 Hearing and Balance 1 Slide 3 Sound is produced by vibration of an object which produces alternating waves of pressure and rarefaction, for example this tuning fork. Slide 4 Two characteristics of sound

More information

College of Medicine Dept. of Medical physics Physics of ear and hearing /CH

College of Medicine Dept. of Medical physics Physics of ear and hearing /CH College of Medicine Dept. of Medical physics Physics of ear and hearing /CH 13 2017-2018 ***************************************************************** o Introduction : The ear is the organ that detects

More information

Effect of microphone position in hearing instruments on binaural masking level differences

Effect of microphone position in hearing instruments on binaural masking level differences Effect of microphone position in hearing instruments on binaural masking level differences Fredrik Gran, Jesper Udesen and Andrew B. Dittberner GN ReSound A/S, Research R&D, Lautrupbjerg 7, 2750 Ballerup,

More information

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT PACS:..Hy Furuya, Hiroshi ; Wakuda, Akiko ; Anai, Ken ; Fujimoto, Kazutoshi Faculty of Engineering, Kyushu Kyoritsu University

More information

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Published on June 16, 2015 Tech Topic: Localization July 2015 Hearing Review By Eric Seper, AuD, and Francis KuK, PhD While the

More information

TOPICS IN AMPLIFICATION

TOPICS IN AMPLIFICATION August 2011 Directional modalities Directional Microphone Technology in Oasis 14.0 and Applications for Use Directional microphones are among the most important features found on hearing instruments today.

More information

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431 The effects of spatial separation in distance on the informational and energetic masking of a nearby speech signal Douglas S. Brungart a) Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson

More information

7. Sharp perception or vision 8. The process of transferring genetic material from one cell to another by a plasmid or bacteriophage

7. Sharp perception or vision 8. The process of transferring genetic material from one cell to another by a plasmid or bacteriophage 1. A particular shade of a given color 2. How many wave peaks pass a certain point per given time 3. Process in which the sense organs' receptor cells are stimulated and relay initial information to higher

More information

Audibility of time differences in adjacent head-related transfer functions (HRTFs) Hoffmann, Pablo Francisco F.; Møller, Henrik

Audibility of time differences in adjacent head-related transfer functions (HRTFs) Hoffmann, Pablo Francisco F.; Møller, Henrik Aalborg Universitet Audibility of time differences in adjacent head-related transfer functions (HRTFs) Hoffmann, Pablo Francisco F.; Møller, Henrik Published in: Audio Engineering Society Convention Papers

More information

The role of high frequencies in speech localization

The role of high frequencies in speech localization The role of high frequencies in speech localization Virginia Best a and Simon Carlile Department of Physiology, University of Sydney, Sydney, NSW, 2006, Australia Craig Jin and André van Schaik School

More information

Pressure difference receiving ears

Pressure difference receiving ears See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/5487001 Pressure difference receiving ears Article in Bioinspiration & Biomimetics April 2008

More information

Parallel-Axis Gear Terminology

Parallel-Axis Gear Terminology Parallel-Axis Gear Terminology For more detailed coverage of this subject, consult ANSI/AGMA Standard 1012-F90; Gear Nomenclature, Definitions with Terms and Symbols Active Profile- that part of the gear

More information

Aalborg Universitet. Sound transmission to and within the human ear canal. Hammershøi, Dorte; Møller, Henrik

Aalborg Universitet. Sound transmission to and within the human ear canal. Hammershøi, Dorte; Møller, Henrik Downloaded from vbn.aau.dk on: marts 28, 2019 Aalborg Universitet Sound transmission to and within the human ear canal Hammershøi, Dorte; Møller, Henrik Published in: Journal of the Acoustical Society

More information

Linguistic Phonetics. Basic Audition. Diagram of the inner ear removed due to copyright restrictions.

Linguistic Phonetics. Basic Audition. Diagram of the inner ear removed due to copyright restrictions. 24.963 Linguistic Phonetics Basic Audition Diagram of the inner ear removed due to copyright restrictions. 1 Reading: Keating 1985 24.963 also read Flemming 2001 Assignment 1 - basic acoustics. Due 9/22.

More information

J. Acoust. Soc. Am. 114 (2), August /2003/114(2)/1009/14/$ Acoustical Society of America

J. Acoust. Soc. Am. 114 (2), August /2003/114(2)/1009/14/$ Acoustical Society of America Auditory spatial resolution in horizontal, vertical, and diagonal planes a) D. Wesley Grantham, b) Benjamin W. Y. Hornsby, and Eric A. Erpenbeck Vanderbilt Bill Wilkerson Center for Otolaryngology and

More information

COM3502/4502/6502 SPEECH PROCESSING

COM3502/4502/6502 SPEECH PROCESSING COM3502/4502/6502 SPEECH PROCESSING Lecture 4 Hearing COM3502/4502/6502 Speech Processing: Lecture 4, slide 1 The Speech Chain SPEAKER Ear LISTENER Feedback Link Vocal Muscles Ear Sound Waves Taken from:

More information

Discrimination and identification of azimuth using spectral shape a)

Discrimination and identification of azimuth using spectral shape a) Discrimination and identification of azimuth using spectral shape a) Daniel E. Shub b Speech and Hearing Bioscience and Technology Program, Division of Health Sciences and Technology, Massachusetts Institute

More information

Masker-signal relationships and sound level

Masker-signal relationships and sound level Chapter 6: Masking Masking Masking: a process in which the threshold of one sound (signal) is raised by the presentation of another sound (masker). Masking represents the difference in decibels (db) between

More information

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Filip M. Rønne, Søren Laugesen, Niels S. Jensen and Julie H. Pedersen

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Juha Merimaa b) Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany

Juha Merimaa b) Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany Source localization in complex listening situations: Selection of binaural cues based on interaural coherence Christof Faller a) Mobile Terminals Division, Agere Systems, Allentown, Pennsylvania Juha Merimaa

More information

Chapter 11: Sound, The Auditory System, and Pitch Perception

Chapter 11: Sound, The Auditory System, and Pitch Perception Chapter 11: Sound, The Auditory System, and Pitch Perception Overview of Questions What is it that makes sounds high pitched or low pitched? How do sound vibrations inside the ear lead to the perception

More information

Sound Waves. Sensation and Perception. Sound Waves. Sound Waves. Sound Waves

Sound Waves. Sensation and Perception. Sound Waves. Sound Waves. Sound Waves Sensation and Perception Part 3 - Hearing Sound comes from pressure waves in a medium (e.g., solid, liquid, gas). Although we usually hear sounds in air, as long as the medium is there to transmit the

More information

to vibrate the fluid. The ossicles amplify the pressure. The surface area of the oval window is

to vibrate the fluid. The ossicles amplify the pressure. The surface area of the oval window is Page 1 of 6 Question 1: How is the conduction of sound to the cochlea facilitated by the ossicles of the middle ear? Answer: Sound waves traveling through air move the tympanic membrane, which, in turn,

More information

Mechanical Properties of the Cochlea. Reading: Yost Ch. 7

Mechanical Properties of the Cochlea. Reading: Yost Ch. 7 Mechanical Properties of the Cochlea CF Reading: Yost Ch. 7 The Cochlea Inner ear contains auditory and vestibular sensory organs. Cochlea is a coiled tri-partite tube about 35 mm long. Basilar membrane,

More information

The development of a modified spectral ripple test

The development of a modified spectral ripple test The development of a modified spectral ripple test Justin M. Aronoff a) and David M. Landsberger Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California

More information

Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus.

Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus. Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus. R.Y. Litovsky 1,3, C. C. Lane 1,2, C.. tencio 1 and. Delgutte 1,2 1 Massachusetts Eye and

More information

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics 2/14/18 Can hear whistle? Lecture 5 Psychoacoustics Based on slides 2009--2018 DeHon, Koditschek Additional Material 2014 Farmer 1 2 There are sounds we cannot hear Depends on frequency Where are we on

More information

Literature Overview - Digital Hearing Aids and Group Delay - HADF, June 2017, P. Derleth

Literature Overview - Digital Hearing Aids and Group Delay - HADF, June 2017, P. Derleth Literature Overview - Digital Hearing Aids and Group Delay - HADF, June 2017, P. Derleth Historic Context Delay in HI and the perceptual effects became a topic with the widespread market introduction of

More information

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3 THE INFLUENCE OF ROOM REFLECTIONS ON SUBWOOFER REPRODUCTION IN A SMALL ROOM: BINAURAL INTERACTIONS PREDICT PERCEIVED LATERAL ANGLE OF PERCUSSIVE LOW- FREQUENCY MUSICAL TONES Abstract David Spargo 1, William

More information

Definition Slides. Sensation. Perception. Bottom-up processing. Selective attention. Top-down processing 11/3/2013

Definition Slides. Sensation. Perception. Bottom-up processing. Selective attention. Top-down processing 11/3/2013 Definition Slides Sensation = the process by which our sensory receptors and nervous system receive and represent stimulus energies from our environment. Perception = the process of organizing and interpreting

More information

A truly remarkable aspect of human hearing is the vast

A truly remarkable aspect of human hearing is the vast AUDITORY COMPRESSION AND HEARING LOSS Sid P. Bacon Psychoacoustics Laboratory, Department of Speech and Hearing Science, Arizona State University Tempe, Arizona 85287 A truly remarkable aspect of human

More information

Jacob Sulkers M.Cl.Sc (AUD) Candidate University of Western Ontario: School of Communication Sciences and Disorders

Jacob Sulkers M.Cl.Sc (AUD) Candidate University of Western Ontario: School of Communication Sciences and Disorders Critical Review: The (Sound) Wave of the Future: Is Forward Pressure Level More Accurate than Sound Pressure Level in Defining In Situ Sound Levels for Hearing Aid Fitting? Jacob Sulkers M.Cl.Sc (AUD)

More information

= add definition here. Definition Slide

= add definition here. Definition Slide = add definition here Definition Slide Definition Slides Sensation = the process by which our sensory receptors and nervous system receive and represent stimulus energies from our environment. Perception

More information

JARO HEATH G. JONES 1,2,KANTHAIAH KOKA 2,JENNIFER L. THORNTON 1,2, AND DANIEL J. TOLLIN 1,2,3, ABSTRACT INTRODUCTION

JARO HEATH G. JONES 1,2,KANTHAIAH KOKA 2,JENNIFER L. THORNTON 1,2, AND DANIEL J. TOLLIN 1,2,3, ABSTRACT INTRODUCTION JARO 12: 127 140 (2011) DOI: 10.1007/s10162-010-0242-3 D 2010 Association for Research in Otolaryngology JARO Journal of the Association for Research in Otolaryngology Concurrent Development of the Head

More information

Auditory System & Hearing

Auditory System & Hearing Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different

More information

ADHEAR The new bone-conduction hearing aid innovation

ADHEAR The new bone-conduction hearing aid innovation ADHEAR The new bone-conduction hearing aid innovation MED-EL has world-wide launched a new kind of hearing aid, ADHEAR, for people who have an hearing impairment and want to prevent surgery. This little

More information

Topic 4. Pitch & Frequency

Topic 4. Pitch & Frequency Topic 4 Pitch & Frequency A musical interlude KOMBU This solo by Kaigal-ool of Huun-Huur-Tu (accompanying himself on doshpuluur) demonstrates perfectly the characteristic sound of the Xorekteer voice An

More information

Significance of a notch in the otoacoustic emission stimulus spectrum.

Significance of a notch in the otoacoustic emission stimulus spectrum. Significance of a notch in the otoacoustic emission stimulus spectrum. Grenner, Jan Published in: Journal of Laryngology and Otology DOI: 10.1017/S0022215112001533 Published: 2012-01-01 Link to publication

More information

ID# Exam 2 PS 325, Fall 2003

ID# Exam 2 PS 325, Fall 2003 ID# Exam 2 PS 325, Fall 2003 As always, the Honor Code is in effect and you ll need to write the code and sign it at the end of the exam. Read each question carefully and answer it completely. Although

More information