Adaptation to Auditory Localization Cues from an Enlarged Head


Adaptation to Auditory Localization Cues from an Enlarged Head

by Salim Kassem

B.S., Electrical Engineering (1996), Pontificia Universidad Javeriana

Submitted to the Department of Electrical Engineering and Computer Science in Partial Fulfillment of the Requirements for the Degree of Master of Science in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, June 1998.

© Massachusetts Institute of Technology 1998. All rights reserved.

Signature of Author: Department of Electrical Engineering and Computer Science, May 20, 1998

Certified by: Nathaniel I. Durlach, Senior Research Scientist of Electrical Engineering and Computer Science, Thesis Supervisor

Accepted by: Arthur C. Smith, Chairman, Department Committee on Graduate Students

Adaptation to Auditory Localization Cues from an Enlarged Head

by Salim Kassem

Submitted to the Department of Electrical Engineering and Computer Science on May 20, 1998 in Partial Fulfillment of the Requirements for the Degree of Master of Science in Electrical Engineering and Computer Science

ABSTRACT

Auditory localization cues for a double-size head were simulated using an auditory virtual environment in which the acoustic cues were presented to subjects through headphones. The goals of the study were to see whether better-than-normal resolution could be achieved and to analyze how subjects adapt to this type of transformation of spatial acoustic cues. This work follows that done by Shinn-Cunningham (1994, 1998) and Shinn-Cunningham, Durlach and Held (1998a, 1998b), where a nonlinear remapping of the normal space filters was implemented. The double-size head's acoustic cues were simulated by frequency-scaling normal Head-Related Transfer Functions. As a result, the Interaural Time Differences (ITDs) presented for every position were doubled. Therefore, even though the relationship between the location a naive listener associates with a stimulus and its correct location is not linear, the remapping is a linear transformation in ITD space. Since ITDs were doubled, some ITDs presented to the listener were larger than the largest naturally-occurring ITDs, which proved to be a problem. Bias and resolution were the two quantitative measures used to study performance as well as to examine changes in performance over time. Also, the Minimum Audible Angles for normal and altered cues were determined and used to obtain estimates of subjects' sensitivity. In the experiments, mean response and bias changed over time as expected, clearly showing the adaptation process. Resolution results were less consistent, giving better-than-normal resolution around the middle positions with altered cues. Nevertheless, normal cues provided better overall performance. When correct-answer feedback was used, resolution behaved as expected, but when feedback was not presented, results were consistent with subjects attending to the whole range of possible cues throughout the experiment (i.e., the internal noise was large and constant). Previous work suggested that mean response, bias and resolution are dependent on each other and that all have the same adaptation rate. However, the no-feedback condition proved that resolution can be independent of the other quantities. Finally, estimates of sensitivity indicated that resolution is strongly related to the type of cues used and that changes in resolution depend directly on the total internal noise.

Thesis Supervisor: Nathaniel I. Durlach
Title: Senior Research Scientist of Electrical Engineering and Computer Science

ACKNOWLEDGMENTS

Dedico este trabajo de grado a mi esposa, quien me dio todo el apoyo, toda la amistad y todo el amor que necesité. Gracias por creer en mí. A mis padres y hermanos, por ayudarme a ser como hoy soy. A mi papá, porque sin su esfuerzo no podría estar aquí. A mis verdaderos amigos.

This work is dedicated to my wife, who gave me all the support, all the friendship and all the love I needed. Thank you for believing in me. To my parents and siblings, for helping me be who I am. To my father; without his effort I would not be here. To my truly good friends.

I want to thank Nathaniel Durlach for his support and for giving me the opportunity of learning wonderful things. Special thanks to Barbara Shinn-Cunningham, for all her unconditional help. Without her guidance, this work could never have been finished. I also want to thank Lorraine Delhorne, Jay Desloge and Andy Brughera for all their kind collaboration.

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGMENTS
1. INTRODUCTION
2. BACKGROUND
   2.1. Normal Auditory Localization
   2.2. Auditory Virtual Environments
   2.3. Sensory Improvement
3. ADAPTATION TO SUPERNORMAL CUES
   3.1. Motivation
   3.2. Supernormal Auditory Localization: Double-Size Head
   3.3. Equipment and Experimental Setup
   3.4. Adaptation Experiment with Feedback
      Experiment Description
      Analysis
      Expected Results
      Results
      Error in Measured HRTFs
   3.5. Adaptation Experiment without Feedback
      Experiment Description
      Expected Results
      Results
4. JUST NOTICEABLE DIFFERENCE
   4.1. Motivation
   4.2. Background
   4.3. New HRTFs
   4.4. Equipment and Experimental Setup
   4.5. Experiment Description
   4.6. Expected Results
   4.7. Results
5. MODEL OF ADAPTATION
   5.1. Remapping Function
   5.2. Average Perceived Position
6. RELATING JND AND RESOLUTION
   6.1. Background
   6.2. Results
7. CONCLUSION
   7.1. Summary
   7.2. Discussion
   7.3. Future Work
REFERENCES

1. INTRODUCTION

In recent years, computing technology has provided us with more sophisticated ways of gathering data, increasing the amount and complexity of the information presented to users. As a result, the systems that work with this information are more complex and more difficult to operate and understand. Today's graphical computer interfaces are a first approach to easing the resulting burden of displaying information. Lately, attention has been given to a more sophisticated interface, referred to as virtual reality, whose objective is to provide a more efficient and natural way of presenting and manipulating information by incorporating three-dimensional spatial cues in the display (Wenzel, 1992).

Using this technology, a human operator can interact with a real environment via a human-machine interface and a telerobot as if he were the one standing in the remote working area. Ideally, the operator should see, hear, and feel what the telerobot sees, hears, and feels. Moreover, the telerobot can provide additional information that can be useful to the operator (e.g., temperature, speed, etc.). Normally, the teleoperator system is used to interact with a remote, inaccessible or hazardous environment, protecting the physical integrity of the operator while permitting him to control and achieve a specific task. The signals in the telerobot's environment are sensed, sent back, and displayed to the human operator. In the same way, the actions taken by the operator in response to the signals are transmitted to the telerobot and used to control its actions (Durlach, 1991). In a virtual-environment system, the same kind of human-machine interface is used, but a computer simulation replaces the telerobot and the environment. The purpose of a teleoperator system is to extend the operator's sensory-motor system in order to facilitate the manipulation of the physical environment, while in a virtual-reality system the objective is to study or alter the human operator. General information on teleoperators and virtual environments can be found in Vertut and Coiffet (1986), Sheridan (1987), Bolt (1984), Foley (1987), and Durlach and Mavor (1995).

In the past, the visual modality was the primary method for presenting spatial information to a human operator. More recently, however, the auditory system has become recognized as an alternative channel for delivering such information. Acoustic signals are very useful because they can be heard regardless of source direction, they tend to produce an alerting or orienting response, and they can be detected faster than visual signals (Wenzel, 1992). In this project, attention is given only to the auditory localization features of the human-machine interface, and particular consideration is given to how to provide the operator with better-than-normal localization ability, so-called supernormal auditory localization (Durlach, Shinn-Cunningham, and Held, 1993). Such an approach attempts to provide acoustic cues that yield better effective spatial resolution than do normal cues. This is achieved by increasing the change in the physical acoustic cues that results when source position changes. Improving the effective resolution is desirable because the normal human auditory localization system has extremely poor resolution in azimuth at angles off to the side, in elevation, and in distance; it has at least moderate resolution only in azimuth for sources in the front. In other words, we have relatively poor spatial resolution for acoustic sources, especially when compared with visual spatial resolution.

Durlach, Shinn-Cunningham, and Held (1993) proposed several ways to increase directional resolution by using localization cues that would improve the just-noticeable difference (JND) (i.e., the minimum separation for which a listener can resolve two adjacent spatial positions). Some of the suggested methods for achieving supernormal cues include simulating the localization cues from an enlarged head, remapping the normal localization cues to increase resolution in some regions of the azimuthal plane while decreasing it in others, and exponentiating the complex interaural ratio at all frequencies (Durlach and Pang, 1986). As Shinn-Cunningham (1998a) noted, these approaches should improve the subject's ability to resolve sources in JND-type experiments, but the effects on identification tasks using a larger range of physical stimuli are not clear. In addition, the use of supernormal localization cues will displace the apparent location of the source for a naive listener when he is first exposed to these remapped cues. Adaptation to the new cues is said to have taken place to the extent that the mean localization error diminishes over time with training.

Given the results obtained in previous work by Shinn-Cunningham (1994) and Shinn-Cunningham, Durlach and Held (1998a, 1998b), a study of supernormal auditory localization cues will be undertaken using the suggested enlarged-head approach (Durlach, Shinn-Cunningham, and Held, 1993). Auditory localization cues for a double-sized head will be simulated and presented to the subjects during the experiments. The main goals of this project are: to analyze how subjects adapt to a transformation of spatial acoustic cues that is approximately linear, to extend the quantitative model of adaptation developed from the nonlinear adaptation results (Shinn-Cunningham, 1998), and to see whether better-than-normal resolution is achieved with the double-head-size cues. Also, the results of this experiment will be compared with those of Shinn-Cunningham (1994, 1998) and Shinn-Cunningham, Durlach and Held (1998a, 1998b) to explore how different types of remappings affect adaptation to remapped auditory spatial cues. Following their work, bias (a measure of response error in units of standard deviation) and resolution (the ability to reliably differentiate between nearby stimulus locations) are the two quantitative measures that will be used to analyze the performance of subjects over the course of the experiments.

2. BACKGROUND

2.1. Normal Auditory Localization

The classic duplex theory (Lord Rayleigh, 1907) states that interaural differences in time of arrival and interaural differences in intensity are the two primary cues used for auditory localization (Figure 1). Interaural time differences (ITDs) arise when a sound source is to one side of the head, since the sound reaches the nearer ear first.¹

¹ The sound will reach the farther ear 29 μsec later for each additional centimeter it must travel (Mills, 1972).

Figure 1. The duplex theory postulates that interaural intensity differences (IIDs) and interaural time differences (ITDs) are the two primary cues for auditory localization (from Wenzel, 1992).

If a sound source is far enough from the head, the sound's wavefront is approximately planar when it reaches the head. The distance the sound must travel to reach the two ears differs, depending on source location. Assuming a spherical model of the head with radius r, the difference in travel distance for a source on the horizontal plane at an angle of θ (in radians) is given by (Figure 2):

\Delta d = r \, (\theta + \sin\theta)  (1)

Assigning a radius of 8.75 cm to the spherical head, and knowing that the velocity of sound c is 343 m/sec, the interaural time difference (ITD) can be expressed as:

\mathrm{ITD} = \Delta d / c \approx 255 \, (\theta + \sin\theta) \ \mu\mathrm{sec}  (2)

Figure 3 shows predictions of ITD based on equation 2 and measurements of ITD for adult males (Mills, 1972).

Figure 2. Difference between the distances of the ears from a sound source that is far away and whose sound can be represented as a plane wavefront (from Mills, 1972).

The duplex theory states that the relative left-right position of a sound source is determined by ITDs for low-frequency sounds and IIDs for high-frequency sounds. As the duplex theory explains, ITDs give good perceptual cues for sound location only for low frequencies; at frequencies higher than 1500 Hz, phase ambiguities occur. The phase information becomes ambiguous at high frequencies because the wavelengths are smaller than the distance between the ears.
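The spherical-head model of equations (1) and (2) is straightforward to compute. The following Python sketch (a minimal illustration written for this text, not part of the thesis apparatus; the function name and constants are chosen here for clarity) evaluates the predicted ITD for the 8.75 cm head radius and 343 m/sec sound speed assumed above.

```python
import numpy as np

R_HEAD = 0.0875   # head radius in meters (8.75 cm, as in the text)
C_SOUND = 343.0   # speed of sound in m/sec

def itd_spherical_head(azimuth_deg):
    """Predicted interaural time difference (in microseconds) for a distant
    source on the horizontal plane, from equations (1) and (2)."""
    theta = np.radians(azimuth_deg)
    delta_d = R_HEAD * (theta + np.sin(theta))  # extra path length, eq. (1)
    return 1e6 * delta_d / C_SOUND              # eq. (2)

# A source at 90 degrees gives the maximum normal ITD of about 655 usec;
# doubling the head size (K = 2) would double this to roughly 1.3 msec.
print([round(itd_spherical_head(az)) for az in (0, 30, 60, 90)])
```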

On the other hand, sources off to one side of the head are louder at the closer ear due to head shadowing; the head acts as a low-pass filter for the far ear, making IIDs important localization cues for high frequencies. This acoustic effect occurs because wavelengths are small relative to the size of the head at high frequencies. It has been found that ITD is the major cue for determining the location of sources along the horizontal plane, and that the spectral peaks and notches produced by the filtering effect of the pinnae (mainly above 5 kHz) are important for determining source elevation.

Even though the duplex theory provides a clean and simple explanation for determining the lateral position of a sound, the approach has several limitations. For example, listeners use the time delay of the envelope of high-frequency sounds for localization even though they do not use the fine-structure ITD at these frequencies. The direction-dependent filtering that occurs when sound waves impinge on the outer ears and pinnae also provides very important localization cues. It has been shown that the spectral shaping by the pinnae is highly direction-dependent (Shaw, 1974 and 1975), and that the pinnae are responsible for the externalization of sounds (Plenge, 1974).

Figure 3. Interaural time difference (ITD) as a function of the position of a source of clicks. X: measured values from five subjects. O: values computed from the mathematical approximation (from Mills, 1972).

Therefore, the auditory system's method for determining source position depends on the direction-dependent filtering that occurs when the received sound wave interacts with the head, ears, and torso of the listener. Let X(ω) be the complex spectrum of the sound source, and Y_L(ω,θ,φ) and Y_R(ω,θ,φ) be the complex spectra of the signals received at the left and right ears, respectively. Then, for sources that are sufficiently far from the listener (so that distance only affects the overall level of the received signals), and for anechoic listening conditions, one can write:

Y_L(\omega,\theta,\phi) = r^{-1} \, H_L(\omega,\theta,\phi) \, X(\omega)  (3a)

Y_R(\omega,\theta,\phi) = r^{-1} \, H_R(\omega,\theta,\phi) \, X(\omega)  (3b)

where r is the distance from the head to the source, and H_L(ω,θ,φ) and H_R(ω,θ,φ) are the space filters or Head-Related Transfer Functions (HRTFs) for each ear, describing the direction-dependent effects of the head and body. The HRTFs depend on the frequency, ω; the azimuth of the sound source relative to the head, θ; and the elevation of the source relative to the head, φ. The auditory system compares the signals received at the two ears in a manner that can be usefully represented mathematically by forming the ratio:

\frac{Y_L(\omega,\theta,\phi)}{Y_R(\omega,\theta,\phi)} = \frac{H_L(\omega,\theta,\phi)}{H_R(\omega,\theta,\phi)}  (4)

In this ratio, the effects of r and of X(ω) are canceled, and the ratio depends only on ω, θ, and φ. The auditory system can therefore determine the location of the sound source from the ratio, independent of source characteristics. The magnitude and the phase of the ratio of the signals at the two ears for a source at direction (θ,φ) are equivalent to the interaural intensity difference (IID) and the interaural time difference (ITD), respectively. Even though interaural processing (i.e., computation of IID and ITD) offers useful localization information, directional ambiguities can occur: (i) distance is not perceived because its effect is negligible for distant sources, and (ii) front-back confusions appear because of the so-called cone of confusion² (Mills, 1972).

² Cone-of-confusion errors arise because the ITD or IID produced by a source at one position is roughly equal to that produced by sound sources located anywhere on a hyperbolic (cone-shaped) surface whose axis is the interaural axis.
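Equation (4) suggests a direct way to read IID and ITD off a measured HRTF pair. The sketch below is an illustration under assumed conventions (HRTFs given as complex spectra on a common frequency grid; a single broadband ITD summarizing the phase): it forms the interaural ratio, takes IID from its magnitude, and estimates ITD from the slope of the unwrapped phase at low frequencies, where the phase cue is unambiguous.

```python
import numpy as np

def interaural_cues(h_left, h_right, freqs_hz, fit_max_hz=1500.0):
    """Estimate IID (dB, per frequency) and a single ITD (seconds) from a
    pair of complex HRTFs via the interaural ratio of equation (4)."""
    ratio = h_left / h_right                 # source spectrum and distance cancel
    iid_db = 20.0 * np.log10(np.abs(ratio))  # magnitude of the ratio -> IID
    phase = np.unwrap(np.angle(ratio))       # interaural phase difference
    # Below ~1500 Hz the phase is unambiguous; fit phase = 2*pi*f*ITD
    # through the origin by least squares to recover the delay.
    low = (freqs_hz > 0) & (freqs_hz <= fit_max_hz)
    w = 2.0 * np.pi * freqs_hz[low]
    itd = np.dot(phase[low], w) / np.dot(w, w)
    return iid_db, itd
```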

Head movements and monaural processing help to resolve front-back ambiguities. Head movements cause changes in IID and ITD which differ for a source in front of versus behind the listener. Also, a priori knowledge or information about the transmitted signal X(ω) can allow monaural spectral cues to be used to estimate the space filters H_L(ω,θ,φ) and H_R(ω,θ,φ) from the signals Y_L(ω,θ,φ) and Y_R(ω,θ,φ) received at the two ears.

Wightman and Kistler (1992) found that low-frequency ITDs are the dominant cues for localization of broadband sound sources. Although ITD cues are dominant, when the low-frequency components of a stimulus are removed, direction is determined by IID and spectral shape cues. In other words, when low-frequency interaural time cues are present, they override the IID and spectral shape cues present in other frequency ranges. It follows that in every condition in which there is a conflict between low-frequency ITD and any other cue, sound localization is determined mainly by ITD. The ITD is used primarily to establish the locus of possible source locations (i.e., to determine on which cone of confusion the sound source lies), while IID and spectral filtering help to resolve any ambiguity in the ITD information. Integration of all available cues leads to accurate localization (Wightman and Kistler, 1992). More information about normal auditory localization can be found in Blauert (1983), Mills (1972), Wightman, Kistler, and Perkins (1987), Wenzel (1992), and Durlach, Shinn-Cunningham and Held (1993).

2.2. Auditory Virtual Environments

In order to better understand the importance of auditory cues such as ITD, IID and pinnae effects, and to enhance their capabilities, researchers have begun to use auditory virtual environments to simulate acoustic sources around the listener. This approach

gives the experimenter good control of the stimulus while creating rich and realistic localization cues. One class of simulation technique derives from the measurement of Head-Related Transfer Functions (HRTFs). Using a normative mannequin, such as the KEMAR (Knowles Electronics, Inc.), it is possible to obtain good estimates of the acoustic effects of the head and the pinnae on sounds reaching the listener's eardrum as a function of source position. Using these finite impulse response (FIR) filters, it is possible to filter an arbitrary sound to give it spatial characteristics (i.e., to simulate a sound coming from a predetermined direction). Even though the HRTFs provide good acoustic cues, the localizability of the sound also depends on other factors, such as its original spectral content (e.g., narrowband sounds like pure tones are harder to localize than broadband sounds). Individual differences in the pinnae appear to be very important for some aspects of localization, most notably resolving cone-of-confusion errors. Nonetheless, several studies show that most listeners can obtain useful directional information from a typical HRTF, suggesting that the basic properties of the HRTFs carry much of the important localization information (Wenzel, 1992).

Using digital signal processing (DSP) systems, real-time simulation of acoustic cues can be used to generate spatial auditory cues over headphones. These systems use time-domain convolution to achieve the desired real-time performance, reproducing a free-field experience. Using a head-tracking device attached to the headphones, the system can determine the head's actual yaw, pitch and roll and decide which set of HRTFs is needed for presenting a source from a particular position. The DSP system then filters the input signal with the proper HRTF. Even if the subject's head is moving freely, the head tracker allows the presentation of a fixed sound location by calculating the relative azimuth and elevation from the source to the head. Of course, the term "real time" is a relative one, since the appropriate HRTF cannot be selected and applied instantaneously: some processing time is needed for all the computations. Due to the constraints of memory and computation time, DSP systems must make several approximations and simplifications, losing some fidelity.
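The real-time scheme just described amounts to: read the head orientation, compute the source direction relative to the head, pick the nearest stored HRTF pair, and convolve. The block-based sketch below is illustrative only; the `hrtf_table` structure, the nearest-neighbor lookup, and the lack of filter state carried across blocks (a real system would use overlap-add) are simplifying assumptions, not the behavior of the actual DSP hardware.

```python
import numpy as np

def spatialize_block(mono_block, source_az_deg, head_yaw_deg, hrtf_table):
    """Filter one block of a mono source with the HRTF pair nearest to the
    source's azimuth relative to the listener's head.

    hrtf_table: dict mapping azimuth (degrees) -> (left_fir, right_fir).
    """
    # Relative azimuth, wrapped into (-180, 180] degrees.
    rel_az = ((source_az_deg - head_yaw_deg + 180.0) % 360.0) - 180.0
    nearest = min(hrtf_table, key=lambda az: abs(az - rel_az))
    fir_left, fir_right = hrtf_table[nearest]
    # Time-domain convolution, as in the DSP systems described above.
    return np.convolve(mono_block, fir_left), np.convolve(mono_block, fir_right)
```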

A typical HRTF record consists of a pair of impulse responses (i.e., one for the right and one for the left ear), measured from several equidistant locations around the subject. The HRTFs are then estimated by canceling the effects of the loudspeakers, the stimulus, and the microphone responses from the recorded signal (Wightman and Kistler, 1989). For example, the HRTFs measured by Wightman and Kistler (1989) from their subject SOS consisted of 36 azimuth positions (with a 10° resolution) ranging from 180° to −170°, and 14 elevation positions (also with a 10° resolution). Hence, the HRTFs represented a total of 504 positions (36 in azimuth times 14 in elevation). The HRTF for a specific position is stored as two 127-tap FIR filters, each containing the impulse response for one of the ears.

Figure 4 shows typical HRTF responses for two different azimuths at 0° elevation, and demonstrates how ITD and IID vary as a function of the direction of the sound source. For a source at 0° in azimuth (i.e., right in front of the listener), there is very little difference in either the magnitude (IID) or the phase (ITD) responses of the two ears (top right plots); this is highlighted by taking the ratio between the responses of both ears (bottom right plots). Because sound arrives at almost the same time and with almost the same magnitude at both ears, the ratio of the phases and magnitudes is almost zero. For a source at −40° in azimuth (i.e., to the left of the listener), the magnitude (IID) at the left ear is greater than at the right, while the phase (ITD) of the right ear is larger (top left plots). As expected, the ratio between the right- and left-ear responses (bottom left plots) shows a negative magnitude (i.e., the sound at the right ear has less energy than the sound at the left ear) and a negative overall phase (i.e., the sound arrives at the right ear later than at the left ear).

2.3. Sensory Improvement

It is now possible to think not only of better ways to simulate normal localization cues, but also of methods for transforming the natural acoustic cues for the purpose of achieving better spatial resolution (e.g., superlocalization; Durlach, 1991).

Figure 4. Frequency responses for −40° and 0° in azimuth and 0° in elevation of the HRTFs measured by Wightman and Kistler (1989) from their subject SOS. The figure illustrates how the HRTFs contain the IID, ITD, and pinnae-effect cues.

Some studies have tried to show how subjects adapt to unnatural auditory localization cues. One set of such studies (Warren and Strelow, 1984; Strelow and Warren, 1985) investigated the use of the Binaural Sensory Aid, a device that used auditory localization cues as a way of representing the position of objects sensed with sonar. Here, the ITDs contained information about the distance to the object, and the IIDs gave its direction. The results of this study showed that blindfolded subjects were able to adapt and use these unnatural cues accurately after being trained using a correct-answer feedback paradigm.

In an attempt to improve spatial resolution (i.e., to improve the JND in direction), a study on supernormal auditory localization was undertaken (Durlach, Shinn-Cunningham, and Held, 1993). Its main goal was to determine whether adaptation to rearranged acoustic spatial cues was possible and whether resolution could be improved.

In this study, supernormal localization cues were created by remapping the relationship between source position and the normal HRTFs (Durlach, Shinn-Cunningham, and Held, 1993). The transformation was supernormal only for some positions; at other positions the rearrangement actually reduced the change in acoustic cues with changes in source location. To simulate a sound at position θ, the study used HRTFs that were chosen from the normal HRTF set but which normally correspond to a different azimuth. The new HRTFs are given by:

H'(\omega,\theta,\phi) = H(\omega, f_n(\theta), \phi)  (5)

With this transformation no new HRTFs were created. Instead, the existing HRTFs were reassigned to different angles. The family of mapping functions f_n(θ) used to transform the horizontal plane was given by:

f_n(\theta) = \frac{1}{2} \tan^{-1} \left[ \frac{2n \sin(2\theta)}{1 - n^2 + (1 + n^2)\cos(2\theta)} \right]  (6)

where the parameter n gives the slope of the transformation at θ = 0. Figure 5 shows this transformation for several values of n. When n = 1, cues are not rearranged. With n > 1 the transformation increased the cue differences (and therefore the resolution) around θ = 0°, while it decreased them in the neighborhood of θ = ±90°. For n < 1 the opposite occurred. As a result, subjects were expected to show better-than-normal resolution in the front, and lower resolution towards the sides, when n > 1.

Figure 5. A plot of the azimuth remapping transformation specified by equation 6 (from Shinn-Cunningham, Durlach, and Held, 1998a).
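For concreteness, equation (6) can be evaluated directly. The short sketch below (written for this text, not code from the study) uses the two-argument arctangent so that the correct branch is selected over the whole range, and exploits the fact that equation (6) is the double-angle form of f_n(θ) = arctan(n·tan θ), which makes the slope-n behavior at θ = 0° and the fixed points at 0° and ±90° easy to check.

```python
import numpy as np

def remap_azimuth(theta_deg, n):
    """Azimuth remapping of equation (6): slope n at 0 degrees, fixed points
    at 0 and +/-90 degrees, identity when n = 1."""
    t = np.radians(np.asarray(theta_deg, dtype=float))
    num = 2.0 * n * np.sin(2.0 * t)
    den = (1.0 - n**2) + (1.0 + n**2) * np.cos(2.0 * t)
    return np.degrees(0.5 * np.arctan2(num, den))  # arctan2 picks the right branch

# With n = 3, sources near the front are pulled apart (slope 3 at 0 degrees)
# while 90 degrees still maps to 90 degrees:
print(remap_azimuth([0, 10, 45, 90], 3))   # approximately [0, 27.9, 71.6, 90]
```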

In the study, subjects were first tested with normal localization cues (to determine baseline performance), and then with altered (supernormal) cues to examine how performance changed with training. Finally, normal cues were presented again to see if there was any after-effect as a result of training. Bias, a measure of the error in the subjects' responses, and resolution, a measure of the ability to resolve adjacent stimulus positions, were the two quantities used to analyze the adaptation process throughout the experiments. Figures 6 and 7 illustrate bias and resolution results for one of the experiments in this study, in which correct-answer feedback was used to train the subjects.

The first normal-cue runs were expected to show small bias (error in units of standard deviation), since the cues are roughly consistent with normal localization cues. The first run using altered cues resulted in a very large bias, reflecting the sudden introduction of the unnatural sounds. The last run using altered cues showed a decrease in bias compared to before training, demonstrating that the correct-answer feedback caused subjects to adapt to the new cues (although adaptation was not complete). Finally, the first normal-cue test following training with altered cues produced a negative after-effect, indicating that performance was not controlled exclusively by conscious correction (Shinn-Cunningham, Durlach, and Held, 1998a).

Resolution of adjacent source locations is shown in Figure 7. In the first normal-cue run, resolution provides a standard against which other results are compared.

As expected, when altered cues were presented for the first time, resolution increased around the center positions and decreased at the edges of the range. In the last run with altered cues, resolution remained enhanced (with respect to the baseline), but showed a decrease compared to the first altered-cue run. As before, an after-effect was seen when normal cues were introduced again (Shinn-Cunningham, Durlach, and Held, 1998a).

Figure 6. Bias results for one of the experiments carried out, comparing the first run using normal cues, the first run with altered cues, the last run with altered cues, and the first normal-cue run following altered-cue exposure. Here, the altered cues have a transformation strength of n = 3 (from Shinn-Cunningham, Durlach, and Held, 1998a).

This study showed that subjects could not adapt completely to a nonlinear remapping of the auditory localization cues. In general, subjects were able to reduce their response bias with training, but they could never completely overcome their errors. In addition, although the transformation initially increased resolution as expected, resolution decreased as subjects adapted to the remapping. Shinn-Cunningham, Durlach and Held (1998a) concluded that resolution depended not only on the range of physical cues presented during an experiment, a result previously described for perception of sound intensity (e.g., Durlach and Braida, 1969; Braida and Durlach, 1972), but also upon the past history of exposure or training of the subject. The researchers also found that subjects adapted to the best-fit linear approximation of the nonlinear transformation, implying that subjects may only be capable of adapting to linear transformations of the localization cues (Shinn-Cunningham, Durlach and Held, 1998b).

Figure 7. Resolution results for one of the experiments carried out, comparing the first run using normal cues, the first run with altered cues, the last run with altered cues, and the first normal-cue run following altered-cue exposure. Here, the altered cues have a transformation strength of n = 3 (from Shinn-Cunningham, Durlach, and Held, 1998a).

3. ADAPTATION TO SUPERNORMAL CUES

3.1. Motivation

The main goal of this project is to examine further whether humans can adapt to unnatural (altered) auditory localization cues intended to provide listeners with better-than-normal localization ability, so-called supernormal auditory localization (Durlach, Shinn-Cunningham, and Held, 1993). In contrast with the previous studies cited above (e.g., Shinn-Cunningham, 1994 and 1998), where a nonlinear remapping of the normal space filters was implemented, a more nearly linear approach that expands all positions is now taken to create supernormal HRTFs. The earlier experiments showed that subjects adapted to the best-fit linear approximation of a nonlinear transformation. This could mean that subjects are only able to adapt to linear transformations. This study is designed to give further insight into the adaptation process and to determine whether this linear constraint holds for other cue transformations. In addition, the new transformation may provide listeners with higher spatial sensitivity and, hopefully, a low overall localization error.

3.2. Supernormal Auditory Localization: Double-Size Head

To improve resolution, the localization cues must increase the discriminability between separated sources. This may be achieved by having a larger-than-normal difference in the physical cues corresponding to two different positions. One way of achieving this is to simulate a larger-than-normal head, thereby increasing the ITDs and IIDs associated with every position in space. For a subject who has not adapted to such a change in cues, this transformation will make the locations of sound sources seem farther apart than they actually are.

The double-size head was simulated by frequency-scaling normal HRTFs (Rabinowitz, Maxwell, Shao, and Wei, 1993). The new pair of HRTF filters is defined as follows:

H'_L(\omega,\theta,\phi) = H_L(K\omega,\theta,\phi), \qquad H'_R(\omega,\theta,\phi) = H_R(K\omega,\theta,\phi)  (7)

where K has a constant value. This transformation approximates the acoustic effect of increasing the size of the human body, including the head and pinnae, by a factor of K. As a result, the IID and ITD are also affected, and are determined by the new ratio:

\frac{Y'_L(\omega,\theta,\phi)}{Y'_R(\omega,\theta,\phi)} = \frac{H_L(K\omega,\theta,\phi)}{H_R(K\omega,\theta,\phi)}  (8)

Here, both the interaural differences and the monaural spectral cues are magnified by the factor K (Durlach, Shinn-Cunningham, and Held, 1993), and the remapping is therefore said to be a linear transformation. For the current study, the frequency was doubled (i.e., K = 2), simulating a head twice the normal size. As Rabinowitz, Maxwell, Shao, and Wei (1993) showed, scaling the HRTFs in frequency corresponds to uniformly scaling up all physical dimensions, and thus simulates the main acoustic effects of a magnified head.

The transformation of the HRTFs presents several problems. Scaling the frequency of the normal HRTFs can be achieved by inserting an additional sample equal to zero after each sample of the original HRTF impulse response. This causes the time signal (i.e., the impulse response) to increase in length by a factor of two (i.e., K = 2); in the frequency domain, the spectrum is compressed by a factor of two. The new HRTFs must be low-pass filtered to remove energy above the original Nyquist frequency. For example, if the normal HRTFs are defined up to 20 kHz, the new HRTFs are only defined up to 10 kHz. Without low-pass filtering the upsampled waveforms, this procedure would create distortion of the spectrum above 10 kHz due to spectral aliasing.
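The zero-insertion procedure just described can be sketched in a few lines. The version below is a minimal illustration using SciPy (the filter length and the simple windowed FIR low-pass are arbitrary choices made here, not the thesis's actual processing chain): it zero-stuffs the impulse response and low-passes it to remove the spectral images, implementing equation (7) for integer K.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def double_head_hrir(hrir, k=2, numtaps=65):
    """Frequency-scale an HRTF impulse response by an integer factor k:
    insert k-1 zeros after each sample (doubling the length for k = 2),
    then low-pass to remove the images above the original band edge.
    Interpreted at the original sample rate, the result is stretched in
    time by k, so the spectrum is compressed and the ITD multiplied by k."""
    up = np.zeros(len(hrir) * k)
    up[::k] = hrir                            # zero insertion
    lowpass = firwin(numtaps, cutoff=1.0/k)   # cutoff relative to the new Nyquist
    return k * lfilter(lowpass, [1.0], up)    # gain k restores the original level
```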

Furthermore, since the size of the head is doubled, the ITDs presented to the listener include ITDs larger than the largest naturally-occurring ones. For example, a source at 90° (or −90°) produces the maximum normal ITD of around 0.65 msec (Figure 3); with the transformed cues, the corresponding ITD is 1.3 msec. It is not clear how subjects will perceive these unnatural cues. As a consequence, subjects must adapt to the expanded interaural axis not only by relabeling it, but also by interpreting larger-than-normal ITDs (Durlach, 1991).

Normal HRTFs from subject SOS (Wightman and Kistler, 1989) were used to create the double-head HRTFs. Each position described by the HRTFs contains two 127-tap FIR filters (one containing the filter coefficients for the right ear and one for the left ear) sampled at 50 kHz. To create the double-head HRTFs, each FIR filter was upsampled by a factor of two and then low-pass filtered at 25 kHz (Figure 8). As a result, the new altered HRTFs were two times longer (i.e., each FIR filter is now 254 taps long) and sampled at 100 kHz.

Figure 8. Comparison between normal and altered HRTFs for a source at −40° in azimuth. The left panel shows the normal HRTF and the right one the altered HRTF, plotted on different frequency scales. The shapes of both frequency responses are the same, except that the altered HRTF has been scaled in frequency, indicating that the upsampling was successful. Note that the ITD (given by the slope of the phase as a function of frequency) for the altered HRTF is now doubled.

Figure 8 compares the frequency responses of the HRTFs at −40°, showing that the upsampling doubles the ITD (the ITD is given by the slope of the phase response as a function of frequency). As mentioned above, the altered HRTFs are now compressed in frequency by a factor of two, and in order to prevent unpredictable results at high frequencies, the new HRTFs were low-pass filtered. As a result, the magnitude response is effectively zero above 12.5 kHz.

3.3. Equipment and Experimental Setup

Adaptation to the double-size head auditory localization cues was investigated by presenting simulated acoustic cues and real visual cues. The acoustic cues were generated by an auditory virtual environment. Visual cues were provided by a light display located in front of the subjects and were used to give the subjects spatial feedback about the simulated sounds.

Subjects were seated in front of a five-foot-diameter arc of lights consisting of thirteen 2-inch light bulbs. The lights were labeled from 1 to 13 (all lights were visible to the subjects during the experiment). The lights were positioned from −60° to +60° in azimuth with respect to the head position, with a 10° separation between adjacent lights. The position −60° azimuth was represented by light 1, 0° by light 7, +60° by light 13, etc. The light array was connected to a digital-analog device, the light driver, which receives a digital input from a personal computer (PC) and converts it to an analog output that drives the current to each light bulb. The PC used Data Translation's DT2817 Digital I/O Board to transmit signals to the light driver, enabling it to turn the light on or off at any of the 13 positions (Figure 9). This light array provided visual feedback to the subjects.

The acoustic cues were simulated by an auditory virtual environment system consisting of a PC, a signal-processing device, a head tracker, headphones, and a function generator. The head tracker transmits to the PC the instantaneous head orientation of the subject with respect to 0° azimuth (i.e., the 0° position in the light array, calibrated during start-up procedures). The PC calculates the relative direction of the head with respect to the desired source position. This information is then transmitted to the signal-processing hardware, which filters the waveform provided by the function generator with the appropriate HRTFs to produce the left- and right-ear signals. Finally, the binaural signal generated by the signal-processing hardware was played to the subject over headphones.

Figure 9. Diagram of the light array which gave subjects spatial visual feedback. Thirteen light bulbs represent 13 positions ranging from −60° to +60° in azimuth with respect to the subject's head position. Lights were placed at 10° intervals.

The Escort EFG-2210 function generator provided the system with a 5 Hz periodic train of clicks (i.e., a square wave) as the sound source. As described later, the subjects heard roughly 5 clicks per trial, as the signal-processing hardware switched the input signal on and off asynchronously. A Polhemus 3Space Isotrack provided head-position information. The Isotrack uses electromagnetic signals to measure the relative position (azimuth, elevation and roll) between a stationary transmitter and a receiver worn on the subject's head.

The PC, a Pentium-S based machine running at 100 MHz, controlled the signal-processing hardware and the light array and ran the experiment's software control program. To present a source, the program randomly selected one source position from the 13 possibilities. The relative position between the selected source and the subject's head was calculated after reading the position of the listener's head from the head tracker. Based on these computations, the PC instructed the signal-processing hardware to generate the appropriate binaural cues and present them to the subject.

The signal-processing hardware used was the System II, a signal-processing platform from Tucker-Davis Technologies. The System II consists of analog and digital interface modules permitting the synthesis of high-quality analog waveforms, including PA4 Programmable Attenuators, an HTI Head Tracker Interface, and the PD1 Power SDAC (a real-time digital filtering system). The input waveform was digitized by an analog-to-digital converter (ADC) and filtered with the selected HRTFs. The binaural signal was then passed through the output digital-to-analog converter (DAC), which was connected to the PA4s. The programmable attenuators controlled the length of the stimuli that the subjects received: while the attenuators were in the mute state, no sound was heard, and the PA4s were switched out of the mute state for one second per trial, allowing roughly 5 clicks to be heard. The HTI permitted the computer to read the coordinates provided by the head tracker. Figure 10 shows a block diagram of the virtual auditory environment used to simulate the acoustic cues.

Figure 10. Block diagram of the virtual auditory environment that simulated acoustic localization cues.

3.4. Adaptation Experiment with Feedback

Experiment Description

In each testing run, subjects had to face front (0° azimuth) while a continuous sound (click train) was presented from a random location. When the sound was turned off, they were asked to identify the location of the sound by reporting the position number (i.e., a number between 1 and 13) to the operator, who entered it on the keyboard. As soon as the answer was typed into the computer, the appropriate light was turned on as correct-answer feedback. One second after the subject's response, the next random sound was presented. All locations were presented to the subject exactly twice in each run (i.e., the locations were chosen at random without replacement). Thus, with 13 positions, 26 trials were presented in each run. Each run lasted around 3 minutes. Finally, each run could present either normal or altered cues, determined by selecting the appropriate set of HRTFs.

The basic experimental paradigm was similar to that used by Shinn-Cunningham (1994). Each subject performed 8 identical sessions of 40 testing runs each. In each session, the first 2 runs and the last 8 runs used normal cues, while the others used altered HRTFs. Eight sessions were necessary in order to have a sufficient number of trials to average across. It was assumed that all trials were stochastically independent, even though the positions presented were chosen at random without replacement.

Before the beginning of the experiment, subjects were informed that both normal and altered cues would be used at different times and that the apparent location of sources simulated with altered cues might not be their correct location. Also, the subjects were notified every time a change of cues was about to occur (from normal to altered or from altered to normal), so that they would answer as accurately as possible for the current cues.
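As a concrete illustration of this trial bookkeeping, the following sketch (hypothetical Python written for this text, not the original control program) draws each of the 13 positions exactly twice in random order, i.e., at random without replacement:

```python
import random

def make_run_trials(n_positions=13, reps=2, rng=random):
    """One testing run: every source position appears exactly `reps` times,
    shuffled (drawn at random without replacement), giving
    n_positions * reps trials, i.e., 26 for the 13-light display."""
    trials = [pos for pos in range(1, n_positions + 1) for _ in range(reps)]
    rng.shuffle(trials)
    return trials

print(make_run_trials())   # e.g. [7, 1, 13, 4, ...], 26 entries in all
```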

Data from five subjects were gathered. All subjects were naive (without prior experience in auditory localization experiments), reported normal hearing, and had no difficulty performing the test.

Analysis

Bias and resolution were the two quantitative measures used to study the performance and adaptation of each subject under these experimental conditions. Bias measures the signed error in the subject's responses (in units of standard deviation), describing how well the subject adapted to the altered cues. Resolution measures the ability to resolve adjacent stimulus positions.

As described by Shinn-Cunningham (1994), there are three basic processing schemes that can be used to estimate the average signed error (bias) and the response sensitivity (resolution). All schemes assume that each presentation of a physical stimulus results in a random variable with a Gaussian distribution along some internal decision axis. The mean of the Gaussian distribution is assumed to depend monotonically on the source position, while its standard deviation has the same value for all positions. This implies that the ability to resolve sources comes from the relative distances between their means.

The first estimation method uses a Maximum Likelihood Estimate (MLE) technique to find the means of the internal distributions and the placement of decision criteria, given the observed confusion matrix. The second method, known as the raw processing method, computes raw estimates of bias and resolution from the means and standard deviations of the responses: bias is estimated as the difference between the mean and the correct response divided by the standard deviation, and resolution between two adjacent positions is computed as the difference of the mean responses divided by the average of the standard deviations. The third method, also a raw processing method, assumes that the variation of the standard deviation across positions is unimportant: the standard deviations of all positions are averaged and the average is used as a constant value, with bias and resolution then computed as in the second method.

As Shinn-Cunningham (1994) noted, the results of these three methods are very similar, even though MLE processing is much more computationally intensive and takes into account many factors ignored by the other methods. Thus, method two was assumed to be adequate for analyzing the data in this study. Accordingly, bias and resolution are given by:

\mathrm{bias} = \frac{m(p) - p}{\sigma(p)}  (9a)

\mathrm{resolution} = d' = \frac{m(p+1) - m(p)}{[\sigma(p+1) + \sigma(p)]/2}  (9b)

where p is the target position, and m(p) and σ(p) are the mean and the standard deviation of the responses for target position p, respectively.
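In the raw processing method adopted here (method two), the estimates of equations (9a) and (9b) reduce to a few lines of arithmetic. The sketch below is an illustrative NumPy rendering, under the assumption that the data arrive as a mapping from each target position to the array of responses it received:

```python
import numpy as np

def bias_and_resolution(responses):
    """Raw estimates per equations (9a) and (9b).

    responses: dict mapping target position p -> array of responses to p.
    Returns (bias per position, d' for each adjacent pair of positions).
    """
    positions = sorted(responses)
    m = {p: np.mean(responses[p]) for p in positions}            # mean response
    s = {p: np.std(responses[p], ddof=1) for p in positions}     # std deviation
    bias = {p: (m[p] - p) / s[p] for p in positions}             # eq. (9a)
    dprime = {(p, q): (m[q] - m[p]) / ((s[p] + s[q]) / 2.0)      # eq. (9b)
              for p, q in zip(positions, positions[1:])}
    return bias, dprime
```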

Expected Results

Given the results obtained by Shinn-Cunningham (1994) and the linearity³ of the altered cues used in this project, the following results were expected (Figure 11). For the first run using normal cues, subjects should show almost zero bias and better resolution for the center positions than for the edges. When the first altered cues are presented, an increase in resolution in almost all directions was expected (with the greatest values around zero), due to the fact that the ITDs were larger (doubled) at all positions. Because of the increase in ITDs, the mean response should show a change in slope (with the slope of mean response versus correct location approximately doubled). This is consistent with subjects hearing sources farther to the side than their correct position. Similarly, we expected that bias would be small for positions near 0° azimuth, larger for intermediate positions, and small again at the extreme edges (since subjects could not respond beyond the range of locations presented).

³ The supernormal cues used are called linear because the ITDs are approximately doubled for all positions.

Figure 11. Cartoon exaggerating the effects of adaptation on mean response, bias, and resolution (panels: expected mean response, expected bias characteristic, and expected resolution characteristic, each versus target position). Solid lines: normal cues. Dashed lines: altered cues. O: first presentation of normal cues. *: first presentation of altered cues. X: last presentation of altered cues. +: first presentation of normal cues following the last run of altered cues.

After the 30th altered run, adaptation was assumed to have taken place in the sense that mean errors were expected to decrease with time. This decrease would be evident as a change in the slope relating mean response to correct location towards one, and a decrease in bias towards zero. Since the acoustic range was larger with the altered cues, the internal decision noise was assumed to grow with adaptation (Durlach and Braida, 1969; Braida and Durlach, 1972; Shinn-Cunningham, 1998). As a result, resolution was expected to decrease with time. The change in internal noise would also cause bias to decrease even further than if there were no change in stimulus range.

Finally, results from the first normal cues after exposure to the supernormal cues would give insight into whether subjects really adapted to the supernormal cues or were just consciously correcting their responses based on whether they were hearing normal or altered cues. In the first case, subjects could not immediately turn off their remapping of localization cues (even when told that they were hearing normal cues), and mean responses were expected to show an after-effect (i.e., the slope relating mean response to location was expected to be less than one). The after-effect should also cause identification performance to be worse after training than before: bias should be nonzero and in the opposite direction from the error originally introduced by the remapping. If subjects could consciously change their responses, mean, bias, and resolution should resemble those from the first normal-cue run.

As shown in Figure 11, the expected results are all symmetrical around 0° azimuth (the mean response has odd symmetry), since there was no reason to think that there would be any left-right asymmetry in the results. For this reason, all the results presented here are collapsed around 0° (i.e., the left and right sides were averaged).

Results

The data showed small differences across sessions, compared to the differences across test runs within a particular session. Therefore, the data reported in this study were collapsed across the eight sessions performed by each subject. The individual subject responses were analyzed to find mean response, bias, and resolution as a function of position for each run in the session. These statistics were then averaged across subjects, and then further collapsed by assuming left-right symmetry, to yield the results shown.

Results from runs 2, 3, 32 and 33 were examined in detail to investigate how performance changed over the course of one session. Run 2 was the last run that used normal cues prior to the exposure to altered cues. By run 2, the subject knew what the experiment was about and should have been comfortable with the procedure. The results of this run served as a baseline or reference point for other runs because they reflected normal-cue performance.


AUDL GS08/GAV1 Signals, systems, acoustics and the ear. Pitch & Binaural listening AUDL GS08/GAV1 Signals, systems, acoustics and the ear Pitch & Binaural listening Review 25 20 15 10 5 0-5 100 1000 10000 25 20 15 10 5 0-5 100 1000 10000 Part I: Auditory frequency selectivity Tuning

More information

This will be accomplished using maximum likelihood estimation based on interaural level

This will be accomplished using maximum likelihood estimation based on interaural level Chapter 1 Problem background 1.1 Overview of the proposed work The proposed research consists of the construction and demonstration of a computational model of human spatial hearing, including long term

More information

Lecture 8: Spatial sound

Lecture 8: Spatial sound EE E6820: Speech & Audio Processing & Recognition Lecture 8: Spatial sound 1 2 3 4 Spatial acoustics Binaural perception Synthesizing spatial audio Extracting spatial sounds Dan Ellis

More information

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information

More information

Effect of source spectrum on sound localization in an everyday reverberant room

Effect of source spectrum on sound localization in an everyday reverberant room Effect of source spectrum on sound localization in an everyday reverberant room Antje Ihlefeld and Barbara G. Shinn-Cunningham a) Hearing Research Center, Boston University, Boston, Massachusetts 02215

More information

Angular Resolution of Human Sound Localization

Angular Resolution of Human Sound Localization Angular Resolution of Human Sound Localization By Simon Skluzacek A senior thesis submitted to the Carthage College Physics & Astronomy Department in partial fulfillment of the requirements for the Bachelor

More information

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Published on June 16, 2015 Tech Topic: Localization July 2015 Hearing Review By Eric Seper, AuD, and Francis KuK, PhD While the

More information

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor Signals, systems, acoustics and the ear Week 5 The peripheral auditory system: The ear as a signal processor Think of this set of organs 2 as a collection of systems, transforming sounds to be sent to

More information

The role of low frequency components in median plane localization

The role of low frequency components in median plane localization Acoust. Sci. & Tech. 24, 2 (23) PAPER The role of low components in median plane localization Masayuki Morimoto 1;, Motoki Yairi 1, Kazuhiro Iida 2 and Motokuni Itoh 1 1 Environmental Acoustics Laboratory,

More information

What Is the Difference between db HL and db SPL?

What Is the Difference between db HL and db SPL? 1 Psychoacoustics What Is the Difference between db HL and db SPL? The decibel (db ) is a logarithmic unit of measurement used to express the magnitude of a sound relative to some reference level. Decibels

More information

The use of interaural time and level difference cues by bilateral cochlear implant users

The use of interaural time and level difference cues by bilateral cochlear implant users The use of interaural time and level difference cues by bilateral cochlear implant users Justin M. Aronoff, a) Yang-soo Yoon, and Daniel J. Freed b) Communication and Neuroscience Division, House Ear Institute,

More information

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979)

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979) Hearing The nervous system s cognitive response to sound stimuli is known as psychoacoustics: it is partly acoustics and partly psychology. Hearing is a feature resulting from our physiology that we tend

More information

Localization: Give your patients a listening edge

Localization: Give your patients a listening edge Localization: Give your patients a listening edge For those of us with healthy auditory systems, localization skills are often taken for granted. We don t even notice them, until they are no longer working.

More information

HearIntelligence by HANSATON. Intelligent hearing means natural hearing.

HearIntelligence by HANSATON. Intelligent hearing means natural hearing. HearIntelligence by HANSATON. HearIntelligence by HANSATON. Intelligent hearing means natural hearing. Acoustic environments are complex. We are surrounded by a variety of different acoustic signals, speech

More information

Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA

Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA 2017-10 Presenter Peter Wulf-Andersen Engineering degree in Acoustics Co-founder

More information

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for What you re in for Speech processing schemes for cochlear implants Stuart Rosen Professor of Speech and Hearing Science Speech, Hearing and Phonetic Sciences Division of Psychology & Language Sciences

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 SOLUTIONS Homework #3 Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 Problem 1: a) Where in the cochlea would you say the process of "fourier decomposition" of the incoming

More information

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics 2/14/18 Can hear whistle? Lecture 5 Psychoacoustics Based on slides 2009--2018 DeHon, Koditschek Additional Material 2014 Farmer 1 2 There are sounds we cannot hear Depends on frequency Where are we on

More information

Sound from Left or Right?

Sound from Left or Right? Sound from Left or Right? Pre-Activity Quiz 1. How does our sense of hearing work? 2. Why do we have two ears? 3. How does a stethoscope work? (A device used by doctors to listen to the sound of your heart.)

More information

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data 942 955 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data Jonas Braasch, Klaus Hartung Institut für Kommunikationsakustik, Ruhr-Universität

More information

Unit 4: Sensation and Perception

Unit 4: Sensation and Perception Unit 4: Sensation and Perception Sensation a process by which our sensory receptors and nervous system receive and represent stimulus (or physical) energy and encode it as neural signals. Perception a

More information

Digital. hearing instruments have burst on the

Digital. hearing instruments have burst on the Testing Digital and Analog Hearing Instruments: Processing Time Delays and Phase Measurements A look at potential side effects and ways of measuring them by George J. Frye Digital. hearing instruments

More information

ADHEAR The new bone-conduction hearing aid innovation

ADHEAR The new bone-conduction hearing aid innovation ADHEAR The new bone-conduction hearing aid innovation MED-EL has world-wide launched a new kind of hearing aid, ADHEAR, for people who have an hearing impairment and want to prevent surgery. This little

More information

William A. Yost and Sandra J. Guzman Parmly Hearing Institute, Loyola University Chicago, Chicago, Illinois 60201

William A. Yost and Sandra J. Guzman Parmly Hearing Institute, Loyola University Chicago, Chicago, Illinois 60201 The precedence effect Ruth Y. Litovsky a) and H. Steven Colburn Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215 William A. Yost and Sandra

More information

Effect of microphone position in hearing instruments on binaural masking level differences

Effect of microphone position in hearing instruments on binaural masking level differences Effect of microphone position in hearing instruments on binaural masking level differences Fredrik Gran, Jesper Udesen and Andrew B. Dittberner GN ReSound A/S, Research R&D, Lautrupbjerg 7, 2750 Ballerup,

More information

Chapter 1: Introduction to digital audio

Chapter 1: Introduction to digital audio Chapter 1: Introduction to digital audio Applications: audio players (e.g. MP3), DVD-audio, digital audio broadcast, music synthesizer, digital amplifier and equalizer, 3D sound synthesis 1 Properties

More information

Congruency Effects with Dynamic Auditory Stimuli: Design Implications

Congruency Effects with Dynamic Auditory Stimuli: Design Implications Congruency Effects with Dynamic Auditory Stimuli: Design Implications Bruce N. Walker and Addie Ehrenstein Psychology Department Rice University 6100 Main Street Houston, TX 77005-1892 USA +1 (713) 527-8101

More information

Technical Discussion HUSHCORE Acoustical Products & Systems

Technical Discussion HUSHCORE Acoustical Products & Systems What Is Noise? Noise is unwanted sound which may be hazardous to health, interfere with speech and verbal communications or is otherwise disturbing, irritating or annoying. What Is Sound? Sound is defined

More information

Neural System Model of Human Sound Localization

Neural System Model of Human Sound Localization in Advances in Neural Information Processing Systems 13 S.A. Solla, T.K. Leen, K.-R. Müller (eds.), 761 767 MIT Press (2000) Neural System Model of Human Sound Localization Craig T. Jin Department of Physiology

More information

Hearing. Juan P Bello

Hearing. Juan P Bello Hearing Juan P Bello The human ear The human ear Outer Ear The human ear Middle Ear The human ear Inner Ear The cochlea (1) It separates sound into its various components If uncoiled it becomes a tapering

More information

Frequency refers to how often something happens. Period refers to the time it takes something to happen.

Frequency refers to how often something happens. Period refers to the time it takes something to happen. Lecture 2 Properties of Waves Frequency and period are distinctly different, yet related, quantities. Frequency refers to how often something happens. Period refers to the time it takes something to happen.

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 13 http://acousticalsociety.org/ ICA 13 Montreal Montreal, Canada - 7 June 13 Engineering Acoustics Session 4pEAa: Sound Field Control in the Ear Canal 4pEAa13.

More information

Speech segregation in rooms: Effects of reverberation on both target and interferer

Speech segregation in rooms: Effects of reverberation on both target and interferer Speech segregation in rooms: Effects of reverberation on both target and interferer Mathieu Lavandier a and John F. Culling School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff,

More information

Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners

Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners Justin M. Aronoff a) Communication and Neuroscience Division, House Research

More information

Discrete Signal Processing

Discrete Signal Processing 1 Discrete Signal Processing C.M. Liu Perceptual Lab, College of Computer Science National Chiao-Tung University http://www.cs.nctu.edu.tw/~cmliu/courses/dsp/ ( Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw

More information

Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities

Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities David P. McGovern, Andrew T. Astle, Sarah L. Clavin and Fiona N. Newell Figure S1: Group-averaged learning

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

J. Acoust. Soc. Am. 114 (2), August /2003/114(2)/1009/14/$ Acoustical Society of America

J. Acoust. Soc. Am. 114 (2), August /2003/114(2)/1009/14/$ Acoustical Society of America Auditory spatial resolution in horizontal, vertical, and diagonal planes a) D. Wesley Grantham, b) Benjamin W. Y. Hornsby, and Eric A. Erpenbeck Vanderbilt Bill Wilkerson Center for Otolaryngology and

More information

Improve localization accuracy and natural listening with Spatial Awareness

Improve localization accuracy and natural listening with Spatial Awareness Improve localization accuracy and natural listening with Spatial Awareness While you probably don t even notice them, your localization skills make everyday tasks easier: like finding your ringing phone

More information

Variation in spectral-shape discrimination weighting functions at different stimulus levels and signal strengths

Variation in spectral-shape discrimination weighting functions at different stimulus levels and signal strengths Variation in spectral-shape discrimination weighting functions at different stimulus levels and signal strengths Jennifer J. Lentz a Department of Speech and Hearing Sciences, Indiana University, Bloomington,

More information

Juha Merimaa b) Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany

Juha Merimaa b) Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany Source localization in complex listening situations: Selection of binaural cues based on interaural coherence Christof Faller a) Mobile Terminals Division, Agere Systems, Allentown, Pennsylvania Juha Merimaa

More information

FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED

FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED Francisco J. Fraga, Alan M. Marotta National Institute of Telecommunications, Santa Rita do Sapucaí - MG, Brazil Abstract A considerable

More information

Sensory Cue Integration

Sensory Cue Integration Sensory Cue Integration Summary by Byoung-Hee Kim Computer Science and Engineering (CSE) http://bi.snu.ac.kr/ Presentation Guideline Quiz on the gist of the chapter (5 min) Presenters: prepare one main

More information

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 6.3 PSYCHOLOGICAL EVALUATION

More information

OCCLUSION REDUCTION SYSTEM FOR HEARING AIDS WITH AN IMPROVED TRANSDUCER AND AN ASSOCIATED ALGORITHM

OCCLUSION REDUCTION SYSTEM FOR HEARING AIDS WITH AN IMPROVED TRANSDUCER AND AN ASSOCIATED ALGORITHM OCCLUSION REDUCTION SYSTEM FOR HEARING AIDS WITH AN IMPROVED TRANSDUCER AND AN ASSOCIATED ALGORITHM Masahiro Sunohara, Masatoshi Osawa, Takumi Hashiura and Makoto Tateno RION CO., LTD. 3-2-41, Higashimotomachi,

More information

The basic hearing abilities of absolute pitch possessors

The basic hearing abilities of absolute pitch possessors PAPER The basic hearing abilities of absolute pitch possessors Waka Fujisaki 1;2;* and Makio Kashino 2; { 1 Graduate School of Humanities and Sciences, Ochanomizu University, 2 1 1 Ootsuka, Bunkyo-ku,

More information

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3 THE INFLUENCE OF ROOM REFLECTIONS ON SUBWOOFER REPRODUCTION IN A SMALL ROOM: BINAURAL INTERACTIONS PREDICT PERCEIVED LATERAL ANGLE OF PERCUSSIVE LOW- FREQUENCY MUSICAL TONES Abstract David Spargo 1, William

More information

Tactile Communication of Speech

Tactile Communication of Speech Tactile Communication of Speech RLE Group Sensory Communication Group Sponsor National Institutes of Health/National Institute on Deafness and Other Communication Disorders Grant 2 R01 DC00126, Grant 1

More information

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED International Conference on Systemics, Cybernetics and Informatics, February 12 15, 2004 BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED Alice N. Cheeran Biomedical

More information

Sound Preference Development and Correlation to Service Incidence Rate

Sound Preference Development and Correlation to Service Incidence Rate Sound Preference Development and Correlation to Service Incidence Rate Terry Hardesty a) Sub-Zero, 4717 Hammersley Rd., Madison, WI, 53711, United States Eric Frank b) Todd Freeman c) Gabriella Cerrato

More information

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT PACS:..Hy Furuya, Hiroshi ; Wakuda, Akiko ; Anai, Ken ; Fujimoto, Kazutoshi Faculty of Engineering, Kyushu Kyoritsu University

More information

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds Signals, systems, acoustics and the ear Week 1 Laboratory session: Measuring thresholds What s the most commonly used piece of electronic equipment in the audiological clinic? The Audiometer And what is

More information

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception ISCA Archive VOQUAL'03, Geneva, August 27-29, 2003 Jitter, Shimmer, and Noise in Pathological Voice Quality Perception Jody Kreiman and Bruce R. Gerratt Division of Head and Neck Surgery, School of Medicine

More information

Representation of sound in the auditory nerve

Representation of sound in the auditory nerve Representation of sound in the auditory nerve Eric D. Young Department of Biomedical Engineering Johns Hopkins University Young, ED. Neural representation of spectral and temporal information in speech.

More information

Auditory System & Hearing

Auditory System & Hearing Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different

More information

On the improvement of localization accuracy with nonindividualized

On the improvement of localization accuracy with nonindividualized On the improvement of localization accuracy with nonindividualized HRTF-based sounds Catarina Mendonça 1, AES Member, Guilherme Campos 2, AES Member, Paulo Dias 2, José Vieira 2, AES Fellow, João P. Ferreira

More information

A Novel Software Solution to Diagnose the Hearing Disabilities In Human Beings

A Novel Software Solution to Diagnose the Hearing Disabilities In Human Beings A Novel Software Solution to Diagnose the Hearing Disabilities In Human Beings Prithvi B S Dept of Computer Science & Engineering SVIT, Bangalore bsprithvi1992@gmail.com Sanjay H S Research Scholar, Jain

More information

How to use AutoFit (IMC2) How to use AutoFit (IMC2)

How to use AutoFit (IMC2) How to use AutoFit (IMC2) How to use AutoFit (IMC2) 1 AutoFit is a beneficial feature in the Connexx Fitting Application that automatically provides the Hearing Care Professional (HCP) with an optimized real-ear insertion gain

More information

Topic 4. Pitch & Frequency

Topic 4. Pitch & Frequency Topic 4 Pitch & Frequency A musical interlude KOMBU This solo by Kaigal-ool of Huun-Huur-Tu (accompanying himself on doshpuluur) demonstrates perfectly the characteristic sound of the Xorekteer voice An

More information

THE PHYSICAL AND PSYCHOPHYSICAL BASIS OF SOUND LOCALIZATION

THE PHYSICAL AND PSYCHOPHYSICAL BASIS OF SOUND LOCALIZATION CHAPTER 2 THE PHYSICAL AND PSYCHOPHYSICAL BASIS OF SOUND LOCALIZATION Simon Carlile 1. PHYSICAL CUES TO A SOUND S LOCATION 1.1. THE DUPLEX THEORY OF AUDITORY LOCALIZATION Traditionally, the principal cues

More information

Models of Plasticity in Spatial Auditory Processing

Models of Plasticity in Spatial Auditory Processing Auditory CNS Processing and Plasticity Audiol Neurootol 2001;6:187 191 Models of Plasticity in Spatial Auditory Processing Barbara Shinn-Cunningham Departments of Cognitive and Neural Systems and Biomedical

More information

The development of a modified spectral ripple test

The development of a modified spectral ripple test The development of a modified spectral ripple test Justin M. Aronoff a) and David M. Landsberger Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 4aPPb: Binaural Hearing

More information

QuickTIPS REMOTE CONTROL TRULINK FOR APPLE DEVICES VOLUME CHANGES MEMORY CHANGES. PRODUCT AVAILABILITY: Halo iq, Halo 2, and Halo Devices

QuickTIPS REMOTE CONTROL TRULINK FOR APPLE DEVICES VOLUME CHANGES MEMORY CHANGES. PRODUCT AVAILABILITY: Halo iq, Halo 2, and Halo Devices QuickTIPS TRULINK FOR APPLE DEVICES PRODUCT AVAILABILITY: Halo iq, Halo 2, and Halo Devices For the most up-to-date information regarding Apple devices and ios versions that are supported for use with

More information

Publication VI. c 2007 Audio Engineering Society. Reprinted with permission.

Publication VI. c 2007 Audio Engineering Society. Reprinted with permission. VI Publication VI Hirvonen, T. and Pulkki, V., Predicting Binaural Masking Level Difference and Dichotic Pitch Using Instantaneous ILD Model, AES 30th Int. Conference, 2007. c 2007 Audio Engineering Society.

More information

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Filip M. Rønne, Søren Laugesen, Niels S. Jensen and Julie H. Pedersen

More information

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms 956 969 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms Jonas Braasch Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany

More information

Study of perceptual balance for binaural dichotic presentation

Study of perceptual balance for binaural dichotic presentation Paper No. 556 Proceedings of 20 th International Congress on Acoustics, ICA 2010 23-27 August 2010, Sydney, Australia Study of perceptual balance for binaural dichotic presentation Pandurangarao N. Kulkarni

More information

FIR filter bank design for Audiogram Matching

FIR filter bank design for Audiogram Matching FIR filter bank design for Audiogram Matching Shobhit Kumar Nema, Mr. Amit Pathak,Professor M.Tech, Digital communication,srist,jabalpur,india, shobhit.nema@gmail.com Dept.of Electronics & communication,srist,jabalpur,india,

More information

Application of Phased Array Radar Theory to Ultrasonic Linear Array Medical Imaging System

Application of Phased Array Radar Theory to Ultrasonic Linear Array Medical Imaging System Application of Phased Array Radar Theory to Ultrasonic Linear Array Medical Imaging System R. K. Saha, S. Karmakar, S. Saha, M. Roy, S. Sarkar and S.K. Sen Microelectronics Division, Saha Institute of

More information

How high-frequency do children hear?

How high-frequency do children hear? How high-frequency do children hear? Mari UEDA 1 ; Kaoru ASHIHARA 2 ; Hironobu TAKAHASHI 2 1 Kyushu University, Japan 2 National Institute of Advanced Industrial Science and Technology, Japan ABSTRACT

More information

Neural correlates of the perception of sound source separation

Neural correlates of the perception of sound source separation Neural correlates of the perception of sound source separation Mitchell L. Day 1,2 * and Bertrand Delgutte 1,2,3 1 Department of Otology and Laryngology, Harvard Medical School, Boston, MA 02115, USA.

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information