Chapter 1
Problem background

1.1 Overview of the proposed work

The proposed research consists of the construction and demonstration of a computational model of human spatial hearing, including long-term "precedence-related" effects. Using acoustic signals from the eardrums of a mannequin head as input, the proposed model will estimate the elevation and azimuth, relative to the head, of a sound source in space. This will be accomplished using maximum likelihood estimation based on interaural level and time differences (ILDs and ITDs). ILDs and ITDs will be weighted at "onsets" (as described in [24]) and will be calculated as functions of time for a number of "critical band" frequency regions. An old-plus-new heuristic will be employed to replicate the Franssen and other precedence-related effects.

The tools required to complete the research include signal processing and pattern recognition techniques, as well as an understanding of the cues humans use to localize sound events. The signal processing required for this project involves filter design, signal analysis and synthesis, and probabilistic estimation. The pattern recognition technique to be used is maximum likelihood estimation, but extensions may be made to incorporate other techniques. The maximum likelihood estimator "templates" will be derived from head-related transfer function data measured from a mannequin head.

The psychoacoustics behind the model are drawn from many sources. Good overviews of the subject include [2] and [22]. Much inspiration is also drawn from Bregman's Auditory Scene Analysis ([3]), especially in conjunction with the old-plus-new heuristic. Part of the processing is intended to model the "precedence effect" (or "law of the first wavefront") that has been demonstrated in human listeners, and is based on Zurek's model ([23], [24]).

The completed thesis will detail all stages of the model and will compare the performance of the model with that of human listeners. In particular, the "precedence effect" experiments reported in [23] will be performed. It is expected that the model's performance will bear a strong relation to human performance. Also, experiments will be performed to test the Franssen and other precedence-related effects. The form of the long-term model should suggest human psychoacoustic experiments which may be performed to test the model's generalization.

1.2 Introduction to spatial hearing

The ability of human listeners to determine the location of sound sources around them is not fully understood. It is widely accepted that several cues are used to estimate the location of sound sources in space. These cues include ILDs, ITDs, and the ratio of direct to reverberant energy, as well as additional cues provided by head movements. This research concentrates on ILDs and ITDs as the primary cues for localization.

In short, the ILD is the difference (in dB) of the signal levels measured at the two eardrums in response to a sound at a particular point in space. Similarly, the ITD is the difference in arrival time between the signals at the two eardrums. In this research, ILD and ITD are not single numbers for a given spatial location. Rather, they vary as a function of location and frequency. This distinction is important because the effects of head shadowing and pinna filtering are frequency dependent. Henceforth, the frequency-dependent ILD and ITD will be denoted the interaural spectrum. It is reasonable to assume that frequency analysis is performed at each ear individually prior to interaural spectrum estimation, since the human auditory periphery functions in this manner.

1.2.1 Head shadowing and the spherical head model

To a first approximation, ILDs and ITDs can be explained by a spherical head model. This model assumes that the head is a rigid sphere, isolated in space, with ideal pinpoint pressure sensors at the ends of a diameter representing the ears. By solving the acoustic wave equation for such a configuration, ITD and ILD can be determined as functions of frequency. This solution has been derived by many researchers; one such analysis is presented in [13].

When ITD and ILD are modelled in this way, "cones of confusion" arise. For a given interaural spectrum, there exists a locus of points for which ITDs and ILDs are identical. A cone (whose axis is a line drawn between the ears and whose origin is the center of the head) is a fair approximation of this surface. With this model, there are no cues available to resolve positional ambiguity on such a cone.

1.2.2 Head-related transfer functions

It is widely accepted that the pinnae (outer ears) and the listener's body provide additional cues that enable position on a cone of confusion to be resolved to some degree. These additional cues are due to acoustic reflections from parts of the pinnae or body, and result in systematic "distortions" of the interaural spectrum. A reasonable representation of the effect of body and pinnae is the so-called head-related transfer function (HRTF). A head-related transfer function is a measure of the acoustic transfer function between a point in space and a point in the ear canal of the listener. From these HRTFs, the interaural spectrum may be calculated by forming the complex ratio of the transfer functions for the two ears (a sketch of this calculation appears below). HRTFs vary considerably from person to person, but the types of distortions imparted by the pinnae follow some general patterns, so meaningful comparisons may be made.
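The following sketch illustrates how an interaural spectrum might be derived from a measured pair of head-related transfer functions, as just described. It is illustrative only, not code from the proposed model; the function name and FFT parameters are assumptions.

```python
import numpy as np

def interaural_spectrum(hrir_left, hrir_right, fs, n_fft=512):
    """Estimate ILD (dB) and ITD (seconds) as functions of frequency
    from a pair of head-related impulse responses.

    Illustrative sketch: forms the complex ratio of the two ear
    transfer functions, then reads ILD from its magnitude and ITD
    from its unwrapped phase.
    """
    H_l = np.fft.rfft(hrir_left, n_fft)
    H_r = np.fft.rfft(hrir_right, n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)

    # Complex ratio of the transfer functions for the two ears.
    ratio = H_l / (H_r + 1e-12)           # small epsilon avoids division by zero

    ild_db = 20.0 * np.log10(np.abs(ratio) + 1e-12)

    # Interaural phase, unwrapped and converted to a per-frequency delay;
    # a positive ITD here means the left-ear signal lags the right.
    phase = np.unwrap(np.angle(ratio))
    itd = np.zeros_like(phase)
    itd[1:] = -phase[1:] / (2.0 * np.pi * freqs[1:])   # skip DC

    return freqs, ild_db, itd
```

In the proposed model, the analogous quantities would be estimated per critical band from the filter bank outputs rather than from a single FFT of the impulse responses.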

1.2.3 Issues not addressed

In this thesis (and in most of the literature), at least two important localization cues used by human listeners are largely ignored. Because a slight shift in the position of the head (by rotation or "tilt") can alter the interaural spectrum in a predictable manner, human listeners may use head movements to resolve positional ambiguities on "cones of confusion". This fact has been largely ignored in the psychoacoustics literature for several reasons. First, it is intuitively obvious how head movements can resolve positional ambiguities. Second, human listeners are capable of auditory localization without head movements (which presents a more tractable research problem). Third, it is very difficult to construct a computational model including head movements and demonstrate that it works in any non-trivial situations. Also, there is little information on the forms of head movement employed by human listeners.

A second set of cues that is being ignored is the monaural spectral information embodied in the HRTFs. The location and shape of "ridges" and "notches" in the monaural spectra may be useful cues if assumptions are made about the spectrum of the sound source (such as local "flatness"). In the current research, no assumptions are made about the source spectrum, so monaural localization cues cannot be employed.

Another important cue that is being ignored is the ratio of direct to reverberant energy. This ratio can be used to estimate the relative distance of sources in an acoustic space. Because this research addresses only elevation and azimuth, this cue is not relevant. A complete model of localization would address all of these issues.

1.3 Previous work

1.3.1 Lateralization models ("1D" localization)

Some models of spatial hearing have previously been developed. Notably, most of these fall into the "one-dimensional localization" category. These models typically estimate only a single parameter, the subjective "lateralization" of a stimulus. In this context, lateralization means "left to right position" inside the head. Some models that fall into this category include [9], [16] and [18].

Lateralization models typically have several failings. Most of the experimental data that the models are designed to reproduce result from "unnatural" sounds. Lateralization models are most often used in conjunction with stimuli presented over headphones, with either uniform interaural spectra or laboratory-generated distortions of the interaural spectrum. These stimuli never occur in natural listening contexts, so the application of the experiments (and thus the models) to true "localization" is questionable. Also, few lateralization models attempt to model precedence-related effects. In [6], it is suggested that laboratory-generated distortions of ILDs and ITDs that do not arise in the "real world" may result in sound events that appear to occur inside the head. This view is also supported in [9].

1.3.2 Azimuth and elevation ("2D" localization)

Few efforts have been made to model localization in more than one dimension (which may be largely due to the general unavailability of HRTF data to the research community). However, at least two such models have been constructed.

In [21], the authors summarize a model in which ILD spectrum templates were constructed from HRTF data. Gaussian noise was added to these templates and the results fed to a pattern recognition algorithm in an attempt to determine whether sufficient information was available to distinguish position based on the ILD spectrum alone. The authors report that discrimination was possible, and that as the available bandwidth of the signal was reduced, localization ability was reduced in much the same manner as in humans.

One model that attempts to estimate both azimuth and elevation from binaural signals is described by Duda in [5]. Duda's model uses only interaural level differences, which is a valid choice because his HRTF data (SLV data from [19] and 1991 KEMAR data) contains little useful low-frequency information (an artifact of the measurement technique). As predicted in [2], the most useful ITD information is expected to be present at frequencies below approximately 1.4 kHz. Duda's model succeeds, with a maximum-likelihood estimator similar to the one proposed in this thesis, in the estimation of both azimuth and elevation with an accuracy that Duda likens to that of humans.

There are two notable shortcomings in Duda's model. First, the failure to use ITD information makes it difficult to realistically compare the model's performance with human performance. Second, Duda uses head-related impulse responses (HRIRs) as both training and test data, rather than attempting to use "natural" signals. Thus, he shows that the HRIRs contain information that can resolve positional ambiguity, but he fails to show how this information can be exploited in a natural listening environment. Neither model makes any attempt to model precedence-related effects.

1.3.3 Precedence-related effects

In a reverberant environment, "direct" sound from a sound source arrives at a given point in space slightly before energy reflected from various surfaces in the acoustic space (this is simply due to the geometric fact that the shortest distance between two points is a straight line). For evolutionary reasons, it may have been important for localization to be based on the direct sound, which generally reveals the true location of the sound source ([24]). Regardless of the origin of the precedence effect, it is true that for purposes of localization, the interaural spectrum is weighted more heavily by the auditory periphery at the onsets of sounds. Quantitative measures of this effect are presented by Zurek in [23]. The experimental measurements in [23] and the model described in [24] are the basis of the precedence effect model employed in this thesis.

The precedence effect, as it has just been described, is a relatively short-term effect. The suppression mechanism described in [24] operates on a time scale of a few milliseconds. However, localization in a room is fairly robust in the presence of reverberation, which operates on a time scale of seconds. To deal with such long-term effects, the localization mechanism must be extended. Several long-term precedence-related effects have been noted. For example, when only the sharp onset of a tone is played through one loudspeaker, while the remainder of the tone is presented through a second loudspeaker, the tone is localized at (or near) the first loudspeaker. This is an example of the Franssen effect, and it is robust in the presence of head movements and for long-duration tones.

1.4 Motivation and goals

The primary goal of this thesis work is to present a localization model that analyzes "natural" binaural signals, estimating both azimuth and elevation. As in the human auditory periphery, onsets are strongly weighted in the determination of the interaural spectrum. The weighting of onset information is intended to model the precedence effect.

This weighting, combined with an old-plus-new heuristic, may allow the model to distinguish the actual location of a sound source in the presence of acoustic reflections and reverberation, and may offer an explanation of the Franssen and other long-term precedence-related effects.

Another important aspect of the proposed model is that it divides up the frequency spectrum, which allows the estimation of an interaural spectrum. The motivation behind this frequency analysis is physiologically driven.

The proposed model is intended to be able to resolve "cone of confusion" ambiguities for a sound source in a reverberant environment. It should be able to "localize" many types of sound sources robustly. Some of these sources might include the human voice (sharp attacks), musical instruments (relatively soft attacks), and "noise" signals. It will be interesting and instructive to see how the model resolves "paradoxical" signals with conflicting or unrealistic binaural cues. As the model develops, its performance will be gauged against that of human listeners. The specific comparisons planned will be discussed in a later section.

Chapter 2
Procedure

2.1 Pieces of the model

2.1.1 Overview of the model

The model described in this research consists of several independent pieces that may be separately considered and implemented. Figure 2-1 is a block diagram of the proposed model.

[Figure 2-1: Block diagram of the proposed model. Signals from the left and right ears pass through a filter bank, intensity envelope computation, and onset detection; onset-related "suppression" (the "precedence effect" model) feeds interaural spectrum estimation, which in turn feeds position estimation. All signal lines (except the initial ear input) consist of multiple channels.]

2.1.2 Filter bank

The "front end" of the model is an analysis filter bank. Currently, a "gammatone" filter bank ([17]) modelled after the basilar membrane is being employed. Other filter banks, possibly including generalized constant-Q, and the short-time Fourier transform are also being considered. The author believes that the exact form of the filter bank is not of paramount importance. Rather, the important feature of all of the filter banks mentioned is that they divide up the frequency spectrum into portions that can be individually analyzed.
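As an illustration of the kind of front end just described, the sketch below builds a bank of gammatone filters by direct FIR approximation of their impulse responses. It is a sketch only, assuming a 4th-order gammatone with ERB-scale bandwidths; the actual implementation (e.g., following [17]) may differ.

```python
import numpy as np
from scipy.signal import fftconvolve

def gammatone_bank(signal, fs, center_freqs, dur=0.05):
    """Filter `signal` through a bank of 4th-order gammatone filters.

    Each channel's impulse response is
        g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t),  n = 4,
    with bandwidth b tied to the ERB scale (Glasberg & Moore).
    Returns an array of shape (n_channels, len(signal)).
    """
    t = np.arange(0, dur, 1.0 / fs)
    outputs = []
    for fc in center_freqs:
        erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)   # equivalent rectangular bandwidth
        b = 1.019 * erb
        g = t**3 * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
        g /= np.sqrt(np.sum(g**2))                 # normalize channel energy
        outputs.append(fftconvolve(signal, g)[:len(signal)])
    return np.array(outputs)

# Example: a 20-channel bank spaced between 100 Hz and 8 kHz.
fs = 44100
cfs = np.geomspace(100, 8000, 20)
x = np.random.randn(fs)                            # one second of noise
bands = gammatone_bank(x, fs, cfs)
```

Whatever filter bank is ultimately chosen, the downstream stages require only that each channel isolate a restricted frequency region that can be analyzed individually.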

2.1.3 Intensity envelope

The second stage of the proposed model is the calculation of an intensity envelope for each filter bank channel. The resulting envelope is used by the onset detector to determine the appropriate "suppression" for the signal as a function of time ([24]) and is employed in the long-term precedence effect model as part of the old-plus-new heuristic. The intensity envelope may also be used to estimate interaural timing information, which some authors have suggested may be an important localization cue at frequencies above approximately 1.4 kHz ([2]). The exact details of the intensity envelope processing have not been determined at this time.

2.1.4 Interaural spectrum

The interaural spectrum is formed from the outputs of the filter bank and the intensity envelope processor. As specified earlier, ILD and ITD will be estimated as functions of frequency. The details of this portion of the model have not yet been determined and are likely to change as the project progresses.

2.1.5 Precedence effect model

Long-term effects such as the Franssen effect might be explained by an old-plus-new heuristic with some memory decay related to the time of the last "onset". It is expected that a simple mechanism may be constructed that will duplicate many different effects, including localization in the presence of reverberation and the Franssen effect. Additionally, it is expected that the same mechanism will exhibit performance similar to humans in traditional "precedence effect" experiments (see [23]). The development and implementation of the precedence effect model is expected to form a significant portion of the total work. The form of the model is expected to be similar to the suppression mechanism described in [24], coupled with an old-plus-new heuristic for long-term suppression.

2.1.6 Maximum likelihood position estimation

The maximum likelihood position estimator uses the output of the interaural spectrum estimator, weighted by onset-related suppression, as its feature vector. (Note that the feature vector varies with time, and thus the outputs of the estimator will also be time-varying.) Using a central limit theorem argument, one can argue that the estimated interaural spectrum will have a vector Gaussian distribution, centered on means that can be estimated directly from the HRTF data. If the variance of the measurements can be adequately estimated, the maximum likelihood estimator can use that information to compare the feature vectors in a Mahalanobis distance sense. In effect, this implements automatic "optimum" weighting of the "reliable" parts of a signal. The likely result is that ILD information is weighted more heavily in higher frequency ranges, and ITD information is weighted more heavily in lower frequency ranges. It will be interesting and informative to compare the features that the model favors with those that human listeners seem to favor.

One of the principal advantages of using the Gaussian distribution assumption with a maximum likelihood estimator is that the estimation can be performed with simple matrix multiplications. A minimal sketch of such an estimator follows.
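The sketch below is a minimal rendering of the Gaussian maximum likelihood estimator just described, assuming diagonal covariances and a discrete set of candidate positions. The names and the per-band reliability-weighting scheme are illustrative assumptions, not the thesis's final design.

```python
import numpy as np

def ml_position_estimate(feature, means, variances, weights=None):
    """Pick the most likely source position for one feature vector.

    feature   : observed interaural spectrum, shape (d,)
    means     : template mean vectors from HRTF data, shape (n_pos, d)
    variances : per-feature variances, shape (n_pos, d) (diagonal covariance)
    weights   : optional per-feature reliabilities in [0, 1], e.g. derived
                from onset-related suppression; unreliable bands are ignored.

    Under a diagonal Gaussian model, maximizing the likelihood is equivalent
    to minimizing the Mahalanobis distance plus a log-variance penalty.
    """
    if weights is None:
        weights = np.ones_like(feature)
    diff = feature - means                      # (n_pos, d)
    mahal = (diff**2) / variances               # per-feature Mahalanobis terms
    neg_log_lik = np.sum(weights * (mahal + np.log(variances)), axis=1)
    return np.argmin(neg_log_lik)               # index of best candidate position
```

Because the negative log-likelihood is a sum of independent per-band terms, zeroing the weights of bands that currently carry no onsets yields exactly the "smaller" estimate discussed next.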

In a small temporal window, if there are onsets in only a small subset of the frequency bands, a meaningful estimate can be derived from only the available information (through a smaller matrix multiplication). Another result of this is that multiple source locations can be estimated from the same source signal by dividing up the spectrum appropriately.

There are still several issues that will require serious consideration in this portion of the model. First and foremost, it must be determined whether or not the Gaussian model of the interaural spectrum is a reasonable assumption. A method of extracting the mean feature vectors from the HRTF data is needed. Also, a reasonable method for estimating the variance of the various parameters must be constructed. Finally, in order for the model to "graduate" from discrete to continuous estimation, a method of interpolating between mean vectors must be determined. This might possibly be done with radial basis functions, but that is beyond the scope of the problem at this point.

2.2 Test signals

The inputs to the proposed localization model are the acoustic signals measured at the eardrums of a "dummy-head microphone". It is quite difficult to directly obtain test signals that cover a broad range of locations around the head, so another approach has been taken. Using head-related impulse responses measured for a dummy-head microphone, it is possible to synthesize binaural signals for all measured locations around the head.

The principal data requirement for this thesis is a set of head-related transfer function measurements. To this end, a set of HRTFs (actually HRIRs) of a KEMAR mannequin has been measured (see [10]). The data set contains the impulse responses at each eardrum for 710 sound source positions on a 1.4 meter radius sphere around the mannequin head (at elevations between -40 and 90 degrees). These impulse responses can be used both for synthesis of test signals and for generating mean vectors and variance estimates of the interaural spectrum for use in the maximum-likelihood estimator.

The synthesis process at its simplest consists only of convolution of a monophonic source signal with the appropriate HRIRs. The resulting signal is akin to a sound source presented in an anechoic environment, and thus does not sound very "natural". To overcome this problem, it is a simple matter to introduce reflections and reverberation to the signal. This helps to "externalize" the sound and makes the signal sound more natural (see [6] and [9]). Additionally, variation of the direct-to-reverberant level ratio results in variation of perceived sound source distance ([6]). While the proposed model does not attempt to determine sound source distance, this might be a useful extension to the current research.

An application called "space" has been developed to synthesize test signals using the KEMAR HRIRs. space allows the user to "spatialize" monophonic sound files by convolution with HRIRs and the addition of reverberation. It is designed to make it a simple matter to mix together several "spatialized" sounds and output a single sound file with the resulting signal. The resulting stereophonic signal is sampled at 44.1 kHz and is intended to be perceptually similar to the signal that would arrive at the eardrums of KEMAR from appropriately positioned sources. Test signals will be synthesized from speech, music, and noise signals. Possibly, "paradoxical" signals will also be tested.
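A minimal sketch of the core of such a spatialization process is given below. It assumes the KEMAR HRIRs are available as arrays and fakes diffuse reverberation with decaying noise tails; the actual space application may work quite differently.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_l, hrir_r, fs=44100, direct_to_reverb_db=10.0):
    """Synthesize a binaural test signal from a monophonic source.

    mono           : monophonic source signal
    hrir_l, hrir_r : head-related impulse responses for the desired position
    Returns an (n, 2) stereo array sampled at `fs`.
    """
    # Direct path: convolve the source with the left/right HRIRs.
    left = fftconvolve(mono, hrir_l)
    right = fftconvolve(mono, hrir_r)

    # Crude diffuse reverberation: decaying, decorrelated noise tails
    # (an illustrative stand-in for measured room reflections).
    tail = np.exp(-np.arange(int(0.5 * fs)) / (0.1 * fs))
    gain = 10.0 ** (-direct_to_reverb_db / 20.0)
    rev_l = gain * fftconvolve(mono, tail * np.random.randn(tail.size))
    rev_r = gain * fftconvolve(mono, tail * np.random.randn(tail.size))

    n = max(left.size, right.size, rev_l.size)
    out = np.zeros((n, 2))
    out[:left.size, 0] += left
    out[:right.size, 1] += right
    out[:rev_l.size, 0] += rev_l
    out[:rev_r.size, 1] += rev_r
    return out / np.max(np.abs(out))    # normalize to avoid clipping
```

Convincing externalization would of course call for measured room responses rather than a noise tail; the point is only that, at its core, spatialization reduces to convolution and mixing.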

2.3 Comparison with published data

Once the model is complete, its performance will be gauged against human performance. The precedence effect data obtained in [23] may be directly compared with data obtained from this localization model under "identical" experimental circumstances. The localization performance of the model will be examined for various reverberation levels, and the relationship of localization blur to reverberation level will be quantified. Signals that elicit the Franssen effect in human listeners will be presented to the model, and the presence of the effect in the model's "perception" will be confirmed. Other long-term precedence-related effects will also be tested, in the hope that further precedence effect experiments for human listeners will be suggested by the model's performance.

2.4 Time schedule

The proposed time schedule for this project is shown in Table 1. It is, of course, subject to large variations as parts of the model are developed, enhanced, and fit together.

    Description                                                   Estimated time
    ------------------------------------------------------------  --------------
    KEMAR HRTF measurements                                       60 hours
    Development of signal synthesis software                      40 hours
    Synthesis of test signals                                     10 hours
    Filter bank implementation                                    10 hours
    Intensity envelope implementation                             20 hours
    Precedence effect development and implementation              80+ hours
    Interaural spectrum estimation implementation                 40 hours
    Maximum likelihood estimator development and implementation   40 hours
    Replication of data in [23]                                   40 hours
    Other comparisons with human data                             20+ hours
    ------------------------------------------------------------  --------------
    Total                                                         360+ hours

Table 1: Time schedule for thesis project

Chapter 3
Equipment and facilities

The computing resources required to complete this thesis include a UNIX workstation equipped with the MATLAB software package and C development tools, which will be provided by the Machine Listening Group of the MIT Media Laboratory under the direction of Professor Barry Vercoe.

Bibliography

[1] Jens Blauert. "Some Consideration of Binaural Cross Correlation Analysis". Acustica, 39(2):96–104, 1978.
[2] Jens Blauert. Spatial Hearing. MIT Press, Cambridge, MA, 1983.
[3] Albert S. Bregman. Auditory Scene Analysis. MIT Press, Cambridge, MA, 1990.
[4] H. Steven Colburn and Nathaniel I. Durlach. "Models of Binaural Interaction". In Handbook of Perception, volume IV, chapter 11, pages 467–518. Academic Press Inc, 1978.
[5] Richard O. Duda. "Estimating azimuth and elevation from the interaural intensity difference". Unpublished draft of a paper to be submitted to J. Acoust. Soc. Am., September.
[6] N. I. Durlach, A. Rigopulos, X. D. Pang, W. S. Woods, A. Kulkarni, H. S. Colburn, and E. M. Wenzel. "On the Externalization of Auditory Images". Presence, 1(2):251–257, Spring 1992.
[7] Nathaniel I. Durlach and H. Steven Colburn. "Binaural Phenomena". In Handbook of Perception, volume IV, chapter 10, pages 365–466. Academic Press Inc, 1978.
[8] Daniel P. W. Ellis and Barry L. Vercoe. "A Perceptual Representation of Sound for Auditory Signal Separation". Presented at the 123rd meeting of the Acoustical Society of America, Salt Lake City, May 1992.
[9] Werner Gaik. "Combined evaluation of interaural time and intensity differences: Psychoacoustic results and computer modeling". J. Acoust. Soc. Am., 94(1):98–110, 1993.
[10] Bill Gardner and Keith Martin. "HRTF Measurements of a KEMAR Dummy-Head Microphone". MIT Media Laboratory Perceptual Computing Technical Report #280, 1994.
[11] William Morris Hartmann. "Localization of a source of sound in a room". In AES 8th International Conference, 1990.
[12] William Morris Hartmann and Brad Rakerd. "Localization of sound in rooms IV: The Franssen effect". J. Acoust. Soc. Am., 86(4):1366–1373, 1989.
[13] George F. Kuhn. "Physical Acoustics and Measurements Pertaining to Directional Hearing". In W. A. Yost and G. Gourevitch, editors, Directional Hearing, chapter 1, pages 3–25. Springer-Verlag, New York, 1987.
[14] A. V. Oppenheim and R. W. Schafer. Discrete-Time Signal Processing. Prentice-Hall, Englewood Cliffs, NJ, 1989.
[15] Athanasios Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, NY, third edition, 1991.
[16] Trevor M. Shackleton, Ray Meddis, and Michael J. Hewitt. "Across frequency integration in a model of lateralization". J. Acoust. Soc. Am., 91(4):2276–2279, April 1992.
[17] Malcolm Slaney. Auditory Toolbox. Apple Technical Report #45, Apple Computer, Inc., 1993.
[18] R. M. Stern, A. S. Zeiberg, and C. Trahiotis. "Lateralization of complex binaural stimuli: A weighted-image model". J. Acoust. Soc. Am., 84(1):156–165, 1988.
[19] F. L. Wightman and D. J. Kistler. "Headphone simulation of free-field listening. I: Stimulus Synthesis". J. Acoust. Soc. Am., 85(2):858–867, 1989.
[20] Frederic L. Wightman and Doris J. Kistler. "Hearing in three dimensions: Sound localization". In AES 8th International Conference, 1990.
[21] Frederic L. Wightman, Doris J. Kistler, and Mark E. Perkins. "A New Approach to the Study of Human Sound Localization". In W. A. Yost and G. Gourevitch, editors, Directional Hearing, chapter 2, pages 26–48. Springer-Verlag, New York, 1987.
[22] W. A. Yost and G. Gourevitch, editors. Directional Hearing. Springer-Verlag, New York, 1987.
[23] P. M. Zurek. "The precedence effect and its possible role in the avoidance of interaural ambiguities". J. Acoust. Soc. Am., 67(3):952–964, March 1980.
[24] P. M. Zurek. "The Precedence Effect". In W. A. Yost and G. Gourevitch, editors, Directional Hearing, chapter 4, pages 85–105. Springer-Verlag, New York, 1987.
[25] P. M. Zurek. "A note on onset effects in binaural hearing". J. Acoust. Soc. Am., 93(2):1200–1201, 1993.
