Angular Resolution of Human Sound Localization


By Simon Skluzacek

A senior thesis submitted to the Carthage College Physics & Astronomy Department in partial fulfillment of the requirements for the Bachelor of Arts Degree in Physics

Carthage College, Kenosha, Wisconsin
May 15, 2012

Abstract

Sound localization is very important to the acoustic community because of its role in replicating audio environments. Surround sound systems, theaters, and audio engineers all use localization techniques to give a listener the sensation of hearing realistic sounds. The purpose of this experiment was to design a procedure to test and model the localization accuracy of human hearing in the azimuthal plane. Although more testing needs to be completed to reduce the large error caused by the small number of trials, trends have emerged from the data showing that localization is more accurate in front of the listener than to the sides. The data show that the variation in localization has approximately ±15° of error directly in front and increases to ±36° of error at a listener angle of 60° relative to the speaker location. Another trend shows that at extremely high frequencies, localization averages decrease: for frequencies of 300 Hz, 1 kHz, and 3 kHz, localization averages were approximately 20° ± 4°, but for 10 kHz the average was only 15° ± 1.3°. Based on these trends, a qualitative model and its features have been constructed.

1 Introduction

In this paper, I will discuss sound localization and its angular resolution when compared to actual source positions and a calculated model. Sound localization, simply defined, is the ability of the brain to interpret the position of a sound using tools like interaural time differences (ITD), interaural intensity differences (ILD), and head-related transfer functions (HRTF).

1.1 Interaural Time and Intensity Differences

Interaural time and intensity differences describe the changes in phase and decibel level, respectively, of sound waves between the two ears of a listener. Sound waves traveling through a medium toward a person will most likely reach each ear at different times and intensities, and it is with these differences that a person can determine where a sound is coming from, or should be coming from. Fig. 1 illustrates the path of sound as it reaches the listener from an angle.

Figure 1: A diagram showing the path differences from ear to ear as a sound wave approaches a listener at an angle. [2]

These differences also offer some physical insight into how sound interacts with our heads.
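The path difference sketched in Fig. 1 is often approximated with the spherical-head (Woodworth) model. The thesis does not use this model directly, so the sketch below is illustrative only; the head radius of 8.75 cm and speed of sound of 343 m/s are assumed values:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, dry air at ~20 °C (assumed)
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def woodworth_itd(azimuth_deg):
    """Approximate interaural time difference (seconds) for a distant
    source at the given azimuth, using the spherical-head model:
    ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for angle in (0, 15, 30, 60, 90):
    print(f"{angle:2d} deg -> ITD {woodworth_itd(angle) * 1e6:5.0f} us")
```

At 90° this gives roughly 650 µs, consistent with the sub-millisecond delays that ITD processing operates on.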

When sounds originate from in front of a listener, there will be little to no difference in phase or intensity level. However, as the sound origin moves and the angle between the origin and listener increases, a shadowing effect can occur as the head blocks the sound before it reaches the opposite ear. Studies have shown this shadowing effect can reduce the decibel level by 20 dB from ear to ear when sounds originate at 90° from the listener. This reduction in intensity has been demonstrated for frequencies above 3.5 kHz, but frequencies below this show less variation in intensity, and therefore another method must be used. [9]

ITDs represent another method that listeners use to localize sound sources. Just as there are intensity differences between the ears, many times there are also time differences. Differences in time mean differences in phase, and it is with these small phase changes that listeners are able to localize sound sources. This method is most useful for frequencies below 1.5 kHz, whose wavelength (approximately 23 cm) equals the average width of a human head. Anything above this frequency will generally have noticeable intensity differences, which is where ILDs will be the most effective method for localization. A computer model replicating the cochlea was also created, and the data gathered from it showed that higher-frequency stimuli were localized using ILDs over ITDs. [3]

1.2 Head-Related Transfer Function

Head-related transfer functions are a more in-depth look at localizing sounds. HRTFs describe how reflections off of body features, as well as ear components, affect sound waves prior to entering the inner ear for signal processing. Just as each person has an individual ear and body composition, each person's HRTF is likewise individual, which can make HRTFs costly and difficult to determine. Therefore, measured generic HRTFs are used by those who wish to replicate spatial audio sources. For example, with HRTFs, headphones can reproduce the feeling of distance, as well as changes

in azimuthal and elevation angle of audio sources. [2]

1.3 Monaural and Binaural Localization

It is also important to consider that these differences are found using binaural localization, or using both ears. Research has shown that using both ears to localize sounds, even at 90° positions, plays a significant role in a person's ability to localize. Monaural localization studies have shown that if one ear is blocked, or if a person is deaf in one ear, their ability to localize to the side is much more accurate than localization in front of them. For instance, if a person were deaf in their left ear and a sound were played at 90° on their right, a change of 4° would be necessary for the listener to notice a change in localization. When the sound was played directly at the deaf ear, a 22° shift was necessary to notice a change. All other localizations required a 5-10° change to be noticed. Looking at binaural localization, it was found that with the combination of both ears, localization was best at 0° and 180°. However, at 90° the localization was about the same as with monaural localization. This is due mainly to the fact that at 90° there is really only one ear involved in the localization process, since the head shadows the other ear from receiving a direct sound. [5]

1.4 Precedence Effect

There is another special case in sound localization called the precedence effect. This effect deals with small time differences between the two ears, since ITDs are able to register delays of less than a millisecond. If two sources were emitting sound to a centered listener, and delays of 0-1 ms were gradually applied to one speaker, the listener would start by localizing a central phantom

source between the two speakers and would slowly begin to localize the sound source toward the speaker without the delay. As delay times increase from 1-5 ms, the listener continues to localize the sound source at the location of the speaker without delay and begins to discount the information gathered from the delayed speaker. [7] At around 30 ms for speech, the precedence effect breaks down and the listener begins to hear two completely different sources. [8]

1.5 Reflections and Localization

Figure 2: A simple illustration showing the reflected sound waves and their localization points. Not to scale.

Sound localization is very important when playing audio to reproduce certain environments. Fig. 2 shows a simple acoustic situation that a person may face when trying to complete such a task. When a listener is at Position 1, the sound waves from the speaker reach the head before the reflected waves from Wall 2. Since there is a small delay between the two sources at the head, as well as intensity differences, the listener would interpret the source to be at Location 1 based on ITDs and ILDs. At Position 2, the initial sound waves

from the speaker would not reach the listener's ears because Position 2 is outside the field angle of the speaker. Instead, what would be heard at Position 2 are the reflections of the sound waves from Wall 2 and Wall 1. Since the reflected wave from Wall 2 would reach Position 2 before the reflected wave from Wall 1, the listener would localize the sound source toward Location 2. In addition, Wall 2 would reflect a wave generated more toward the center of the speaker, whereas Wall 1 would reflect a weaker wave generated toward the outer part of the speaker. For this reason, ILDs are just as important as ITDs in this situation. This understanding of delayed reflected waves and intensity differences was used with church choirs as early as the 1500s to create distant but audible sounds for musical compositions. From that point on, composers and architects began including concepts of aural spatial presentation in their music and designs to create works that are acoustically pleasing. [1]

1.6 Purpose

The purpose of this experiment is to design a procedure that measures the angular resolution of human sound localization in the azimuthal plane. Another purpose is to compare the data collected with published data found through research. A final purpose is to construct a qualitative model for localization error.

2 Experimental Design

The setup for this experiment includes a small speaker for audio playback. The speaker used was an Altec Lansing ACS5, which has a frequency range of 80 Hz-20 kHz. The playback speaker was set on a stand that placed the center of the speaker at the ear height of a seated listener, approximately 42 in. The sound played for the listener was a pure

sine wave of 100 Hz, 300 Hz, 1 kHz, 3 kHz, or 10 kHz. These frequencies cover a broad portion of the range of human hearing and were expected to yield different results based on the information gathered on ITDs. Each test consisted of one frequency being played through the audio playback program QLab, which allows all frequencies to be set to the same output level. The frequency played faded from an inaudible volume to the 0 dB setting over 0.5 s, sustained for 3 s, and faded out over 0.5 s. By applying fades to the sound, the effect of sudden starts and stops in the speaker was eliminated. The azimuthal angles used in the experiment were spaced in 15° increments, up to a total displacement of 60°. Fig. 3 shows the experimental setup, as well as the angle definitions for the localization errors discussed in the results section.

Figure 3: A diagram of the experimental setup and angle definitions. Note that the listener moves around the speaker's location to the designated angle locations.

As seen in Fig. 3, the distance between the listener and the speaker was set at 5 m to ensure that a full wavelength of the sound would occur before reaching the listener. The tests

performed consisted of playing one frequency five times at each listener angle. The order of the listener angles was randomized using a list randomizer to prevent the listener from forming patterns across the tests.

2.1 Initial Design

The initial experimental procedure used a laser pointer attached to a helmet to measure degree changes. To record the laser points, we used a board that slid into the front of the speaker stand, with graph paper attached for data collection. At first this method seemed sufficient to cover the full range of localization errors, but after some initial trials we realized that the sheet did not cover a large enough area to capture all localization points. We then used a larger board to collect data on, but for many of the trials the resulting angle measurement was too large to capture on the board, and we had to rely on the wall behind it to capture distance data. The first testing location was a studio theatre that had ambient noise from water pipes running through it. For this reason, we decided to use only one test subject in that location. On top of this intrusion of sound, the laser pointer ran out of battery after about 1.5 hours of testing. To eliminate the water pipe problem, we relocated to a larger classroom. After collecting data for about 2 hours, until the battery in the laser pointer ran out again, we began questioning the benefit of randomizing all aspects of the experiment. At that point we had randomized both listener angle and frequency, making for a total of 225 trials to be completed. We had only completed about 50 trials after 2 hours and did not have enough data to produce results of statistical value. After talking with the test subjects, we also realized that randomizing frequencies was quite straining on their ears, and it prevented us from gathering enough data for each frequency. To resolve these issues, we tested only one frequency at a time, resulting in approximately 1.5 hours of testing each time to collect 45 data points.
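The randomized trial order described above can be sketched as follows. The angle list and counts match the setup described (nine listener positions at 15° increments, five presentations each, one frequency per session), though the particular list randomizer used in the experiment is not specified:

```python
import random

ANGLES = [-60, -45, -30, -15, 0, 15, 30, 45, 60]  # degrees, 15 deg steps
TRIALS_PER_ANGLE = 5

def session_order(seed=None):
    """Randomized listener-angle order for one single-frequency session:
    five presentations per angle, 45 trials total."""
    rng = random.Random(seed)
    order = [a for a in ANGLES for _ in range(TRIALS_PER_ANGLE)]
    rng.shuffle(order)
    return order

order = session_order(seed=1)
print(len(order))  # 45 trials per frequency
```

Shuffling a pre-built list (rather than drawing angles at random each trial) guarantees exactly five presentations per angle while still hiding the order from the listener.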

2.2 Current Design

For the tests, nine chairs were used for ease of moving the listener from location to location. The listener was guided around the space and spun around during each transition so that their knowledge of their position relative to the speaker was eliminated. The setup shown in Fig. 3 was reconstructed on a patio outside to eliminate the chance of wave reflections interfering with the data. (See the Appendix for photographs of the testing location and setup.) The orientation of each chair was the same as at the 0° point, so that the listener would be perpendicular to the plane of the speaker. Note that for each trial, the speaker itself was rotated so that its center pointed at the listener's location to maximize the amount of sound that reached the listener. This rotation also ensured that a consistent sound field reached the listener for each trial, so that the only contributing factor to localization error was the listener angle relative to the speaker. Prior to the experimentation, the listener was blindfolded and outfitted with a hat that had a compass attached to it. This compass was used to measure the change in angle from where the listener was oriented at the start to the point they faced after hearing the sound. For each trial, the listener was guided to the test location and seated in the chair. After the speaker was rotated as described above, the sound was played. Once the sound had finished fading out, the listener rotated their head to point in the direction where they localized the sound. The angle measurement change was then recorded, and the process repeated for each trial. One additional test was completed using a rolling office chair instead of individual chairs at each location. This technique aimed to eliminate possible localization error due to tension applied to the neck when turning toward greater degree differences. The listener remained seated during this entire test and was rolled to each location for the trials.
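The compass readings can be reduced to a signed degree difference with a simple wrap-around calculation; this is an illustrative sketch, not the exact reduction used in the thesis:

```python
def degree_difference(start_heading, end_heading):
    """Signed change in heading, in degrees, mapped to (-180, 180]:
    positive values mean the listener turned clockwise (to the right)."""
    diff = (end_heading - start_heading) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

# A listener starting at a 350 deg heading who turns to 10 deg moved
# +20 deg, not -340 deg, despite the compass wrap-around.
print(degree_difference(350, 10))  # 20.0
```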

3 Results

The results show strong evidence of a relation between increased listener angle and increased degree difference, despite the large error due to the small number of trials. Fig. 4 shows the data collected for one test subject at 3000 Hz. As seen in the graph, when the listener angle increases, the degree difference between the speaker's location and where the listener localizes the sound also increases. What is unique about this trend is that the listener seems to undershoot the localization of the sound source, meaning they do not turn past the speaker location when localizing. This suggests that the listener favors the ear that is closer to the direct sound source. The same trend is visible in Fig. 5, which shows results from another subject at 10 kHz. In addition to this trend being consistent between the two graphs, it is important to notice that the degree differences are closer to 0° at 10 kHz. A possible explanation for this increased sensitivity is that shorter wavelengths are absorbed when they interact with the head, producing a decibel loss at the opposite ear. With this ILD and shadowing effect, the brain can localize sounds more accurately.

Figure 4: A plot of listener angle vs. degree difference for one test subject at 3000 Hz. Positive listener angles mean that the speaker is located to the listener's right, and positive angle differences show that the listener localized left of the speaker's location. A table of the data can be found in the appendix. Error bars represent error in testing.

Figure 5: A plot of listener angle vs. degree difference for a second test subject at 10 kHz. This graph shows a similar linear trend to that in the 3000 Hz test, but with less degree difference in localization. A table of the data can be found in the appendix. Error bars represent error in testing.

Fig. 6 helps illustrate the strong impact of ILDs on localization. The graph shows the average absolute degree differences for all frequencies, organized by listener angle. When the phrase "average absolute degree difference" is used, it means that the absolute value of each data point was taken, making all values positive, and these absolute values were then averaged, allowing for easier comparison of degree differences. Fig. 6 shows that, typically, as listener angle increases, so does the degree difference. The binaural localization experiments performed by Starch [5] also note this same kind of influence. The graph also shows the tendency that as frequency increases, independent of listener angle, localization becomes more accurate.

Figure 6: A plot of the average of all absolute angle differences, organized by listener angle and frequency. Error bars represent error in testing.

Fig. 7 displays the average absolute degree difference for all angles combined. The purpose of this graph is to show whether, regardless of listener angle, frequency played a role in degree differences. As seen in the graph, as the frequency becomes higher-pitched, the degree

difference in localization tends to be reduced. This result could be explained by the dominance of ILDs for higher-frequency sounds in the localization process.

Figure 7: A plot of the combined average of absolute angle differences. This graph shows the trend that as frequency increases, the degree difference in localization becomes smaller. Error bars represent error in testing.

Fig. 8 shows one more trend supported by our collected data. Theoretically, if there were no bias from the left or right ear, these averages should be approximately 0°. By looking at Fig. 8 and Fig. 9, we can see that this is nearly true. In addition, the standard deviation increases as the listener angle increases, as we expected. Previous research and experiments support the concept that as listener angle increases, the localization error also increases. This graph also supports the concept that, regardless of frequency, localization error increases as listener angle increases.
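The "average absolute degree difference" and one-standard-deviation spread used in Figs. 6-9 can be computed as below. The sample values are invented for illustration and are not the thesis data:

```python
from statistics import mean, stdev

def summarize(differences):
    """Average absolute degree difference and the sample standard
    deviation for a set of signed localization errors in degrees
    (per the results, the sample mean sits close to the 0 deg ideal)."""
    avg_abs = mean(abs(d) for d in differences)
    spread = stdev(differences)
    return avg_abs, spread

# Hypothetical signed errors (degrees) for one listener angle:
errors = [12.0, -8.0, 20.0, -15.0, 5.0]
avg_abs, spread = summarize(errors)
print(f"average absolute difference: {avg_abs:.1f} deg, 1 sigma: {spread:.1f} deg")
```

Taking absolute values before averaging keeps left and right misses from canceling, which is why the averaged figures use this quantity rather than the raw signed mean.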

Figure 8: A plot of all raw data taken, for all frequencies. The orange lines represent one standard deviation from the 0° difference ideal average. Note that as listener angle increases, the spread of the data points also increases. The averaged values and one standard deviation can be found in Fig. 9.

Figure 9: Data for Fig. 8 (listener angle, average of all data, one standard deviation). Values listed are the averages of all points collected. As these values approach zero, they demonstrate that no bias exists between the ears during testing. Also listed are the one-standard-deviation errors, which represent the localization error we were trying to find. As the listener angle increases, this error increases.

In all, Fig. 9 gives the numerical data for the localization errors. However, while looking at these values, it must be understood that more trials need to be completed: only 210 trials were completed in total, with only two subjects having recordable data. With more test subjects, as well as trials in the thousands, it is expected that these localization errors will decrease.

4 Discussion

4.1 Qualitative Model of ILD Localization

Fig. 10 shows a possible approach to modeling ILDs and their resulting localization errors. The blue and red lines represent cardioid graphs, which currently best represent the ILD between ears. Examining the red line, we notice that at 90° the ear is most sensitive, since the curve has the greatest radial distance from the origin. On the opposite side, the distance is smallest, showing that there is very little sensitivity when a sound is played from the opposite side of the head. Combining two of these cardioid graphs allows us to examine the intersection points between them. The green line represents a speaker at a location of 15° from the listener. Comparing the distance between its two intersection points with that of the purple line, a speaker at 45°, we notice that the distance increases. Comparing this to the results collected, we notice that localization errors have the same tendency to increase as listener angle increases. Therefore, with more research into these distances between intersection points and more data collection, a proper model may be built upon this binaural ILD model.
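The cardioid construction described above can be sketched numerically. The exact cardioid parameters in Fig. 10 are not specified, so the forms below (right ear most sensitive at +90°, left ear at -90°) are assumptions; with them, the radial gap between the two curves along the ray toward the speaker grows with speaker angle, mirroring the growth in localization error:

```python
import math

def ear_cardioids(speaker_deg):
    """Radii of the two assumed ear cardioids along the ray toward a
    speaker at the given azimuth (0 deg = straight ahead)."""
    phi = math.radians(speaker_deg)
    r_right = 1.0 + math.sin(phi)  # peaks at +90 deg (right side)
    r_left = 1.0 - math.sin(phi)   # peaks at -90 deg (left side)
    return r_right, r_left

def intersection_gap(speaker_deg):
    """Distance between the ray's intersection points with the two
    cardioids; for these forms it reduces to 2*|sin(speaker angle)|."""
    r_right, r_left = ear_cardioids(speaker_deg)
    return abs(r_right - r_left)

for ang in (0, 15, 30, 45, 60):
    print(f"{ang:2d} deg -> gap {intersection_gap(ang):.3f}")
```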

Figure 10: A combination of two cardioid graphs forming a possible solution for accurately modeling ILD.

Another model that could be developed is a combination of ILD and ITD to determine the angular resolution as a function of frequency. As stated before, for lower frequencies ITDs are preferred over ILDs due to the longer wavelength of the sound. However, once the wavelengths are shorter than the size of the head, ILDs are preferred due to the shadowing effect. It would be interesting to develop a model that incorporates both ITD and ILD into the angular resolution, and to see whether it could help us better understand how these two methods interact for frequencies in the middle range, specifically 1.5-3.5 kHz. Since this is the range between the two preferred methods, it may be more difficult to model, but it could yield promising results.

4.2 Frequency Contribution

Human hearing responds better to certain frequencies due to the shape of the ear. Research has shown that the most sensitive range of frequencies for human hearing is 3-5 kHz. Sensitivity gradually decreases up to approximately 10 kHz, after which it drops drastically. Applying this to the results of the experiment, we can see that this sensitivity range is supported by the resolution averages: as the frequency increased, the average resolution became closer to 0°. However, if the previous research were the whole story, we should have expected the 3 kHz tests to show the smallest degree differences of all the frequencies. An explanation for this oddity is that although human hearing is more sensitive at the specified frequencies, the localization process uses the difference between the ears, not just a single ear's sensitivity. Therefore, although human hearing is more sensitive at 3-5 kHz, the data collected in this experiment reflect the differences between the ears rather than the frequency sensitivity of one ear.

4.3 Further Possible Studies

The future of this experiment lies with a few advancements. First, the experimental method needs to be improved for efficiency. Adding multiple speakers set up around the listener would make moving the listener between trials unnecessary and reduce the trial time by at least 50%. Not having to move the listener for each trial would also allow for an improved chair that would make reading angle measurements easier. In addition to the improved experimental setup, more testing needs to be completed. Previous research shows that angular resolution at 0° should be around 1°, and at 90° about 15°. [4] With more tests completed, we believe our results would begin to match those findings.

As for the model construction, more research would need to be done on HRTFs to possibly include them in the model. Although these functions involve three dimensions of localization, they could be very useful in developing an exclusively azimuthal model. Also, the combined ILD and ITD model would have to be researched in more depth to see what psychoacoustic similarities exist between the two so that a proper model can be constructed.

5 Conclusion

The data collected from the experiments showed some very interesting trends, despite the large error due to the small number of trials. One trend shows that as the listener angle increases, so does the degree difference in localization. When the speaker was located directly in front of the listener, localization errors were found to be ±17°. As the listener angle increased, the error also increased, reaching its maximum in this experiment at a speaker location of 60° from the listener, with localization errors of ±36°. A second trend shows that at extremely high frequencies, localization accuracy tends to increase. For frequencies of 300 Hz, 1 kHz, and 3 kHz, average localizations were 20° ± 4°; at 10 kHz, the average localization was 15° ± 1.3°. These trends are helpful to localization studies, as they reinforce previous research using a different testing method. A qualitative model for predicting ILD localization error was constructed and displayed characteristics similar to the results found in testing. With further exploration into the combination of sounds and their signals within the brain, as well as the addition of ITD factors, this model can be improved to more accurately model, and give quantitative values for, localization errors.

References

[1] Holman, T. (2008). Surround Sound, Up and Running. Focal Press.

[2] HRTF-Based Systems. (2011, February 25). Retrieved from

[3] Lyon, R. F. (1983). A computational model of binaural localization and separation. ICASSP 83.

[4] Pitt, I. Auditory Perception [PDF document]. Retrieved from: ianp/cs2511/hap.html

[5] Starch, D. (1908). Perimetry of the Localization of Sound. State University of Iowa.

[6] Stern, R. M., Wang, DeL., and Brown, G. (2006). Binaural sound localization. In Computational Auditory Scene Analysis, G. Brown and DeL. Wang, Eds. New York: Wiley/IEEE Press.

[7] Takahashi, T. T., Keller, C. H., Nelson, B. S., Spitzer, M. W., Bala, A. D., and Whitchurch, E. A. (2008). Object localization in cluttered acoustical environments. Biol Cybern, 98(6).

[8] Toole, F. E. (2008). Sound Reproduction: Loudspeakers and Rooms. Focal Press.

[9] Zhou, Z. H. (2002). Sound Localization and Virtual Auditory Space. Project report, University of Toronto.

6 Appendix

Figure 11: Data for Fig. 4 (listener angle in degrees, average angle difference in degrees, error in measurement in degrees). Values were averaged over 5 trials for each listener angle.

Figure 12: Data for Fig. 5 (listener angle in degrees, average angle difference in degrees, error in measurement in degrees). Values were averaged over 5 trials for each listener angle.

Figure 13: The computer and speaker setup used for the experiment. Note that even though there are two speakers, I was able to limit the output to one speaker so that there was no interference from the secondary speaker.

Figure 14: A wide shot of the experimental space. Tests were completed outside to achieve the closest resemblance to an anechoic environment. This picture also shows the newest method, using the rolling chair for the testing procedure.

7 Acknowledgements

I would like to thank Professor Quashnock for advising and supporting me throughout the process of experimental development, research, and revisions. I would also like to thank Ben McGuiggan, Matthew Gehrz, Coty Tatge, and Kim Schultz for helping me develop the experimental procedure to a testable level. Another thank you goes to Professor Schwartz for his advice on the presentation and wording of my thesis, and for advising me throughout my four years at Carthage. A final thank you goes to Professors Crosby, Dahlstrom, and Burling for assisting with the writing process and revisions.


More information

Binaural Hearing. Steve Colburn Boston University

Binaural Hearing. Steve Colburn Boston University Binaural Hearing Steve Colburn Boston University Outline Why do we (and many other animals) have two ears? What are the major advantages? What is the observed behavior? How do we accomplish this physiologically?

More information

Hearing II Perceptual Aspects

Hearing II Perceptual Aspects Hearing II Perceptual Aspects Overview of Topics Chapter 6 in Chaudhuri Intensity & Loudness Frequency & Pitch Auditory Space Perception 1 2 Intensity & Loudness Loudness is the subjective perceptual quality

More information

Chapter 3. Sounds, Signals, and Studio Acoustics

Chapter 3. Sounds, Signals, and Studio Acoustics Chapter 3 Sounds, Signals, and Studio Acoustics Sound Waves Compression/Rarefaction: speaker cone Sound travels 1130 feet per second Sound waves hit receiver Sound waves tend to spread out as they travel

More information

What Is the Difference between db HL and db SPL?

What Is the Difference between db HL and db SPL? 1 Psychoacoustics What Is the Difference between db HL and db SPL? The decibel (db ) is a logarithmic unit of measurement used to express the magnitude of a sound relative to some reference level. Decibels

More information

IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer

IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION Griffin D. Romigh, Brian D. Simpson, Nandini Iyer 711th Human Performance Wing Air Force Research Laboratory

More information

Localization: Give your patients a listening edge

Localization: Give your patients a listening edge Localization: Give your patients a listening edge For those of us with healthy auditory systems, localization skills are often taken for granted. We don t even notice them, until they are no longer working.

More information

Effect of microphone position in hearing instruments on binaural masking level differences

Effect of microphone position in hearing instruments on binaural masking level differences Effect of microphone position in hearing instruments on binaural masking level differences Fredrik Gran, Jesper Udesen and Andrew B. Dittberner GN ReSound A/S, Research R&D, Lautrupbjerg 7, 2750 Ballerup,

More information

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source Binaural Phenomena Aim To understand binaural hearing Objectives Understand the cues used to determine the location of a sound source Understand sensitivity to binaural spatial cues, including interaural

More information

Masker-signal relationships and sound level

Masker-signal relationships and sound level Chapter 6: Masking Masking Masking: a process in which the threshold of one sound (signal) is raised by the presentation of another sound (masker). Masking represents the difference in decibels (db) between

More information

Drexel-SDP GK-12 ACTIVITY

Drexel-SDP GK-12 ACTIVITY Activity Template Subject Area(s): Sound Associated Unit: None Associated Lesson: None Activity Title: How good is your hearing? Grade Level: 8 (7-9) Activity Dependency: None Time Required: 120 minutes

More information

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431 The effects of spatial separation in distance on the informational and energetic masking of a nearby speech signal Douglas S. Brungart a) Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson

More information

Sound Workshop. What is sound Longitudinal Waves Frequency and pitch Hearing ranges Sounds in solids, liquids and gases Sound in a vacuum

Sound Workshop. What is sound Longitudinal Waves Frequency and pitch Hearing ranges Sounds in solids, liquids and gases Sound in a vacuum Sound Workshop a. b. c. d. e. f. g. h. i. j. k. l. What is sound Longitudinal Waves Frequency and pitch Hearing ranges Sounds in solids, liquids and gases Sound in a vacuum Echoes Ultrasound Loudspeakers

More information

Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA

Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA 2017-10 Presenter Peter Wulf-Andersen Engineering degree in Acoustics Co-founder

More information

Systems Neuroscience Oct. 16, Auditory system. http:

Systems Neuroscience Oct. 16, Auditory system. http: Systems Neuroscience Oct. 16, 2018 Auditory system http: www.ini.unizh.ch/~kiper/system_neurosci.html The physics of sound Measuring sound intensity We are sensitive to an enormous range of intensities,

More information

Perceptual Plasticity in Spatial Auditory Displays

Perceptual Plasticity in Spatial Auditory Displays Perceptual Plasticity in Spatial Auditory Displays BARBARA G. SHINN-CUNNINGHAM, TIMOTHY STREETER, and JEAN-FRANÇOIS GYSS Hearing Research Center, Boston University Often, virtual acoustic environments

More information

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 SOLUTIONS Homework #3 Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 Problem 1: a) Where in the cochlea would you say the process of "fourier decomposition" of the incoming

More information

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979)

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979) Hearing The nervous system s cognitive response to sound stimuli is known as psychoacoustics: it is partly acoustics and partly psychology. Hearing is a feature resulting from our physiology that we tend

More information

Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners

Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners Ir i s Arw e i l e r 1, To r b e n Po u l s e n 2, a n d To r s t e n Da u 1 1 Centre for Applied

More information

INTRODUCTION TO PURE (AUDIOMETER & TESTING ENVIRONMENT) TONE AUDIOMETERY. By Mrs. Wedad Alhudaib with many thanks to Mrs.

INTRODUCTION TO PURE (AUDIOMETER & TESTING ENVIRONMENT) TONE AUDIOMETERY. By Mrs. Wedad Alhudaib with many thanks to Mrs. INTRODUCTION TO PURE TONE AUDIOMETERY (AUDIOMETER & TESTING ENVIRONMENT) By Mrs. Wedad Alhudaib with many thanks to Mrs. Tahani Alothman Topics : This lecture will incorporate both theoretical information

More information

Improve localization accuracy and natural listening with Spatial Awareness

Improve localization accuracy and natural listening with Spatial Awareness Improve localization accuracy and natural listening with Spatial Awareness While you probably don t even notice them, your localization skills make everyday tasks easier: like finding your ringing phone

More information

Trading Directional Accuracy for Realism in a Virtual Auditory Display

Trading Directional Accuracy for Realism in a Virtual Auditory Display Trading Directional Accuracy for Realism in a Virtual Auditory Display Barbara G. Shinn-Cunningham, I-Fan Lin, and Tim Streeter Hearing Research Center, Boston University 677 Beacon St., Boston, MA 02215

More information

Topic 4. Pitch & Frequency. (Some slides are adapted from Zhiyao Duan s course slides on Computer Audition and Its Applications in Music)

Topic 4. Pitch & Frequency. (Some slides are adapted from Zhiyao Duan s course slides on Computer Audition and Its Applications in Music) Topic 4 Pitch & Frequency (Some slides are adapted from Zhiyao Duan s course slides on Computer Audition and Its Applications in Music) A musical interlude KOMBU This solo by Kaigal-ool of Huun-Huur-Tu

More information

Psychoacoustical Models WS 2016/17

Psychoacoustical Models WS 2016/17 Psychoacoustical Models WS 2016/17 related lectures: Applied and Virtual Acoustics (Winter Term) Advanced Psychoacoustics (Summer Term) Sound Perception 2 Frequency and Level Range of Human Hearing Source:

More information

Issues faced by people with a Sensorineural Hearing Loss

Issues faced by people with a Sensorineural Hearing Loss Issues faced by people with a Sensorineural Hearing Loss Issues faced by people with a Sensorineural Hearing Loss 1. Decreased Audibility 2. Decreased Dynamic Range 3. Decreased Frequency Resolution 4.

More information

Sound Waves. Sensation and Perception. Sound Waves. Sound Waves. Sound Waves

Sound Waves. Sensation and Perception. Sound Waves. Sound Waves. Sound Waves Sensation and Perception Part 3 - Hearing Sound comes from pressure waves in a medium (e.g., solid, liquid, gas). Although we usually hear sounds in air, as long as the medium is there to transmit the

More information

How high-frequency do children hear?

How high-frequency do children hear? How high-frequency do children hear? Mari UEDA 1 ; Kaoru ASHIHARA 2 ; Hironobu TAKAHASHI 2 1 Kyushu University, Japan 2 National Institute of Advanced Industrial Science and Technology, Japan ABSTRACT

More information

Two Modified IEC Ear Simulators for Extended Dynamic Range

Two Modified IEC Ear Simulators for Extended Dynamic Range Two Modified IEC 60318-4 Ear Simulators for Extended Dynamic Range Peter Wulf-Andersen & Morten Wille The international standard IEC 60318-4 specifies an occluded ear simulator, often referred to as a

More information

Chapter 17 Sound Sound and Hearing. Properties of Sound Waves 1/20/2017. Pearson Prentice Hall Physical Science: Concepts in Action

Chapter 17 Sound Sound and Hearing. Properties of Sound Waves 1/20/2017. Pearson Prentice Hall Physical Science: Concepts in Action Pearson Prentice Hall Physical Science: Concepts in Action Chapter 17 Sound Standing Waves in Music When the string of a violin is played with a bow, it vibrates and creates standing waves. Some instruments,

More information

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor Signals, systems, acoustics and the ear Week 5 The peripheral auditory system: The ear as a signal processor Think of this set of organs 2 as a collection of systems, transforming sounds to be sent to

More information

Solutions for better hearing. audifon innovative, creative, inspired

Solutions for better hearing. audifon innovative, creative, inspired Solutions for better hearing audifon.com audifon innovative, creative, inspired Hearing systems that make you hear well! Loss of hearing is frequently a gradual process which creeps up unnoticed on the

More information

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics 2/14/18 Can hear whistle? Lecture 5 Psychoacoustics Based on slides 2009--2018 DeHon, Koditschek Additional Material 2014 Farmer 1 2 There are sounds we cannot hear Depends on frequency Where are we on

More information

Activity Template. Drexel-SDP GK-12 ACTIVITY

Activity Template. Drexel-SDP GK-12 ACTIVITY Drexel-SDP GK-12 ACTIVITY Activity Template Subject Area(s): Sound Associated Unit: None Associated Lesson: None Activity Title: How good is your hearing? Grade Level: 8 (7-9) Activity Dependency: None

More information

Spectrograms (revisited)

Spectrograms (revisited) Spectrograms (revisited) We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a

More information

Topic 4. Pitch & Frequency

Topic 4. Pitch & Frequency Topic 4 Pitch & Frequency A musical interlude KOMBU This solo by Kaigal-ool of Huun-Huur-Tu (accompanying himself on doshpuluur) demonstrates perfectly the characteristic sound of the Xorekteer voice An

More information

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3 THE INFLUENCE OF ROOM REFLECTIONS ON SUBWOOFER REPRODUCTION IN A SMALL ROOM: BINAURAL INTERACTIONS PREDICT PERCEIVED LATERAL ANGLE OF PERCUSSIVE LOW- FREQUENCY MUSICAL TONES Abstract David Spargo 1, William

More information

Outline. The ear and perception of sound (Psychoacoustics) A.1 Outer Ear Amplifies Sound. Introduction

Outline. The ear and perception of sound (Psychoacoustics) A.1 Outer Ear Amplifies Sound. Introduction The ear and perception of sound (Psychoacoustics) 1 Outline A. Structure of the Ear B. Perception of Pitch C. Perception of Loudness D. Timbre (quality of sound) E. References Updated 01Aug0 Introduction

More information

Gregory Galen Lin. at the. May A uthor... Department of Elf'ctricdal Engineering and Computer Science May 28, 1996

Gregory Galen Lin. at the. May A uthor... Department of Elf'ctricdal Engineering and Computer Science May 28, 1996 Adaptation to a Varying Auditory Environment by Gregory Galen Lin Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of

More information

HOW TO USE THE SHURE MXA910 CEILING ARRAY MICROPHONE FOR VOICE LIFT

HOW TO USE THE SHURE MXA910 CEILING ARRAY MICROPHONE FOR VOICE LIFT HOW TO USE THE SHURE MXA910 CEILING ARRAY MICROPHONE FOR VOICE LIFT Created: Sept 2016 Updated: June 2017 By: Luis Guerra Troy Jensen The Shure MXA910 Ceiling Array Microphone offers the unique advantage

More information

17.4 Sound and Hearing

17.4 Sound and Hearing You can identify sounds without seeing them because sound waves carry information to your ears. People who work in places where sound is very loud need to protect their hearing. Properties of Sound Waves

More information

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds Signals, systems, acoustics and the ear Week 1 Laboratory session: Measuring thresholds What s the most commonly used piece of electronic equipment in the audiological clinic? The Audiometer And what is

More information

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS 19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 27 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS PACS: 43.66.Pn Seeber, Bernhard U. Auditory Perception Lab, Dept.

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

Binaural processing of complex stimuli

Binaural processing of complex stimuli Binaural processing of complex stimuli Outline for today Binaural detection experiments and models Speech as an important waveform Experiments on understanding speech in complex environments (Cocktail

More information

Lecture 9: Sound Localization

Lecture 9: Sound Localization Lecture 9: Sound Localization Localization refers to the process of using the information about a sound which you get from your ears, to work out where the sound came from (above, below, in front, behind,

More information

Spatial hearing and sound localization mechanisms in the brain. Henri Pöntynen February 9, 2016

Spatial hearing and sound localization mechanisms in the brain. Henri Pöntynen February 9, 2016 Spatial hearing and sound localization mechanisms in the brain Henri Pöntynen February 9, 2016 Outline Auditory periphery: from acoustics to neural signals - Basilar membrane - Organ of Corti Spatial

More information

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data 942 955 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data Jonas Braasch, Klaus Hartung Institut für Kommunikationsakustik, Ruhr-Universität

More information

9.3 Sound The frequency of sound Frequency and pitch pitch Most sound has more than one frequency The frequency spectrum

9.3 Sound The frequency of sound Frequency and pitch pitch Most sound has more than one frequency The frequency spectrum 9.3 Sound Like other waves, sound has frequency, wavelength, amplitude, and speed. Because sound is part of your daily experience, you already know its properties but by different names. You may never

More information

The role of low frequency components in median plane localization

The role of low frequency components in median plane localization Acoust. Sci. & Tech. 24, 2 (23) PAPER The role of low components in median plane localization Masayuki Morimoto 1;, Motoki Yairi 1, Kazuhiro Iida 2 and Motokuni Itoh 1 1 Environmental Acoustics Laboratory,

More information

Influence of music-induced floor vibration on impression of music in concert halls

Influence of music-induced floor vibration on impression of music in concert halls Buenos Aires 5 to 9 September, 216 PROCEEDINGS of the 22 nd International Congress on Acoustics Concert Hall Acoustics: Paper ICA216-694 Influence of music-induced floor vibration on impression of music

More information

The Hearwig as hearing protector for musicians a practical investigation

The Hearwig as hearing protector for musicians a practical investigation 12th ICBEN Congress on Noise as a Public Health Problem The Hearwig as hearing protector for musicians a practical investigation Andrea Wolff 1 1 Institute for occupational Safety and Health of the German

More information

An Auditory System Modeling in Sound Source Localization

An Auditory System Modeling in Sound Source Localization An Auditory System Modeling in Sound Source Localization Yul Young Park The University of Texas at Austin EE381K Multidimensional Signal Processing May 18, 2005 Abstract Sound localization of the auditory

More information

Speaker s Notes: AB is dedicated to helping people with hearing loss hear their best. Partnering with Phonak has allowed AB to offer unique

Speaker s Notes: AB is dedicated to helping people with hearing loss hear their best. Partnering with Phonak has allowed AB to offer unique 1 General Slide 2 Speaker s Notes: AB is dedicated to helping people with hearing loss hear their best. Partnering with Phonak has allowed AB to offer unique technological advances to help people with

More information

PHYS 1240 Sound and Music Professor John Price. Cell Phones off Laptops closed Clickers on Transporter energized

PHYS 1240 Sound and Music Professor John Price. Cell Phones off Laptops closed Clickers on Transporter energized PHYS 1240 Sound and Music Professor John Price Cell Phones off Laptops closed Clickers on Transporter energized The Ear and Hearing Thanks to Jed Whittaker for many of these slides Ear anatomy substructures

More information

CLASSROOM AMPLIFICATION: WHO CARES? AND WHY SHOULD WE? James Blair and Jeffery Larsen Utah State University ASHA, San Diego, 2011

CLASSROOM AMPLIFICATION: WHO CARES? AND WHY SHOULD WE? James Blair and Jeffery Larsen Utah State University ASHA, San Diego, 2011 CLASSROOM AMPLIFICATION: WHO CARES? AND WHY SHOULD WE? James Blair and Jeffery Larsen Utah State University ASHA, San Diego, 2011 What we know Classroom amplification has been reported to be an advantage

More information

Transfer of Sound Energy through Vibrations

Transfer of Sound Energy through Vibrations secondary science 2013 16 Transfer of Sound Energy through Vibrations Content 16.1 Sound production by vibrating sources 16.2 Sound travel in medium 16.3 Loudness, pitch and frequency 16.4 Worked examples

More information

Tune in on life with SCOLAbuddy. A new fm receiver from Widex

Tune in on life with SCOLAbuddy. A new fm receiver from Widex Tune in on life with SCOLAbuddy A new fm receiver from Widex Meet your new buddy The latest member of the Scola FM family The successful Widex SCOLA FM product series has acquired a new family member:

More information

Impact of the ambient sound level on the system's measurements CAPA

Impact of the ambient sound level on the system's measurements CAPA Impact of the ambient sound level on the system's measurements CAPA Jean Sébastien Niel December 212 CAPA is software used for the monitoring of the Attenuation of hearing protectors. This study will investigate

More information

Appendix E: Basics of Noise. Table of Contents

Appendix E: Basics of Noise. Table of Contents E Basics of Noise Table of Contents E.1 Introduction... ii E.2 Introduction to Acoustics and Noise Terminology... E-1 E.3 The Decibel (db)... E-1 E.4 A-Weighted Decibel... E-2 E.5 Maximum A-Weighted Noise

More information

BASIC NOTIONS OF HEARING AND

BASIC NOTIONS OF HEARING AND BASIC NOTIONS OF HEARING AND PSYCHOACOUSICS Educational guide for the subject Communication Acoustics VIHIAV 035 Fülöp Augusztinovicz Dept. of Networked Systems and Services fulop@hit.bme.hu 2018. október

More information

Chapter 1: Introduction to digital audio

Chapter 1: Introduction to digital audio Chapter 1: Introduction to digital audio Applications: audio players (e.g. MP3), DVD-audio, digital audio broadcast, music synthesizer, digital amplifier and equalizer, 3D sound synthesis 1 Properties

More information

Auditory Perception: Sense of Sound /785 Spring 2017

Auditory Perception: Sense of Sound /785 Spring 2017 Auditory Perception: Sense of Sound 85-385/785 Spring 2017 Professor: Laurie Heller Classroom: Baker Hall 342F (sometimes Cluster 332P) Time: Tuesdays and Thursdays 1:30-2:50 Office hour: Thursday 3:00-4:00,

More information

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT PACS:..Hy Furuya, Hiroshi ; Wakuda, Akiko ; Anai, Ken ; Fujimoto, Kazutoshi Faculty of Engineering, Kyushu Kyoritsu University

More information

Hearing and Balance 1

Hearing and Balance 1 Hearing and Balance 1 Slide 3 Sound is produced by vibration of an object which produces alternating waves of pressure and rarefaction, for example this tuning fork. Slide 4 Two characteristics of sound

More information

Sonic Spotlight. Binaural Coordination: Making the Connection

Sonic Spotlight. Binaural Coordination: Making the Connection Binaural Coordination: Making the Connection 1 Sonic Spotlight Binaural Coordination: Making the Connection Binaural Coordination is the global term that refers to the management of wireless technology

More information

16.400/453J Human Factors Engineering /453. Audition. Prof. D. C. Chandra Lecture 14

16.400/453J Human Factors Engineering /453. Audition. Prof. D. C. Chandra Lecture 14 J Human Factors Engineering Audition Prof. D. C. Chandra Lecture 14 1 Overview Human ear anatomy and hearing Auditory perception Brainstorming about sounds Auditory vs. visual displays Considerations for

More information

How the LIVELab helped us evaluate the benefits of SpeechPro with Spatial Awareness

How the LIVELab helped us evaluate the benefits of SpeechPro with Spatial Awareness How the LIVELab helped us evaluate the benefits of SpeechPro with Author Donald Hayes, Ph.D. Director, Clinical Research Favorite sound: Blues guitar A Sonova brand Figure The LIVELab at McMaster University

More information

Audiology - Hearing Care Torbay and South Devon. Before you receive your hearing aid

Audiology - Hearing Care Torbay and South Devon. Before you receive your hearing aid Audiology - Hearing Care Torbay and South Devon Before you receive your hearing aid How our ears work Our ears are divided into three sections, the outer ear, middle ear and inner ear (see diagram opposite).

More information

Unit 4: Sensation and Perception

Unit 4: Sensation and Perception Unit 4: Sensation and Perception Sensation a process by which our sensory receptors and nervous system receive and represent stimulus (or physical) energy and encode it as neural signals. Perception a

More information

Lecture 3: Perception

Lecture 3: Perception ELEN E4896 MUSIC SIGNAL PROCESSING Lecture 3: Perception 1. Ear Physiology 2. Auditory Psychophysics 3. Pitch Perception 4. Music Perception Dan Ellis Dept. Electrical Engineering, Columbia University

More information

The basic hearing abilities of absolute pitch possessors

The basic hearing abilities of absolute pitch possessors PAPER The basic hearing abilities of absolute pitch possessors Waka Fujisaki 1;2;* and Makio Kashino 2; { 1 Graduate School of Humanities and Sciences, Ochanomizu University, 2 1 1 Ootsuka, Bunkyo-ku,

More information

On the influence of interaural differences on onset detection in auditory object formation. 1 Introduction

On the influence of interaural differences on onset detection in auditory object formation. 1 Introduction On the influence of interaural differences on onset detection in auditory object formation Othmar Schimmel Eindhoven University of Technology, P.O. Box 513 / Building IPO 1.26, 56 MD Eindhoven, The Netherlands,

More information

ADHEAR The new bone-conduction hearing aid innovation

ADHEAR The new bone-conduction hearing aid innovation ADHEAR The new bone-conduction hearing aid innovation MED-EL has world-wide launched a new kind of hearing aid, ADHEAR, for people who have an hearing impairment and want to prevent surgery. This little

More information

Stimulus any aspect of or change in the environment to which an organism responds. Sensation what occurs when a stimulus activates a receptor

Stimulus any aspect of or change in the environment to which an organism responds. Sensation what occurs when a stimulus activates a receptor Chapter 8 Sensation and Perception Sec 1: Sensation Stimulus any aspect of or change in the environment to which an organism responds Sensation what occurs when a stimulus activates a receptor Perception

More information

Technical Discussion HUSHCORE Acoustical Products & Systems

Technical Discussion HUSHCORE Acoustical Products & Systems What Is Noise? Noise is unwanted sound which may be hazardous to health, interfere with speech and verbal communications or is otherwise disturbing, irritating or annoying. What Is Sound? Sound is defined

More information

Literature Overview - Digital Hearing Aids and Group Delay - HADF, June 2017, P. Derleth
