Psychoacoustics. Author: Nejc Rosenstein. Advisor: Simon Širca
Abstract

We introduce psychoacoustics as a branch of physics which explores the link between the physical properties of sound and the listener's perception. We describe experiments which measure thresholds, localization and frequency selectivity. We introduce simple physical models of the perception of sound in the human pinna and cochlea. Results of psychoacoustic experiments which agree with the predictions of both models are presented.
Contents

1 Introduction
2 Thresholds
3 Frequency selectivity
  3.1 Physical model
  3.2 Auditory filters
4 Localization
  4.1 Physical model of pinna
  4.2 Psychoacoustical investigation
  4.3 Experimentation using headphones
5 Conclusion

1 Introduction

Two major disciplines dominate research on the human auditory system: auditory physiology, which focuses on the functions of the biological systems involved in the hearing process, and psychoacoustics [1]. The latter deals with the relation between sound, i.e. the physical stimulus, and the sensation which arises in the listener as a result of sound perception. Psychoacoustics therefore encompasses research on thresholds, frequency analysis and masking, the perception of pitch, timbre and loudness, auditory scene analysis, temporal processing and other properties of the auditory system. Psychophysical experiments involve listeners as test subjects who usually have to make some judgment about the sound they detected and produce a certain response. Such experiments have to be carefully planned, and much effort is needed to perform them properly, especially when scientists aim to reproduce a certain sound phenomenon by means of some device, for example headphones.

2 Thresholds

The sound pressure level (SPL) is defined as L = 20 log(p/p₀), with decibels (dB) as units, where p is the measured sound pressure and p₀ is the absolute threshold, i.e. the lowest sound level which can be detected by humans [2]. Since it is impossible to determine its exact value, the threshold is usually defined as the sound level at which the probability of detection equals a certain value, which is determined experimentally. In a common experiment where test subjects have to tell whether they perceived a sound or not (a two-choice task), that probability usually equals 75% [2]. The threshold is frequency-dependent and differs depending on whether the listener is using one or both ears (Fig. 1).
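As a quick illustration of the definition above, the conversion from pressure to SPL takes only a few lines. The reference value of 20 µPa is the conventional p₀ for airborne sound; the function name is our own choice for this sketch:

```python
import math

# Conventional reference pressure for airborne sound: p0 = 20 micropascals.
P0 = 20e-6  # Pa

def spl(p, p0=P0):
    """Sound pressure level in dB for a measured pressure p (in Pa)."""
    return 20 * math.log10(p / p0)

# A pressure equal to the reference gives 0 dB SPL;
# every factor of 10 in pressure adds 20 dB.
print(spl(20e-6))  # 0 dB
print(spl(0.2))    # ~80 dB  (0.2 Pa / 20 uPa = 10^4)
```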
Absolute thresholds are also important in psychoacoustical experiments where quiet but still detectable signals need to be produced. One such experiment is described in a later section.
Figure 1: [Left] Measurements of absolute thresholds for binaural (both-ear) and monaural (one-ear) listening. The shape of the ear and other physiological factors affect our hearing in many ways; monaural listening therefore leads to a different perception of loudness and of the frequencies in the detected spectrum, which results in a different frequency dependence of the threshold. [Right] The difference between a measurement at the eardrum and a measurement at the position of the center of the listener's head once he is removed from his position. We see that the ear shape affected the detected frequency spectrum. More about the role of ear shape in auditory perception will be said in section 4. Pictures are taken from [3].

3 Frequency selectivity

The probability of detecting a sound from a certain source can decrease dramatically if the listener is also exposed to other sounds at the same time; this phenomenon is called masking [2]. While the most obvious example of masking is the inability to detect a low-volume sound in the presence of a loud one, masking can also occur when two sound signals have similar or identical frequency components. Discoveries in the field of physiology shed some light on this phenomenon; in 1961, the Nobel prize in physiology was awarded to Georg von Békésy [4] for the discovery of the mechanical properties of the cochlea which provide an explanation of human frequency analysis and help to explain the masking of sounds with similar frequencies.

3.1 Physical model

By developing a new method to dissect the human ear, Békésy could perform experiments on a partly intact human cochlea (Fig. 2). He found that each part of the basilar membrane corresponds to a certain frequency; high frequencies of sound stimulate the parts of the membrane close to the outer ear, while low frequencies stimulate a response of the membrane at the end of the cochlea.
We are not going to present the physiological structure of the inner ear at this point; instead we will focus on a basic mechanical model, which Békésy himself used when he planned his physiological observations [5].
Figure 2: [Left] A cross-section of the human cochlea [6]. [Right] A schematic representation of the cochlea's shape [7]. The inside is divided into two fluid compartments which are separated by the cochlear partition (CP), where the basilar membrane (BM) is located.

The physical model we present here is based on a very simplified structure of the cochlea; its sketch is included in Fig. 3.

Figure 3: [Top] Simplified model of the cochlea interior, proposed by Békésy [5]. [Bottom] The coordinate system for modeling the unrolled cochlea [7].

The interior of the cochlea is divided into two separate compartments which are filled with fluid and are
separated by a wall. The large opening in the separating wall is covered by the basilar membrane, a membrane with known mass, stiffness and damping coefficient (all these properties were measured by Békésy). Vibrations enter the cochlea through the oval window, covered by a membrane; the round window, located in the other compartment, is also covered by a membrane, and its purpose is to allow the movement of the incompressible fluid in the cochlea. Even though Békésy included another opening which connects both compartments (the helicotrema), the exchange of fluids is neglected in the fluid modeling. Even though the Reynolds number of the flow inside both compartments is low, we can treat the flow inside the cochlea as inviscid in the first order of approximation [5]. If the pulses of sound are short, we may assume that the fluid flow inside the cochlea is irrotational [5]; its velocity can therefore be expressed as v = ∇Φ, where Φ is the velocity potential. Since the compartments are separated, the velocity potentials in them differ; we denote them by Φ₁ (in the upper compartment) and Φ₂ (in the lower compartment). The equations written below are valid for the upper compartment. The fluid is assumed to be incompressible, therefore

∇²Φ₁ = 0.  (1)

Since irrotational flow is assumed, Bernoulli's law applies [8]. We can neglect the term which contains the square of the fluid velocity, since velocities are low. For irrotational flow described by a velocity potential, Bernoulli's equation is therefore written as

p₁ + ρ ∂Φ₁/∂t = const.,  (2)

where ρ denotes the density of the fluid and p₁ the pressure. The velocity of the fluid in the direction perpendicular to the membrane is expressed as ∂Φ/∂z and it equals the velocity of the membrane:

∂η/∂t = ∂Φ₁/∂z.  (3)

The three equations above describe the motion of the fluid in the first, upper compartment. The same set is then written for the second compartment, which is physically separated from the first one.
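The membrane equation introduced next treats each point of the basilar membrane as a driven, damped oscillator whose stiffness κ(x) decreases from base to apex. If we drop the fluid coupling entirely, a far cruder approximation than the model above, a bank of such independent oscillators already reproduces the frequency-to-place mapping. The exponential stiffness profile and all parameter values below are illustrative choices for this sketch, not Békésy's measured values:

```python
import math

# Illustrative parameters (not the measured basilar-membrane values):
M = 0.05        # mass per unit area
BETA = 2.0      # damping per unit area
KAPPA0 = 1e10   # stiffness at the base (x = 0)
ALPHA = 3.0     # exponential decay rate of stiffness along the membrane
LENGTH = 3.5    # membrane length (arbitrary units, roughly 3.5 cm)

def kappa(x):
    """Stiffness decreasing exponentially from base to apex."""
    return KAPPA0 * math.exp(-ALPHA * x)

def response(x, f):
    """Steady-state displacement amplitude of the oscillator at position x
    driven at frequency f with unit pressure amplitude."""
    w = 2 * math.pi * f
    return 1.0 / math.sqrt((kappa(x) - M * w**2)**2 + (BETA * w)**2)

def peak_position(f, n=1000):
    """Position of maximum excitation for a pure tone of frequency f."""
    xs = [LENGTH * i / (n - 1) for i in range(n)]
    return max(xs, key=lambda x: response(x, f))

# Higher frequencies peak closer to the base (x = 0), lower ones closer
# to the apex -- the frequency-to-place mapping Bekesy observed.
for f in (8000, 2000, 500):
    print(f, peak_position(f))
```

The maximum sits where κ(x) ≈ m ω², i.e. at the local resonance, which is all this stripped-down sketch is meant to show; the travelling-wave envelope of Fig. 4 requires the full fluid-coupled model.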
The next equation connects the difference of the fluid pressures on both sides of the membrane with the force on the membrane due to its displacement η(x, t):

m(x) ∂²η/∂t² + β(x) ∂η/∂t + κ(x) η = p₂(x, 0, t) − p₁(x, 0, t).  (4)

The values of the membrane mass per unit area m and the functions for the damping coefficient per unit area β and the stiffness per unit area κ used to calculate the results were chosen to represent the actual properties of the basilar membrane. In addition to the above equations, two sets of equations similar to (3) and (4) are needed to describe the displacement of the membrane covering the round window, ξᵣ, and the displacement of the oval window membrane, ξₒ (see Fig. 3). The last equation,

ξₒ = F,  (5)

states that the displacement ξₒ of the membrane covering the oval window is equal to the incoming signal F which is transferred mechanically to the membrane. We are not going to describe the methods required to solve the model here and will instead focus only on the results. The calculated membrane positions along the length of the cochlea are presented in Fig. 4. Despite many assumptions and simplifications, the calculated movement of the basilar membrane possesses the most important property of Békésy's observations: the position of the largest excitation on the basilar membrane is frequency-dependent, as shown in Fig. 4. Békésy also discovered that the membrane excitations only propagate up to a certain
distance, which is frequency-dependent: the higher the input frequency, the shorter that distance. The membrane movements calculated by the physical model possess that attribute as well (see Fig. 4).

Figure 4: [Top] The curves represent the shape of the membrane at different times. The wave envelope is drawn with a dashed line and shows that the displacement of the membrane has an absolute maximum at some point. [Bottom] Wave envelopes of membrane shapes. The maximum displacement appears at a frequency-dependent position along the length of the membrane (the different curves were calculated using input signals with different frequencies) [5].

3.2 Auditory filters

The discovery of the way in which the basilar membrane acts as a frequency analyzer led to the suggestion that the entire peripheral auditory system (which consists of the outer, the middle and the inner ear) acts as a bank of bandpass filters with overlapping passbands [3]. Since different sections of the basilar membrane correspond to different frequencies, those sections actually act as auditory filters centered around those frequencies. The overlapping of the passbands occurs because an excitation at a certain point of the membrane also causes the movement of neighbouring points, i.e. filters. The shape of the auditory filters is determined through psychophysical experiments rather than by physical
models. One of the experiments in which the auditory filter shape was determined was carried out in the following way. The listener had to detect a sound signal (the test signal) of fixed volume in the presence of another, background sound signal (the masker signal). Both signals were sinusoidal with fixed frequencies and came from the same source. Researchers experimented with different volumes of the masker signal and determined the minimal masker volume at which the listener could no longer detect the test signal. The described experiment was repeated many times, each time with a different masker frequency. The minimal masker volume was lowest when the difference between the frequencies of the two signals was also low. The results are represented in the form of so-called psychophysical tuning curves (see Fig. 5).

Figure 5: Psychophysical tuning curves [3] show the level of the masker signal which is needed to successfully mask a signal fixed at a low level (represented by dots). The dashed line represents the absolute threshold of a signal at a given frequency. Solid lines connect the measurements of the minimal masking signal. The resulting shapes are rough estimates of the shapes of the auditory filters at different frequencies.

The experiments were carried out at low volume because the researchers wanted to ensure that the excitation of the membrane would be as low as possible [2]. If a sound with a certain frequency causes a movement of the membrane which is too strong, the excitation also triggers neighbouring auditory filters on the membrane; this is defined as off-frequency listening [1]. In order to find the accurate shape of the auditory filter at a certain frequency, off-frequency listening must be reduced as much as possible. This can be achieved by notched-noise experiments [1]. In these experiments, the listener is exposed to a pure tone signal with added bands of noise above and below the tone frequency (Fig. 6).
Both the pure tone and the noise come from the same source (either headphones or loudspeakers). With this technique, the observed filter is exposed to the least amount of noise compared to the neighbouring filters, which therefore do not contribute much to the detection of the tone frequency, due to the increased masking. The minimum audible level is a function of the spectral gap between the edges of each of the noise bands and the tone frequency. The shape of the auditory filter obtained by the described method is presented in Fig. 6.
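The bank-of-overlapping-filters picture introduced above can be sketched with simple second-order bandpass resonators. The centre frequencies and the quality factor Q below are arbitrary choices for this illustration, not measured auditory-filter parameters:

```python
import math

def bandpass_gain(f, f0, q):
    """Magnitude response of a second-order bandpass resonator centred
    at f0 with quality factor q (peak gain 1 at f = f0)."""
    r = f / f0
    return (r / q) / math.sqrt((1 - r**2)**2 + (r / q)**2)

# A small bank of filters with overlapping passbands (centres in Hz).
CENTERS = [250, 500, 1000, 2000, 4000]
Q = 2.0  # broad filters, so that neighbouring passbands overlap

def bank_output(f):
    """Gain of every filter in the bank for a pure tone at frequency f."""
    return {f0: bandpass_gain(f, f0, Q) for f0 in CENTERS}

# A 1 kHz tone excites the filter centred at 1 kHz most strongly, but the
# neighbouring 500 Hz and 2 kHz filters respond as well -- this overlap is
# what makes off-frequency listening possible.
for f0, g in bank_output(1000.0).items():
    print(f0, round(g, 3))
```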
Figure 6: Notched-noise technique [1]. Grey areas represent bands of noise. The auditory filter is centered at the signal frequency.

4 Localization

Another important area of psychoacoustic research is sound localization, i.e. determining the position of the sound source [2]. The sound from a source usually reaches one ear before the other (except when the sound comes from directly in front of or behind the listener). If the sound wave is sinusoidal and its frequency is low, the difference in the times of arrival of the sound at the two ears is proportional to the phase difference, which can be used to determine the direction from which the sound is coming. This, however, is not true for frequencies above 1500 Hz, because in that case the wavelength of the sound is small in comparison to the dimensions of the head, and hence the brain cannot determine the phase difference correctly. At high frequencies, another mechanism of localization is more suitable. Since the head of the listener represents an obstacle for high-frequency waves, they bend and diffract around the head. That causes small sound intensity differences between the two ears, from which the human auditory system can determine the angle from which the sound is coming. This combination of localization mechanisms at low and high frequencies is called the duplex theory [2]. It has several flaws, however; for example, a single non-periodic click can be easily localised using only information about interaural time delays, regardless of the frequencies it contains. Furthermore, localization is also possible when the listener is using only one ear (monaural listening).

4.1 Physical model of pinna

Since the duplex theory does not provide a satisfactory explanation of sound localization, the physical properties of the human body must also be taken into consideration. The outer ear, i.e. the pinna, plays a major role in localization.
Its function is to transform the incoming sound into a signal which contains information about the direction from which the sound is coming [9]. That information is then extracted from the signal in the neural system. Incoming sound is reflected at different parts of the pinna; because of that, the eardrums detect not only the original signal but also reflected signals, which arrive at the eardrum with delays τₙ. We assume that the incoming signal arrives at the eardrum via M different paths due to reflections from different parts of the pinna. Each reflected (delayed) signal is attenuated
after the reflection occurs. We denote the attenuation coefficients by aₙ < 1 and represent the original sound signal by the function F. We will write the equations in the notation of the Laplace transformation. The signal which arrives at the eardrum is [9]

H(s) = F(s) T(s),  (6)

where

T(s) = Σ_{n=1}^{M} aₙ e^{−sτₙ}  (7)

is the transfer function of the pinna. The sound signal F is transformed into the signal H because of the time delays, which depend on the direction from which the signal is coming. The transfer function T(s) has an inverse, and according to research in the field of physiology, such an inverse could be performed in the neural system. Batteau [9] also proposed a so-called attention transformation function which could theoretically also be performed by the neural system in the inner ear:

R(s) = Σ_{n=0}^{M} aₙ e^{−s(τ_M − τₙ)}.  (8)

As a result of the above transformation, the following signal is produced:

P(s) = R(s) H(s) = F(s) e^{−sτ_M} [ Σ_{n=0}^{M} aₙ² + Σ_{k≠j} aₖ aⱼ e^{−s(τₖ − τⱼ)} ].  (9)

The above result indicates that the processed signal has a different amplitude and a slightly altered spectrum in comparison to the original signal [9]. The received signal is also delayed by the maximum delay time τ_M. According to Batteau's theory, the brain could be able to determine the direction of the incoming sound via the changes in the spectrum and intensity. It has to be mentioned that not all delays are caused by reflections from the pinna; some are caused by reverberation from other objects, such as walls. But reflections from the environment do not necessarily cause trouble when it comes to localization. It is easy to treat them separately, because the time delays caused by the pinna are short (2–300 µs), while delays due to reverberation in the environment are significantly longer (usually more than 10 ms). That does not mean, however, that reflections from the environment do not play a role in localization.
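In the time domain, the transfer function (7) simply adds attenuated, delayed copies of the signal to the direct path (taken here as an extra term with a₀ = 1, τ₀ = 0). A discrete-time sketch follows; the sample rate, delays and attenuation coefficients are invented for illustration, not measured pinna values:

```python
# Sample rate and illustrative pinna parameters (invented for this sketch):
FS = 96000                 # samples per second
DELAYS = [50e-6, 150e-6]   # reflection delays tau_n in seconds
GAINS = [0.6, 0.3]         # attenuation coefficients a_n < 1

def pinna_filter(x):
    """y[t] = x[t] + sum_n a_n * x[t - tau_n], cf. Eqs. (6)-(7)."""
    taps = [(round(tau * FS), a) for tau, a in zip(DELAYS, GAINS)]
    y = list(x)
    for d, a in taps:
        for t in range(d, len(x)):
            y[t] += a * x[t - d]
    return y

# Impulse response: a unit impulse produces a spike at each delay,
# scaled by the corresponding attenuation coefficient.
impulse = [1.0] + [0.0] * 31
print(pinna_filter(impulse))
```

Because the delay pattern changes with the direction of incidence, the impulse response (and hence the received spectrum) carries the directional information Batteau's model relies on.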
Because of the small dimensions of the pinna, sounds with high frequencies cannot be localised only by analyzing the short delays caused by reflections from the pinna.

4.2 Psychoacoustical investigation

We have shown that information about localization can be extracted from interaural time delays, interaural intensity differences and from the delays in the pinna, but nothing has been said yet about our actual ability to localize sounds. The following psychoacoustic experiment was carried out in order to find out how good we are at determining the directions from which sounds are coming [1]. Researchers decided to look for minimum audible angles, i.e. the angles at which listeners could no longer determine whether two sounds with the same frequency were coming from the same direction or not. Two loudspeakers were used in the experiment: the first loudspeaker was placed in front of the listener's head and the second one was placed at the same distance from the center of the head, but at a different angle. The researchers then moved the second speaker around and the listener had to determine whether the two tones were coming from the same direction or not. The signals used in the experiment were pure tones and the experiment was repeated at different frequencies. Minimum audible angles were determined for each frequency and the results are shown in Fig. 7.
Figure 7: Minimum audible angles at different frequencies [1]. White dots are measurements taken when the first loudspeaker was placed in front of the listener; black dots represent measurements when the first loudspeaker was at a 30° angle.

The minimum audible angle is lowest at low frequencies, i.e. at wavelengths which are much bigger than the dimensions of the human head [1]. In that case the information needed for localization is most likely provided to the brain by interaural time differences. In 1956, Klumpp and Eady found that the smallest interaural time difference which can be detected by a human is approximately 10 µs [1]. Since we know the diameter of the human head and the speed of sound in air, we can estimate the minimum detectable change in direction, which equals roughly 1°. This value matches the experimental results at low frequencies shown in Fig. 7 and indicates that the auditory system really processes information about interaural delays. However, localization is also possible when the listener can only use one ear (monaural listening), as shown in Fig. 8, even though both time-difference and phase-difference cues are unavailable in that case. We can therefore conclude that the reflections of sound in the pinna, described in section 4.1, really play a major role in localization.

Figure 8: Histograms show the results of an experiment where listeners had to determine the direction from which the sound was coming [9]. When listeners were only using one ear [Right], the results were visibly worse than in the case of binaural listening [Left].

4.3 Experimentation using headphones

So far, we have only discussed experiments which were carried out using loudspeakers. When researchers tried to reproduce external sounds with headphones, the listeners often reported that the voices appeared inside, instead of outside, their heads [2]. In order to investigate what caused the brain
to determine that sounds originated from inside the head, careful calibration of the headphones had to be performed. The goal was to produce a signal at the eardrum with the headphones which would have the same amplitude and phase as the signal at the eardrum due to a tone played from a loudspeaker set at a certain angle. Experiments were carried out in which the researchers reproduced signals in headphones as if they were coming from different directions. The same sounds were then also played through loudspeakers placed at the positions corresponding to those simulated by the headphones. Amplitudes and phases had to be corrected for that purpose, and the calibration was confirmed as successful once listeners could no longer distinguish between loudspeaker and headphone signals [10]. The sound coming from a loudspeaker is defined as the real signal and the one that comes from the headphones as the virtual signal. After it was ensured that the headphones could satisfactorily reproduce the signal coming from the loudspeakers, researchers began to investigate which property of the sound caused externalization, i.e. the ability of the auditory system to determine that a sound does not originate inside the head. They focused on both main aspects of sound localization described at the beginning of this chapter: time and level differences [7]. Here we will only focus on one specific experiment, which investigated the effect of interaural time differences (ITD) on externalization. The loudspeaker was positioned at an angle of 37° with respect to the direction the listener was facing. The headphones were calibrated to resemble the sound coming from the same angle. If the frequency of the sound is low, we can calculate the ITD from the low-frequency limit of diffraction around a sphere [7]:

ITD = (3r/c) sin θ.  (10)

In the above equation, c is the speed of sound in air and r is the radius of the head (which is approximated by a sphere).
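Equation (10) can be checked directly. The sketch below evaluates the ITD for the head radius and angle used in the experiment (8 cm, 37°), and also inverts the relation near θ = 0 to recover the roughly 1° minimum audible angle implied by the 10 µs time resolution of Klumpp and Eady mentioned in the previous section:

```python
import math

C = 343.0  # speed of sound in air, m/s

def itd(r, theta_deg):
    """Low-frequency interaural time difference of Eq. (10) for a
    spherical head of radius r (m) and source azimuth theta (degrees)."""
    return (3 * r / C) * math.sin(math.radians(theta_deg))

R = 0.08  # head radius of 8 cm, as used in the experiment

# Source at 37 degrees: an ITD of roughly 420 microseconds,
# the value quoted in the text.
print(itd(R, 37) * 1e6)

# Inverting near theta = 0 (where sin(theta) ~ theta): a 10 us time
# resolution corresponds to about 0.8 degrees, i.e. roughly the 1 degree
# minimum audible angle estimated earlier.
dt_min = 10e-6
print(math.degrees(C * dt_min / (3 * R)))
```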
For a head with a radius of 8 cm we get ITD ≈ 420 µs, since θ = 37°. But after calibration on the test subjects it turned out that the optimal interaural time difference was ITD ≈ 375 µs. The experimental sessions were divided into three stages: calibration, training and trials. During the training phase, listeners were exposed to a sequence of four signals: real – virtual – real – virtual. The purpose of this stage of the experiment was to learn to distinguish real signals from virtual signals; the listeners could repeat the sequence as many times as they wanted in order to learn and remember the difference. The next stage consisted of 20 trials in which the listeners had to decide whether the signal they were exposed to was real or virtual. The experiment was repeated at different ITDs and at different frequencies. Figure 9 shows the results for the experiment with sounds with fundamental frequency ν = 125 Hz.

5 Conclusion

We have presented some basic experiments which were at one point or another crucial for progress in the field of psychophysics. Both of the physical models included in this paper were introduced in the 1970s and both have since been developed further by different authors. In one of the recent articles [7], the authors report that they have successfully implemented a cochlea model which is still based on the same equations we presented in section 3.1, but which also encompasses the curvature of the cochlea in addition to the basic model. It turns out that a consequence of the curvature of the cochlea (see Fig. 2) is an increased perception of low frequencies [7]. Psychoacoustic experiments have likewise been refined up to the present day, and psychoacoustical models are also useful in the commercial sector: psychophysical models are used in software such as lossy signal converters (for example MP3 encoders), in the planning and design of noise reduction systems and high-end audio systems, as well as in the loudspeaker and headphone industries.
Figure 9: Fractions of correct identifications at different ITDs [10]. At ITD = 525 µs and ITD = 300 µs, listeners almost never identified real sources as virtual sources or vice versa. In contrast, at ITDs close to the optimal time difference, test subjects often made a wrong decision, which means that the virtual signal was such a good reproduction of the real signal that the two were hard to distinguish. It is interesting that for ITD = 525 µs all listeners reported that the virtual signal was coming from the right side of the loudspeaker. On the other hand, at ITD = 300 µs, all listeners claimed that the sound originated from inside their heads.

References

[1] Plack, C. J., The Sense of Hearing (Lawrence Erlbaum Associates, Mahwah, 2005).
[2] Rossing, T., Springer Handbook of Acoustics (Springer Science+Business Media, New York, 2007).
[3] Moore, B. C. J., An Introduction to the Psychology of Hearing (Emerald Group Publishing, Bingley, 2012).
[4] (Cited on 5 January 2014).
[5] M. B. Lesser and D. A. Berkley, J. Fluid Mech. 51 (1972).
[6] (Cited on 5 January 2014).
[7] H. Chai, D. Manoussaki and R. Chadwick, Phys. Rev. Lett. 96 (2006).
[8] Morrison, F. A., An Introduction to Fluid Mechanics (Cambridge University Press, New York, 2013).
[9] D. W. Batteau, Proc. Roy. Soc. B 168 (1976).
[10] W. M. Hartmann and A. Wittenberg, J. Acoust. Soc. Am. 99 (1996).
More informationAcoustics Research Institute
Austrian Academy of Sciences Acoustics Research Institute Modeling Modelingof ofauditory AuditoryPerception Perception Bernhard BernhardLaback Labackand andpiotr PiotrMajdak Majdak http://www.kfs.oeaw.ac.at
More informationLinguistic Phonetics Fall 2005
MIT OpenCourseWare http://ocw.mit.edu 24.963 Linguistic Phonetics Fall 2005 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. 24.963 Linguistic Phonetics
More informationAuditory System & Hearing
Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different
More informationSignals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor
Signals, systems, acoustics and the ear Week 5 The peripheral auditory system: The ear as a signal processor Think of this set of organs 2 as a collection of systems, transforming sounds to be sent to
More informationOutline. The ear and perception of sound (Psychoacoustics) A.1 Outer Ear Amplifies Sound. Introduction
The ear and perception of sound (Psychoacoustics) 1 Outline A. Structure of the Ear B. Perception of Pitch C. Perception of Loudness D. Timbre (quality of sound) E. References Updated 01Aug0 Introduction
More informationSUBJECT: Physics TEACHER: Mr. S. Campbell DATE: 15/1/2017 GRADE: DURATION: 1 wk GENERAL TOPIC: The Physics Of Hearing
SUBJECT: Physics TEACHER: Mr. S. Campbell DATE: 15/1/2017 GRADE: 12-13 DURATION: 1 wk GENERAL TOPIC: The Physics Of Hearing The Physics Of Hearing On completion of this section, you should be able to:
More informationAcoustics, signals & systems for audiology. Psychoacoustics of hearing impairment
Acoustics, signals & systems for audiology Psychoacoustics of hearing impairment Three main types of hearing impairment Conductive Sound is not properly transmitted from the outer to the inner ear Sensorineural
More information! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics
2/14/18 Can hear whistle? Lecture 5 Psychoacoustics Based on slides 2009--2018 DeHon, Koditschek Additional Material 2014 Farmer 1 2 There are sounds we cannot hear Depends on frequency Where are we on
More informationLinguistic Phonetics. Basic Audition. Diagram of the inner ear removed due to copyright restrictions.
24.963 Linguistic Phonetics Basic Audition Diagram of the inner ear removed due to copyright restrictions. 1 Reading: Keating 1985 24.963 also read Flemming 2001 Assignment 1 - basic acoustics. Due 9/22.
More informationHearing: Physiology and Psychoacoustics
9 Hearing: Physiology and Psychoacoustics Click Chapter to edit 9 Hearing: Master title Physiology style and Psychoacoustics The Function of Hearing What Is Sound? Basic Structure of the Mammalian Auditory
More informationIntro to Audition & Hearing
Intro to Audition & Hearing Lecture 16 Chapter 9, part II Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Fall 2017 1 Sine wave: one of the simplest kinds of sounds: sound for which pressure
More informationHearing in the Environment
10 Hearing in the Environment Click Chapter to edit 10 Master Hearing title in the style Environment Sound Localization Complex Sounds Auditory Scene Analysis Continuity and Restoration Effects Auditory
More informationHearing. and other senses
Hearing and other senses Sound Sound: sensed variations in air pressure Frequency: number of peaks that pass a point per second (Hz) Pitch 2 Some Sound and Hearing Links Useful (and moderately entertaining)
More informationChapter 3. Sounds, Signals, and Studio Acoustics
Chapter 3 Sounds, Signals, and Studio Acoustics Sound Waves Compression/Rarefaction: speaker cone Sound travels 1130 feet per second Sound waves hit receiver Sound waves tend to spread out as they travel
More informationHearing. PSYCHOLOGY (8th Edition, in Modules) David Myers. Module 14. Hearing. Hearing
PSYCHOLOGY (8th Edition, in Modules) David Myers PowerPoint Slides Aneeq Ahmad Henderson State University Worth Publishers, 2007 1 Hearing Module 14 2 Hearing Hearing The Stimulus Input: Sound Waves The
More informationLecture 9: Sound Localization
Lecture 9: Sound Localization Localization refers to the process of using the information about a sound which you get from your ears, to work out where the sound came from (above, below, in front, behind,
More informationGanglion Cells Blind Spot Cornea Pupil Visual Area of the Bipolar Cells Thalamus Rods and Cones Lens Visual cortex of the occipital lobe
How We See How We See Cornea Ganglion Cells whose axons form the optic nerve Blind Spot the exit point at the back of the retina Pupil which is controlled by the iris Bipolar Cells Visual Area of the Thalamus
More informationAuditory Physiology PSY 310 Greg Francis. Lecture 29. Hearing
Auditory Physiology PSY 310 Greg Francis Lecture 29 A dangerous device. Hearing The sound stimulus is changes in pressure The simplest sounds vary in: Frequency: Hertz, cycles per second. How fast the
More informationPSY 310: Sensory and Perceptual Processes 1
Auditory Physiology PSY 310 Greg Francis Lecture 29 A dangerous device. Hearing The sound stimulus is changes in pressure The simplest sounds vary in: Frequency: Hertz, cycles per second. How fast the
More informationHearing. Juan P Bello
Hearing Juan P Bello The human ear The human ear Outer Ear The human ear Middle Ear The human ear Inner Ear The cochlea (1) It separates sound into its various components If uncoiled it becomes a tapering
More informationComment by Delgutte and Anna. A. Dreyer (Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA)
Comments Comment by Delgutte and Anna. A. Dreyer (Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA) Is phase locking to transposed stimuli as good as phase locking to low-frequency
More informationHearing and Balance 1
Hearing and Balance 1 Slide 3 Sound is produced by vibration of an object which produces alternating waves of pressure and rarefaction, for example this tuning fork. Slide 4 Two characteristics of sound
More informationMechanical Properties of the Cochlea. Reading: Yost Ch. 7
Mechanical Properties of the Cochlea CF Reading: Yost Ch. 7 The Cochlea Inner ear contains auditory and vestibular sensory organs. Cochlea is a coiled tri-partite tube about 35 mm long. Basilar membrane,
More informationBCS 221: Auditory Perception BCS 521 & PSY 221
BCS 221: Auditory Perception BCS 521 & PSY 221 Time: MW 10:25 11:40 AM Recitation: F 10:25 11:25 AM Room: Hutchinson 473 Lecturer: Dr. Kevin Davis Office: 303E Meliora Hall Office hours: M 1 3 PM kevin_davis@urmc.rochester.edu
More informationPHYS 1240 Sound and Music Professor John Price. Cell Phones off Laptops closed Clickers on Transporter energized
PHYS 1240 Sound and Music Professor John Price Cell Phones off Laptops closed Clickers on Transporter energized The Ear and Hearing Thanks to Jed Whittaker for many of these slides Ear anatomy substructures
More informationThe Ear. The ear can be divided into three major parts: the outer ear, the middle ear and the inner ear.
The Ear The ear can be divided into three major parts: the outer ear, the middle ear and the inner ear. The Ear There are three components of the outer ear: Pinna: the fleshy outer part of the ear which
More informationL2: Speech production and perception Anatomy of the speech organs Models of speech production Anatomy of the ear Auditory psychophysics
L2: Speech production and perception Anatomy of the speech organs Models of speech production Anatomy of the ear Auditory psychophysics Introduction to Speech Processing Ricardo Gutierrez-Osuna CSE@TAMU
More informationHearing Lectures. Acoustics of Speech and Hearing. Subjective/Objective (recap) Loudness Overview. Sinusoids through ear. Facts about Loudness
Hearing Lectures coustics of Speech and Hearing Week 2-8 Hearing 1: Perception of Intensity 1. Loudness of sinusoids mainly (see Web tutorial for more) 2. Pitch of sinusoids mainly (see Web tutorial for
More informationMasker-signal relationships and sound level
Chapter 6: Masking Masking Masking: a process in which the threshold of one sound (signal) is raised by the presentation of another sound (masker). Masking represents the difference in decibels (db) between
More informationSound and Hearing. Decibels. Frequency Coding & Localization 1. Everything is vibration. The universe is made of waves.
Frequency Coding & Localization 1 Sound and Hearing Everything is vibration The universe is made of waves db = 2log(P1/Po) P1 = amplitude of the sound wave Po = reference pressure =.2 dynes/cm 2 Decibels
More informationBefore we talk about the auditory system we will talk about the sound and waves
The Auditory System PHYSIO: #3 DR.LOAI ZAGOUL 24/3/2014 Refer to the slides for some photos. Before we talk about the auditory system we will talk about the sound and waves All waves have basic characteristics:
More informationSPHSC 462 HEARING DEVELOPMENT. Overview Review of Hearing Science Introduction
SPHSC 462 HEARING DEVELOPMENT Overview Review of Hearing Science Introduction 1 Overview of course and requirements Lecture/discussion; lecture notes on website http://faculty.washington.edu/lawerner/sphsc462/
More informationPsychoacoustical Models WS 2016/17
Psychoacoustical Models WS 2016/17 related lectures: Applied and Virtual Acoustics (Winter Term) Advanced Psychoacoustics (Summer Term) Sound Perception 2 Frequency and Level Range of Human Hearing Source:
More informationBinaural processing of complex stimuli
Binaural processing of complex stimuli Outline for today Binaural detection experiments and models Speech as an important waveform Experiments on understanding speech in complex environments (Cocktail
More informationThis will be accomplished using maximum likelihood estimation based on interaural level
Chapter 1 Problem background 1.1 Overview of the proposed work The proposed research consists of the construction and demonstration of a computational model of human spatial hearing, including long term
More informationSignals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds
Signals, systems, acoustics and the ear Week 1 Laboratory session: Measuring thresholds What s the most commonly used piece of electronic equipment in the audiological clinic? The Audiometer And what is
More informationAn Auditory System Modeling in Sound Source Localization
An Auditory System Modeling in Sound Source Localization Yul Young Park The University of Texas at Austin EE381K Multidimensional Signal Processing May 18, 2005 Abstract Sound localization of the auditory
More informationTopic 4. Pitch & Frequency. (Some slides are adapted from Zhiyao Duan s course slides on Computer Audition and Its Applications in Music)
Topic 4 Pitch & Frequency (Some slides are adapted from Zhiyao Duan s course slides on Computer Audition and Its Applications in Music) A musical interlude KOMBU This solo by Kaigal-ool of Huun-Huur-Tu
More informationID# Exam 2 PS 325, Fall 2009
ID# Exam 2 PS 325, Fall 2009 As always, the Skidmore Honor Code is in effect. At the end of the exam, I ll have you write and sign something to attest to that fact. The exam should contain no surprises,
More informationISLAMABAD ACADEMY PHYSICS FOR 10TH CLASS (UNIT # 13)
PHYSICS FOR 10TH CLASS (UNIT # 13) SHORT QUESTIONS How sound is produced? It is produced from a vibrating body which transfers in air from one place to other in the form of compression waves. A medium
More informationThe basic hearing abilities of absolute pitch possessors
PAPER The basic hearing abilities of absolute pitch possessors Waka Fujisaki 1;2;* and Makio Kashino 2; { 1 Graduate School of Humanities and Sciences, Ochanomizu University, 2 1 1 Ootsuka, Bunkyo-ku,
More informationID# Final Exam PS325, Fall 1997
ID# Final Exam PS325, Fall 1997 Good luck on this exam. Answer each question carefully and completely. Keep your eyes foveated on your own exam, as the Skidmore Honor Code is in effect (as always). Have
More informationReceptors / physiology
Hearing: physiology Receptors / physiology Energy transduction First goal of a sensory/perceptual system? Transduce environmental energy into neural energy (or energy that can be interpreted by perceptual
More informationLoudness. Loudness is not simply sound intensity!
is not simply sound intensity! Sound loudness is a subjective term describing the strength of the ear's perception of a sound. It is intimately related to sound intensity but can by no means be considered
More informationTechnical Discussion HUSHCORE Acoustical Products & Systems
What Is Noise? Noise is unwanted sound which may be hazardous to health, interfere with speech and verbal communications or is otherwise disturbing, irritating or annoying. What Is Sound? Sound is defined
More informationENT 318 Artificial Organs Physiology of Ear
ENT 318 Artificial Organs Physiology of Ear Lecturer: Ahmad Nasrul Norali The Ear The Ear Components of hearing mechanism - Outer Ear - Middle Ear - Inner Ear - Central Auditory Nervous System Major Divisions
More informationVision and Audition. This section concerns the anatomy of two important sensory systems, the visual and the auditory systems.
Vision and Audition Vision and Audition This section concerns the anatomy of two important sensory systems, the visual and the auditory systems. The description of the organization of each begins with
More informationTopics in Linguistic Theory: Laboratory Phonology Spring 2007
MIT OpenCourseWare http://ocw.mit.edu 24.91 Topics in Linguistic Theory: Laboratory Phonology Spring 27 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
More informationSensation and Perception. Chapter 6
Sensation and Perception Chapter 6 1 Sensation & Perception How do we construct our representations of the external world? Text To represent the world, we must detect physical energy (a stimulus) from
More information17.4 Sound and Hearing
You can identify sounds without seeing them because sound waves carry information to your ears. People who work in places where sound is very loud need to protect their hearing. Properties of Sound Waves
More informationThresholds for different mammals
Loudness Thresholds for different mammals 8 7 What s the first thing you d want to know? threshold (db SPL) 6 5 4 3 2 1 hum an poodle m ouse threshold Note bowl shape -1 1 1 1 1 frequency (Hz) Sivian &
More informationPERIPHERAL AND CENTRAL AUDITORY ASSESSMENT
PERIPHERAL AND CENTRAL AUDITORY ASSESSMENT Ravi Pachigolla, MD Faculty Advisor: Jeffery T. Vrabec, MD The University of Texas Medical Branch At Galveston Department of Otolaryngology Grand Rounds Presentation
More informationDiscrete Signal Processing
1 Discrete Signal Processing C.M. Liu Perceptual Lab, College of Computer Science National Chiao-Tung University http://www.cs.nctu.edu.tw/~cmliu/courses/dsp/ ( Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw
More informationSound. Audition. Physics of Sound. Properties of sound. Perception of sound works the same way as light.
Sound Audition Perception of sound works the same way as light. Have receptors to convert a physical stimulus to action potentials Action potentials are organized in brain structures You apply some meaning
More informationAudition. Sound. Physics of Sound. Perception of sound works the same way as light.
Audition Sound Perception of sound works the same way as light. Have receptors to convert a physical stimulus to action potentials Action potentials are organized in brain structures You apply some meaning
More informationAuditory Perception: Sense of Sound /785 Spring 2017
Auditory Perception: Sense of Sound 85-385/785 Spring 2017 Professor: Laurie Heller Classroom: Baker Hall 342F (sometimes Cluster 332P) Time: Tuesdays and Thursdays 1:30-2:50 Office hour: Thursday 3:00-4:00,
More informationRequired Slide. Session Objectives
Auditory Physiology Required Slide Session Objectives Auditory System: At the end of this session, students will be able to: 1. Characterize the range of normal human hearing. 2. Understand the components
More informationSensation and Perception. A. Sensation: awareness of simple characteristics B. Perception: making complex interpretations
I. Overview Sensation and Perception A. Sensation: awareness of simple characteristics B. Perception: making complex interpretations C. Top-Down vs Bottom-up Processing D. Psychophysics -- thresholds 1.
More informationEffects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure
Effects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure Brian C. J. Moore Abstract Within the cochlea, broadband sounds like speech and music are filtered into a series of
More informationSOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03
SOLUTIONS Homework #3 Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 Problem 1: a) Where in the cochlea would you say the process of "fourier decomposition" of the incoming
More informationSpatial hearing and sound localization mechanisms in the brain. Henri Pöntynen February 9, 2016
Spatial hearing and sound localization mechanisms in the brain Henri Pöntynen February 9, 2016 Outline Auditory periphery: from acoustics to neural signals - Basilar membrane - Organ of Corti Spatial
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Noise Session 3aNSa: Wind Turbine Noise I 3aNSa5. Can wind turbine sound
More informationPublication VI. c 2007 Audio Engineering Society. Reprinted with permission.
VI Publication VI Hirvonen, T. and Pulkki, V., Predicting Binaural Masking Level Difference and Dichotic Pitch Using Instantaneous ILD Model, AES 30th Int. Conference, 2007. c 2007 Audio Engineering Society.
More informationSounds Good to Me. Engagement. Next Generation Science Standards (NGSS)
Sounds Good to Me Students make a mental model of how frequencies are detected by the cochlear membrane. Using this model, students examine how cochlear implants can be used to help treat deafness. Next
More informationWeek 2 Systems (& a bit more about db)
AUDL Signals & Systems for Speech & Hearing Reminder: signals as waveforms A graph of the instantaneousvalue of amplitude over time x-axis is always time (s, ms, µs) y-axis always a linear instantaneousamplitude
More information9/29/14. Amanda M. Lauer, Dept. of Otolaryngology- HNS. From Signal Detection Theory and Psychophysics, Green & Swets (1966)
Amanda M. Lauer, Dept. of Otolaryngology- HNS From Signal Detection Theory and Psychophysics, Green & Swets (1966) SIGNAL D sensitivity index d =Z hit - Z fa Present Absent RESPONSE Yes HIT FALSE ALARM
More informationAuditory Physiology Richard M. Costanzo, Ph.D.
Auditory Physiology Richard M. Costanzo, Ph.D. OBJECTIVES After studying the material of this lecture, the student should be able to: 1. Describe the morphology and function of the following structures:
More informationSound Localization PSY 310 Greg Francis. Lecture 31. Audition
Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information
More informationThe role of low frequency components in median plane localization
Acoust. Sci. & Tech. 24, 2 (23) PAPER The role of low components in median plane localization Masayuki Morimoto 1;, Motoki Yairi 1, Kazuhiro Iida 2 and Motokuni Itoh 1 1 Environmental Acoustics Laboratory,
More information