Hearing Lecture notes (1): Introductory Hearing


SECOND YEAR COURSE - PERCEPTION

Hearing Lecture notes (1): Introductory Hearing

1. What is hearing for?

(i) To indicate the direction of sound sources (better than the eyes, since hearing is omnidirectional and has no "eye-lids", but with poorer resolution of direction). (ii) To recognise the identity and content of a sound source (such as speech or music or a car). (iii) To give information on the nature of the environment via echoes and reverberation (normal room, cathedral, open field).

2. Waveforms and Frequency Analysis

Sound is a change in the pressure of the air. The waveform of any sound shows how the pressure changes over time. The eardrum moves in response to changes in pressure. Any waveform shape can be produced by adding together sine waves of appropriate frequencies and amplitudes. The amplitudes (and phases) of the sine waves give the spectrum of the sound. The spectrum of a sine wave is a single point at the frequency of the sine wave. The spectrum of white noise is a line covering all frequencies. The cochlea breaks the waveform at the ear down into its component sine waves - frequency analysis. Hair cells in the cochlea respond to these component frequencies. This process of frequency analysis is impaired in sensori-neural hearing loss, and it cannot be compensated for by a conventional hearing aid.

3. Why does the auditory system analyse sound by frequency?

Some animals do not analyse sound by frequency, but simply transmit the pressure waveform at the ear directly. We could do this by having hair cells on the eardrum. But instead we have an elaborate system to analyse sound into its frequency components. We do this because almost all sounds are structured in frequency, so we can detect them, especially in the presence of other sounds, more easily by "looking" at the spectrum than at the waveform. In the six panels below, the left-hand column shows plots of the waveform of a sound - the way that pressure changes over time.
The right-hand column shows the spectrum of the sound - how much of each sine-wave you have to add together in order to make that particular waveform. The upper panel is a sine wave tone with a frequency of 1 Hz. A sine wave has energy at just one frequency, so the spectrum is just one point.
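This point - a tone that is hard to see in the waveform but obvious in the spectrum - can be checked numerically. The sketch below (plain Python; the sample rate, tone frequency and noise level are illustrative choices, and a naive DFT stands in for proper Fourier analysis) buries a 100 Hz tone in noise and recovers it as the largest spectral peak:

```python
import cmath, math, random

def dft_magnitudes(x):
    """Naive discrete Fourier transform (O(N^2), fine for a 200-point sketch):
    returns the magnitude at each analysis frequency up to half the sample rate."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n // 2)]

fs, n = 1000, 200          # sample rate (Hz) and number of samples
f_tone = 100               # frequency of the tone, Hz
random.seed(1)
tone = [math.sin(2 * math.pi * f_tone * t / fs) for t in range(n)]
noise = [random.uniform(-1.5, 1.5) for _ in range(n)]
mixture = [a + b for a, b in zip(tone, noise)]   # the tone is obscured in this waveform

spectrum = dft_magnitudes(mixture)
peak_bin = max(range(1, len(spectrum)), key=lambda k: spectrum[k])
print(peak_bin * fs / n)   # prints 100.0: the tone stands out clearly in the spectrum
```

The waveform samples in `mixture` look like noise, yet the spectrum has a single dominant point at the tone's frequency, which is exactly the advantage of frequency analysis described above.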

[Figure: waveform p = 1.0*sin(2*pi*1*t) plotted against time (t), and its spectrum - a single point of amplitude 1 at a frequency of 1 Hz]

The middle panel is white noise (like the sound of a waterfall). White noise has equal energy at all frequencies, so the spectrum is a horizontal line.

[Figure: noise waveform against time (t), and its flat spectrum against frequency (Hz)]

The lower panel is the sine tone added to the noise. The spectrum of the sum is just the sum of the spectra of the two components. Notice that you can see the tone very easily in the spectrum, but it is completely obscured by the noise in the waveform.

[Figure: noise-plus-tone waveform against time (t), and its spectrum - the flat noise spectrum with the tone's point rising above it]

4. Sine waves

A sine wave has three properties which appear in the basic equation: p(t) = a*sin(2πft + φ)

(i) frequency (f) - measured in Hertz (Hz), cycles per second.

(ii) amplitude (a) - a measure of the pressure change of a sound. It is usually measured in decibels (dB) relative to another sound; the dB scale is a logarithmic scale: if we have two sounds p1 and p2, then p1 is 20*log10(p1/p2) dB greater than p2. Doubling pressure (amplitude) gives an increase of 6 dB: 20*log10(2/1) = 20*0.3 = 6. Amplitude squared is proportional to the energy, or level, or intensity (I) of a sound. The decibel difference between two sounds can also be expressed in terms of intensity changes: 10*log10(I1/I2). Doubling intensity gives an increase of 3 dB (10*0.3). The just noticeable difference (jnd) in level between two sounds is about 1 dB.

(iii) phase (φ) - measured in degrees or radians, indicates the relative time of a wave. The sine wave shown above has an amplitude of 1, a frequency of 1 Hz, and it starts in zero sine phase (φ = 0).

5. Complex periodic sounds

A sound which has more than one (sine-wave) frequency component is a complex sound. A periodic sound is one which repeats itself at regular intervals. A sine wave is a simple periodic sound. Musical instruments and the voice produce complex periodic sounds. They have a spectrum consisting of a series of harmonics. Each harmonic is a sine wave with a frequency that is an integer multiple of the fundamental frequency. For example, the note 'A' played by the oboe to tune the orchestra has a fundamental frequency of 440 Hz, giving harmonics at 440, 880, 1320, 1760, 2200, 2640 Hz, etc. If the oboe played a higher pitch, the fundamental frequency (and so all the harmonic frequencies of the note) would be higher. The period of a complex sound is 1/fundamental frequency (in this case 1/440 = 0.0023 s = 2.3 ms). A different instrument, with a different timbre, playing the same pitch as the oboe would have harmonics at the same frequencies, but the harmonics would have different relative amplitudes.
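The decibel arithmetic in section 4(ii) can be checked in a few lines (a plain Python sketch; the ratios are illustrative):

```python
import math

def db_from_pressure_ratio(p1, p2):
    """Level difference in dB from a pressure (amplitude) ratio: 20*log10(p1/p2)."""
    return 20 * math.log10(p1 / p2)

def db_from_intensity_ratio(i1, i2):
    """Level difference in dB from an intensity (energy) ratio: 10*log10(I1/I2)."""
    return 10 * math.log10(i1 / i2)

print(round(db_from_pressure_ratio(2, 1), 2))    # doubling pressure:  6.02 dB
print(round(db_from_intensity_ratio(2, 1), 2))   # doubling intensity: 3.01 dB
# Intensity is proportional to amplitude squared, so doubling the pressure
# quadruples the intensity, and the two formulas agree:
print(round(db_from_intensity_ratio(4, 1), 2))   # 6.02 dB again
```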
The overall timbre of a natural instrument is partly determined by the relative amplitudes of the harmonics, but the attack of the note is also important. Different harmonics start at different times in different instruments, and the rate at which they start also differs markedly across instruments. Cheap synthesisers cannot imitate the attack, and so they do not make very lifelike sounds. Expensive synthesisers (like Yamaha's Clavinova) store the whole note including the attack, and so sound very realistic. Here is one-tenth of a second of the waveform, and also the spectrum, of a complex periodic sound consisting of the first four harmonics of a fundamental of 100 Hz. Notice that there are 10 cycles of the waveform in 0.1 s, and all the frequency components are integer multiples of 100 Hz.
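Such a complex tone can be sketched in code: below, the first four harmonics of a 100 Hz fundamental are summed (the relative amplitudes are assumed, halving per harmonic; they are not taken from the figure) and the result is checked to repeat once every fundamental period of 10 ms:

```python
import math

f0 = 100.0                          # fundamental frequency, Hz
amps = [1.0, 0.5, 0.25, 0.125]      # assumed relative amplitudes of harmonics 1-4

def complex_tone(t):
    """Pressure at time t (seconds): the sum of the first four harmonics."""
    return sum(a * math.sin(2 * math.pi * n * f0 * t)
               for n, a in enumerate(amps, start=1))

period = 1.0 / f0                   # 0.01 s = 10 ms
print([round(n * f0) for n in range(1, 5)])   # harmonic frequencies: [100, 200, 300, 400]
# the waveform repeats exactly once per fundamental period:
print(abs(complex_tone(0.0042) - complex_tone(0.0042 + period)) < 1e-9)   # True
```

Changing the entries of `amps` changes the waveform shape (the timbre) without changing the period, which is the point made in the next paragraph.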

[Figure: waveform against time (t) and spectrum against frequency (Hz) of the four-harmonic complex - four equally spaced spectral lines]

Here is a sound with the same period, but a different timbre. Notice that the waveform has a different shape, but the same period. The change in timbre is produced by making the higher harmonics lower in amplitude.

[Figure: waveform against time (t) and spectrum against frequency (Hz) - the same four harmonic frequencies, with the higher harmonics reduced in amplitude]

We can also change the shape of the waveform by changing the relative phase of the different frequencies. In this example four components were all in

sine phase; in the next example the odd harmonics are in sine phase and the even harmonics in cosine phase. This change produces very little change in timbre.

[Figure: waveform against time (t) and spectrum against frequency (Hz) of the phase-shifted complex - the same spectrum, but a different waveform shape]

6. Linearity

Most studies of the auditory system have used sine waves. If we know how a system responds to sine waves, then we can predict exactly how it will behave to complex waves (which are made up of sine waves), provided that the system is linear. The output of a linear system to the sum of two inputs is equal to the sum of its outputs to the two inputs separately. Equivalently, if you double the input to a linear system, then you double the output. A linear system can only output frequencies that are present in the input; non-linear systems always add extra frequency components. The filters we describe below are linear. The auditory system is only linear to a first approximation.

7. Filters

A filter lets through some frequencies but not others. A treble control acts as a low-pass filter, letting less of the high frequencies through as you turn the treble down. A bass control acts as a high-pass filter, letting less of the low frequencies through as you turn the bass down. A band-pass filter only lets through frequencies that fall within some range. A slider on a graphic equalizer controls the output level of a band-pass filter. In analysing sound into its frequency components, the ear acts like a set of band-pass filters. We can represent the action of a filter with a diagram, like a spectrum, which shows by how much each frequency is attenuated (or reduced in amplitude) when it passes through the filter.
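The defining superposition property of section 6 can be demonstrated with a toy linear filter and a toy non-linear system (a plain Python sketch with made-up input values):

```python
def smooth(x):
    """A 3-point moving average: a simple linear (low-pass) filter."""
    return [(x[i - 1] + x[i] + x[i + 1]) / 3 for i in range(1, len(x) - 1)]

def square(x):
    """A memoryless non-linear system."""
    return [v * v for v in x]

a = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0]
b = [0.5, -1.0, 2.0, 1.0, -0.5, 1.5]
summed = [p + q for p, q in zip(a, b)]

# Linear system: the output to (a + b) equals the output to a plus the output to b.
lin_ok = all(abs(s - (p + q)) < 1e-12
             for s, p, q in zip(smooth(summed), smooth(a), smooth(b)))
# Non-linear system: superposition fails.
nonlin_ok = all(abs(s - (p + q)) < 1e-12
                for s, p, q in zip(square(summed), square(a), square(b)))
print(lin_ok, nonlin_ok)   # True False
```

The squarer also illustrates the remark about extra frequencies: squaring a sine wave produces a component at twice its frequency, which was not present in the input.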

[Figure: input sound waveform against time (t) and its spectrum; the attenuation curve of a low-pass filter against frequency (Hz); the output sound waveform and spectrum, with the high frequencies attenuated]

8. Resonance

A resonant system acts like a band-pass filter, responding to a narrow range of frequencies. Examples are: a tuning fork, a string of a harp or piano, a swing. Helmholtz was almost right in thinking that the ear consisted of a series of resonators - like a grand piano with the sustaining pedal held down. Here is what happens when a complex sound is passed through a sharply-tuned band-pass filter. Notice that a complex wave goes in, but a sine wave comes out. Each part of the basilar membrane acts like a band-pass filter tuned to a different frequency.
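The "complex wave in, sine wave out" behaviour of a sharply tuned resonator can be sketched with a simple two-pole recursive filter (the sample rate, component frequencies and pole radius are illustrative choices, not a model of the basilar membrane):

```python
import math

fs = 8000                           # sample rate, Hz (illustrative)
f_lo, f_hi = 100.0, 400.0           # the two components of the complex input
x = [math.sin(2 * math.pi * f_lo * n / fs) + math.sin(2 * math.pi * f_hi * n / fs)
     for n in range(4000)]

# Two-pole resonator tuned to f_hi: y[n] = 2r*cos(w0)*y[n-1] - r^2*y[n-2] + x[n]
r = 0.99                            # pole radius: closer to 1 = sharper tuning
w0 = 2 * math.pi * f_hi / fs
y = [0.0, 0.0]
for n in range(2, len(x)):
    y.append(2 * r * math.cos(w0) * y[-1] - r * r * y[-2] + x[n])

def amplitude(sig, f):
    """Amplitude of the component of sig at frequency f (sin/cos correlation)."""
    n0 = len(sig) // 2              # skip the filter's start-up transient
    s = sum(v * math.sin(2 * math.pi * f * (n0 + i) / fs) for i, v in enumerate(sig[n0:]))
    c = sum(v * math.cos(2 * math.pi * f * (n0 + i) / fs) for i, v in enumerate(sig[n0:]))
    return 2 * math.hypot(s, c) / (len(sig) - n0)

ratio = amplitude(y, f_hi) / amplitude(y, f_lo)
print(ratio > 10)   # True: a complex wave went in, but a near-sinusoid at f_hi comes out
```

The input contains both components at equal amplitude; at the output the component at the filter's tuned frequency dominates by more than a factor of ten.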

[Figure: complex input sound waveform against time (t) and its spectrum; a sharply-tuned band-pass filter against frequency (Hz); the output is a sine wave at the filter's centre frequency]

What you should know

You should understand the meaning of all the terms shown in italics. You should also be able to explain all the diagrams in this handout. If you do not understand any of the terms or diagrams, first try asking someone else in the class who you think might. If you still don't understand, then ask me, either in a lecture, after a lecture, or in my office.

Hearing Lecture Notes (2): Ear and Auditory Nerve

1 THE EAR

There are three main parts of the ear: the pinna (or external ear) and meatus, the middle ear, and the cochlea (inner ear).

1.1 Pinna and meatus

The pinna serves different functions in different animals. Those with mobile pinnae (donkey, cat) use them to amplify sound coming from a particular direction, at the expense of other sounds. The human pinna is not mobile, but serves to colour high-frequency sounds by interference between the echoes reflected off its different structures (like the colours of light produced by reflection from an oil slick). Only frequencies with a wavelength comparable to the dimensions of the pinna are influenced by it (> 3 kHz). Different high frequencies are amplified by different amounts depending on the direction of the sound. The brain interprets these changes as direction. The meatus is the tube that links the pinna to the eardrum. It resonates at around 2 kHz, so that frequencies in that region are transmitted more efficiently to the cochlea than others. This frequency region is particularly important in speech.

1.2 Middle ear: tympanic membrane, malleus, incus and stapes

The middle ear transmits the vibrations of the eardrum (tympanic membrane) to the cochlea. It performs two functions. (i) Impedance matching - vibrations in air must be transmitted efficiently into the fluid of the cochlea. If there were no middle ear, most of the sound would simply bounce off the cochlea. The middle ear helps turn a large-amplitude vibration in air into a small-amplitude vibration (of the same energy) in fluid. The large area of the eardrum compared with the small area of the stapes helps to achieve this, together with the lever action of the three middle-ear bones or ossicles (malleus, incus, stapes). (ii) Protection against loud low-frequency sounds - the cochlea is susceptible to damage from intense sounds. The middle ear offers some protection through the stapedius reflex, which tenses muscles that stiffen the vibration of the ossicles, thus reducing the extent to which low-frequency sounds are transmitted. The reflex is triggered by loud sounds; it also reduces the extent of upward spread of masking from intense low-frequency sounds (see hearing lecture 3). Damage to the middle ear causes a Conductive Hearing Loss, which can usually be corrected by a hearing aid. In a conductive hearing loss, absolute thresholds are elevated. These thresholds are measured in an audiological test and shown in an audiogram. Appropriate amplification at different frequencies compensates for the conductive loss.

1.3 Inner ear: cochlea

The snail-shaped cochlea, unwound, is a three-chambered tube. Two of the chambers are separated by the basilar membrane, on which sits the organ of Corti. The tectorial membrane sits on top of the organ of Corti and is fixed

rigidly to the organ of Corti at one end only. Sound produces a travelling wave along the basilar membrane, which is detected by the shearing movement between the tectorial and basilar membranes bending the hairs on top of the inner hair cells that form part of the organ of Corti. Different frequencies of sound give maximum vibration at different places along the basilar membrane. When a low-frequency pure tone stimulates the ear, the whole basilar membrane, up to the point at which the travelling wave dies out, vibrates at the frequency of the tone. The amplitude of the vibration has a very sharp peak. The vibration to high-frequency tones peaks nearer the base of the membrane than does the vibration to low-frequency sounds. The characteristic frequency (CF) of a particular place along the membrane is the frequency that peaks at that point. If more than one tone is present at a time, then their vibrations on the membrane add together (but see the remarks on non-linearity below).

[Figure: envelopes of basilar-membrane vibration against distance along the membrane from base to apex - high frequencies peak near the base, low frequencies near the apex]

More intense tones give a greater vibration than do less intense ones:

[Figure: the same envelopes at high and low intensity - the higher intensity gives a larger vibration at each CF]

A brief click contains energy at virtually all frequencies. Each part of the basilar membrane will resonate to its particular frequency.
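The mapping between place and best frequency is often approximated by Greenwood's place-frequency function; a sketch (the constants are Greenwood's published values for the human cochlea, not figures from this handout):

```python
def greenwood_frequency(x):
    """Greenwood place-frequency map for the human cochlea.
    x: fractional distance along the basilar membrane from apex (0.0) to base (1.0).
    Constants A=165.4, a=2.1, k=0.88 are Greenwood's published fit for humans."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(round(x, 2), round(greenwood_frequency(x)))
# the characteristic frequency rises from roughly 20 Hz at the apex
# to roughly 20 kHz at the base, matching the figure above
```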

[Figure: response of a band-pass filter to a click - a damped oscillation at the filter's CF, plotted against time (t)]

It is a useful approximation to note that each point on the basilar membrane acts like a very sharply tuned band-pass filter. In the normal ear these filters are just as sharply tuned as are individual fibers of the auditory nerve (see below).

Non-linearity

In normal ears the response of the basilar membrane to sound is actually non-linear - there is significant distortion. If you double the input to the basilar membrane, the output less than doubles (saturating non-linearity). If you add a second tone at a different frequency, the response to the first tone can decrease (two-tone suppression). If you play two tones (say 1000 and 1200 Hz), a third tone can appear (at 800 Hz) - the so-called Cubic Difference Tone.

Sensori-neural hearing loss (SNHL)

Sensori-neural hearing loss can be brought about by exposure to loud sounds (particularly impulsive ones like gunshots), by infection, or by antibiotics. It usually arises from loss of outer hair cells. It is likely that outer hair cells act as tiny motors; they feed back energy into the ear at the CF. In ears with a sensori-neural hearing loss, this distortion is reduced or disappears. So, paradoxically, abnormal ears are more nearly linear.

Forms of deafness

There are two major forms of deafness: conductive and sensori-neural.

                      Conductive     Sensori-neural
  Origin              Middle ear     Cochlea (OHCs)
  Thresholds          Raised         Raised
  Filter bandwidths   Normal         Increased*
  Loudness growth     Normal         Increased* (recruitment)

Symptoms marked * are not alleviated by a conventional hearing aid.

Role of outer hair cells

The active feedback of energy by outer hair cells into the basilar membrane is probably responsible for:

(i) the sharp peak in the basilar membrane response - low thresholds and narrow bandwidth; (ii) oto-acoustic emissions (sounds that come out of the ear); (iii) the non-linear response of the basilar membrane vibration. The more linear behaviour of the SNHL basilar membrane is probably the cause of loudness recruitment (abnormally rapid growth of loudness).

2 AUDITORY NERVE

As the hairs of inner hair cells bend, the voltage of the hair cell changes; when the hairs are bent sufficiently far in one direction (but not the other), the voltage changes enough to release neurotransmitter at the synapse between the hair cell and the auditory nerve, and the auditory nerve fires. This direction corresponds to a pressure rarefaction in the air. After firing, an auditory nerve fibre has a refractory period of around 1 ms. Each hair cell has about 10 auditory nerve fibers connected to it. These fibers have different thresholds. Inner hair cells stimulate the afferent auditory nerve; outer hair cells generally do not, but are innervated by the efferent auditory nerve. Efferent activity may influence the mechanical response of the basilar membrane via the outer hair cells.

2.1 Response to single pure tones

As the amplitude of a tone played to the ear increases, so the rate of firing of a nerve fibre at CF increases, up to saturation. Most auditory nerve fibers have high spontaneous rates and saturate rapidly, but there are others (which are harder to record from) that have low spontaneous rates and saturate more slowly. High-spontaneous-rate fibers code intensity changes at low levels, and the low-spontaneous-rate ones code intensity changes at high levels.

[Figure: firing rate against level (dB SPL) - the many high-spontaneous-rate fibers saturate at low levels; the few low-spontaneous-rate fibers saturate more slowly]

2.2 Frequency threshold curves (FTCs)

FTCs plot the minimum intensity of sound needed at a particular frequency to just stimulate an auditory nerve fibre above its spontaneous activity. The high-frequency slopes are very steep (c. 300 dB/octave); the low-frequency slopes generally have a steep tip followed by a flatter base. Damage to the cochlea easily abolishes the tip, and this explains some features of Sensori-Neural Hearing Loss: raised thresholds and reduced frequency selectivity.
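The division of labour between high- and low-spontaneous-rate fibres (section 2.1) can be illustrated with a made-up saturating rate-level function; the logistic shape, thresholds, rates and slope below are invented for illustration, not fitted physiological data:

```python
import math

def firing_rate(level_db, threshold_db, spont, max_rate, slope=0.3):
    """Illustrative sigmoid rate-level function (invented, not fitted to data):
    roughly the spontaneous rate below threshold, saturating growth above it."""
    drive = 1 / (1 + math.exp(-slope * (level_db - threshold_db - 10)))
    return spont + (max_rate - spont) * drive

# high-spontaneous fibre: low threshold, saturates early;
# low-spontaneous fibre: higher threshold, still coding changes at high levels
for level in (0, 20, 40, 60, 80):
    hsr = firing_rate(level, threshold_db=0, spont=60, max_rate=200)
    lsr = firing_rate(level, threshold_db=30, spont=1, max_rate=150)
    print(level, round(hsr), round(lsr))
```

By 40 dB the high-spontaneous fibre has saturated and its rate no longer signals intensity, while the low-spontaneous fibre is still rising, which is the sense in which intensity is carried by different fibres at different levels.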

[Figure: normal and abnormal frequency threshold curves against log frequency - cochlear damage raises the threshold at the tip and broadens the bandwidth around the characteristic frequency]

2.3 Characteristic frequency (CF)

The CF of an auditory nerve fibre is the frequency at which least energy is needed to stimulate it. Different nerve fibers have different CFs and different thresholds. The CF of a fiber is roughly the same as the resonant frequency of the part of the basilar membrane to which it is attached.

2.4 Phase locking

The auditory nerve tends to fire at a particular phase of a stimulating low-frequency tone, so the inter-spike intervals tend to occur at integer multiples of the period of the tone. With high-frequency tones (> 3 kHz) phase locking gets weaker, because the capacitance of inner hair cells prevents their voltage from changing sufficiently rapidly. Please note that the weaker phase locking at high frequencies is NOT due to the refractory period.

2.5 Coding frequency

How does the brain tell, from the pattern of firing in the auditory nerve, what frequencies are present? There are two alternatives: (a) place of maximal excitation - fibres whose CF is close to a stimulating tone's frequency will fire at a higher rate than those remote from it, so the frequency of a tone will be given by the place on the membrane from which the fibers with the highest rate of firing emerge. (b) timing information - fibres with a CF near a stimulating tone's frequency will be phase-locked to the tone, provided it is low in frequency (< 3 kHz), so consistent inter-spike intervals across a band of fibers indicate the frequency of a tone.
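The timing code can be sketched as follows: an idealised phase-locked fibre fires at a fixed phase of the cycle but skips cycles at random, so every inter-spike interval comes out as an integer multiple of the tone's period (no spike-time jitter is modelled; the tone frequency and firing probability are illustrative):

```python
import random

random.seed(0)
f = 250.0                      # low-frequency tone, Hz
period_ms = 1000.0 / f         # 4 ms

# An idealised phase-locked fibre: it may fire once per cycle, always at the
# same phase, but skips cycles at random (refractoriness, probabilistic firing).
spike_times = [cycle * period_ms for cycle in range(200) if random.random() < 0.4]

intervals = [b - a for a, b in zip(spike_times, spike_times[1:])]
multiples = [round(iv / period_ms) for iv in intervals]
# every inter-spike interval is an integer multiple of the tone's period
print(all(abs(iv - m * period_ms) < 1e-9 for iv, m in zip(intervals, multiples)))  # True
```

A histogram of `intervals` would show peaks at 4, 8, 12... ms; pooled across a band of fibres, those consistent intervals signal the tone's frequency.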

[Figure: response to low-frequency tones - nerve spikes against time (t) with inter-spike intervals of one or two periods; response to high-frequency tones (> 5 kHz) - random inter-spike intervals]

2.6 Coding intensity

How does the brain tell, from the pattern of firing in the auditory nerve, what the intensities of the different frequencies present are? The dynamic range of most auditory nerve fibres (high spontaneous rate) is not sufficient to cover the range of hearing (c. 100 dB). Low-spontaneous-rate fibers have a larger dynamic range and provide useful information at high levels. So information about intensity is carried by different fibers at different levels.

2.7 Two-tone suppression

If a tone at a fiber's CF is played just above threshold for that fiber, the fiber will fire. But if a second tone is also played, at a frequency and level in the shaded area of the next diagram, then the firing rate will be reduced. This two-tone suppression demonstrates that the normal auditory system is non-linear: if the system were linear, then the firing rate could only be unchanged or increased by the addition of an extra tone. Two-tone suppression is a characteristic of the normal ear and may be absent in the damaged ear. It is formally similar to lateral inhibition in vision, but it has a very different underlying cause: lateral inhibition in vision is the result of neural mechanisms, whereas two-tone suppression is the result of mechanical processes in the cochlea.

[Figure: the regions of second-tone frequency and level that produce two-tone suppression, flanking a test tone at the characteristic frequency, against log frequency]

2.8 Cochlear implants

Implants can be fitted to patients who are profoundly deaf (> 90 dB loss) and who gain very little benefit from conventional hearing aids. In multi-channel implants, a number of bipolar electrodes are inserted into the cochlea, terminating at different places. Electrical current derived from band-pass-filtered sound can selectively stimulate auditory nerve fibers near each electrode, giving some crude 'place' coding of frequency. The best patients' hearing is good enough for them to understand isolated words over the telephone, but there is a great deal of variation across patients, which may be partly due to the integrity of the auditory nerve and higher pathways. It is increasingly common to fit cochlear implants to profoundly deaf children, so that they gain exposure to spoken language. This move raises ethical issues, as well as social ones for the signing deaf community, some of whom oppose implants.

3 WHAT YOU SHOULD KNOW

You should understand the meaning of all the terms shown in italics. You should also be able to explain all the diagrams in this handout. If you do not understand any of the terms or diagrams, first try asking someone else in the class whom you think might understand. If you still don't, then ask me either in the lecture or afterwards.

Hearing Lecture notes (3): Introductory psychoacoustics

1. BASIC TERMS

There is an important distinction between terms used to describe physical properties and those used to describe psychological properties. Psychological properties are usually influenced by many physical ones.

  Physical            Psychological
  Intensity (level)   Loudness
  Frequency           Pitch
  Spectrum            Timbre

1.1 Absolute threshold

Human listeners are most sensitive to sounds around 2-3 kHz. Absolute threshold at these frequencies for normal young adults is around 0 dB Sound Pressure Level (SPL - level relative to 0.0002 dyne/cm²). Thresholds increase to about 50 dB SPL at 100 Hz and 10 dB SPL at 10 kHz. A normal young adult's absolute threshold for a pure tone defines 0 dB Hearing Level (HL) at that frequency. An audiogram measures an individual's threshold at different frequencies relative to 0 dB HL. Normal ageing progressively increases thresholds at high frequencies (presbyacusis). A noisy environment will lead to a more rapid hearing loss (a 40 dB loss at 4 kHz for a factory worker at age 35, compared with 20 dB for an office worker). The term Sensation Level (SL) gives the number of dB by which a sound is above its absolute threshold for a particular individual.

2. FREQUENCY RESOLUTION AND MASKING

Ohm's Acoustic Law states that we can perceive the individual Fourier components of a complex sound. It is only partly true, since the ear has a limited ability to resolve different frequencies. Our ability to separate different frequencies in the ear depends on the sharpness of our auditory filters. The physiology underlying auditory filters is described in the previous Notes. The bandwidth of human auditory filters at different frequencies can be measured psychoacoustically in masking experiments (see below). The older literature refers to the width of an auditory filter at a particular frequency as the Critical Band.
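A widely used modern estimate of the auditory-filter (critical) bandwidth is the equivalent rectangular bandwidth (ERB) formula of Glasberg and Moore (1990); this is an outside reference rather than a formula from this handout, but it reproduces the bandwidth figures quoted later in these notes:

```python
def erb_hz(f_hz):
    """Equivalent rectangular bandwidth of the auditory filter centred at f_hz,
    using the Glasberg & Moore (1990) fit: ERB = 24.7 * (4.37*f/1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (250, 500, 1000, 2000, 5000):
    print(f, round(erb_hz(f)))
# the bandwidth grows with centre frequency - roughly 10-15% of the
# centre frequency at mid frequencies, e.g. about 130 Hz at 1 kHz
```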
Sounds mix together in the ear when they fall within the same Critical Band, but can be separated when they do not. For example (and somewhat oversimplified!), only harmonics that are separated from their neighbours by more than a critical band can be heard out from a mixture; only noise that is within a critical band of a tone contributes to the masking of that tone. A simple demonstration of the bandwidth of noise that contributes to the masking of a tone is the following band-limiting demonstration, which is Demonstration 2 on the ASA "Auditory Demonstrations" CD.

In silence, you can hear all ten 5 dB steps of the 2000 Hz tone. In wide-band noise you can only hear about five, because of masking. As the bandwidth of the noise is decreased to 1000 Hz and then to 250 Hz there is no change, because your auditory bandwidth is narrower than these values. When the bandwidth of the noise is decreased to 10 Hz, you hear more tone steps, because the noise bandwidth is now narrower than the auditory filter, and so less noise gets into the auditory filter to mask the tone.

[Figure: measurement of auditory bandwidth with band-limited noise - a 2000 Hz tone in noise of bandwidth 1000 Hz, 250 Hz, 10 Hz or broadband; only the noise falling within the auditory filter around the tone reaches the detection mechanism]

The masked threshold of a tone is its level when it is just detectable in the presence of some other sound. It will of course vary with the masking sound. The amount of masking is the difference between the masked threshold and the absolute threshold. Generally, individuals with broader

auditory filters (as a result of SNHL) show more masking. In Simultaneous masking the two sounds are presented at the same time. In Forward masking the masking sound is presented just before the test tone; it gives slightly different results from simultaneous masking because of non-linearities in the auditory system.

[Figure: types of masking - in forward masking the Mask precedes the Target; in backward masking the Target precedes the Mask; in simultaneous masking the two overlap in time]

In older studies of masking:
- Mask frequency and level are fixed.
- The threshold level for the Target is measured at different frequencies.

In measuring Psychoacoustic Tuning Curves:
- Target frequency and level are fixed.
- The threshold level for the Mask is measured at different frequencies.

2.1 Psychophysical Tuning Curves

A psychophysical method can be used to generate an analogue of the physiological frequency threshold curve for a single auditory nerve fiber. A narrow-band noise of variable center frequency is the masker, and a fixed-frequency, fixed-level pure tone at about 20 dB HL is the target. The level of masker that just masks the tone is found for different masker frequencies. Compare the following diagram with the FTC in the previous Notes.

[Figure: psychophysical tuning curve - the masker level needed to just mask the target, against masker center frequency, with a sharp tip at the target frequency]

Using these techniques (and other similar ones) we can estimate the shape and bandwidth of human auditory filters at different (target) frequencies. The bandwidth values are shown in the next diagram: at 1 kHz the bandwidth is about 130 Hz; at 5 kHz, about 650 Hz.
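The band-limiting demonstration above can be sketched with an idealised rectangular auditory filter: only the part of the noise band that overlaps the filter contributes masking power (the numbers are illustrative, and real auditory filters are not rectangular):

```python
def noise_power_in_filter(noise_bandwidth_hz, filter_bandwidth_hz, spectrum_level=1.0):
    """Power of a flat noise band (centred on the filter) passed by an idealised
    rectangular auditory filter: only the overlapping band contributes."""
    return spectrum_level * min(noise_bandwidth_hz, filter_bandwidth_hz)

auditory_bw = 130.0   # roughly the auditory-filter bandwidth at 1 kHz
for noise_bw in (1000.0, 250.0, 130.0, 10.0):
    print(noise_bw, noise_power_in_filter(noise_bw, auditory_bw))
# narrowing the noise band has no effect until it becomes narrower than the
# auditory filter; after that, less noise power reaches the detection
# mechanism and the masked threshold of the tone drops
```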

[Figure: auditory-filter bandwidth against center frequency (Hz)]

Psychophysical tuning curves measured in people with SNHL often show increased auditory bandwidths at those frequencies where they have a hearing loss.

Excitation pattern

Using the filter shapes and bandwidths derived from masking experiments we can compute the excitation pattern produced by a sound. The excitation pattern shows how much energy comes through each filter in a bank of auditory filters. It is analogous to the pattern of vibration on the basilar membrane. For a 1000 Hz pure tone the excitation patterns for a normal and for a SNHL listener look like this:

[Figure: excitation patterns of a 1000 Hz tone against the center frequency of the auditory filter - the SNHL pattern is broader and flatter than the normal one]

The excitation pattern of a complex tone is simply the sum of the patterns of the sine waves that make up the complex tone (since the model is a linear one). We can hear out a tone at a particular frequency in a mixture if there is a clear peak in the excitation pattern at that frequency. Since people suffering from SNHL have broader auditory filters, their excitation patterns do not have such clear peaks. Sounds mask each other more, and so such listeners have difficulty hearing sounds (such as speech) in noise.

3. NON-LINEARITIES

To a first approximation the cochlea acts like a row of overlapping linear band-pass filters. But there is clear evidence that the cochlea is in fact inherently non-linear (i.e. its non-linearity is not just a result of overloading it at high signal levels). In a non-linear system the output to (a+b) is not the same as the output to (a) plus the output to (b).

Combination tones

If two tones at frequencies f1 and f2 are played to the same ear simultaneously, a third tone is heard at a frequency of (2f1 - f2), provided that f1

and f2 are close in frequency (f2/f1 < 1.2) and at similar levels. Combination tones are often absent in SNHL.

Two-tone suppression

[Figure: forward masking with a 1000 Hz mask (a) and a 1000 Hz target (c); adding a 900 Hz suppressor (b) to the mask produces less masking of the target]

In single auditory nerve recordings, the response to a just supra-threshold tone at CF can be reduced by a second tone, even though that second tone would itself have increased the nerve's firing rate. A similar effect is found in forward masking: the forward masking of tone a on tone c can be reduced if a is accompanied by a third tone b at a different frequency, even though b has no effect on c on its own. Two-tone suppression is often absent in SNHL.

What you should know

You should understand: what an auditory filter is and how it is measured; what an excitation pattern is and how it changes for those having a SNHL. You should know the evidence for non-linearities in human hearing.

Hearing Lecture notes (4): Pitch Perception

Definition: Pitch is the 'attribute of auditory sensation in terms of which sounds may be ordered on a musical scale'.

1. PURE TONES

The pitch of pure tones is influenced mainly by their frequency, but also by intensity: high-frequency pure tones go flat when played loud. The pitch of pure tones is probably coded by a combination of place and timing mechanisms. Place mechanisms can explain diplacusis (the same tone giving different pitches in the two ears) more easily than timing mechanisms can. But timing theories based on phase-locked neural discharge appear to be needed in order to explain our ability to distinguish the frequencies of very short duration tones (whose place representation would be very blurred). Timing theories could be the whole story for musical pitch, since it deteriorates at high frequencies where phase locking is weak. (The highest note on the piano is around 4 kHz; higher notes lose their sense of musical pitch.) For very high frequency tones (5-20 kHz) you can tell crudely which of two is the higher in frequency, but not what musical note is being played.

2. COMPLEX TONES

Structure. Almost all sounds that give a sensation of pitch are periodic. Their spectrum consists of harmonics that are integer multiples of the fundamental. The pitch of a complex periodic tone is close to the pitch of a sine wave at the fundamental. Helmholtz claimed that the pitch is heard at the fundamental because the fundamental frequency gives the lowest-frequency peak on the basilar membrane.

[Figure: waveform with period 1/200 s = 5 ms against time (t), and spectrum against frequency (Hz) with the fundamental at 200 Hz and harmonic spacing of 200 Hz]

2.1 Missing fundamental

Seebeck (and later Schouten) showed that complex periodic sounds with no energy at the fundamental may still give a clear pitch sensation at the fundamental (cf. telephone speech - the telephone acts as a high-pass filter, removing energy below about 300 Hz).

[Figure: waveform of the complex with the fundamental removed - the period is still 1/200 s = 5 ms - and its spectrum, with harmonic spacing of 200 Hz but no energy at 200 Hz]

2.2 Helmholtz's place theory

Helmholtz suggested that the ear reintroduces energy at the fundamental by a process of distortion that produces energy at frequencies corresponding to the difference between two components physically present (i.e. at the harmonic spacing). Any pair of adjacent harmonics would generate energy at the fundamental. Helmholtz's explanation is wrong because: (i) a pitch at the fundamental is still heard in low-pass filtered masking noise that heavily masks the fundamental; (ii) a complex sound consisting of inharmonic frequencies (e.g. 870, 1070, 1270 Hz) gives a pitch that is slightly higher than the common difference of 200 Hz; (iii) the distortion only occurs at high intensities, but low intensities still give the pitch.

2.3 Schouten's timing theory

Schouten proposed that the brain times the intervals between beats of the unresolved (see next diagram) harmonics of a complex sound, in order to find the pitch. Schouten's theory is wrong because: (i) pitch is determined more by the resolved than by the unresolved harmonics; (ii) you can still hear a pitch corresponding to the fundamental when two consecutive frequency components go to opposite ears. The following diagram shows the excitation pattern that would be produced on the basilar membrane separately by individual harmonics of a 200 Hz fundamental. Notice that the excitation patterns of the higher-numbered harmonics are closer together than those of the low-numbered harmonics. This is because the filters have a bandwidth that is roughly a tenth of their center frequency (and so is constant on a log scale), whereas harmonics are equally spaced in frequency on a linear scale. More harmonics then get into a high-frequency filter than into a low-frequency one. The low-numbered harmonics are resolved by the basilar membrane (giving roughly sinusoidal output in their filters); but the high-numbered harmonics are not resolved.
They add together in their filters to give a complex vibration which shows beats at the fundamental frequency.
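The beating of unresolved harmonics follows from simple trigonometry. As a sketch (not from the notes), three unresolved harmonics of 200 Hz passing through one auditory filter sum to a carrier at the filter's centre frequency multiplied by an envelope that repeats at the fundamental rate:

```python
# cos(2*pi*1400t) + cos(2*pi*1600t) + cos(2*pi*1800t)
#   = (1 + 2*cos(2*pi*200t)) * cos(2*pi*1600t)
# i.e. a 1600 Hz carrier with an envelope repeating every 1/200 s = 5 ms.
import numpy as np

t = np.arange(0, 0.02, 1/48000)                      # 20 ms of signal
total = sum(np.cos(2*np.pi*f*t) for f in (1400, 1600, 1800))
carrier = np.cos(2*np.pi*1600*t)
envelope = 1 + 2*np.cos(2*np.pi*200*t)               # period = 5 ms

assert np.allclose(total, envelope * carrier)        # beats at the fundamental
```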

[Figure: excitation patterns of individual harmonics along the basilar membrane, plotted from base (high frequency) to apex (low frequency) on a log-frequency axis; the low-numbered harmonics are resolved, the high-numbered ones unresolved. Output of a 1600 Hz filter: a complex wave beating with period 1/200 s = 5 ms. Output of a 200 Hz filter: a sinusoid with period 1/200 s = 5 ms.]

2.4. Pattern recognition theories
Goldstein's theory states that pitch is determined by a pattern-recognition process on the resolved harmonics from both ears. The brain finds the best-fitting harmonic series to the resolved frequencies, and takes its fundamental as the pitch. Goldstein's theory accounts well for most of the data, but there is also a weak pitch sensation from periodic sounds which do not contain any resolvable harmonics, or from aperiodic sounds that have a regular envelope (such as amplitude-modulated noise). A theory such as Schouten's may be needed in addition to Goldstein's in order to account for such effects. Evidence for there being two separate mechanisms for resolved and unresolved harmonics: pitch discrimination and musical pitch labelling (e.g. A#) are much worse for sounds consisting of only unresolved harmonics; and comparison of pitches between two sounds, one with resolved and the other with unresolved harmonics, is worse than comparison between two sounds both with unresolved harmonics.

3. WHAT YOU SHOULD KNOW
You should know the evidence for and against the three different theories of pitch perception for complex tones, and the difference between place and timing mechanisms for the pitch of pure tones.
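The best-fitting-harmonic-series idea in section 2.4 can be sketched as a toy computation (my own sketch, not Goldstein's actual maximum-likelihood estimator): given the resolved component frequencies, search for the fundamental whose harmonic series fits them best.

```python
# Toy pattern-recognition pitch estimator: grid-search candidate
# fundamentals and score each by the squared mismatch between the input
# frequencies and the nearest harmonics of that candidate.
import numpy as np

def best_fundamental(freqs, lo=100.0, hi=400.0, step=0.1):
    freqs = np.asarray(freqs, dtype=float)
    candidates = np.arange(lo, hi, step)
    def err(f0):
        n = np.maximum(np.round(freqs / f0), 1)   # nearest harmonic numbers
        return float(np.sum((freqs - n * f0) ** 2))
    errors = np.array([err(f0) for f0 in candidates])
    near_best = errors <= errors.min() + 1e-3
    return float(candidates[near_best].max())     # break subharmonic ties upward

print(best_fundamental([400, 600, 800]))     # ~200 Hz: the missing fundamental
print(best_fundamental([870, 1070, 1270]))   # ~213.6 Hz: shifted above 200 Hz
```

Note that the inharmonic complex from section 2.2 (870, 1070, 1270 Hz) yields a best fit a little above the 200 Hz component spacing, matching the reported pitch shift that defeats Helmholtz's difference-tone account.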

Hearing Lecture Notes (5): Binaural hearing and localization

Possible cues to the localization of a sound: binaural time/intensity differences (inherently ambiguous); pinna effects; reverberation and intensity; head movements.

1. PURE TONES
1.1. Rayleigh's duplex theory (applies only to azimuth, i.e. localization in the horizontal plane)
Low-frequency tones (<1500 Hz) are localised by phase differences: phase locking is present for low-frequency tones (<4 kHz), and Jeffress' cross-correlator gives a possible neural model. The maximum time difference between the ears is about 670 us, one full cycle at 1500 Hz (the upper limit for binaural phase sensitivity). Onset time is different from the ongoing phase difference; onset time differences are important for short sounds.
High (and low) frequency tones are localised by intensity differences: the shadow cast by the head is greater at high frequencies (20 dB at 6 kHz) than at low (3 dB at 500 Hz), i.e. the head acts as a lowpass filter; the auditory nerve is not phase-locked for high-frequency tones (>4 kHz); and phase differences are ambiguous for high-frequency tones (>1500 Hz).

1.2. Time/intensity trade
The time/intensity trade is shown by titrating a phase difference in one direction against an intensity difference in the other direction. It varies markedly with the frequency of the sound. It is not due to a peripheral effect of intensity on nerve latency, since you can get multiple images, and an optimally traded stimulus is distinguishable from an untraded one.

1.3. Cone of confusion
Binaural cues are inherently ambiguous: the same differences can be produced by a sound anywhere on the surface of an imaginary cone whose tip is at the ear. For pure tones this ambiguity can only be resolved by head movements.

2. COMPLEX TONES
2.1. Timing cues
As with pure tones, onset time cues are important for (particularly short) complex tones. But the use of other timing cues is different, since high-frequency complex tones can change in localization with an ongoing timing difference.
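The maximum interaural time difference quoted in section 1.1 can be checked with back-of-envelope arithmetic (my numbers, not from the notes), using Woodworth's spherical-head approximation and an assumed head radius:

```python
# Woodworth's formula: ITD = (r/c) * (theta + sin(theta)),
# maximal for a source directly to one side (theta = 90 degrees).
import math

r = 0.0875          # assumed head radius in metres
c = 343.0           # speed of sound in air, m/s
theta = math.pi / 2
itd = (r / c) * (theta + math.sin(theta))
print(f"{itd*1e6:.0f} us")    # ~656 us, close to the ~670 us quoted

# One full cycle of a 1500 Hz tone lasts 1/1500 s, so ongoing phase
# differences become ambiguous at around this frequency.
print(f"{1e6/1500:.0f} us")   # 667 us
```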
The next diagram shows the output of an auditory filter at 1600 Hz to a complex tone with a fundamental of 200 Hz. The 1400, 1600 and 1800 Hz components of the complex pass through the filter and add together to give the complex wave shown in the diagram. The complex wave has an envelope that repeats at 200 Hz. Phase differences would not change the localization of any of those tones if they were heard individually, but we can localize such sounds by the relative timing of the envelopes in the two ears (provided that the fundamental frequency (envelope frequency) is less than about 400 Hz).

[Figure: output of the 1600 Hz filter to a complex tone with a 200 Hz fundamental, in the left and right ears; the right ear's envelope leads by 500 us; envelope period 1/200 s = 5 ms.]

2.2. Pinna effects (mainly median plane)
The pinna reflects high-frequency sound (wavelength less than the dimensions of the outer ear) with echoes whose latency varies with direction (Batteau). The reflections interfere with other echoes and with the direct sound to give spectral peaks and notches. The frequencies of these peaks and notches vary with the direction of the sound and are used to indicate direction in the median plane.

2.3. Head movements
Head movements can resolve the ambiguity of front-back confusions.

3. DISTANCE
A distant sound will be quieter and have relatively more reverberation than a close sound. Increasing the proportion of reverberant sound leads to greater apparent distance. Lowpass filtering also leads to greater apparent distance (high frequencies are absorbed more by water vapour in the air, by up to about 3 dB/100 ft). If you know what a sound is, then you can use its actual timbre and loudness to tell its distance.

4. VISION
Seen location easily dominates over heard location when the two are in conflict.

5. PRECEDENCE (OR HAAS) EFFECT
In an echoic environment the first wavefront to reach a listener indicates the direction of the source. The brain suppresses directional information from subsequent sounds.

Since echoes come from different directions than the main sound, they may be ignored more easily with two ears.

6. BINAURAL EFFECTS
A number of psychoacoustic phenomena demonstrate that we are only binaurally sensitive to the phase of a pure tone if its frequency is less than about 2 kHz.

6.1. Binaural beats
Fluctuations in intensity and/or localisation occur when two slightly different tones are played, one to each ear (e.g. tones differing by 4 Hz give a beat at 4 Hz). This only works for low-frequency tones (<1.5 kHz).

6.2. Binaural masking level difference (BMLD)
When the same tone in noise is played to both ears, the tone is harder to detect than when one ear either does not get the tone, or has the tone at a different phase. The magnitude of the effect declines above about 1 kHz, as phase-locking breaks down. It is explained by Durlach's Equalization and Cancellation model.

6.3. Cramer-Huggins pitch
If noise is fed to one ear and the same noise to the other ear but with the phase shifted in a narrow band of frequencies, subjects hear a pitch sensation at the frequency of that band. The pitch gets rapidly less clear above 1500 Hz. (NB: this can be explained by models of the BMLD effect if you think of the phase-shifted band as the 'tone'.)

7. WHAT YOU SHOULD KNOW
You should be able to describe the different cues used to localize pure and complex tones. You should understand why phase-locking does not occur for high-frequency pure tones, and why this is important in localization and in other binaural effects. You should know what the BMLD is and how Durlach's model explains it.
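The Equalization and Cancellation account of the BMLD can be sketched in a few lines (a minimal idealisation, not Durlach's full model, which adds internal jitter that limits the benefit): subtract one ear's signal from the other. Identical noise cancels; a tone presented in opposite phase at the two ears (N0 S-pi) survives the subtraction, while an in-phase tone (N0 S0) is cancelled along with the noise.

```python
# Minimal Equalization-Cancellation sketch for the BMLD.
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 16000, 1.0
t = np.arange(0, dur, 1/fs)
noise = rng.normal(0, 1.0, t.size)       # the same noise to both ears (N0)
tone = 0.1 * np.sin(2*np.pi*500*t)       # weak 500 Hz target

def residual_power(tone_at_right_ear):
    left = noise + tone
    right = noise + tone_at_right_ear
    return np.mean((left - right) ** 2)  # power left after cancellation

print(residual_power(+tone))   # 0.0   -> N0 S0: the tone cancels too
print(residual_power(-tone))   # ~0.02 -> N0 S-pi: noise gone, tone power x4
```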

Hearing Lecture Notes (6): Auditory Object Recognition & Music

Timbre
Vowel sounds in speech differ in the relative amplitudes of their harmonics. A particular vowel has harmonics of greater amplitude near the formant frequencies. A formant is a resonant frequency of the vocal tract. As you change the pitch of a vowel, you change the fundamental frequency and the spacing of the harmonics, but the formant frequencies stay the same. If you change the vowel without changing the pitch of the voice, the fundamental and the harmonic spacing stay the same but the formant frequencies change.

[Figure: spectrum of the vowel in "bit" on a fundamental frequency of 125 Hz; harmonics spaced at 125 Hz, with formant peaks at F1 = 396 Hz, F2 = 1520 Hz and F3 = 1940 Hz.]

Musical instruments: the synthetic sounds produced by a simple keyboard synthesiser differ in the relative amplitudes of their harmonics and in their attack and decay times. For most synthesisers the relative amplitudes of the different harmonics stay constant throughout the sound. The sounds produced by a natural musical instrument are much more complex: the different harmonics start and stop at different times and change in relative amplitude throughout the "steady state" of the note. Our ability to tell one natural musical instrument from another depends more on the attack (and decay) than on the "steady state". The nature of the attack and the relative amplitudes during the steady state are not constant for a particular instrument; they depend on the style of playing, where in the range of the instrument the note lies, etc.

Auditory Scene Analysis
The ears receive waves from many different sound sources at the same time, e.g. multiple talkers, instruments, cars, machinery. In order to recognise the pitch and timbre of the sound from a particular source, the brain must decide which frequencies "belong together" and have come from that source.
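The independence of formants and fundamental described above can be sketched numerically (illustrative numbers only; the formant frequencies and the 150 Hz envelope width are assumptions for the sketch): harmonic amplitudes are samples of a fixed spectral envelope, so changing the fundamental moves the harmonics but not the envelope peaks.

```python
# Sketch: the same hypothetical vowel envelope sampled at two pitches.
import numpy as np

F = {396.0: 1.0, 1520.0: 0.5, 1940.0: 0.3}   # formant frequency -> relative gain

def envelope(f):
    # Smooth spectral envelope with peaks at the (assumed) formants
    return sum(g / (1 + ((f - Fc) / 150.0) ** 2) for Fc, g in F.items())

def vowel(f0, fmax=3000):
    harmonics = np.arange(f0, fmax, f0)      # harmonics move with f0...
    return harmonics, envelope(harmonics)    # ...amplitudes from a fixed envelope

for f0 in (125, 250):                        # the same "vowel" at two pitches
    freqs, amps = vowel(f0)
    print(f0, freqs[np.argmax(amps)])        # strongest harmonic stays near F1
```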
The problem is formally similar to that of "parsing" a visual scene into separate objects. Principles enunciated by the Gestalt psychologists in vision are useful as heuristics for predicting which sounds will be grouped together: proximity, similarity, good continuation and common fate all have auditory analogues.

The brain needs to group simultaneously (separating out which of the frequency components present at a particular time have come from the same sound source) and also successively (deciding which group of components at one time is a continuation of a previous group).

Auditory streaming
Auditory streaming is the formation of perceptually distinct apparent sound sources. Temporal order judgement is good within a stream but bad between streams. Examples include: implied polyphony; a noise burst replacing a consonant in a sentence; a click superimposed on a sentence or melody.

Grouping Principles
(i) Proximity. Tones close in frequency will group together, so as to minimise the extent of frequency jumps and the number of streams. Tones with similar timbre will tend to group together. Speech sounds of similar pitch will tend to be heard as coming from the same speaker. Sounds from different locations are harder to group together across time than those from the same location.
(ii) Common fate. Sounds from a common source tend to start and stop at the same time and to change in amplitude or frequency together (vibrato). A single component is easy to hear out if it is the only one in a complex to change.
(iii) Good continuation. Abrupt discontinuities in frequency or pitch can give the impression of a different sound source.

Continuity Effect
A sound that is interrupted by a noise that masks it can appear to be continuous. Alternations of sound and mask can give the illusion of continuity, with the auditory system interpolating across the mask.

Music Perception
Tuning. Consonant intervals have harmonics that do not beat together to give roughness, i.e. they lie at small-integer frequency ratios: 2:1 (octave), 3:2 (fifth), 4:3 (fourth), 5:4 (major third). Unfortunately, a scale based on such intervals is not internally consistent and does not allow modulations.
Equal temperament sacrifices some consonance in the primary intervals for an equal size of semitone (a frequency ratio of 2^(1/12)), and so sounds equally in tune in any key.

Absolute pitch
About 1 person in 10,000 has "absolute pitch": they can identify the pitch of a musical note without the use of an external reference pitch. Most people can only give pitch names relatively - "if that is A, this must be C". Absolute pitch is much more common in people who had musical training at an early age than among those who started later, and is probably more common in those whose early training involved learning the names of notes. It can be a liability, since pitch perception can change as you grow older, and international pitch standards also change. A more common absolute ability is being able to tell when a piece of music is played in the correct key.
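The equal-temperament compromise described above can be quantified with standard arithmetic (this sketch is not from the notes): equal-tempered intervals are powers of the semitone ratio 2^(1/12), and sit close to, but not exactly on, the small-integer ratios.

```python
# Equal-tempered vs just intervals, with the mismatch in cents
# (1 semitone = 100 cents).
import math

just = {"octave": 2/1, "fifth": 3/2, "fourth": 4/3, "major third": 5/4}
semitones = {"octave": 12, "fifth": 7, "fourth": 5, "major third": 4}

for name, ratio in just.items():
    et = 2 ** (semitones[name] / 12)
    cents_error = 1200 * math.log2(et / ratio)
    print(f"{name:12s} just={ratio:.4f} ET={et:.4f} error={cents_error:+.1f} cents")
# The octave is exact, the fifth is about 2 cents flat,
# and the major third about 14 cents sharp.
```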

Melody
The pitch of a tone can be regarded as having chroma (musical note name) and height (which octave). Melodies are hard to recognise if only chroma is maintained (transposing notes by octaves). Overall contour is an important attribute of melody, and allows variation of chroma within a recognisable framework.


More information

SLHS 1301 The Physics and Biology of Spoken Language. Practice Exam 2. b) 2 32

SLHS 1301 The Physics and Biology of Spoken Language. Practice Exam 2. b) 2 32 SLHS 1301 The Physics and Biology of Spoken Language Practice Exam 2 Chapter 9 1. In analog-to-digital conversion, quantization of the signal means that a) small differences in signal amplitude over time

More information

Auditory System & Hearing

Auditory System & Hearing Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different

More information

Human Acoustic Processing

Human Acoustic Processing Human Acoustic Processing Sound and Light The Ear Cochlea Auditory Pathway Speech Spectrogram Vocal Cords Formant Frequencies Time Warping Hidden Markov Models Signal, Time and Brain Process of temporal

More information

Mechanical Properties of the Cochlea. Reading: Yost Ch. 7

Mechanical Properties of the Cochlea. Reading: Yost Ch. 7 Mechanical Properties of the Cochlea CF Reading: Yost Ch. 7 The Cochlea Inner ear contains auditory and vestibular sensory organs. Cochlea is a coiled tri-partite tube about 35 mm long. Basilar membrane,

More information

THE MECHANICS OF HEARING

THE MECHANICS OF HEARING CONTENTS The mechanics of hearing Hearing loss and the Noise at Work Regulations Loudness and the A weighting network Octave band analysis Hearing protection calculations Worked examples and self assessed

More information

9/29/14. Amanda M. Lauer, Dept. of Otolaryngology- HNS. From Signal Detection Theory and Psychophysics, Green & Swets (1966)

9/29/14. Amanda M. Lauer, Dept. of Otolaryngology- HNS. From Signal Detection Theory and Psychophysics, Green & Swets (1966) Amanda M. Lauer, Dept. of Otolaryngology- HNS From Signal Detection Theory and Psychophysics, Green & Swets (1966) SIGNAL D sensitivity index d =Z hit - Z fa Present Absent RESPONSE Yes HIT FALSE ALARM

More information

A truly remarkable aspect of human hearing is the vast

A truly remarkable aspect of human hearing is the vast AUDITORY COMPRESSION AND HEARING LOSS Sid P. Bacon Psychoacoustics Laboratory, Department of Speech and Hearing Science, Arizona State University Tempe, Arizona 85287 A truly remarkable aspect of human

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for What you re in for Speech processing schemes for cochlear implants Stuart Rosen Professor of Speech and Hearing Science Speech, Hearing and Phonetic Sciences Division of Psychology & Language Sciences

More information

ID# Exam 2 PS 325, Fall 2009

ID# Exam 2 PS 325, Fall 2009 ID# Exam 2 PS 325, Fall 2009 As always, the Skidmore Honor Code is in effect. At the end of the exam, I ll have you write and sign something to attest to that fact. The exam should contain no surprises,

More information

PSY 214 Lecture 16 (11/09/2011) (Sound, auditory system & pitch perception) Dr. Achtman PSY 214

PSY 214 Lecture 16 (11/09/2011) (Sound, auditory system & pitch perception) Dr. Achtman PSY 214 PSY 214 Lecture 16 Topic: Sound, auditory system, & pitch perception Chapter 11, pages 268-288 Corrections: None needed Announcements: At the beginning of class, we went over some demos from the virtual

More information

Spectrograms (revisited)

Spectrograms (revisited) Spectrograms (revisited) We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a

More information

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 SOLUTIONS Homework #3 Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 Problem 1: a) Where in the cochlea would you say the process of "fourier decomposition" of the incoming

More information

AUDL GS08 and GAV1: 2013 Final exam page 1/13. You must complete all sections. Label all graphs. Show your work!

AUDL GS08 and GAV1: 2013 Final exam page 1/13. You must complete all sections. Label all graphs. Show your work! AUDL GS08 and GAV1: 2013 Final exam page 1/13 You must complete all sections. Label all graphs. Show your work! Section A: Short questions concerning Signals & Systems A1. Give the sound pressure levels

More information

The Ear. The ear can be divided into three major parts: the outer ear, the middle ear and the inner ear.

The Ear. The ear can be divided into three major parts: the outer ear, the middle ear and the inner ear. The Ear The ear can be divided into three major parts: the outer ear, the middle ear and the inner ear. The Ear There are three components of the outer ear: Pinna: the fleshy outer part of the ear which

More information

Educational Module Tympanometry. Germany D Germering

Educational Module Tympanometry. Germany D Germering Educational Module anometry PATH medical Germany D-82110 Germering Our educational modules 1 are made for providing information on how the hearing organ works and which test procedures are used to test

More information

Unit VIII Problem 9 Physiology: Hearing

Unit VIII Problem 9 Physiology: Hearing Unit VIII Problem 9 Physiology: Hearing - We can hear a limited range of frequency between 20 Hz 20,000 Hz (human hearing acuity is between 1000 Hz 4000 Hz). - The ear is divided into 3 parts. Those are:

More information

Hearing I: Sound & The Ear

Hearing I: Sound & The Ear Hearing I: Sound & The Ear Overview of Topics Chapter 5 in Chaudhuri Philosophical Aside: If a tree falls in the forest and no one is there to hear it... Qualities of sound energy and sound perception

More information

ID# Final Exam PS325, Fall 1997

ID# Final Exam PS325, Fall 1997 ID# Final Exam PS325, Fall 1997 Good luck on this exam. Answer each question carefully and completely. Keep your eyes foveated on your own exam, as the Skidmore Honor Code is in effect (as always). Have

More information

Lecture 6 Hearing 1. Raghav Rajan Bio 354 Neurobiology 2 January 28th All lecture material from the following links unless otherwise mentioned:

Lecture 6 Hearing 1. Raghav Rajan Bio 354 Neurobiology 2 January 28th All lecture material from the following links unless otherwise mentioned: Lecture 6 Hearing 1 All lecture material from the following links unless otherwise mentioned: 1. http://wws.weizmann.ac.il/neurobiology/labs/ulanovsky/sites/neurobiology.labs.ulanovsky/files/uploads/purves_ch12_ch13_hearing

More information

Lecture 9: Sound Localization

Lecture 9: Sound Localization Lecture 9: Sound Localization Localization refers to the process of using the information about a sound which you get from your ears, to work out where the sound came from (above, below, in front, behind,

More information

Week 2 Systems (& a bit more about db)

Week 2 Systems (& a bit more about db) AUDL Signals & Systems for Speech & Hearing Reminder: signals as waveforms A graph of the instantaneousvalue of amplitude over time x-axis is always time (s, ms, µs) y-axis always a linear instantaneousamplitude

More information

Role of F0 differences in source segregation

Role of F0 differences in source segregation Role of F0 differences in source segregation Andrew J. Oxenham Research Laboratory of Electronics, MIT and Harvard-MIT Speech and Hearing Bioscience and Technology Program Rationale Many aspects of segregation

More information

Before we talk about the auditory system we will talk about the sound and waves

Before we talk about the auditory system we will talk about the sound and waves The Auditory System PHYSIO: #3 DR.LOAI ZAGOUL 24/3/2014 Refer to the slides for some photos. Before we talk about the auditory system we will talk about the sound and waves All waves have basic characteristics:

More information

Loudness. Loudness is not simply sound intensity!

Loudness. Loudness is not simply sound intensity! is not simply sound intensity! Sound loudness is a subjective term describing the strength of the ear's perception of a sound. It is intimately related to sound intensity but can by no means be considered

More information

Auditory Scene Analysis

Auditory Scene Analysis 1 Auditory Scene Analysis Albert S. Bregman Department of Psychology McGill University 1205 Docteur Penfield Avenue Montreal, QC Canada H3A 1B1 E-mail: bregman@hebb.psych.mcgill.ca To appear in N.J. Smelzer

More information

3-D Sound and Spatial Audio. What do these terms mean?

3-D Sound and Spatial Audio. What do these terms mean? 3-D Sound and Spatial Audio What do these terms mean? Both terms are very general. 3-D sound usually implies the perception of point sources in 3-D space (could also be 2-D plane) whether the audio reproduction

More information

Hearing I: Sound & The Ear

Hearing I: Sound & The Ear Hearing I: Sound & The Ear Overview of Topics Chapter 5 in Chaudhuri Philosophical Aside: If a tree falls in the forest and no one is there to hear it... Qualities of sound energy and sound perception

More information

Vision and Audition. This section concerns the anatomy of two important sensory systems, the visual and the auditory systems.

Vision and Audition. This section concerns the anatomy of two important sensory systems, the visual and the auditory systems. Vision and Audition Vision and Audition This section concerns the anatomy of two important sensory systems, the visual and the auditory systems. The description of the organization of each begins with

More information

Discrete Signal Processing

Discrete Signal Processing 1 Discrete Signal Processing C.M. Liu Perceptual Lab, College of Computer Science National Chiao-Tung University http://www.cs.nctu.edu.tw/~cmliu/courses/dsp/ ( Office: EC538 (03)5731877 cmliu@cs.nctu.edu.tw

More information

BCS 221: Auditory Perception BCS 521 & PSY 221

BCS 221: Auditory Perception BCS 521 & PSY 221 BCS 221: Auditory Perception BCS 521 & PSY 221 Time: MW 10:25 11:40 AM Recitation: F 10:25 11:25 AM Room: Hutchinson 473 Lecturer: Dr. Kevin Davis Office: 303E Meliora Hall Office hours: M 1 3 PM kevin_davis@urmc.rochester.edu

More information

L2: Speech production and perception Anatomy of the speech organs Models of speech production Anatomy of the ear Auditory psychophysics

L2: Speech production and perception Anatomy of the speech organs Models of speech production Anatomy of the ear Auditory psychophysics L2: Speech production and perception Anatomy of the speech organs Models of speech production Anatomy of the ear Auditory psychophysics Introduction to Speech Processing Ricardo Gutierrez-Osuna CSE@TAMU

More information

College of Medicine Dept. of Medical physics Physics of ear and hearing /CH

College of Medicine Dept. of Medical physics Physics of ear and hearing /CH College of Medicine Dept. of Medical physics Physics of ear and hearing /CH 13 2017-2018 ***************************************************************** o Introduction : The ear is the organ that detects

More information

PSY 215 Lecture 10 Topic: Hearing Chapter 7, pages

PSY 215 Lecture 10 Topic: Hearing Chapter 7, pages PSY 215 Lecture 10 Topic: Hearing Chapter 7, pages 189-197 Corrections: NTC 09-1, page 3, the Superior Colliculus is in the midbrain (Mesencephalon). Announcements: Movie next Monday: Case of the frozen

More information