Quarterly Progress and Status Report. Masking effects of one's own voice
Dept. for Speech, Music and Hearing. Quarterly Progress and Status Report. Masking effects of one's own voice. Gauffin, J. and Sundberg, J. journal: STL-QPSR; volume: 15; number: 1; year: 1974; pages:
STL-QPSR 1/1974

B. MASKING EFFECTS OF ONE'S OWN VOICE

Abstract

Measurements are reported in which a voice-trained subject matches his masked threshold during phonation. This threshold is found to be fairly similar to the threshold masked by the purely airborne signal which reaches the subject's ear during phonation in an anechoic room. By reducing the number of partials in the last-mentioned masker spectrum it is found that most higher partials fall below the masked threshold, and that a small number of partials seems to account for the masking effect. The differences between spectra radiated frontally from the mouth and spectra reaching the ear are also studied.

Introduction

The masking of one's own voice can be described by the masked threshold measured during phonation. This threshold is interesting from many points of view. It is related to the perception of one's own voice and to the excitation of the basilar membrane due to the phonation. Therefore, the masked threshold may give information about the auditory feedback signal which the speaker perceives. This signal is presumably important for the control of the voice.

We may raise the question, however, whether or not measurements of masked thresholds can be made successfully in a phonating subject. What is the order of magnitude of the scatter we obtain if a subject tries to match the hearing threshold while he is phonating? And is a subject capable of keeping the phonation sufficiently constant over the period of time needed for the measurements? In short, are such threshold measurements meaningful at all? The purpose of the present investigation was to answer such questions. Attempts have been made to determine the masked threshold during phonation of different vowel sounds. The threshold has also been determined for the sound reaching the speaker's ear when he phonates. Finally, the differences between the spectrum radiated frontally from the mouth and the spectrum reaching the ear have been measured for different vowels.
The experiments have been conducted on one subject only.

Background

The masking effect of one's own voice can be assumed to depend on three important factors: the stapedius reflex, the bone conduction, and the sound transfer from the mouth to the ears.
The stapedius reflex attenuates the sound transmitted through the middle ear by reducing the vibration amplitude of the stapes. The reflex has been shown to be released even at very weak levels of phonation. It mainly affects frequencies lower than 2 kHz. In a silent subject the stapedius reflex is released when the ear is exposed to sounds with an SPL of 90 to 95 dB (Møller, 1972).

The bone conduction transmits vibrations from the vocal tract walls and the larynx region to the ear. The bone-conducted vibrations yield energy to the cochlea in various ways. Sound is generated in the meatus by its vibrating walls. In addition, the skull vibrations are transmitted to the ossicles. Also, the vibrations forced directly upon the cochlea result in an excitation of the basilar membrane. We may assume that the phase relationship between bone- and air-conducted sound during phonation is frequency dependent. Moreover, the stapedius reflex is assumed to affect bone conduction (Tonndorf, 1972).

The sound transfer from the mouth to the ear is normally dependent upon the reverberation of the room. In an anechoic room the transfer would to a large extent depend on the size of the sound-radiating lip opening, the dimensions of the head, and the wavelength of the sound transferred. As yet the details of this sound transfer have not been sufficiently investigated.

Against this background it seems clear that the masking effect of one's own voice is hard to predict. Therefore, even a set of purely empirical data may provide valuable information on the masking effect and on the perception of one's own voice.

Measurements

All threshold measurements were made in an anechoic chamber. A microphone was mounted at the subject's ear, 7 cm above his meatus. This microphone was used for the control of the phonation level and for measuring the probetone amplitudes. A sinusoidal probetone was presented through a loudspeaker suspended 40 cm in front of the subject (cf. Fig. III-B-1).
Every 0.5 sec the probetone alternated between two sound levels differing by 6 dB. While the subject phonated, he adjusted the probetone amplitude so that he could perceive the stronger parts of the probetone only. The probetone was given at 17 frequencies in rising order between 100 and 4000 Hz.

Fig. III-B-1. Equipment used for measuring masked thresholds during phonation and for a purely airborne masker. M: microphone mounted at the subject's ear; VU: instrument indicating SPL; Pm: potentiometer for regulation of the SPL of the purely airborne masker, which was provided by a tape loop on the tape recorder (TR); G: sinewave generator providing the probetone; Pp: potentiometer for the subject's regulation of the probetone amplitude; MX: mixer.

After the probetone amplitude adjustment had been completed at a given frequency, the phonation was interrupted and the probetone amplitude was measured with the ear microphone. Thereafter the subject started the phonation again and the next probetone frequency was tested. This procedure was repeated several times, so that three or four values were obtained for each probetone frequency. Thus, each point in a threshold curve represents the average of at least three values. The complete set of readings (3 × 17) could be collected within 40 min.

The ability of the subject to control voice production is decisive for the reliability of the results. This ability can be assumed to be higher in a trained than in an untrained voice. Therefore, a trained singer was used as subject. Nevertheless, a certain amount of spectral variability must be present also in vowels produced by trained voices. This variability may be of two kinds: long-time and short-time variations of the amplitudes of the spectral components. The long-time variability was estimated by comparing spectra of the same vowel produced at the beginning and at the end of a session. The stronger partials below 1 kHz differed by ±1 dB only, whereas differences smaller than 3 dB were observed near the higher formants. The short-time variability (within 2 sec) of the partial amplitudes was found to be smaller than ±2 dB for the more prominent components.

The actual masker signal raising the threshold during phonation is unknown. On the other hand, the acoustic signal reaching the subject's ear during the phonation can be determined. This signal was picked up by the microphone at the subject's ear and recorded on tape. A loop of this tape was presented through the loudspeaker as masker signal in a subsequent session. In this way the masking of the purely airborne sound reaching the subject's ear during phonation could be measured.
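The data reduction just described (averaging three or more repeated settings per probetone frequency and treating large spreads as exceptional) can be sketched as follows. This is a minimal illustration with hypothetical function and variable names; the original reduction was of course not computerized.

```python
# Minimal sketch of the data reduction described above: each probetone
# frequency gets three or more repeated threshold settings (dB SPL),
# and each point on the threshold curve is their average. Names are
# illustrative, not from the original study.

def reduce_threshold_settings(settings_db, max_spread_db=4.0):
    """settings_db: dict mapping probetone frequency (Hz) to a list of
    repeated threshold settings in dB SPL.
    Returns (averages, flagged): per-frequency averages, plus the list
    of frequencies whose settings spread by more than max_spread_db."""
    averages, flagged = {}, []
    for freq, values in sorted(settings_db.items()):
        if len(values) < 3:
            raise ValueError(f"need at least 3 settings at {freq} Hz")
        averages[freq] = sum(values) / len(values)
        if max(values) - min(values) > max_spread_db:
            flagged.append(freq)  # exceptional spread, cf. the text
    return averages, flagged
```

For example, settings of 42.0, 43.5, and 41.5 dB at 100 Hz average to about 42.3 dB, while a set of settings spreading over 6 dB would be flagged as exceptional.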
It was possible to reproduce this sound with an accuracy of 4 dB. The measurements were completely analogous to the earlier ones. Thus, the masker signal was interrupted when the probetone amplitude adjustment had been completed, and at least three threshold values were collected for each of the 17 probetone frequencies.

The three settings of the probetone amplitude made at the same frequency in the same session differed by more than 4 dB only exceptionally. In the measurements with a purely airborne masker the spread
was slightly smaller, on the average. Occasionally, day-to-day variations of the masked threshold occurred. They were observed to be within 3 dB, approximately. This amount of spread is rather small, and we may conclude that measurements of the masked threshold during phonation yield reasonably reliable results. However, in comparing masked thresholds obtained with different masker signals it should be remembered that differences smaller than 3 dB may be due to the limited accuracy of the measurements.

Masked thresholds

The masked thresholds for an [a]-vowel phonated at two levels and for an [i] are shown in Fig. III-B-2a-c. The fundamental frequency was 110 Hz in all vowels. The graphs in Fig. III-B-2 also show the purely airborne masker spectrum recorded at the subject's ear and the masked threshold pertaining to that masker. The two types of thresholds are generally grossly parallel and differ by less than 5 dB as a rule. Two exceptions to this are found: one in the weaker [a] in the low-frequency region, and the other between 1 and 2 kHz in the stronger [a]. In both these cases the threshold measured during phonation is the lower one. If the bone conduction has the effect of suppressing a strong partial in the masker spectrum, the masked threshold can be expected to drop in the frequency region above this partial. This is a possible explanation for the threshold differences in these two cases.

Only in the case of the purely airborne masker can the relations between the masker spectrum and the masked threshold be studied. It is well known that the masked threshold for a sinewave masker reaches a maximum at the masker frequency, lying about 20 dB below the SPL of the masker. The threshold slopes steeply towards lower frequencies and more gently towards higher frequencies. In our case the relationship between the masker spectrum and the threshold is more complicated, as we may expect.
The threshold lies 10 to 15 dB below the stronger low-frequency components. Above a strong partial followed by considerably weaker partials, the threshold falls at a rate of 13 to 20 dB/octave. As a consequence, the threshold lies higher than the spectrum envelope in the "valleys" between formants, and only one or two partials near a higher formant surpass the level of the threshold.
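The rules just stated can be turned into a toy masking-pattern model: each partial masks most strongly at its own frequency, about 20 dB below its own SPL, with a steep skirt towards lower frequencies and a shallower one towards higher frequencies, the composite threshold being the maximum over partials. The particular slope values and the max-combination are illustrative assumptions, not the authors' model.

```python
import math

def masked_threshold_db(f, partials, peak_offset=20.0,
                        slope_low=100.0, slope_high=16.0):
    """Toy masking-pattern model (assumed slopes, not from the paper).
    f: probe frequency in Hz; partials: list of (freq_hz, spl_db).
    Each partial masks up to spl_db - peak_offset at its own frequency,
    falling by slope_low dB/octave below it and slope_high dB/octave
    above it; the strongest contribution determines the threshold."""
    levels = []
    for fm, spl in partials:
        octaves = math.log2(f / fm)
        # steep towards lower frequencies, shallow towards higher ones
        slope = slope_low if octaves < 0 else slope_high
        levels.append(spl - peak_offset - slope * abs(octaves))
    return max(levels)
```

With a single 80 dB partial at 330 Hz, the predicted threshold is 60 dB at 330 Hz and 44 dB one octave higher, mimicking the slow fall of the threshold above a strong partial described in the text.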
Which partials determine the masking effects, then? It seems reasonable to assume that masker components lying below the threshold do not contribute to the masking effects. This assumption is supported by the results shown in Fig. III-B-3a and b. Lowpass-filtering the weaker [a] at 1.1 kHz did not affect the threshold significantly, as seen in Fig. III-B-3a. In one version of the [i] only five partials clearly surpassed the masked threshold. These five partials were synthesized and presented as a masker to the subject. The results showed that the threshold was not affected significantly by this removal of all partials lying below the threshold, as is shown in Fig. III-B-3b. Therefore it seems as if the masking effect of a vowel spectrum may be determined by a rather small number of partials. Comparing the timbre of the complete and the reduced masker spectra supported the assumption that these few partials determine the perceived timbre as well.

Transmission mouth-ear

Our results suggest that the masking effect of one's own voice is to a large extent dependent on quite few partials. This result is of course partly due to the fact that the masker spectrum was recorded at the subject's ear in an anechoic room. How does this spectrum relate to the spectrum radiated frontally from the mouth? The answer to this question was obtained by measuring and comparing these two spectra. One microphone was placed in front of the mouth and the other at the ear, as before. The distance from the mouth opening to each of these microphones was 16 cm. The microphone signals were simultaneously recorded on the two channels of a tape recorder. In this way, the same set of voice pulses could be identified and analyzed in the two recorded signals. The analysis was performed by an FFT computer program. Two pairs of spectra of each of the vowels [u, a, i] were compared. The differences in the partial amplitudes are shown in Fig. III-B-4a, b, and c. It is seen from the figure that the differences vary considerably with frequency and vowel. Note also that the two values observed for the same partial generally agree within a few dB. The frequent and abrupt changes in the curves are probably due to interferences between sound radiated from the lip opening and sound radiated from the cheeks and the neck. The frequency dependence shows that the signal reaching a speaker's ear in an anechoic chamber differs widely from the signal radiated frontally.
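Because the fundamental frequency is known (110 Hz in these measurements), the per-partial comparison described above can be sketched without a full FFT by correlating each channel with a complex exponential at every harmonic. The function names and test signals below are illustrative assumptions, not the original analysis program.

```python
import cmath
import math

def partial_amplitude(x, fs, freq):
    """Amplitude of the sinusoidal component at `freq` Hz in the sample
    list x (exact when x spans an integer number of periods)."""
    n = len(x)
    c = sum(x[i] * cmath.exp(-2j * math.pi * freq * i / fs)
            for i in range(n))
    return 2.0 * abs(c) / n

def level_differences_db(mouth, ear, fs, f0, n_partials):
    """Level of each partial k*f0 at the mouth microphone minus its
    level at the ear microphone, in dB, for k = 1..n_partials."""
    return [20.0 * math.log10(partial_amplitude(mouth, fs, k * f0)
                              / partial_amplitude(ear, fs, k * f0))
            for k in range(1, n_partials + 1)]
```

Applied to two synthetic channels in which every partial at the ear is half its amplitude at the mouth, each difference comes out at about 6 dB.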
Fig. III-B-3. Airborne masker spectra and masked thresholds for an [a] and an [i] (upper and lower graphs, respectively). Circles and solid line: masked threshold for the complete masker spectrum; triangles and dashed line: masked threshold for a reduced masker spectrum consisting only of the partials indicated by heavy lines. The dashed curve at the bottom of the graphs shows the threshold measured in silence.
Fig. III-B-4. Solid lines and dots: differences in partial amplitudes between vowel spectra simultaneously sounding at the ear and 16 cm in front of the mouth of a phonating subject. The dot-dashed line shows the corresponding values for a frequency sweep generated by a point source at the subject's mouth. The dashed line shows the average spectrum-level differences in octave bands of 12 vowels recorded at the brim of the lips and at the ear (adapted after von Békésy).
In Fig. III-B-4 two other curves are shown for comparison. One shows the differences which would occur if the lip opening behaved as a point source. This curve was obtained as the differences in the responses to a sinewave sweep generated by a point source (the STL-Ionophone) and recorded with the microphones just mentioned (Fransson & Jansson, 1971). The Ionophone was placed a few mm in front of the subject's closed lips. The dip in the curve near 3 kHz probably depends on interferences between sound which travels directly to the ear and sound which reaches the ear from behind the head. We would expect a vowel produced with a small lip opening to give values lying closer to the curve pertaining to the point source than vowels produced with larger lip openings. This is also the case: the curve pertaining to the [u]-vowel lies higher than the curves pertaining to the other two vowels.

The other curve presented in the graphs in Fig. III-B-4 reproduces data given by von Békésy (1960). These data were recorded with one microphone at the brim of the lips and one near the ear. The values were normalized to a difference of 0 dB at 0 Hz for the sake of comparison. Thus, we would expect Békésy's curve to provide a very gross average of our data. This can also be said to be the case, except for a difference below 0.7 kHz. Disregarding the numerous dips, our curves may be described as follows. The differences between the spectrum reaching the speaker's ear and the spectrum radiated frontally from the mouth are very small below 500 Hz. Partials near 1 kHz are reduced by 8 dB on the average, and partials near 3 kHz by 10 dB.

Discussion and conclusion

The results appear to show that masked thresholds can be determined with an accuracy of 3 dB in a phonating, voice-trained subject. The threshold measured during phonation is not always identical with the threshold pertaining to the masker signal which reaches the ear during phonation.
The differences are probably the combined effects of the bone conduction, the stapedius reflex, and the sound transfer from the mouth to the ears. If we may generalize our observations, the following can be said regarding the masked threshold in relation to a vowel masker spectrum. All strong low-frequency components in the masker spectrum lie around
10 to 15 dB above the masked threshold, whereas nearly all higher partials lie below the threshold, except those which fall close to formants. Thus, partials in a spectral "valley" between two formants do not seem to reach the level of the masked threshold. The results suggest that only a small number of partials accounts for the masking effect and probably also for the perceived timbre. This agrees with the findings of Kakusho et al. (1968).

In normal acoustic surroundings the reverberation will counteract the lowpass-filter effect characterizing the sound transfer from the mouth to the ear in an anechoic room. However, the amplitudes of the higher partials reaching the ear will depend strongly on the acoustic surroundings. The results of a previous investigation supported the assumption that a singer finds it easier to base the voice control on the vibration sensations in the head than on the auditory feedback signal (Sundberg, 1974). The results of the present investigation seem to provide an explanation for this. The auditory feedback signal must vary considerably with the acoustic properties of the room in which the singer sings. This is not so for the head vibrations, which are thus likely to provide a more useful feedback signal.

Acknowledgments

This work was supported by the Bank of Sweden Tercentenary Fund.

References

von Békésy, G.: "Bone Conduction", Chapter 6 in Experiments in Hearing, New York 1960.

Fransson, F. and Jansson, E.: "Properties of the STL Ionophone Transducer", STL-QPSR 2-3/1971.

Kakusho, O., Kato, K., and Kobayashi, T.: "Just Discriminable Change and Matching Range of Acoustic Parameters of Vowels", Acustica 20 (1968).

Møller, A.: "The Middle Ear", Chapter 4 in Foundations of Modern Auditory Theory II (ed. J. Tobias), New York 1972.

Sundberg, J.: "Articulatory Interpretation of the 'Singing Formant'", J. Acoust. Soc. Am. 55:4, p. 838 sqq (1974).

Tonndorf, J.: "Bone Conduction", Chapter 5 in Foundations of Modern Auditory Theory II (ed. J. Tobias), New York 1972.
More information17.4 Sound and Hearing
You can identify sounds without seeing them because sound waves carry information to your ears. People who work in places where sound is very loud need to protect their hearing. Properties of Sound Waves
More informationSpectrograms (revisited)
Spectrograms (revisited) We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a
More informationID# Final Exam PS325, Fall 1997
ID# Final Exam PS325, Fall 1997 Good luck on this exam. Answer each question carefully and completely. Keep your eyes foveated on your own exam, as the Skidmore Honor Code is in effect (as always). Have
More informationSpeech Spectra and Spectrograms
ACOUSTICS TOPICS ACOUSTICS SOFTWARE SPH301 SLP801 RESOURCE INDEX HELP PAGES Back to Main "Speech Spectra and Spectrograms" Page Speech Spectra and Spectrograms Robert Mannell 6. Some consonant spectra
More informationEffects of partial masking for vehicle sounds
Effects of partial masking for vehicle sounds Hugo FASTL 1 ; Josef KONRADL 2 ; Stefan KERBER 3 1 AG Technische Akustik, MMK, TU München, Germany 2 now at: ithera Medical GmbH, München, Germany 3 now at:
More informationQuarterly Progress and Status Report. The contraction of the intra-aural muscle
Dept. for Speech, Music and Hearing Quarterly Progress and Status Report The contraction of the intra-aural muscle Möller, A. journal: STL-QPSR volume: 2 number: 1 year: 1961 pages: 016-017 http://www.speech.kth.se/qpsr
More informationHearing in the Environment
10 Hearing in the Environment Click Chapter to edit 10 Master Hearing title in the style Environment Sound Localization Complex Sounds Auditory Scene Analysis Continuity and Restoration Effects Auditory
More informationSystems Neuroscience Oct. 16, Auditory system. http:
Systems Neuroscience Oct. 16, 2018 Auditory system http: www.ini.unizh.ch/~kiper/system_neurosci.html The physics of sound Measuring sound intensity We are sensitive to an enormous range of intensities,
More informationThresholds for different mammals
Loudness Thresholds for different mammals 8 7 What s the first thing you d want to know? threshold (db SPL) 6 5 4 3 2 1 hum an poodle m ouse threshold Note bowl shape -1 1 1 1 1 frequency (Hz) Sivian &
More informationTechnical Discussion HUSHCORE Acoustical Products & Systems
What Is Noise? Noise is unwanted sound which may be hazardous to health, interfere with speech and verbal communications or is otherwise disturbing, irritating or annoying. What Is Sound? Sound is defined
More informationRe/Habilitation of the Hearing Impaired. Better Hearing Philippines Inc.
Re/Habilitation of the Hearing Impaired Better Hearing Philippines Inc. Nature of Hearing Loss Decreased Audibility Decreased Dynamic Range Decreased Frequency Resolution Decreased Temporal Resolution
More informationWho are cochlear implants for?
Who are cochlear implants for? People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work best in adults who
More informationCollege of Medicine Dept. of Medical physics Physics of ear and hearing /CH
College of Medicine Dept. of Medical physics Physics of ear and hearing /CH 13 2017-2018 ***************************************************************** o Introduction : The ear is the organ that detects
More informationDigital hearing aids are still
Testing Digital Hearing Instruments: The Basics Tips and advice for testing and fitting DSP hearing instruments Unfortunately, the conception that DSP instruments cannot be properly tested has been projected
More informationFREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED
FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED Francisco J. Fraga, Alan M. Marotta National Institute of Telecommunications, Santa Rita do Sapucaí - MG, Brazil Abstract A considerable
More informationWeek 2 Systems (& a bit more about db)
AUDL Signals & Systems for Speech & Hearing Reminder: signals as waveforms A graph of the instantaneousvalue of amplitude over time x-axis is always time (s, ms, µs) y-axis always a linear instantaneousamplitude
More informationBest Practice Protocols
Best Practice Protocols SoundRecover for children What is SoundRecover? SoundRecover (non-linear frequency compression) seeks to give greater audibility of high-frequency everyday sounds by compressing
More informationHearing Lectures. Acoustics of Speech and Hearing. Subjective/Objective (recap) Loudness Overview. Sinusoids through ear. Facts about Loudness
Hearing Lectures coustics of Speech and Hearing Week 2-8 Hearing 1: Perception of Intensity 1. Loudness of sinusoids mainly (see Web tutorial for more) 2. Pitch of sinusoids mainly (see Web tutorial for
More informationCHAPTER 1 INTRODUCTION
CHAPTER 1 INTRODUCTION 1.1 BACKGROUND Speech is the most natural form of human communication. Speech has also become an important means of human-machine interaction and the advancement in technology has
More informationMusic and Hearing in the Older Population: an Audiologist's Perspective
Music and Hearing in the Older Population: an Audiologist's Perspective Dwight Ough, M.A., CCC-A Audiologist Charlotte County Hearing Health Care Centre Inc. St. Stephen, New Brunswick Anatomy and Physiology
More informationHEARING IMPAIRMENT LEARNING OBJECTIVES: Divisions of the Ear. Inner Ear. The inner ear consists of: Cochlea Vestibular
HEARING IMPAIRMENT LEARNING OBJECTIVES: STUDENTS SHOULD BE ABLE TO: Recognize the clinical manifestation and to be able to request appropriate investigations Interpret lab investigations for basic management.
More informationChapter 1: Introduction to digital audio
Chapter 1: Introduction to digital audio Applications: audio players (e.g. MP3), DVD-audio, digital audio broadcast, music synthesizer, digital amplifier and equalizer, 3D sound synthesis 1 Properties
More informationLow Frequency th Conference on Low Frequency Noise
Low Frequency 2012 15th Conference on Low Frequency Noise Stratford-upon-Avon, UK, 12-14 May 2012 Enhanced Perception of Infrasound in the Presence of Low-Level Uncorrelated Low-Frequency Noise. Dr M.A.Swinbanks,
More informationLecture 3: Perception
ELEN E4896 MUSIC SIGNAL PROCESSING Lecture 3: Perception 1. Ear Physiology 2. Auditory Psychophysics 3. Pitch Perception 4. Music Perception Dan Ellis Dept. Electrical Engineering, Columbia University
More informationIn-Ear Microphone Equalization Exploiting an Active Noise Control. Abstract
The 21 International Congress and Exhibition on Noise Control Engineering The Hague, The Netherlands, 21 August 27-3 In-Ear Microphone Equalization Exploiting an Active Noise Control. Nils Westerlund,
More informationSPHSC 462 HEARING DEVELOPMENT. Overview Review of Hearing Science Introduction
SPHSC 462 HEARING DEVELOPMENT Overview Review of Hearing Science Introduction 1 Overview of course and requirements Lecture/discussion; lecture notes on website http://faculty.washington.edu/lawerner/sphsc462/
More informationSignals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds
Signals, systems, acoustics and the ear Week 1 Laboratory session: Measuring thresholds What s the most commonly used piece of electronic equipment in the audiological clinic? The Audiometer And what is
More informationHearing. Juan P Bello
Hearing Juan P Bello The human ear The human ear Outer Ear The human ear Middle Ear The human ear Inner Ear The cochlea (1) It separates sound into its various components If uncoiled it becomes a tapering
More informationEssential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair
Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work
More informationTHE MECHANICS OF HEARING
CONTENTS The mechanics of hearing Hearing loss and the Noise at Work Regulations Loudness and the A weighting network Octave band analysis Hearing protection calculations Worked examples and self assessed
More informationAuditory Physiology PSY 310 Greg Francis. Lecture 29. Hearing
Auditory Physiology PSY 310 Greg Francis Lecture 29 A dangerous device. Hearing The sound stimulus is changes in pressure The simplest sounds vary in: Frequency: Hertz, cycles per second. How fast the
More informationPSY 310: Sensory and Perceptual Processes 1
Auditory Physiology PSY 310 Greg Francis Lecture 29 A dangerous device. Hearing The sound stimulus is changes in pressure The simplest sounds vary in: Frequency: Hertz, cycles per second. How fast the
More informationTesting Digital Hearing Aids
Testing Digital Hearing Aids with the FONIX 6500-CX Hearing Aid Analyzer Frye Electronics, Inc. Introduction The following is a quick guide for testing digital hearing aids using the FONIX 6500-CX. All
More informationstudy. The subject was chosen as typical of a group of six soprano voices methods. METHOD
254 J. Physiol. (I937) 9I, 254-258 6I2.784 THE MECHANISM OF PITCH CHANGE IN THE VOICE BY R. CURRY Phonetics Laboratory, King's College, Neweastle-on-Tyne (Received 9 August 1937) THE object of the work
More informationProceedings of Meetings on Acoustics
Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Speech Communication Session 4aSCb: Voice and F0 Across Tasks (Poster
More informationBASIC NOTIONS OF HEARING AND
BASIC NOTIONS OF HEARING AND PSYCHOACOUSICS Educational guide for the subject Communication Acoustics VIHIAV 035 Fülöp Augusztinovicz Dept. of Networked Systems and Services fulop@hit.bme.hu 2018. október
More informationAn active unpleasantness control system for indoor noise based on auditory masking
An active unpleasantness control system for indoor noise based on auditory masking Daisuke Ikefuji, Masato Nakayama, Takanabu Nishiura and Yoich Yamashita Graduate School of Information Science and Engineering,
More informationUnit 4: Sensation and Perception
Unit 4: Sensation and Perception Sensation a process by which our sensory receptors and nervous system receive and represent stimulus (or physical) energy and encode it as neural signals. Perception a
More informationCONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT
CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT PACS:..Hy Furuya, Hiroshi ; Wakuda, Akiko ; Anai, Ken ; Fujimoto, Kazutoshi Faculty of Engineering, Kyushu Kyoritsu University
More informationPerception of tonal components contained in wind turbine noise
Perception of tonal components contained in wind turbine noise Sakae YOKOYAMA 1 ; Tomohiro KOBAYASHI 2 ; Hideki TACHIBANA 3 1,2 Kobayasi Institute of Physical Research, Japan 3 The University of Tokyo,
More informationMECHANISM OF HEARING
MECHANISM OF HEARING Sound: Sound is a vibration that propagates as an audible wave of pressure, through a transmission medium such as gas, liquid or solid. Sound is produced from alternate compression
More informationRequired Slide. Session Objectives
Auditory Physiology Required Slide Session Objectives Auditory System: At the end of this session, students will be able to: 1. Characterize the range of normal human hearing. 2. Understand the components
More informationUSING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES
USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332
More information