19th INTERNATIONAL CONGRESS ON ACOUSTICS, MADRID, 2-7 SEPTEMBER 2007

ASSESSMENTS OF BONE-CONDUCTED ULTRASONIC HEARING-AID (BCUHA): FREQUENCY-DISCRIMINATION, ARTICULATION AND INTELLIGIBILITY TESTS

PACS: Ts

Nakagawa, Seiji; Okamoto, Yosuke; Fujimoto, Kiyoshi
National Institute of Advanced Industrial Science and Technology (AIST), Midorigaoka, Ikeda, Osaka, Japan

ABSTRACT

Bone-conducted ultrasounds (BCUs) can be perceived not only by normal-hearing subjects but also by the profoundly deaf, who can hardly sense sounds even with conventional hearing-aids. We developed a bone-conducted ultrasonic hearing-aid (BCUHA) for the profoundly deaf in which ultrasounds at about 30 kHz are amplitude-modulated by speech sounds and presented to the user's mastoid by a vibrator; users perceive the demodulated speech sounds. Psychoacoustical tests were undertaken to investigate the characteristics of the BCUHA and assess its practicability. Hearing tests in profoundly deaf subjects showed that 42% were able to perceive sounds and 17% were able to recognize words using the BCUHA prototype. Psychophysical measurements in normal-hearing subjects showed that: (1) difference limens for frequency (DLFs) for BCU are larger than those for air-conducted sound (AC) below 0.25 kHz and above 6.0 kHz, whereas BCU and AC show almost the same DLFs between 0.25 and 4.0 kHz; (2) articulations for Japanese monosyllables were about 60% in normal-hearing subjects, and no major differences were observed between the confusion matrices of BCU and AC; and (3) intelligibility for familiar 4-mora Japanese words reached 85%. These results indicate the practicability of BCUHAs and provide useful information for estimating the mechanisms of BCU perception.

INTRODUCTION

Severely hearing-impaired people cannot hear even with the use of a conventional hearing-aid. Although cochlear implants, implanted into the temporal cranial bone, stimulate the cochlear nerve electrically and can restore hearing ability, their performance is not necessarily satisfactory, and there are no commercially available hearing-aids that can recover a sufficient sensation of hearing for the profoundly deaf.

On the other hand, although the upper frequency limit of human hearing is believed to be no higher than about 24,000 Hz, several studies have reported that bone-conducted ultrasounds (BCUs) are audible [1-4]. Indeed, BCU hearing in humans has been demonstrated under various auditory pathological conditions, including sensorineural hearing loss and middle ear disorders [3]. BCUs are perceived even by profoundly deaf subjects who can hardly sense sounds with conventional hearing-aids [5]. In 1991, Lenhardt et al. reported that BCU modulated by speech sounds was intelligible to some extent [5], suggesting the possibility of developing a novel hearing-aid based on BCU perception. However, Dobie disputed Lenhardt's results, which were obtained from subjective psychological experiments [6], and a controversy has continued ever since. Lenhardt's argument was recently supported objectively by magnetoencephalography (MEG) [7, 8] and positron emission tomography (PET) [9]. Moreover, a bone-conducted ultrasonic hearing-aid (BCUHA) for the profoundly deaf has indeed been developed [10].

A BCUHA is far easier to attach than a cochlear implant, which reduces the mental and physical burden experienced by cochlear implant users. Moreover, such a hearing-aid could also be used to treat tinnitus in severely hearing-impaired people [11, 12]. Thus, many clinical applications are expected for BCUHAs.
In this study, to assess the practicability of the BCUHA, hearing tests were carried out in deaf subjects. To obtain more detailed information about the characteristics of hearing with a BCUHA, frequency resolution, the articulation of Japanese monosyllables and the intelligibility of Japanese words were investigated in normal-hearing subjects.

BONE-CONDUCTED ULTRASONIC HEARING-AID (BCUHA)

Figure 1 shows a schema of the BCUHA. Ultrasounds are amplitude-modulated by speech and presented to the mastoid or the sternocleidomastoid muscle by a vibrator [10]. The amplitude-modulated signal is given by the following expression:

U(t) = (1 + m f(t)) · g(t)   (Eq. 1)

where f(t), g(t) and m represent the modulator signal (speech), the carrier signal and the modulation depth, respectively. With the BCUHA, both the speech and the pitch of the carrier, which is equivalent to a tone of ten-odd kHz, are perceived simultaneously. (A signal-level sketch of Eq. 1 is given at the end of this section.)

Figure 2 shows a prototype of the BCUHA. Its basic parameters were determined according to the results of previous studies on the physiological and psychoacoustic characteristics of BCU perception: subjective pitch [13], dynamic range of loudness [14] and the optimal carrier frequency for perception [15].

Figure 1.- A schema of the bone-conducted ultrasonic hearing-aid (BCUHA). Ultrasounds are amplitude-modulated by speech or environmental sounds picked up by a microphone and presented to the mastoid or the sternocleidomastoid muscle by a vibrator.

Figure 2.- A prototype of the BCUHA. Body size: 64 x 118 x 24 mm; weight: 178 g. Some parameters (carrier frequency, carrier amplitude, input signal amplitude, modulation depth, output signal amplitude) can be set on the liquid crystal display. Other parameters can be set from a personal computer via an RS-232C interface.
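To make Eq. 1 concrete, here is a minimal sketch that amplitude-modulates a sinusoidal 30 kHz carrier with a speech waveform. The carrier frequency and the modulation depth of 0.9 come from the text (0.9 is the depth used in the frequency-discrimination test); the sampling rate, the function name and the synthetic 1 kHz "speech" tone are illustrative assumptions, not details of the BCUHA implementation.

```python
import numpy as np

def am_ultrasound(speech, fs=192_000, fc=30_000.0, m=0.9):
    """Amplitude-modulate an ultrasonic carrier with a speech signal (Eq. 1).

    speech : 1-D array, the modulator f(t), resampled to fs and normalised to [-1, 1].
    fs     : sampling rate in Hz; must exceed 2*fc (hypothetical value).
    fc     : carrier frequency in Hz (about 30 kHz in the BCUHA).
    m      : modulation depth.
    """
    t = np.arange(len(speech)) / fs
    carrier = np.sin(2.0 * np.pi * fc * t)     # g(t): sinusoidal carrier
    return (1.0 + m * speech) * carrier        # U(t) = (1 + m f(t)) g(t)

# Example with a synthetic 1 kHz tone standing in for a recorded voice.
fs = 192_000
t = np.arange(int(0.2 * fs)) / fs              # 200 ms, as in the DLF stimuli
tone = np.sin(2.0 * np.pi * 1000.0 * t)
u = am_ultrasound(tone, fs=fs)
```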

HEARING TESTS IN DEAF SUBJECTS TO EVALUATE THE PRACTICABILITY OF BCUHAs

To test the utility of the BCUHA, psychoacoustical measurements were carried out in 40 normal-hearing, 37 midrange-deaf and 24 profoundly deaf subjects [10].

Test I
BCU tone bursts (25-32 kHz, duration: 250 ms, rise/fall: 10 ms) were presented by a BCUHA. Subjects were asked to report whether or not they could sense a sound.

Test II
Tone bursts of 250, 500, 1000, 2000 and 4000 Hz (duration: 250 ms, rise/fall: 10 ms) were presented in a paired-comparison procedure via a BCUHA. The stimulus intensity was set at the most clearly perceived level. In each trial, subjects were asked to indicate which of the two tones had the higher pitch.

Test III
Japanese numbers (e.g., /ichi/ (one), /ni/ (two), /san/ (three)) were presented via a BCUHA. In a trial, each stimulus was presented 10 times at intervals of 2 s; after the presentation, the subjects were informed which number had been presented by a computer display placed in front of them. Five trials were carried out for each subject, using five different Japanese numbers.

Results
In Test I, 100% of normal-hearing, 100% of midrange-deaf and 42% of profoundly deaf subjects were able to obtain a sound sensation. In Test II, 95% of normal-hearing, 57% of midrange-deaf and 21% of profoundly deaf subjects were able to discriminate the frequencies correctly. In Test III, 95% of normal-hearing, 73% of midrange-deaf and 17% of profoundly deaf subjects were able to recognize the words.

FREQUENCY-DISCRIMINATION TEST

To investigate the psychoacoustic characteristics and underlying mechanisms of BCU hearing, difference limens for frequency (DLFs) for pure tones modulated onto ultrasonic carriers were measured [16].

Methods
Five Japanese normal-hearing subjects (20-33 years old) participated. The experiments were conducted individually in a fully anechoic and soundproof room. The tonal signals to be discriminated had center frequencies (CFs) of 0.125, 0.25, 0.5, 1, 2, 4, 6 and 8 kHz. Under the air-conducted (AC) condition, pure sinusoidal tones were presented; under the BCU conditions, the tone signals were modulated onto ultrasonic carriers. There were two types of carrier: a 30 kHz sine wave and a bandpass Gaussian noise with a rectangular window of 30 ± 4 kHz. The modulation depth was set at 0.9. Stimulus duration was 200 ms, with rising/falling ramps of 50 ms shaped with a half-cycle of a raised-cosine function. The silent interval between the two tones in each trial was 300 ms. Intensities were always above 15 dB SL. DLFs were measured using a two-alternative forced-choice adaptive procedure with a decision rule that estimates the 70.7% correct point on the psychometric function. In each trial, subjects were asked to indicate which of the two tones, equally spaced in linear frequency on either side of the nominal CF, had the higher frequency. The initial frequency difference of each run was 3% of the CF. The frequency difference was decreased by a factor of 1.4 after two consecutive correct responses and increased by the same factor after each incorrect response. Each run consisted of six reversals, and the threshold estimate was taken as the geometric mean of the frequency differences at the last four reversals. Three estimates were obtained for each CF condition, and their geometric mean was taken as the DLF for each subject. (A sketch of this adaptive procedure is given below.)
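A minimal sketch of the adaptive tracking rule described above (two-down/one-up with a step factor of 1.4, six reversals, threshold taken as the geometric mean of the last four) follows. The `subject_answers` callback, the simulated listener and all parameter names are hypothetical stand-ins for the actual experiment software.

```python
import numpy as np

def run_staircase(cf_hz, subject_answers, start_pct=3.0, step_factor=1.4,
                  n_reversals=6, n_for_estimate=4):
    """One adaptive 2AFC run: 2-down/1-up, converging on ~70.7% correct.

    cf_hz           : nominal center frequency of the tone pair (Hz).
    subject_answers : callback(delta_f_hz) -> True if the subject correctly
                      picked the higher tone (hypothetical interface).
    Returns the DLF estimate in Hz.
    """
    delta_f = cf_hz * start_pct / 100.0      # initial difference: 3% of the CF
    reversals = []
    n_correct = 0
    direction = None                         # 'down' or 'up'; None before the first step

    while len(reversals) < n_reversals:
        if subject_answers(delta_f):
            n_correct += 1
            if n_correct == 2:               # two consecutive correct -> smaller delta
                n_correct = 0
                if direction == 'up':
                    reversals.append(delta_f)
                direction = 'down'
                delta_f /= step_factor
        else:                                # each incorrect response -> larger delta
            n_correct = 0
            if direction == 'down':
                reversals.append(delta_f)
            direction = 'up'
            delta_f *= step_factor

    last = reversals[-n_for_estimate:]
    return float(np.exp(np.mean(np.log(last))))   # geometric mean of the last reversals

# Example: a simulated listener who is reliably correct only when delta_f > 10 Hz.
rng = np.random.default_rng(0)
dlf = run_staircase(1000.0, lambda d: d > 10.0 or rng.random() < 0.5)
```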

Results
Figure 3 shows the DLFs as a proportion of the CF under each condition. The modulation conditions yielded similar DLF functions. The proportional DLFs were less than 1% between 1 and 4 kHz and increased below 0.25 kHz and above 6 kHz. These elevations were greater under the ultrasonic conditions, and more so with the noise carrier. A two-way ANOVA with repeated measures indicated significant main effects of modulation type (p < 0.01) and of CF (p < 0.001). Multiple comparisons showed significant differences among all signal conditions for the 0.125 and 8 kHz conditions, and between the noise carrier and AC for the 6 kHz condition (p < 0.05, Tukey's HSD tests). The largest individual difference was observed at 0.125 kHz.

Figure 3.- Difference limens for frequency (DLFs) expressed as a proportion of the center frequency (CF) and plotted as a function of CF for the BCU (sinusoidal carrier), BCU (bandpass-noise carrier) and AC conditions. Means across the five subjects are shown.

Discussion
Generally, similar patterns were obtained under both the BCU and AC conditions. These results indicate that the pitch generated by amplitude-modulated BCU corresponds to that of AC pure tones, and that this pitch is essentially identical for the sinusoidal and noise carriers. For the middle CFs, no significant differences were observed in the DLFs between BCU and AC; for the lowest (0.125 kHz) and high (> 6.0 kHz) CFs, on the other hand, the DLFs were larger under the BCU conditions than for AC. Masking by the ultrasonic carrier is an implausible explanation, because sinusoidal BCU masks AC below 8 kHz only slightly [14]. Some physical explanations might account for the results at the high frequencies. First, undersampling might have occurred because of the small frequency differences between the tones and the carrier. Second, our vibrator has a resonance at 30 kHz, and its output is reduced as the frequency moves away from the resonance; this also accounts for the larger DLFs with the noise carrier, which had wider amplitude fluctuations and a broader bandwidth. However, these physical explanations are not applicable to the low center frequencies. Given the large individual differences at the lowest center frequency, biological explanations seem more plausible. One possible explanation is low-cut filtering in the pathway conducting the demodulated signals to the cochlea; however, this cannot fully explain the difference between the sinusoidal and noise carriers. Another possibility is that demodulation develops after the ultrasounds are shifted to an 8-16 kHz range by a process such as brain resonance [11]. This would imply pitch perception as if from sounds below half the original frequencies, which deviates from our impression that subjective pitch is similar between BCU and AC, and it also cannot explain the difference between the sinusoidal and noise carriers. At present, we have no supporting evidence to explain the low discriminability at the lower frequencies with BCU hearing.

ARTICULATION AND INTELLIGIBILITY TESTS

To obtain more detailed information about the characteristics of speech-hearing with a BCUHA, monosyllable-articulation and word-intelligibility tests were conducted in 10 Japanese normal-hearing subjects (21-34 years old) [17]. The experiments were carried out in an anechoic room. Confusion matrices, which show what kinds of errors occurred in listening, were produced from the results of the monosyllable-articulation test and compared between BCU and AC.

Articulation tests
Articulations of 100 Japanese monosyllables, recorded in a female voice from a commercially available database [18, 19], were measured using a BCUHA. Monosyllable articulations of AC speech were also measured. Intensities of the amplitude-modulated BCUs were set at 20, 25 and 30 dB above threshold; intensities of the AC speech were set at 20, 25 and 30 dB(A).

Intelligibility tests
The word intelligibility of BCU speech was also investigated. Four-mora Japanese words recorded in a female voice were selected from a database [18, 19] in which Japanese words are classified into four levels of word familiarity, a measure of how familiar people are with each word, used here to control word difficulty. Fifty words were selected from each level, so 200 words were used in the experiment. Intensities of the amplitude-modulated BCUs were set at 20, 25 and 30 dB above threshold.

Confusion matrices
To examine the speech-perception tendencies with BCUHAs, confusion matrices were composed for both BCU and AC from the results of the monosyllable-articulation tests. The phonemes were classified into vowels, unvoiced consonants, voiced plosive and fricative consonants, and other voiced consonants; a sketch of how such a matrix can be composed follows.
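To illustrate the composition, the sketch below tallies (presented, perceived) monosyllable pairs into the four phoneme groups of Figure 5 and normalises each row to percentages. The simplistic romanised-syllable classifier and the example trials are hypothetical; they are not the classification actually used in the study.

```python
import numpy as np

GROUPS = ["vowel", "unvoiced", "voiced_plosive_fricative", "other_voiced"]

def phoneme_group(syllable):
    """Map a romanised monosyllable to one of the four groups of Figure 5.
    Highly simplified, hypothetical classification for illustration only."""
    if syllable[0] in "aiueo":
        return "vowel"
    if syllable[0] in "kstph":                 # unvoiced consonants
        return "unvoiced"
    if syllable[0] in "gzdb":                  # voiced plosives and fricatives
        return "voiced_plosive_fricative"
    return "other_voiced"                      # nasals, liquids, glides, etc.

def confusion_matrix(responses):
    """responses: list of (presented, perceived) monosyllable pairs.
    Returns a 4 x 4 matrix of row-normalised percentages."""
    counts = np.zeros((len(GROUPS), len(GROUPS)))
    for presented, perceived in responses:
        i = GROUPS.index(phoneme_group(presented))
        j = GROUPS.index(phoneme_group(perceived))
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return 100.0 * counts / np.maximum(row_sums, 1)   # avoid division by zero

# Example with a few made-up trials.
trials = [("ka", "ta"), ("ga", "da"), ("a", "a"), ("ma", "na"), ("sa", "ta")]
print(confusion_matrix(trials).round(1))
```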
Results
Figure 4 shows the results of the monosyllable-articulation and word-intelligibility tests. Although the articulation scores for BCU were lower than those for AC, about 60% was achieved. Furthermore, the word intelligibility for BCU was more than 85% for words with high familiarity.

Figure 4.- Left: average scores of the monosyllable-articulation tests for BCU. Right: average scores of the word-intelligibility tests for BCU and AC. Error bars indicate the SEM (* p < 0.05, ** p < 0.01).
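The scores plotted in Figure 4 are percent-correct averages over trials; the sketch below shows one way such a tabulation could be done, assuming one record per word-intelligibility trial. The column names and example records are hypothetical, and averaging within and then across subjects is an assumption rather than a stated detail of the analysis.

```python
import pandas as pd

# One record per word-intelligibility trial (hypothetical data layout).
trials = pd.DataFrame([
    {"subject": 1, "familiarity_level": 4, "sensation_level_db": 30, "correct": 1},
    {"subject": 1, "familiarity_level": 4, "sensation_level_db": 20, "correct": 1},
    {"subject": 1, "familiarity_level": 1, "sensation_level_db": 20, "correct": 0},
    {"subject": 2, "familiarity_level": 1, "sensation_level_db": 30, "correct": 1},
])

# Percent correct per familiarity level and presentation level:
# average within each subject first, then across subjects.
per_subject = (trials
               .groupby(["familiarity_level", "sensation_level_db", "subject"])["correct"]
               .mean())
scores = 100.0 * per_subject.groupby(["familiarity_level", "sensation_level_db"]).mean()
print(scores)
```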

Figure 5 shows the respective confusion matrices for BCU and AC. Some differences between BCU and AC were found: a two-way ANOVA on the number of misidentified phonemes, with spoken phoneme and sound type as factors, showed that the number perceived as /j/ was significantly larger for BCU than for AC when a palatalized sound was presented (p < 0.05), and that the number perceived as /r/ or /rj/ was significantly larger for AC than for BCU when voiced plosives and fricatives were presented (p < 0.01) (the related blocks are denoted by circles and crosses, respectively, in Figure 5). However, as a whole, the matrices show similar patterns.

Figure 5.- Confusion matrices based on the results of the monosyllable-articulation tests with AC and BCU. The phonemes were classified into four groups: (1) vowels, (2) unvoiced consonants, (3) voiced plosive and fricative consonants, and (4) other voiced consonants. Blocks with larger grey values indicate higher frequencies of occurrence for those pairs (scale: 0-100%).

Discussion
The similar patterns of the confusion matrices for BCU and AC indicate that there is no major difference in the way the types of sounds are recognized, i.e., the same pitch is perceived with amplitude-modulated BCU as with AC. Therefore, it seems probable that signal-processing schemes aimed at improving the intelligibility of AC speech can also be applied to BCU speech. The differences observed between amplitude-modulated BCU and AC may depend on the demodulation mechanism of BCU, and the confusion patterns themselves may provide some clues to the sensory mechanism of BCU perception.

The word-intelligibility scores obtained at sensation levels of 25 and 30 dB were significantly higher than those at 20 dB, but there was no significant difference between the scores at 25 and 30 dB. Likewise, the monosyllable-articulation tests under the BCU conditions showed no significant difference between the scores at 20 and 30 dB, whereas the scores under AC differed significantly among all sound levels. These results indicate that the intelligibility of BCU does not necessarily increase with the sound level. This may be due to the pitch of the carrier signal [13] and the narrow dynamic range of BCU hearing [14]. Further investigation of the methods of amplitude-modulating ultrasound with a speech signal will be useful.

CONCLUSION

The BCUHA, a novel hearing-aid using BCU hearing, was assessed by psychoacoustical tests. The hearing tests showed that more than 40% of the profoundly deaf subjects examined experienced clear sound sensations, and some subjects recognized words when using a BCUHA. No significant differences between BCU and AC were observed in frequency resolution in the middle frequency range (0.25 ~ 6.0 kHz). As well, word intelligibility for familiar words reached 85%. These results point to the practicability of BCUHAs.

The present results are also useful for the continuing development of BCUHAs. Because pitch perception is similar between BCU and AC hearing, several signal-processing methods that are effective for conventional hearing-aids may be applicable to BCUHAs. According to the results of the frequency-discrimination test, improvements are possible through amplification of the modulation signals in the low and high frequency ranges. Learning may also be important: in the current experiments, the more often subjects experienced BCU hearing, the better their performance. Some profoundly deaf subjects who have frequently participated in our hearing experiments can even understand everyday speech. Further studies will be needed to clarify the details outlined in this report.

Acknowledgments

This work was supported by the Industrial Technology Research Grant Program of the New Energy and Industrial Technology Development Organization (NEDO), Japan, and by a Research Grant from the Strategic Information and Communications R&D Promotion Programme, Ministry of Internal Affairs and Communications, Japan, to SN.

References:
[1] V. Gavreau: Audibilité de sons de fréquence élevée. Comptes Rendus 226 (1948)
[2] R. J. Pumphrey: Upper limit of frequency for human hearing. Nature 166 (1950) 571
[3] R. J. Bellucci, D. E. Schneider: Some observations on ultrasonic perception in man. Ann. Otol. Rhinol. Laryngol. 71 (1962)
[4] J. F. Corso: Bone-conduction thresholds for sonic and ultrasonic frequencies. J. Acoust. Soc. Am. 35 (1963)
[5] M. L. Lenhardt, R. Skellett, P. Wang, A. M. Clarke: Human ultrasonic speech perception. Science 253 (1991)
[6] R. A. Dobie, M. L. Wiederhold: Ultrasonic hearing. Science 255 (1992)
[7] H. Hosoi, S. Imaizumi, T. Sakaguchi, M. Tonoike, K. Murata: Activation of the auditory cortex by ultrasound. The Lancet 351 (1998)
[8] S. Nakagawa, T. Sakaguchi, M. Yamaguchi, M. Tonoike, S. Imaizumi, H. Hosoi, Y. Watanabe: Characteristics of auditory perception of bone-conducted ultrasound. Tech. Rep. IEICE WIT99-15 (1999)
[9] S. Imaizumi, H. Hosoi, T. Sakaguchi, Y. Watanabe, N. Sadato, S. Nakamura, A. Waki, Y. Yonekura: Ultrasound activates the auditory cortex of profoundly deaf subjects. NeuroReport 12 (2001)
[10] S. Nakagawa, Y. Okamoto, Y. Fujisaka: Development of a bone-conducted ultrasonic hearing aid for the profoundly sensorineural deaf. Trans. Jpn. Soc. Med. Biol. Eng. 44 (2006)
[11] M. L. Lenhardt: Ultrasonic hearing in humans: applications for tinnitus treatment. Int. Tinnitus J. 9 (2003)
[12] T. Koizumi, S. Nakagawa, T. Nishimura, H. Hosoi: Auditory adaptation by pseudo-tinnitus: an MEG study. Audiology Japan 47 (2004)
[13] S. Nakagawa, M. Tonoike: Measurement of brain magnetic fields evoked by bone-conducted ultrasounds: effect of frequencies. Unveiling the Mystery of the Brain, International Congress Series 1278, Elsevier (2005)
[14] T. Nishimura, S. Nakagawa, T. Sakaguchi, H. Hosoi: Ultrasonic masker clarifies ultrasonic perception in man. Hearing Research 175 (2003)
[15] S. Nakagawa, M. Yamaguchi, M. Tonoike, Y. Watanabe, H. Hosoi, S. Imaizumi: Characteristics of auditory perception of bone-conducted ultrasound in humans revealed by magnetoencephalography. NeuroImage 11 (2000) S746
[16] K. Fujimoto, S. Nakagawa: Non-linear ultrasonic perception. Hearing Research 204 (2005)
[17] Y. Okamoto, S. Nakagawa, K. Fujimoto, M. Tonoike: Intelligibility of bone-conducted ultrasonic speech. Hearing Research 208 (2005)
[18] S. Amano, T. Kondo: Modality dependency of familiarity ratings of Japanese words. Perception & Psychophysics 57 (1995)
[19] S. Sakamoto, Y. Suzuki, K. Ozawa, T. Kondo, T. Sone: New lists for word familiarity and phonetic balance. J. Acoust. Soc. Jpn. 54 (1998)
