ACOUSTIC ANALYSIS AND PERCEPTION OF CANTONESE VOWELS PRODUCED BY PROFOUNDLY HEARING IMPAIRED ADOLESCENTS

Edward Khouw & Valter Ciocca
Dept. of Speech and Hearing Sciences, The University of Hong Kong

ABSTRACT: This study investigated mid-vocalic F1 and F2 frequencies as cues to the perception of three Cantonese vowels (/a/, /i/, /Á/) in monosyllabic words produced by ten profoundly hearing impaired and ten normal hearing adolescents. For the control group, there were significant differences in F1 and F2 frequencies among the three vowels, as expected. In contrast, the hearing impaired speakers showed no significant F2 frequency difference between /a/ and /Á/. Compared with the vowels produced by the control speakers, the vowels of the hearing impaired speakers showed a reduced range of both F1 (tongue height) and F2 (front-back placement of the tongue). Listeners with normal hearing identified the vowels produced by the hearing impaired speakers with an accuracy of 65% for /a/, 32% for /i/, and 19% for /Á/. These results can be explained by the larger deviations from normal F1/F2 patterns for /i/ and /Á/ than for /a/. One of the most common errors was confusion of a target vowel with /ø/, which may be explained by the clustering of the hearing impaired speakers' vowels towards the center of the F1/F2 space.

INTRODUCTION
This study examined the formant frequency patterns of vowel contrasts produced by profoundly hearing impaired and normal hearing Cantonese speakers. Formant frequencies at the steady-state portion of vowels serve as acoustic and perceptual cues to vowel identity for English-speaking children with normal hearing (Fox, 1983; Peterson and Barney, 1952). English-speaking profoundly hearing impaired children have been found to produce vowels with formant frequency patterns that differ from those of normal hearing children.
For example, Monsen (1976) found a reduced range of F1 and F2 frequencies for three vowels (/a, i, Á/) produced by profoundly hearing impaired adolescents. Other studies have also found that vowels produced by English-speaking profoundly hearing impaired children were characterized by an overlap of vowel targets in the F1/F2 space, as well as by a restricted range of formant frequency values. In the production of vowels by English-speaking profoundly hearing impaired children, errors of substitution and neutralization have been documented (Angelocci et al., 1964; Hudgins and Numbers, 1942; Smith, 1975). Similar vowel errors have been found for Cantonese-speaking hearing impaired children (Dodd and So, 1994). Zee (1998) investigated the formant frequency values that characterize the Cantonese vowel system of speakers with normal hearing; however, acoustic data on vowel production by profoundly hearing impaired Cantonese speakers have not been reported. In order to gain a better understanding of the vowel errors of Cantonese-speaking profoundly hearing impaired children, the present study measured the formant frequencies of vowels produced by normal hearing and profoundly hearing impaired children. The use of formant frequency information as a perceptual cue was examined by studying the identification of these vowels by adult listeners with normal hearing.

METHOD OF THE ACOUSTIC ANALYSIS
The speakers were twenty Cantonese adolescents, ten of whom (five males, five females) had normal hearing (control group); the other ten (five males, five females) were profoundly hearing impaired. The ages of the normal hearing speakers ranged from 12;10 to 14;02 (mean = 13;05); the ages of the hearing impaired speakers ranged from 12;08 to 14;02 (mean = 13;04). The hearing impaired speakers were selected on the criteria of being prelingually deaf with Pure Tone Average (P.T.A.)
thresholds at 0.5, 1.0, and 2.0 kHz of 90.0 dB HL or more in the better ear, based on audiograms provided by audiologists in the Education Department. These audiograms were based on audiological tests carried out less than six months before the recording of stimuli for the present study. Hearing impaired speakers wore their hearing aids for ten hours or more every day, had no known additional handicapping conditions, and studied in schools for the deaf. Normal hearing speakers had no known speech, language, or hearing disorders and studied in mainstream schools.

Accepted after abstract review (page 367)

The speech stimuli consisted of six sets of monosyllabic words, with three words in each set. The words, which represented common objects and concepts, were familiar to children at primary school level. Of the six three-word sets, three contrasted only in vowels, e.g. [sa55], [si55], [sÁ55]; the other sets differed in initial stops in addition to the vowel, e.g. [tsa33], [tsi33], [tsʰÁ33]. The use of minimal contrasts was not possible for all sets because of the limited number of Cantonese words that fulfil the requirement of minimal contrasts (same consonants and tones but different vowels). Stimuli were recorded either in a sound-proof room in the Department of Speech and Hearing Sciences at the University of Hong Kong, or in a sound-proof room in the Hong Kong Lutheran School for the Deaf. Speech samples were recorded using a Tascam DA-30 MkII digital tape recorder and a Bruel & Kjaer 4003 low-noise unidirectional microphone connected to a Bruel & Kjaer Type 2812 microphone preamplifier. The microphone was held approximately eight inches from the speaker's mouth. The recording gain was set to ensure a similar recording level across subjects with no clipping. The hearing aids of the hearing impaired subjects were checked for proper functioning by verifying that their responses to the Five Sound Test (Ling, 1976) did not differ from their previously recorded and documented responses. The eighteen words were presented to each subject on cards whose order was randomized by shuffling beforehand.
The eighteen words were part of a set of seventy-two words selected to investigate other phonetic contrasts produced by Cantonese-speaking profoundly hearing impaired adolescents (Khouw, 2002). Each subject was first asked to read each word silently, and then to read it aloud. The total recording time for each subject was approximately five minutes. Recordings were low-pass filtered at 22 kHz and digitized at a sampling rate of 44.1 kHz on an Apple Power Macintosh 7100 computer with a DigiDesign Audiomedia II DSP card. The input level was monitored for each word to ensure the absence of clipping. Each word was saved as a single sound file. The acoustic analysis was carried out using the SoundScope 2.1 software (GW Instruments, 1996) on an Apple Power Macintosh 9500 computer. The sound files were first normalized. A wideband spectrogram of each word was then produced with the filter bandwidth set at 300 Hz (512-point FFT, with 6 dB pre-emphasis). The frequencies of F1 and F2 were measured at the middle of the vocalic segment, as previously done by Zee (1998). The middle of the vocalic segment was taken to be midway between the beginning and the end of the vocalic segment. The beginning of the vocalic segment was defined as the onset of voicing, signalled by the first of the regularly spaced vertical striations that indicate glottal pulsing. The end of the vocalic segment was set at the beginning of the last vocalic pulse visible on the spectrographic display. To estimate the formant frequencies, the sampling rate was first decreased from 44.1 kHz to 10 kHz; an LPC spectrum (14 coefficients) was then calculated. The formant frequencies (in Hz) were estimated automatically by the LPC algorithm of the SoundScope software. The LPC spectra were compared with FFT spectra (filter bandwidth 300 Hz) at the same time point to prevent gross errors in the estimation of the formant frequencies.
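SoundScope's internal formant tracker is not documented here, but the same pipeline (signal downsampled to 10 kHz, 14-coefficient LPC, formants read off the prediction polynomial) can be sketched with NumPy/SciPy. This is an illustrative implementation under those assumptions, not the algorithm the authors used:

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coeffs(x, order=14):
    """LPC coefficients by the autocorrelation (Yule-Walker) method."""
    x = x * np.hamming(len(x))
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    return np.concatenate(([1.0], -a))        # A(z) = 1 - sum a_k z^-k

def formants(x, fs=10000, order=14, max_bw=400.0):
    """Formant frequencies (Hz) from the roots of the LPC polynomial.

    Keeps one root of each complex-conjugate pair and discards heavily
    damped roots (bandwidth >= max_bw), which are usually spurious.
    """
    x = lfilter([1.0, -0.97], [1.0], x)       # pre-emphasis
    roots = np.roots(lpc_coeffs(x, order))
    roots = roots[np.imag(roots) > 0.01]
    freqs = np.angle(roots) * fs / (2 * np.pi)
    bands = -np.log(np.abs(roots)) * fs / np.pi
    return np.sort(freqs[bands < max_bw])
```

In practice such estimates are sanity-checked against an FFT spectrum at the same time point, as was done in this study.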
To assess the intra- and inter-judge reliability of the acoustic measurements, the stimuli of all speakers were re-analyzed by the author and by a teacher of the deaf trained in acoustic analysis. Intra- and inter-judge reliability coefficients were computed; reliability was high for all measurements, ranging from 0.90 to 0.96.

METHOD OF THE PERCEPTUAL STUDY
The listeners were ten normal hearing final-year female speech therapy students (age range 21;02 to 22;09) from the Department of Speech and Hearing Sciences, The University of Hong Kong. All listeners had previous training in phonetic transcription. The stimuli for the perceptual study were the monosyllabic words used in the acoustic study. The loudness of the stimuli was adjusted by the author to an approximately equal level during editing of the sound files. The speech stimuli were divided into two sets (one for the normal hearing speakers and one for the hearing impaired speakers). For the vowel identification task, each speaker produced 18 words (six for each vowel), for a total of 180 words for each speaker group (3 vowels x 6 repetitions x 10 speakers). Each listener performed the

identification task in a single-wall IAC sound-proof booth, using a pair of Sennheiser HD 580 headphones connected to an Apple Power Macintosh 7100 computer with a DigiDesign Audiomedia II DSP card. All sound files were played from hard disk at a sampling rate of 44.1 kHz, and the stimuli were presented at a comfortable listening level. Stimulus presentation and response collection were controlled by a custom program written in HyperCard (Apple Computer, Inc.), which also randomized the order of the stimuli within each task. Each listener was told the type of speech sound (i.e. vowels) she would hear, and was instructed to identify each stimulus by clicking the HyperCard button corresponding to the perceived sound. Four buttons were available for the vowel identification task: one for each of the three vowels /a, i, Á/, and one for "others"; when the listener selected the "others" button, a dialog box was displayed so that the listener could type in a broad transcription of the perceived sound. Each listener could also click a "repeat" button once to hear the current stimulus again, and a "next" button to proceed to the next stimulus after entering a response for the current trial. The order of the normal hearing and hearing impaired listening sets was counterbalanced across listeners. There were no missing data.

STATISTICAL ANALYSIS
For the acoustic measurements, a repeated-measures ANOVA (using the Huynh-Feldt adjustment of degrees of freedom) was used to analyze the mid-vowel frequency of each formant separately (F1, then F2). Each data point was the mean formant frequency of the six tokens of each of the three vowels produced by a speaker. The between-group factor was speaker group (control and hearing impaired); the within-group factor was vowel (/a/, /i/, /Á/). The Tukey HSD (honestly significant difference) test was used for post hoc comparisons between means.
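The analysis above is a mixed repeated-measures ANOVA with the Huynh-Feldt correction. As a simplified, hypothetical illustration of the vowel effect and the Tukey HSD follow-up, one can run a one-way analysis on synthetic per-speaker F1 means (values centred on the control-group means reported in the Results; SciPy's `f_oneway` and `tukey_hsd` are assumed available):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic per-speaker mean F1 values (Hz) for ten control speakers,
# centred on the group means reported in the Results section
f1 = {
    "a": rng.normal(1142, 60, size=10),
    "i": rng.normal(389, 40, size=10),
    "O": rng.normal(645, 50, size=10),   # "O" stands in for the third vowel
}

# Omnibus test for a vowel effect on F1, then pairwise Tukey comparisons
F, p = stats.f_oneway(f1["a"], f1["i"], f1["O"])
hsd = stats.tukey_hsd(f1["a"], f1["i"], f1["O"])
```

With group means this far apart relative to their spread, all three pairwise comparisons come out significant, mirroring the reported control-group pattern.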
RESULTS OF ACOUSTIC ANALYSIS
For the F1 frequency, a significant speaker group by vowel interaction, F(2, 36) = 17.68, p < .01, showed that the F1 frequency difference between the two speaker groups depended on the vowel. Post hoc analysis showed that for the vowel /a/, the F1 frequency of the hearing impaired speakers (896 Hz) was significantly lower than that of the control speakers (1142 Hz) (Tukey HSD tests, p < .01); there was no significant group difference in F1 frequency for the vowels /i/ and /Á/ (Tukey HSD tests, p > .05). For the control speakers, the F1 frequency of the vowel /a/ (1142 Hz) was significantly higher than that of the vowel /Á/ (645 Hz), which in turn was significantly higher than that of the vowel /i/ (389 Hz) (Tukey HSD tests, p < .01). Similarly, for the hearing impaired speakers, the F1 frequency of the vowel /a/ (896 Hz) was significantly higher than that of the vowel /Á/ (722 Hz), which in turn was significantly higher than that of the vowel /i/ (539 Hz). The main effect of vowel was significant, F(2, 36) = , p < .01, indicating a significant F1 frequency difference among the three vowels. The main effect of speaker group was not significant, F(1, 18) = 0.01, p > .05, indicating that, overall, there was no significant difference in F1 frequency between the control and the hearing impaired speakers. For the F2 frequency, a significant speaker group by vowel interaction, F(1.33, 23.99) = 47.21, p < .01, indicated that the F2 frequency differences among the vowels depended on the speaker group. Post hoc analysis showed that for the control speakers, the F2 frequency of the vowel /i/ (2844 Hz) was significantly higher than that of the vowel /a/ (1514 Hz), which in turn was significantly higher than that of the vowel /Á/ (975 Hz) (Tukey HSD tests, p < .01).
For the hearing impaired speakers, on the other hand, the F2 frequency of the vowel /i/ (2019 Hz) was significantly higher than the F2 frequencies of the vowels /a/ (1564 Hz) and /Á/ (1412 Hz) (Tukey HSD tests, p < .01), but there was no significant F2 frequency difference between the vowels /a/ and /Á/ (Tukey HSD tests, p > .05). Post hoc analysis further showed that, compared with the control speakers, the hearing impaired speakers had a significantly lower F2 frequency for the vowel /i/ and a significantly higher F2 frequency for the vowel /Á/ (Tukey HSD tests, p < .01); there was no significant group difference in F2 frequency for the vowel /a/ (Tukey HSD test, p > .05). The main effect of vowel was significant, F(1.33, 23.99) = , p < .01, indicating a significant F2 frequency difference among the three vowels. The main effect of speaker group was not significant, F(1, 18) = 1.70, p > .05, showing that, overall, there was no significant difference in F2 frequency between the two speaker groups.

RESULTS OF PERCEPTUAL ANALYSIS
The vowels /a, i, Á/ produced by the control speakers were perceived by the listeners with complete accuracy. In contrast, the vowels produced by the hearing impaired speakers were perceived with numerous errors. Table 1 shows the error pattern for the three vowels produced by the hearing impaired speakers. Productions by the hearing impaired speakers were perceived as the target vowel (39%), as another vowel (55%), or as a diphthong (6%). Of the three target vowels, /a/ was perceived with 65% accuracy, /i/ with 32% accuracy, and /Á/ with 19% accuracy. When the hearing impaired speakers produced the vowel /a/, the three main errors were confusions with the vowel /E/ (12%), the vowel / / (9%), and the vowel /ø/ (6%). When errors occurred in the perception of the vowel /i/, it was mainly perceived as /E/ (26%) or /ø/ (18%). Errors for the vowel /Á/ mainly took the form of perception as the vowel /a/ (36%) or the vowel /ø/ (17%). One of the most common errors for all three vowels was confusion with /ø/, which is not surprising given the acoustic data: the distribution of the three vowels produced by the hearing impaired speakers in the F1/F2 space shows a clustering at the center, with patterns resembling central vowels, which may explain why target vowels were commonly misperceived as the central vowel /ø/. Angelocci et al. (1964) reported that for English-speaking profoundly hearing impaired speakers, the major errors for the vowel /a/ were confusions with /Q/ (20%) and / / (17%); the most common errors for the vowel /i/ were confusions with /I/ (26%) and /E/ (10%); and the major errors for the vowel /Á/ were confusions with /a/ (19%) and /Q/ (13%).

Table 1. Error pattern of the vowels /a, i, Á/ produced by the hearing impaired speakers (response categories comprised the target vowels, other vowels including /E, y, u, ø/, and diphthongs).

Target   Correct identifications
/a/      395 (65%)
/i/      (32%)
/Á/      (19%)

DISCUSSION
When the control speakers produced the Cantonese vowels /a, i, Á/, F1 and F2 frequency values differed significantly among the vowels. This shows that the three vowels were well separated in the F1-F2 acoustic space, in agreement with the findings of Zee (1998); the present findings also replicate those of similar acoustic studies on English vowels (Peterson and Barney, 1952). In the perceptual analysis, all three vowels /a, i, Á/ were perceived with 100% accuracy by the listeners, suggesting that the listeners were able to rely on the F1 and F2 mid-vowel frequencies as perceptual cues to identify the three vowels accurately. For the three vowels produced by the hearing impaired speakers, the F1 frequency distinguished among the three vowels: the vowel /i/ was produced with the highest tongue position, the vowel /a/ with the lowest, and the vowel /Á/ with a tongue height in between. The F2 frequency of /i/ was significantly higher than those of /a/ and /Á/, showing that the vowel /i/ was produced with a more fronted tongue placement than the vowels /a/ and /Á/; the front-back placement of the vowels /a/ and /Á/ did not differ. These findings indicate that the hearing impaired speakers were able to use tongue height to distinguish among the three vowels, but could use tongue advancement only to distinguish the vowel /i/ from the other two vowels /a, Á/. Figure 1 shows the acoustic vowel space, in terms of F1-F2 frequencies, of the three vowels /a, i, Á/ produced by the control and the hearing impaired speakers.
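The per-vowel accuracies in Table 1 can be reproduced from response counts. The counts below are hypothetical, chosen only to match the percentages reported in the text (600 judgments per target: 10 speakers x 6 words x 10 listeners); "O" stands in for the third target vowel and "oe" for /ø/:

```python
from collections import Counter

# Hypothetical response counts per target vowel (600 judgments each);
# chosen to match the reported percentages, not the study's raw data.
responses = {
    "a": Counter({"a": 390, "E": 72, "oe": 36, "other": 102}),
    "i": Counter({"i": 192, "E": 156, "oe": 108, "other": 144}),
    "O": Counter({"O": 114, "a": 216, "oe": 102, "other": 168}),
}

def accuracy(target):
    """Proportion of judgments that matched the intended vowel."""
    counts = responses[target]
    return counts[target] / sum(counts.values())
```

With these counts, accuracy("a") is 0.65, accuracy("i") is 0.32, and accuracy("O") is 0.19, matching the reported values.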
A comparison of the F1 and F2 frequencies of the three vowels produced by the control and by the hearing impaired speakers shows that the vowels produced by the hearing impaired speakers occupy a more collapsed acoustic space, with a reduction in the range of both F1 (tongue height) and F2 (front-back placement of the tongue). This indicates that, in terms of articulation, the hearing impaired speakers used a relatively more neutral and less distinctive tongue configuration in producing the three vowels, when compared to the three

vowels produced by the control speakers. Since the vowel space of the hearing impaired speakers is located around the center of the acoustic space, corresponding to the vocal tract configurations of neutral/mid vowels, the hearing impaired speakers had more difficulty producing /i/ and /Á/ than /a/. The utterances of the hearing impaired speakers were also more variable than those of the control speakers. Studies of vowel production by English-speaking profoundly hearing impaired children have likewise reported formant frequencies deviating from normal values (Angelocci et al., 1964; McGarr and Gelfer, 1983). Limited control of tongue shape by speakers with profound hearing loss has been reported in studies of tongue movement using glossometric (Dagenais and Critz-Crosby, 1992) and electromyographic (McGarr and Gelfer, 1983) techniques. The Cantonese-speaking profoundly hearing impaired children in the present study also had poorer control of tongue height and tongue advancement than the control speakers.

Figure 1. Acoustic F1-F2 space of the three vowels /a, i, Á/ produced by the control and the hearing impaired (HI) speakers. The horizontal axis represents the F1 frequency in Hz; the vertical axis represents the F2 frequency in Hz.

The three vowels produced by the hearing impaired speakers were perceived as the target vowel (39%), as another vowel (55%), or as a diphthong (6%). When the accuracy of perception of the hearing impaired speakers' vowels is analyzed taking into account vowel and non-vowel productions, the accuracy of perception of intended productions was 65% for the vowel /a/, 32% for the vowel /i/, and 19% for the vowel /Á/. This result was expected on the basis of the findings of the acoustic analysis.
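The collapse of the acoustic space can be illustrated numerically: treating each group's three (F1, F2) vowel means from the Results section as vertices of a triangle, the shoelace formula gives the area each group covers. The roughly eight-fold reduction below follows from the reported group means, not from per-speaker data:

```python
def triangle_area(pts):
    """Shoelace area of the triangle spanned by three (F1, F2) points."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# (F1, F2) group means in Hz for /a/, /i/, and the third vowel,
# taken from the Results section
control = [(1142, 1514), (389, 2844), (645, 975)]
impaired = [(896, 1564), (539, 2019), (722, 1412)]

ratio = triangle_area(control) / triangle_area(impaired)   # about 8.0
```

A caveat: triangle area from group means understates within-group variability, but it captures the compression visible in Figure 1.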
For English-speaking hearing impaired speakers, Hudgins and Numbers (1942) also reported errors in the production of the vowels /a, i, Á/, with the highest accuracy for /a/ and the lowest for /Á/. However, some studies have reported that even though the vowel /a/ was produced with the highest accuracy, the accuracies of the vowels /i/ and /Á/ could be similar (Smith, 1975). The poorer production of front vowels by the hearing impaired speakers is probably related to the lack of normal tongue arching. In terms of F1-F2 frequencies, the vowels /a/ and /Á/ were closer to each other than either was to the vowel /i/. The profoundly hearing impaired children, because of their hearing loss, may have had difficulty perceiving the differences between the vowels /a/ and /Á/, and this perceptual difficulty may in turn have led to difficulty in differentiating the two vowels in terms of articulatory configuration. The relatively better production of low vowels such as /a/ was likely due to the hearing impaired speakers' sloping tongue configuration and lip posture: their lack of tongue arching, together with a generally lower jaw position and lack of lip rounding, enabled them to produce low vowels relatively well (Dagenais and Critz-Crosby, 1992). As such, the profoundly hearing impaired speakers were more likely to produce the vowel /Á/ as the vowel /a/ than the other way around, as found in the present study. The fact that, for the hearing impaired speakers, the three vowels were separated by F1 frequency but not by F2 frequency suggests relatively poor control of front-back tongue placement. This finding could reflect the difficulty that speakers with a profound hearing loss have in perceiving acoustic cues to vowel identity, particularly F2 information. It is also possible that profoundly hearing impaired speakers have poor perception of F1 frequency information.
For this reason, they may rely mainly upon visual information to perceive and produce vowels. The fact that tongue height and lip configuration are more easily seen than front-back tongue placement could account for the hearing impaired speakers' ability to produce distinct F1, but not F2, values for the vowels in the present study.

REFERENCES
Angelocci, A., Kopp, G., and Holbrook, A. (1964). The vowel formants of deaf and normal hearing eleven-to-fourteen-year-old boys. Journal of Speech and Hearing Research, 29.
Dagenais, P. A. and Critz-Crosby, P. (1992). Comparing tongue positioning by normal-hearing and hearing-impaired children during vowel production. Journal of Speech and Hearing Research, 35(1).
Dodd, B. J. and So, L. K. H. (1994). The phonological abilities of Cantonese-speaking children with hearing loss. Journal of Speech and Hearing Research, 37.
Fox, R. A. (1983). Perceptual structure of monophthongs and diphthongs in English. Language and Speech, 26.
Hudgins, C. V. and Numbers, F. C. (1942). An investigation of the intelligibility of the speech of the deaf. Genetic Psychology Monographs, 25.
Khouw, E. (2002). Perception and production of Cantonese phonetic contrasts produced by profoundly hearing impaired adolescents. Unpublished doctoral dissertation, The University of Hong Kong, Hong Kong.
Ling, D. (1976). Speech and the Hearing-Impaired Child: Theory and Practice. Washington, D.C.: The Alexander Graham Bell Association for the Deaf.
McGarr, N. S. and Gelfer, C. E. (1983). Simultaneous measurements of vowels produced by a hearing-impaired speaker. Language and Speech, 26.
Monsen, R. (1976). Normal and reduced phonological space: The production of English vowels in the speech of deaf and normal-hearing children. Journal of Phonetics, 4.
Peterson, G. and Barney, H. (1952). Control methods used in a study of the vowels. The Journal of the Acoustical Society of America, 24.
Smith, C. R. (1975). Residual hearing and speech production of deaf children. Journal of Speech and Hearing Research, 18.
Zee, E. (1998). Resonance frequency and vowel transcription in Cantonese. Proceedings of the 10th North American Conference on Chinese Linguistics and the 7th Annual Meeting of the International Association of Chinese Linguistics.


More information

WIDEXPRESS. no.30. Background

WIDEXPRESS. no.30. Background WIDEXPRESS no. january 12 By Marie Sonne Kristensen Petri Korhonen Using the WidexLink technology to improve speech perception Background For most hearing aid users, the primary motivation for using hearing

More information

Voice Pitch Control Using a Two-Dimensional Tactile Display

Voice Pitch Control Using a Two-Dimensional Tactile Display NTUT Education of Disabilities 2012 Vol.10 Voice Pitch Control Using a Two-Dimensional Tactile Display Masatsugu SAKAJIRI 1, Shigeki MIYOSHI 2, Kenryu NAKAMURA 3, Satoshi FUKUSHIMA 3 and Tohru IFUKUBE

More information

Hearing Lectures. Acoustics of Speech and Hearing. Auditory Lighthouse. Facts about Timbre. Analysis of Complex Sounds

Hearing Lectures. Acoustics of Speech and Hearing. Auditory Lighthouse. Facts about Timbre. Analysis of Complex Sounds Hearing Lectures Acoustics of Speech and Hearing Week 2-10 Hearing 3: Auditory Filtering 1. Loudness of sinusoids mainly (see Web tutorial for more) 2. Pitch of sinusoids mainly (see Web tutorial for more)

More information

Gick et al.: JASA Express Letters DOI: / Published Online 17 March 2008

Gick et al.: JASA Express Letters DOI: / Published Online 17 March 2008 modality when that information is coupled with information via another modality (e.g., McGrath and Summerfield, 1985). It is unknown, however, whether there exist complex relationships across modalities,

More information

SLHS 1301 The Physics and Biology of Spoken Language. Practice Exam 2. b) 2 32

SLHS 1301 The Physics and Biology of Spoken Language. Practice Exam 2. b) 2 32 SLHS 1301 The Physics and Biology of Spoken Language Practice Exam 2 Chapter 9 1. In analog-to-digital conversion, quantization of the signal means that a) small differences in signal amplitude over time

More information

Hearing the Universal Language: Music and Cochlear Implants

Hearing the Universal Language: Music and Cochlear Implants Hearing the Universal Language: Music and Cochlear Implants Professor Hugh McDermott Deputy Director (Research) The Bionics Institute of Australia, Professorial Fellow The University of Melbourne Overview?

More information

Overview. Acoustics of Speech and Hearing. Source-Filter Model. Source-Filter Model. Turbulence Take 2. Turbulence

Overview. Acoustics of Speech and Hearing. Source-Filter Model. Source-Filter Model. Turbulence Take 2. Turbulence Overview Acoustics of Speech and Hearing Lecture 2-4 Fricatives Source-filter model reminder Sources of turbulence Shaping of source spectrum by vocal tract Acoustic-phonetic characteristics of English

More information

What Is the Difference between db HL and db SPL?

What Is the Difference between db HL and db SPL? 1 Psychoacoustics What Is the Difference between db HL and db SPL? The decibel (db ) is a logarithmic unit of measurement used to express the magnitude of a sound relative to some reference level. Decibels

More information

Assessing Hearing and Speech Recognition

Assessing Hearing and Speech Recognition Assessing Hearing and Speech Recognition Audiological Rehabilitation Quick Review Audiogram Types of hearing loss hearing loss hearing loss Testing Air conduction Bone conduction Familiar Sounds Audiogram

More information

INTRODUCTION J. Acoust. Soc. Am. 104 (6), December /98/104(6)/3597/11/$ Acoustical Society of America 3597

INTRODUCTION J. Acoust. Soc. Am. 104 (6), December /98/104(6)/3597/11/$ Acoustical Society of America 3597 The relation between identification and discrimination of vowels in young and elderly listeners a) Maureen Coughlin, b) Diane Kewley-Port, c) and Larry E. Humes d) Department of Speech and Hearing Sciences,

More information

Advanced Audio Interface for Phonetic Speech. Recognition in a High Noise Environment

Advanced Audio Interface for Phonetic Speech. Recognition in a High Noise Environment DISTRIBUTION STATEMENT A Approved for Public Release Distribution Unlimited Advanced Audio Interface for Phonetic Speech Recognition in a High Noise Environment SBIR 99.1 TOPIC AF99-1Q3 PHASE I SUMMARY

More information

SPEECH PERCEPTION IN A 3-D WORLD

SPEECH PERCEPTION IN A 3-D WORLD SPEECH PERCEPTION IN A 3-D WORLD A line on an audiogram is far from answering the question How well can this child hear speech? In this section a variety of ways will be presented to further the teacher/therapist

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Speech Communication Session 4aSCb: Voice and F0 Across Tasks (Poster

More information

Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners

Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners Ir i s Arw e i l e r 1, To r b e n Po u l s e n 2, a n d To r s t e n Da u 1 1 Centre for Applied

More information

Speech conveys not only linguistic content but. Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users

Speech conveys not only linguistic content but. Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users Cochlear Implants Special Issue Article Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users Trends in Amplification Volume 11 Number 4 December 2007 301-315 2007 Sage Publications

More information

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds Signals, systems, acoustics and the ear Week 1 Laboratory session: Measuring thresholds What s the most commonly used piece of electronic equipment in the audiological clinic? The Audiometer And what is

More information

Language Speech. Speech is the preferred modality for language.

Language Speech. Speech is the preferred modality for language. Language Speech Speech is the preferred modality for language. Outer ear Collects sound waves. The configuration of the outer ear serves to amplify sound, particularly at 2000-5000 Hz, a frequency range

More information

Prelude Envelope and temporal fine. What's all the fuss? Modulating a wave. Decomposing waveforms. The psychophysics of cochlear

Prelude Envelope and temporal fine. What's all the fuss? Modulating a wave. Decomposing waveforms. The psychophysics of cochlear The psychophysics of cochlear implants Stuart Rosen Professor of Speech and Hearing Science Speech, Hearing and Phonetic Sciences Division of Psychology & Language Sciences Prelude Envelope and temporal

More information

Juan Carlos Tejero-Calado 1, Janet C. Rutledge 2, and Peggy B. Nelson 3

Juan Carlos Tejero-Calado 1, Janet C. Rutledge 2, and Peggy B. Nelson 3 PRESERVING SPECTRAL CONTRAST IN AMPLITUDE COMPRESSION FOR HEARING AIDS Juan Carlos Tejero-Calado 1, Janet C. Rutledge 2, and Peggy B. Nelson 3 1 University of Malaga, Campus de Teatinos-Complejo Tecnol

More information

ACOUSTIC SIGNALS AS VISUAL BIOFEEDBACK IN THE SPEECH TRAINING OF HEARING IMPAIRED CHILDREN. Elizabeth E. Crawford. Master of Audiology

ACOUSTIC SIGNALS AS VISUAL BIOFEEDBACK IN THE SPEECH TRAINING OF HEARING IMPAIRED CHILDREN. Elizabeth E. Crawford. Master of Audiology ACOUSTIC SIGNALS AS VISUAL BIOFEEDBACK IN THE SPEECH TRAINING OF HEARING IMPAIRED CHILDREN by Elizabeth E. Crawford A thesis submitted in partial fulfilment of the requirements for the degree of Master

More information

The Effects of Speech Production and Vocabulary Training on Different Components of Spoken Language Performance

The Effects of Speech Production and Vocabulary Training on Different Components of Spoken Language Performance The Effects of Speech Production and Vocabulary Training on Different Components of Spoken Language Performance Louise E. Paatsch University of Melbourne Peter J. Blamey University of Melbourne Dynamic

More information

Bark and Hz scaled F2 Locus equations: Sex differences and individual differences

Bark and Hz scaled F2 Locus equations: Sex differences and individual differences Bark and Hz scaled F Locus equations: Sex differences and individual differences Frank Herrmann a, Stuart P. Cunningham b & Sandra P. Whiteside c a Department of English, University of Chester, UK; b,c

More information

Perception of American English can and can t by Japanese professional interpreters* 1

Perception of American English can and can t by Japanese professional interpreters* 1 Research Note JAITS Perception of American English can and can t by Japanese professional interpreters* 1 Kinuko TAKAHASHI Tomohiko OOIGAWA (Doctoral Program in Linguistics, Graduate School of Foreign

More information

Production of Stop Consonants by Children with Cochlear Implants & Children with Normal Hearing. Danielle Revai University of Wisconsin - Madison

Production of Stop Consonants by Children with Cochlear Implants & Children with Normal Hearing. Danielle Revai University of Wisconsin - Madison Production of Stop Consonants by Children with Cochlear Implants & Children with Normal Hearing Danielle Revai University of Wisconsin - Madison Normal Hearing (NH) Who: Individuals with no HL What: Acoustic

More information

Study of perceptual balance for binaural dichotic presentation

Study of perceptual balance for binaural dichotic presentation Paper No. 556 Proceedings of 20 th International Congress on Acoustics, ICA 2010 23-27 August 2010, Sydney, Australia Study of perceptual balance for binaural dichotic presentation Pandurangarao N. Kulkarni

More information

Verification of soft speech amplification in hearing aid fitting: A comparison of methods

Verification of soft speech amplification in hearing aid fitting: A comparison of methods Verification of soft speech amplification in hearing aid fitting: A comparison of methods Sarah E. Dawkins, B.A. AuD Research Project April 5, 2007 University of Memphis Project Advisor Robyn M. Cox, PhD.

More information

Results. Dr.Manal El-Banna: Phoniatrics Prof.Dr.Osama Sobhi: Audiology. Alexandria University, Faculty of Medicine, ENT Department

Results. Dr.Manal El-Banna: Phoniatrics Prof.Dr.Osama Sobhi: Audiology. Alexandria University, Faculty of Medicine, ENT Department MdEL Med-EL- Cochlear Implanted Patients: Early Communicative Results Dr.Manal El-Banna: Phoniatrics Prof.Dr.Osama Sobhi: Audiology Alexandria University, Faculty of Medicine, ENT Department Introduction

More information

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 24 (2000) Indiana University

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 24 (2000) Indiana University COMPARISON OF PARTIAL INFORMATION RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 24 (2000) Indiana University Use of Partial Stimulus Information by Cochlear Implant Patients and Normal-Hearing

More information

Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching

Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching Marietta M. Paterson, Ed. D. Program Coordinator & Associate Professor University of Hartford ACE-DHH 2011 Preparation

More information

Baker, A., M.Cl.Sc (AUD) Candidate University of Western Ontario: School of Communication Sciences and Disorders

Baker, A., M.Cl.Sc (AUD) Candidate University of Western Ontario: School of Communication Sciences and Disorders Critical Review: Effects of multi-channel, nonlinear frequency compression on speech perception in hearing impaired listeners with high frequency hearing loss Baker, A., M.Cl.Sc (AUD) Candidate University

More information

Sylvia Rotfleisch, M.Sc.(A.) hear2talk.com HEAR2TALK.COM

Sylvia Rotfleisch, M.Sc.(A.) hear2talk.com HEAR2TALK.COM Sylvia Rotfleisch, M.Sc.(A.) hear2talk.com 1 Teaching speech acoustics to parents has become an important and exciting part of my auditory-verbal work with families. Creating a way to make this understandable

More information

Frequency Tracking: LMS and RLS Applied to Speech Formant Estimation

Frequency Tracking: LMS and RLS Applied to Speech Formant Estimation Aldebaro Klautau - http://speech.ucsd.edu/aldebaro - 2/3/. Page. Frequency Tracking: LMS and RLS Applied to Speech Formant Estimation ) Introduction Several speech processing algorithms assume the signal

More information

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER ARCHIVES OF ACOUSTICS 29, 1, 25 34 (2004) INTELLIGIBILITY OF SPEECH PROCESSED BY A SPECTRAL CONTRAST ENHANCEMENT PROCEDURE AND A BINAURAL PROCEDURE A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER Institute

More information

Providing Effective Communication Access

Providing Effective Communication Access Providing Effective Communication Access 2 nd International Hearing Loop Conference June 19 th, 2011 Matthew H. Bakke, Ph.D., CCC A Gallaudet University Outline of the Presentation Factors Affecting Communication

More information

Evaluating the Clinical Effectiveness of EPG. in the Assessment and Diagnosis of Children with Intractable Speech Disorders

Evaluating the Clinical Effectiveness of EPG. in the Assessment and Diagnosis of Children with Intractable Speech Disorders Evaluating the Clinical Effectiveness of EPG in the Assessment and Diagnosis of Children with Intractable Speech Disorders Sara E. Wood*, James M. Scobbie * Forth Valley Primary Care NHS Trust, Scotland,

More information

Demonstration of a Novel Speech-Coding Method for Single-Channel Cochlear Stimulation

Demonstration of a Novel Speech-Coding Method for Single-Channel Cochlear Stimulation THE HARRIS SCIENCE REVIEW OF DOSHISHA UNIVERSITY, VOL. 58, NO. 4 January 2018 Demonstration of a Novel Speech-Coding Method for Single-Channel Cochlear Stimulation Yuta TAMAI*, Shizuko HIRYU*, and Kohta

More information

Phonak Target. SoundRecover2 adult fitting guide. Content. The Connecting the hearing instruments. February 2018

Phonak Target. SoundRecover2 adult fitting guide. Content. The Connecting the hearing instruments. February 2018 Phonak Target February 2018 SoundRecover2 adult fitting guide The following fitting guide is intended for adults. For Pediatric fittings please see the separate Pediatric fitting guide. SoundRecover2 is

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information

Audibility, discrimination and hearing comfort at a new level: SoundRecover2

Audibility, discrimination and hearing comfort at a new level: SoundRecover2 Audibility, discrimination and hearing comfort at a new level: SoundRecover2 Julia Rehmann, Michael Boretzki, Sonova AG 5th European Pediatric Conference Current Developments and New Directions in Pediatric

More information

Issues faced by people with a Sensorineural Hearing Loss

Issues faced by people with a Sensorineural Hearing Loss Issues faced by people with a Sensorineural Hearing Loss Issues faced by people with a Sensorineural Hearing Loss 1. Decreased Audibility 2. Decreased Dynamic Range 3. Decreased Frequency Resolution 4.

More information

Lecture 5. Brief review and more exercises for consonants Vowels Even more exercises

Lecture 5. Brief review and more exercises for consonants Vowels Even more exercises Lecture 5 Brief review and more exercises for consonants Vowels Even more exercises Chart Review questions What are the three features used to describe consonants? Review questions What are the features

More information

But, what about ASSR in AN?? Is it a reliable tool to estimate the auditory thresholds in those category of patients??

But, what about ASSR in AN?? Is it a reliable tool to estimate the auditory thresholds in those category of patients?? 1 Auditory Steady State Response (ASSR) thresholds have been shown to be highly correlated to bh behavioral thresholds h in adults and older children with normal hearing or those with sensorineural hearing

More information

Using VOCALAB For Voice and Speech Therapy

Using VOCALAB For Voice and Speech Therapy Using VOCALAB For Voice and Speech Therapy Anne MENIN-SICARD, Speech Therapist in Toulouse, France Etienne SICARD Professeur at INSA, University of Toulouse, France June 2014 www.vocalab.org www.gerip.com

More information

FREQUENCY. Prof Dr. Mona Mourad Dr.Manal Elbanna Doaa Elmoazen ALEXANDRIA UNIVERSITY. Background

FREQUENCY. Prof Dr. Mona Mourad Dr.Manal Elbanna Doaa Elmoazen ALEXANDRIA UNIVERSITY. Background FREQUENCY TRANSPOSITION IN HIGH FREQUENCY SNHL Prof Dr. Mona Mourad Dr.Manal Elbanna Doaa Elmoazen Randa Awad ALEXANDRIA UNIVERSITY Background Concept Of Frequency Transposition Frequency transposition

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Communication with low-cost hearing protectors: hear, see and believe

Communication with low-cost hearing protectors: hear, see and believe 12th ICBEN Congress on Noise as a Public Health Problem Communication with low-cost hearing protectors: hear, see and believe Annelies Bockstael 1,3, Lies De Clercq 2, Dick Botteldooren 3 1 Université

More information

Effect of spectral content and learning on auditory distance perception

Effect of spectral content and learning on auditory distance perception Effect of spectral content and learning on auditory distance perception Norbert Kopčo 1,2, Dávid Čeljuska 1, Miroslav Puszta 1, Michal Raček 1 a Martin Sarnovský 1 1 Department of Cybernetics and AI, Technical

More information

Comparing Speech Perception Abilities of Children with Cochlear Implants and Digital Hearing Aids

Comparing Speech Perception Abilities of Children with Cochlear Implants and Digital Hearing Aids Comparing Speech Perception Abilities of Children with Cochlear Implants and Digital Hearing Aids Lisa S. Davidson, PhD CID at Washington University St.Louis, Missouri Acknowledgements Support for this

More information

Who are cochlear implants for?

Who are cochlear implants for? Who are cochlear implants for? People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work best in adults who

More information

Ambiguity in the recognition of phonetic vowels when using a bone conduction microphone

Ambiguity in the recognition of phonetic vowels when using a bone conduction microphone Acoustics 8 Paris Ambiguity in the recognition of phonetic vowels when using a bone conduction microphone V. Zimpfer a and K. Buck b a ISL, 5 rue du Général Cassagnou BP 734, 6831 Saint Louis, France b

More information

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS Proceedings of the 14 International Conference on Auditory Display, Paris, France June 24-27, 28 EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE

More information

SoundRecover2 More audibility of high-frequency sounds for adults with severe to profound hearing loss

SoundRecover2 More audibility of high-frequency sounds for adults with severe to profound hearing loss Field Study News July 2016 SoundRecover2 More audibility of high-frequency sounds for adults with severe to profound hearing loss This study was conducted at Phonak headquarters, Stäfa Switzerland, and

More information

Best practice protocol

Best practice protocol Best practice protocol April 2016 Pediatric verification for SoundRecover2 What is SoundRecover? SoundRecover is a frequency lowering signal processing available in Phonak hearing instruments. The aim

More information

Topics in Linguistic Theory: Laboratory Phonology Spring 2007

Topics in Linguistic Theory: Laboratory Phonology Spring 2007 MIT OpenCourseWare http://ocw.mit.edu 24.91 Topics in Linguistic Theory: Laboratory Phonology Spring 27 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

Hello Old Friend the use of frequency specific speech phonemes in cortical and behavioural testing of infants

Hello Old Friend the use of frequency specific speech phonemes in cortical and behavioural testing of infants Hello Old Friend the use of frequency specific speech phonemes in cortical and behavioural testing of infants Andrea Kelly 1,3 Denice Bos 2 Suzanne Purdy 3 Michael Sanders 3 Daniel Kim 1 1. Auckland District

More information

REFERRAL AND DIAGNOSTIC EVALUATION OF HEARING ACUITY. Better Hearing Philippines Inc.

REFERRAL AND DIAGNOSTIC EVALUATION OF HEARING ACUITY. Better Hearing Philippines Inc. REFERRAL AND DIAGNOSTIC EVALUATION OF HEARING ACUITY Better Hearing Philippines Inc. How To Get Started? 1. Testing must be done in an acoustically treated environment far from all the environmental noises

More information

Acoustic and Spectral Characteristics of Young Children's Fricative Productions: A Developmental Perspective

Acoustic and Spectral Characteristics of Young Children's Fricative Productions: A Developmental Perspective Brigham Young University BYU ScholarsArchive All Faculty Publications 2005-10-01 Acoustic and Spectral Characteristics of Young Children's Fricative Productions: A Developmental Perspective Shawn L. Nissen

More information

Errol Davis Director of Research and Development Sound Linked Data Inc. Erik Arisholm Lead Engineer Sound Linked Data Inc.

Errol Davis Director of Research and Development Sound Linked Data Inc. Erik Arisholm Lead Engineer Sound Linked Data Inc. An Advanced Pseudo-Random Data Generator that improves data representations and reduces errors in pattern recognition in a Numeric Knowledge Modeling System Errol Davis Director of Research and Development

More information

Role of F0 differences in source segregation

Role of F0 differences in source segregation Role of F0 differences in source segregation Andrew J. Oxenham Research Laboratory of Electronics, MIT and Harvard-MIT Speech and Hearing Bioscience and Technology Program Rationale Many aspects of segregation

More information

Quarterly Progress and Status Report. Masking effects of one s own voice

Quarterly Progress and Status Report. Masking effects of one s own voice Dept. for Speech, Music and Hearing Quarterly Progress and Status Report Masking effects of one s own voice Gauffin, J. and Sundberg, J. journal: STL-QPSR volume: 15 number: 1 year: 1974 pages: 035-041

More information

PSYCHOMETRIC VALIDATION OF SPEECH PERCEPTION IN NOISE TEST MATERIAL IN ODIA (SPINTO)

PSYCHOMETRIC VALIDATION OF SPEECH PERCEPTION IN NOISE TEST MATERIAL IN ODIA (SPINTO) PSYCHOMETRIC VALIDATION OF SPEECH PERCEPTION IN NOISE TEST MATERIAL IN ODIA (SPINTO) PURJEET HOTA Post graduate trainee in Audiology and Speech Language Pathology ALI YAVAR JUNG NATIONAL INSTITUTE FOR

More information

THRESHOLD PREDICTION USING THE ASSR AND THE TONE BURST CONFIGURATIONS

THRESHOLD PREDICTION USING THE ASSR AND THE TONE BURST CONFIGURATIONS THRESHOLD PREDICTION USING THE ASSR AND THE TONE BURST ABR IN DIFFERENT AUDIOMETRIC CONFIGURATIONS INTRODUCTION INTRODUCTION Evoked potential testing is critical in the determination of audiologic thresholds

More information

Analysis of the Audio Home Environment of Children with Normal vs. Impaired Hearing

Analysis of the Audio Home Environment of Children with Normal vs. Impaired Hearing University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange University of Tennessee Honors Thesis Projects University of Tennessee Honors Program 5-2010 Analysis of the Audio Home

More information

Speech Intelligibility Measurements in Auditorium

Speech Intelligibility Measurements in Auditorium Vol. 118 (2010) ACTA PHYSICA POLONICA A No. 1 Acoustic and Biomedical Engineering Speech Intelligibility Measurements in Auditorium K. Leo Faculty of Physics and Applied Mathematics, Technical University

More information

whether or not the fundamental is actually present.

whether or not the fundamental is actually present. 1) Which of the following uses a computer CPU to combine various pure tones to generate interesting sounds or music? 1) _ A) MIDI standard. B) colored-noise generator, C) white-noise generator, D) digital

More information

How to use AutoFit (IMC2) How to use AutoFit (IMC2)

How to use AutoFit (IMC2) How to use AutoFit (IMC2) How to use AutoFit (IMC2) 1 AutoFit is a beneficial feature in the Connexx Fitting Application that automatically provides the Hearing Care Professional (HCP) with an optimized real-ear insertion gain

More information

Audiogram+: GN Resound proprietary fitting rule

Audiogram+: GN Resound proprietary fitting rule Audiogram+: GN Resound proprietary fitting rule Ole Dyrlund GN ReSound Audiological Research Copenhagen Loudness normalization - Principle Background for Audiogram+! Audiogram+ is a loudness normalization

More information

MODALITY, PERCEPTUAL ENCODING SPEED, AND TIME-COURSE OF PHONETIC INFORMATION

MODALITY, PERCEPTUAL ENCODING SPEED, AND TIME-COURSE OF PHONETIC INFORMATION ISCA Archive MODALITY, PERCEPTUAL ENCODING SPEED, AND TIME-COURSE OF PHONETIC INFORMATION Philip Franz Seitz and Ken W. Grant Army Audiology and Speech Center Walter Reed Army Medical Center Washington,

More information

The Influence of Linguistic Experience on the Cognitive Processing of Pitch in Speech and Nonspeech Sounds

The Influence of Linguistic Experience on the Cognitive Processing of Pitch in Speech and Nonspeech Sounds Journal of Experimental Psychology: Human Perception and Performance 2006, Vol. 32, No. 1, 97 103 Copyright 2006 by the American Psychological Association 0096-1523/06/$12.00 DOI: 10.1037/0096-1523.32.1.97

More information

Critical Review: Speech Perception and Production in Children with Cochlear Implants in Oral and Total Communication Approaches

Critical Review: Speech Perception and Production in Children with Cochlear Implants in Oral and Total Communication Approaches Critical Review: Speech Perception and Production in Children with Cochlear Implants in Oral and Total Communication Approaches Leah Chalmers M.Cl.Sc (SLP) Candidate University of Western Ontario: School

More information

A Study on the Degree of Pronunciation Improvement by a Denture Attachment Using an Weighted-α Formant

A Study on the Degree of Pronunciation Improvement by a Denture Attachment Using an Weighted-α Formant A Study on the Degree of Pronunciation Improvement by a Denture Attachment Using an Weighted-α Formant Seong-Geon Bae 1 1 School of Software Application, Kangnam University, Gyunggido, Korea. 1 Orcid Id:

More information