ACOUSTIC ANALYSIS AND PERCEPTION OF CANTONESE VOWELS PRODUCED BY PROFOUNDLY HEARING IMPAIRED ADOLESCENTS


Edward Khouw & Valter Ciocca
Dept. of Speech and Hearing Sciences, The University of Hong Kong

ABSTRACT

This study investigated mid-vocalic F1 and F2 frequencies as cues to the perception of three Cantonese vowels (/a/, /i/, /ɔ/) in monosyllabic words produced by ten profoundly hearing impaired and ten normal hearing adolescents. For the control group, there were significant differences in F1 and F2 frequencies among the three vowels, as expected. In contrast, the hearing impaired speakers did not show significant F2 frequency differences between /a/ and /ɔ/. Compared to the vowels produced by the control speakers, the hearing impaired speakers showed a reduction in the range of both F1 (tongue height) and F2 (front-back placement of the tongue). Listeners with normal hearing perceived the vowels produced by hearing impaired speakers with an accuracy of 65% correct for /a/, 32% correct for /i/, and 19% correct for /ɔ/. These results can be explained by the larger deviations from normal F1/F2 patterns for /i/ and /ɔ/ than for /a/. One of the most common errors was the confusion of a target vowel with /ø/, which may be explained by the clustering of vowels produced by the hearing impaired speakers towards the center of the F1/F2 space.

INTRODUCTION

This study examined the formant frequency patterns of vowel contrasts produced by profoundly hearing impaired and normal hearing Cantonese speakers. Formant frequencies at the steady-state portion of vowels can serve as acoustic and perceptual cues to vowels produced by English-speaking normal hearing children (Fox, 1983; Peterson and Barney, 1952). English-speaking profoundly hearing impaired children have been found to produce vowels with formant frequency patterns that differ from those of normal hearing children. For example, Monsen (1976) found a reduced range of F1 and F2 frequencies for three vowels (/a, i, ɔ/) produced by profoundly hearing impaired adolescents. Other studies have also found that vowels produced by English-speaking profoundly hearing impaired children were characterized by an overlap of vowel targets in the F1/F2 space, as well as a restricted range of formant frequency values. In the production of vowels by English-speaking profoundly hearing impaired children, errors of substitution and neutralization have been documented (Angelocci et al., 1964; Hudgins and Numbers, 1942; Smith, 1975). Similar vowel errors have also been found for Cantonese-speaking hearing impaired children (Dodd and So, 1994). Zee (1998) investigated the formant frequency values that characterize the Cantonese vowel system of speakers with normal hearing; however, acoustic data on vowel production by profoundly hearing-impaired speakers have not been reported. In order to gain a better understanding of the vowel errors of Cantonese-speaking profoundly hearing impaired children, this study measured the formant frequencies of vowels produced by normal hearing and profoundly hearing impaired children. The use of formant frequency information as a perceptual cue was examined by studying the identification of these vowels by adult listeners with normal hearing.

METHOD OF THE ACOUSTIC ANALYSIS

The speakers were twenty Cantonese adolescents, ten of whom (five males, five females) had normal hearing (control group); the other ten adolescents (five males, five females) were profoundly hearing impaired.
The ages of the normal hearing speakers were between 12;10 and 14;02 (mean age = 13;05). The ages of the ten hearing impaired speakers were between 12;08 and 14;02 (mean age = 13;04). The hearing impaired speakers were selected on the criteria of being prelingually deaf, with pure tone average (PTA) thresholds at 0.5, 1.0, and 2.0 kHz of 90.0 dB HL or more in the better ear, based on audiograms provided by audiologists in the Education Department. These audiograms were based on audiological tests carried out less than six months before the recording of stimuli for the present study. The hearing impaired speakers wore their hearing aids for ten hours or more every day, had no known additional handicapping conditions, and studied in schools for the deaf. The normal hearing speakers had no known speech, language, or hearing disorders and studied in mainstream schools.

The speech stimuli consisted of six sets of monosyllabic words, with three words in each set. The words, which represented common objects and concepts, are familiar to children at primary school level. Of the six three-word sets, three sets contrasted only in the vowels, e.g. [sa55], [si55], [sɔ55]; the other sets differed in the initial stops in addition to the vowel, e.g. [tsa33], [tsi33], [tsʰɔ33]. The use of minimal contrasts was not possible for all sets because of the limited number of available Cantonese words that fulfil the requirement of minimal contrasts (same consonants and tones but different vowels).

Stimuli were recorded either in a sound-proof room in the Department of Speech and Hearing Sciences at the University of Hong Kong, or in a sound-proof room in the Hong Kong Lutheran School for the Deaf. Speech samples were recorded using a Tascam DA-30 MkII digital tape recorder and a Bruel & Kjaer 4003 low-noise unidirectional microphone connected to a Bruel & Kjaer Type 2812 microphone preamplifier. The microphone was held at approximately eight inches from the speaker's mouth. The recording gain was set to ensure a similar recording level across subjects with no clipping. The hearing aids of the hearing impaired subjects were checked to be functional by ensuring that their responses to the Five Sound Test (Ling, 1976) did not differ from their previously recorded and documented responses. The eighteen words were presented to each subject on cards whose order was randomized by shuffling beforehand. The eighteen words were part of a set of seventy-two words that were selected to investigate other phonetic contrasts produced by Cantonese-speaking profoundly hearing impaired adolescents (Khouw, 2002). Each subject was first asked to read each word silently, and then to read it aloud. The total recording time for each subject was approximately five minutes.

Recordings were low-pass filtered at 22 kHz and digitized at a sampling rate of 44.1 kHz on an Apple PowerMacintosh 7100 computer with a DigiDesign Audiomedia II DSP card. The input level was monitored for each word to ensure the absence of clipping. Each word was saved as a single sound file. The acoustic analysis was carried out using the SoundScope 2.1 software (GW Instruments, 1996) on an Apple PowerMacintosh 9500 computer. The sound files were initially normalized. A wideband spectrogram of each word was then produced with the filter bandwidth set at 300 Hz (512-point FFT, with 6 dB pre-emphasis). In the present study, the frequencies of F1 and F2 were measured at the middle of the vocalic segment, as previously done by Zee (1998). The middle of the vocalic segment was taken to be the midway position between the beginning and the end of the vocalic segment. The beginning of the vocalic segment was defined as the onset of voicing, as signalled by the first of the regularly spaced vertical striations that indicate glottal pulsing. The end of the vocalic segment was set at the beginning of the last vocalic pulse as seen on the spectrographic display.
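As a rough illustration of this measurement procedure, the sketch below (Python) locates the mid-vowel point from hand-labelled vocalic-segment boundaries and cuts an analysis frame around it; the file name, the boundary times, and the 25 ms frame length are hypothetical assumptions, not details from the paper.

    # Minimal sketch: cut a short analysis frame centred on the middle of a
    # hand-labelled vocalic segment. File name, boundary times, and frame
    # length are hypothetical placeholders; 16-bit PCM audio is assumed.
    import numpy as np
    from scipy.io import wavfile

    fs, x = wavfile.read("sa55_speaker01.wav")       # hypothetical recording
    x = x.astype(np.float64) / np.iinfo(np.int16).max

    voicing_onset = 0.112                            # onset of glottal pulsing (s)
    last_pulse = 0.385                               # start of last vocalic pulse (s)
    midpoint = (voicing_onset + last_pulse) / 2.0    # middle of the vocalic segment

    half = int(0.0125 * fs)                          # assumed 25 ms analysis frame
    centre = int(midpoint * fs)
    frame = x[centre - half : centre + half]

The `frame` and `fs` defined here are reused in the formant-estimation sketch below.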
To estimate the formant frequencies, the sampling rate was first decreased from 44.1 kHz to 10 kHz; an LPC spectrum (14 coefficients) was then calculated. The formant frequencies (in Hz) were automatically estimated by the LPC algorithm of the SoundScope software. The LPC spectra were compared with FFT spectra (filter bandwidth 300 Hz) for the same time point to prevent gross errors in the estimation of the formant frequencies.

To assess the intra- and inter-judge reliability of the measurements of the acoustic features, the stimuli of all the speakers were re-analyzed by the author and by a teacher of the deaf with training in acoustic analysis. Intra- and inter-judge reliability coefficients were computed; a high degree of reliability, ranging from 0.90 to 0.96, was found for all the measurements.
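A minimal sketch of this downsample-then-LPC procedure, applied to the `frame` from the previous sketch. The paper used SoundScope's own LPC algorithm, so the root-picking heuristics here (bandwidth under 400 Hz, frequency above 90 Hz) are our assumptions rather than its actual behaviour.

    # Sketch: estimate F1 and F2 by downsampling to 10 kHz and solving the
    # autocorrelation (Yule-Walker) equations for a 14-coefficient LPC model,
    # then reading formants off the roots of the prediction polynomial.
    import numpy as np
    from scipy.signal import resample_poly
    from scipy.linalg import solve_toeplitz

    def lpc_formants(frame, fs, order=14, fs_lpc=10000):
        y = resample_poly(frame, fs_lpc, fs)            # 44.1 kHz -> 10 kHz
        y = y * np.hamming(len(y))
        r = np.correlate(y, y, mode="full")[len(y) - 1:]  # autocorrelation, lag >= 0
        a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
        poly = np.concatenate(([1.0], -a))              # A(z) = 1 - sum_k a_k z^-k
        roots = [z for z in np.roots(poly) if z.imag > 0]
        freqs = np.angle(roots) * fs_lpc / (2 * np.pi)  # pole angle -> frequency
        bands = -np.log(np.abs(roots)) * fs_lpc / np.pi # pole radius -> bandwidth
        candidates = sorted(f for f, b in zip(freqs, bands) if f > 90 and b < 400)
        return candidates[:2]                           # [F1, F2] in Hz

    f1, f2 = lpc_formants(frame, fs)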

METHOD OF THE PERCEPTUAL STUDY

The listeners were ten normal hearing final-year female speech therapy students (age range = 21;02 to 22;09) from the Department of Speech and Hearing Sciences at the University of Hong Kong. All the listeners had previous training in phonetic transcription. The stimuli for the perceptual study were the monosyllabic words used in the acoustic study. The loudness of the stimuli was adjusted by the author to an approximately equal level during editing of the sound files. The speech stimuli were divided into two sets (one for the normal hearing speakers and one for the hearing impaired speakers). For the vowel identification task, each speaker produced 18 words (six for each vowel), for a total of 180 words per speaker group (3 vowels x 6 repetitions x 10 speakers).

Each listener performed the identification task in a single-wall IAC soundproof booth, using a pair of Sennheiser HD 580 headphones connected to an Apple PowerMacintosh 7100 computer with a DigiDesign Audiomedia II DSP card. All the sound files were played from hard disk at a sampling rate of 44.1 kHz. The stimuli were presented at a comfortable listening level. The presentation of the stimuli and the response collection were controlled by a custom program written with the HyperCard 2.4.1 software (Apple Computer, Inc., 1993-97). The order of the stimuli within each task was randomized by the HyperCard program. Each listener was told the type of speech sound (i.e. vowels) she would hear, and was instructed to identify each stimulus by clicking the HyperCard button corresponding to the perceived sound. A total of four buttons were available for the vowel identification task: one button for each of the three vowels /a, i, ɔ/, and one for "others"; when the listener selected the "others" button, a dialog box was displayed so that the listener could type in a broad transcription of the perceived sound. Each listener also had the option of clicking a "repeat" button once for a repetition of the current stimulus, as well as a "next" button to proceed to the next stimulus after entering a response for the current trial. The order of the normal hearing and hearing impaired listening sets was counterbalanced across listeners. There were no missing data.

STATISTICAL ANALYSIS

For the acoustic measurements, an ANOVA with repeated measures (using the Huynh-Feldt adjustment of degrees of freedom) was used to analyze the mid-vowel frequency of each formant separately (F1, then F2). Each data point was the mean formant frequency of the six tokens of each of the three vowels produced by a speaker. The between-group factor was speaker group (control and hearing impaired). The within-group factor was vowel (/a/, /i/, /ɔ/). The Tukey HSD (honestly significant difference) test was used for post hoc comparisons between means.
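A sketch of this analysis in Python, assuming a long-format table with one mean formant value per speaker and vowel. Note that pingouin's mixed ANOVA reports a Greenhouse-Geisser sphericity correction rather than the Huynh-Feldt adjustment used in the paper, and the file and column names are hypothetical.

    # Sketch: two-way mixed ANOVA (between: speaker group; within: vowel)
    # on mean mid-vowel F1, followed by Tukey HSD over the group x vowel cells.
    import pandas as pd
    import pingouin as pg
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Expected columns: speaker, group ("control"/"hi"), vowel ("a"/"i"/"o"),
    # f1 (mean of the six tokens per speaker and vowel).
    df = pd.read_csv("midvowel_f1.csv")               # hypothetical file

    aov = pg.mixed_anova(data=df, dv="f1", within="vowel",
                         subject="speaker", between="group", correction=True)
    print(aov.round(3))

    # Tukey HSD across the six group x vowel cells.
    cells = (df["group"] + ":" + df["vowel"]).to_numpy()
    print(pairwise_tukeyhsd(df["f1"].to_numpy(), cells))

The same call with dv="f2" would reproduce the F2 analysis.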
RESULTS OF ACOUSTIC ANALYSIS

For the F1 frequency, the significant speaker group by vowel interaction, F(2, 36) = 17.68, p < .01, showed that the F1 frequency difference between the two speaker groups depended on the vowel. Post-hoc analysis showed that for the vowel /a/, the F1 frequency of the hearing impaired speakers (896 Hz) was significantly lower than that of the control speakers (1142 Hz) (Tukey HSD tests, p < .01). There was no such significant difference in F1 frequency for the vowels /i/ and /ɔ/ (Tukey HSD tests, p > .05). For the control speakers, the F1 frequency of the vowel /a/ (1142 Hz) was significantly higher than that of the vowel /ɔ/ (645 Hz), which in turn was significantly higher than that of the vowel /i/ (389 Hz) (Tukey HSD tests, p < .01). Similarly, for the hearing impaired speakers, the F1 frequency of the vowel /a/ (896 Hz) was significantly higher than that of the vowel /ɔ/ (722 Hz), which in turn was significantly higher than that of the vowel /i/ (539 Hz). The main effect of vowel was significant, F(2, 36) = 124.64, p < .01, indicating a significant F1 frequency difference among the three vowels. The main effect of speaker group was not significant, F(1, 18) = 0.01, p > .05, indicating that, overall, there was no significant difference in F1 frequency between the control and the hearing impaired speakers.

For the F2 frequency, the significant speaker group by vowel interaction, F(1.33, 23.99) = 47.21, p < .01, indicated that the F2 frequency differences among the vowels depended on the speaker group. Post-hoc analysis showed that for the control speakers, the F2 frequency of the vowel /i/ (2844 Hz) was significantly higher than that of the vowel /a/ (1514 Hz), which in turn was significantly higher than that of the vowel /ɔ/ (975 Hz) (Tukey HSD tests, p < .01). For the hearing impaired speakers, on the other hand, the F2 frequency of the vowel /i/ (2019 Hz) was significantly higher than the F2 frequencies of the vowels /a/ (1564 Hz) and /ɔ/ (1412 Hz) (Tukey HSD tests, p < .01), but there was no significant F2 frequency difference between the vowels /a/ and /ɔ/ (Tukey HSD tests, p > .05). Post-hoc analysis further showed that, compared to the control speakers, the hearing impaired speakers had a significantly lower F2 frequency for the vowel /i/ and a significantly higher F2 frequency for the vowel /ɔ/ (Tukey HSD tests, p < .01). There was no significant difference in F2 frequency between the two speaker groups for the vowel /a/ (Tukey HSD test, p > .05). The main effect of vowel was significant, F(1.33, 23.99) = 184.56, p < .01, indicating a significant difference in F2 frequency among the three vowels. The main effect of speaker group was not significant, F(1, 18) = 1.70, p > .05, showing that, overall, there was no significant difference in F2 frequency between the two speaker groups.

RESULTS OF PERCEPTUAL ANALYSIS

The vowels /a, i, ɔ/ produced by the control speakers were perceived by the listeners with complete accuracy. In contrast, the vowels produced by the hearing impaired speakers were perceived with numerous errors. Table 1 shows the error pattern of the three vowels produced by the hearing impaired speakers. Productions of the vowels by the hearing impaired speakers were perceived as either the target vowel (39%), another vowel (55%), or a diphthong (6%). Of the three target vowels, the vowel /a/ was perceived with 65% accuracy, the vowel /i/ with 32% accuracy, and the vowel /ɔ/ with 19% accuracy. When the hearing impaired speakers produced the vowel /a/, the three main errors were confusion with the vowel /ɛ/ (12%), the vowel /ɐ/ (9%), and the vowel /ø/ (6%). When errors occurred in the perception of the vowel /i/, it was mainly perceived as /ɛ/ (26%) and /ø/ (18%). Errors for the vowel /ɔ/ were mainly in the form of perception as the vowel /a/ (36%) and the vowel /ø/ (17%). One of the most common errors across the three vowels was confusion with /ø/, which is not surprising given the acoustic findings: the distribution of the three vowels produced by the hearing impaired speakers in the F1/F2 space shows a clustering at the center, with patterns like those of central vowels, which may explain the common error of the target vowels being misperceived as the central vowel /ø/. Angelocci et al. (1964) reported that for English-speaking profoundly hearing impaired speakers, the major errors for the vowel /a/ were confusion with /æ/ (20%) and /ʌ/ (17%). Moreover, the most common errors for the vowel /i/ were confusions with /ɪ/ (26%) and /ɛ/ (10%). Finally, the major errors for the vowel /ɔ/ were confusion with /a/ (19%) and /æ/ (13%).

Table 1. Error pattern of the vowels /a, i, ɔ/ produced by the hearing impaired speakers. Counts out of 600 tokens per target vowel; target-vowel responses are marked with their percentage. The seven diphthong response categories are pooled, and two cells of the /a/ row (y, u) that could not be recovered from the original layout are listed as 0.

Target   a           i           ɔ           ɐ     ɛ     y    u    ø     diphthong   Total
/a/      395 (65%)   2           5           55    75    0    0    37    31          600
/i/      25          197 (32%)   13          24    158   32   7    108   36          600
/ɔ/      219         2           112 (19%)   58    52    3    9    103   42          600
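As a worked check on Table 1, the snippet below converts the counts into row percentages; the pooled diphthong column and the two zero cells in the /a/ row follow the reconstruction in the table above.

    # Per-vowel response percentages from the Table 1 counts
    # (600 tokens per target vowel; diphthong responses pooled).
    import numpy as np

    labels = ["a", "i", "ɔ", "ɐ", "ɛ", "y", "u", "ø", "diph"]
    counts = np.array([
        [395,   2,   5,  55,  75,   0,   0,  37,  31],  # target /a/
        [ 25, 197,  13,  24, 158,  32,   7, 108,  36],  # target /i/
        [219,   2, 112,  58,  52,   3,   9, 103,  42],  # target /ɔ/
    ])
    pct = 100 * counts / counts.sum(axis=1, keepdims=True)
    for target, row in zip(["a", "i", "ɔ"], pct):
        print(f"/{target}/: " + ", ".join(f"{l}={p:.0f}%" for l, p in zip(labels, row)))
    # The diagonal reproduces the reported accuracies within rounding:
    # about 66% for /a/, 33% for /i/, and 19% for /ɔ/ (reported as 65/32/19%).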
DISCUSSION

When the control speakers produced the Cantonese vowels /a, i, ɔ/, the F1 and F2 frequency values differed significantly among the vowels. This shows that the three vowels were well separated in the F1-F2 acoustic space, in agreement with the findings of Zee (1998). The present findings also replicate those of similar acoustic studies on English vowels (Peterson and Barney, 1952). In the perceptual analysis, all three vowels /a, i, ɔ/ were perceived with 100% accuracy by the listeners. This suggests that the listeners were able to rely on the mid-vowel F1 and F2 frequencies as perceptual cues to identify the three vowels accurately.

For the three vowels produced by the hearing impaired speakers, the F1 frequency distinguished among all three vowels. The vowel /i/ was produced with the highest tongue position (lowest F1), the vowel /a/ with the lowest tongue position (highest F1), and the vowel /ɔ/ with a tongue position in between. The F2 frequency of /i/ was significantly higher than those of /a/ and /ɔ/. This shows that the vowel /i/ was produced with a more fronted tongue placement than the vowels /a/ and /ɔ/, while the front-back placement of /a/ and /ɔ/ did not differ. These findings indicate that the hearing impaired speakers were able to use tongue height to distinguish among the three vowels, but could use tongue advancement only to distinguish the vowel /i/ from the other two vowels /a, ɔ/.

Figure 1 shows the acoustic vowel space, in terms of F1-F2 frequencies, of the three vowels /a, i, ɔ/ produced by the control and the hearing impaired speakers. A comparison of the F1 and F2 frequencies of the three vowels produced by the two groups shows that the vowels produced by the hearing impaired speakers occupy a more collapsed acoustic space, with a reduction in the range of both F1 (tongue height) and F2 (front-back placement of the tongue). This indicates that, in terms of articulation, the hearing impaired speakers used a relatively more neutral and less distinctive tongue configuration in producing the three vowels, when compared to the three vowels produced by the control speakers.
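A sketch that reproduces a Figure 1-style vowel space from the group mean F1 and F2 values reported in the results above. The published figure additionally marks /ø/ for both groups, but since no mean values for /ø/ are given in the text, only the three target vowels are plotted here.

    # Sketch: plot the reported group-mean formant values in an F1-F2 plane,
    # mirroring the axis layout of Figure 1 (F1 horizontal, F2 vertical).
    import matplotlib.pyplot as plt

    means = {  # vowel: (F1 Hz, F2 Hz), from the Results section
        "control": {"a": (1142, 1514), "i": (389, 2844), "ɔ": (645, 975)},
        "HI":      {"a": (896, 1564),  "i": (539, 2019), "ɔ": (722, 1412)},
    }
    for group, marker in [("control", "o"), ("HI", "s")]:
        for vowel, (f1, f2) in means[group].items():
            plt.scatter(f1, f2, marker=marker, color="k")
            plt.annotate(f"{group} /{vowel}/", (f1, f2),
                         textcoords="offset points", xytext=(5, 5))
    plt.xlabel("F1 (Hz)")
    plt.ylabel("F2 (Hz)")
    plt.show()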
Since the vowel space for the hearing impaired speakers is located around the center of the F1-F2 space, corresponding to the vocal tract configurations of neutral/mid vowels, the hearing impaired speakers had more difficulty in producing /i/ and /ɔ/ than /a/. The utterances of the hearing impaired speakers were also more variable than those of the control speakers. Studies on vowel production by English-speaking profoundly hearing impaired children have reported formant frequencies deviating from normal values (Angelocci et al., 1964; McGarr and Gelfer, 1983). Limited control of tongue shape by speakers with profound hearing loss has been reported in studies of tongue movement using the glossometric technique (Dagenais and Critz-Crosby, 1992), as well as the electromyographic technique (McGarr and Gelfer, 1983). The Cantonese-speaking profoundly hearing impaired adolescents in the present study likewise had poorer control of tongue height and tongue advancement than the control speakers.

[Figure 1. Acoustic F1-F2 space of the three vowels /a, i, ɔ/ produced by the control and the hearing impaired (HI) speakers; the positions of control and HI /ø/ are also marked. The horizontal axis represents the F1 frequency in Hz (200-1200 Hz); the vertical axis represents the F2 frequency in Hz (500-3000 Hz).]

The three vowels produced by the hearing impaired speakers were perceived as either the target vowel (39%), another vowel (55%), or a diphthong (6%). When perception accuracy is computed over all productions, vowel and non-vowel alike, the intended vowel was identified in 65% of the /a/ tokens, 32% of the /i/ tokens, and 19% of the /ɔ/ tokens. This result was expected on the basis of the findings of the acoustic analysis. For English-speaking hearing impaired speakers, Hudgins and Numbers (1942) also reported errors in the production of the vowels /a, i, ɔ/, with the highest accuracy for /a/ and the lowest for /ɔ/. However, some studies have reported that even though the vowel /a/ was produced with the highest accuracy, the accuracy of the vowels /i/ and /ɔ/ could be similar (Smith, 1975). The poorer production of front vowels by the hearing impaired speakers is probably related to the lack of normal tongue arching. In terms of the F1-F2 frequencies, the vowels /a/ and /ɔ/ were closer to each other than either was to the vowel /i/. The profoundly hearing impaired children, because of their hearing loss, may have had difficulty in perceiving differences between the vowels /a/ and /ɔ/, and this perceptual difficulty may have led to a corresponding difficulty in distinguishing the two vowels in terms of articulatory configuration. The relatively better production of low vowels such as /a/ was likely due to the hearing impaired speakers' sloping tongue configuration: their lack of tongue arching, together with a generally lower jaw position and lack of lip rounding, enabled them to produce low vowels relatively well (Dagenais and Critz-Crosby, 1992). As such, the profoundly hearing impaired speakers were more likely to produce the vowel /ɔ/ as the vowel /a/ than the other way around, as reported in the present study. The fact that, for the hearing impaired speakers, the three vowels were separated by F1 frequency but not by F2 frequency suggests relatively poor control of front-back tongue placement.
This finding could be due to the fact that speakers with a profound hearing loss have difficulty in the perception of acoustic cues to vowel identity, particularly F2 information. It is also possible that profoundly hearing impaired speakers have poor perception of F1 frequency information as well. For this reason, they may rely mainly upon visual information to perceive and produce vowels. The fact that tongue height and lip configuration are more easily seen than front-back tongue placement could account for the finding that the hearing impaired speakers were able to produce distinct F1, but not F2, values for the vowels in the present study.

REFERENCES

Angelocci, A., Kopp, G., and Holbrook, A. (1964). The vowel formants of deaf and normal hearing eleven-to-fourteen-year-old boys. Journal of Speech and Hearing Research, 7, 156-170.
Dagenais, P. A. and Critz-Crosby, P. (1992). Comparing tongue positioning by normal-hearing and hearing-impaired children during vowel production. Journal of Speech and Hearing Research, 35(1), 35-44.
Dodd, B. J. and So, L. K. H. (1994). The phonological abilities of Cantonese-speaking children with hearing loss. Journal of Speech and Hearing Research, 37, 671-679.
Fox, R. A. (1983). Perceptual structure of monophthongs and diphthongs in English. Language and Speech, 26, 21-60.
Hudgins, C. V. and Numbers, F. C. (1942). An investigation of the intelligibility of the speech of the deaf. Genetic Psychology Monographs, 25, 289-392.
Khouw, E. (2002). Perception and production of Cantonese phonetic contrasts produced by profoundly hearing impaired adolescents. Unpublished doctoral dissertation, The University of Hong Kong, Hong Kong.
Ling, D. (1976). Speech and the Hearing-Impaired Child: Theory and Practice. Washington, D.C.: The Alexander Graham Bell Association for the Deaf.
McGarr, N. S. and Gelfer, C. E. (1983). Simultaneous measurements of vowels produced by a hearing-impaired speaker. Language and Speech, 26, 233-246.
Monsen, R. (1976). Normal and reduced phonological space: The production of English vowels in the speech of deaf and normal-hearing children. Journal of Phonetics, 4, 189-198.
Peterson, G. and Barney, H. (1952). Control methods used in a study of the vowels. The Journal of the Acoustical Society of America, 24, 175-184.
Smith, C. R. (1975). Residual hearing and speech production of deaf children. Journal of Speech and Hearing Research, 18, 795-811.
Zee, E. (1998). Resonance frequency and vowel transcription in Cantonese. Proceedings of the 10th North American Conference on Chinese Linguistics and the 7th Annual Meeting of the International Association of Chinese Linguistics.