Proceedings of Meetings on Acoustics


Proceedings of Meetings on Acoustics, Volume 19, 2013

ICA 2013 Montreal, Montreal, Canada, 2-7 June 2013

Psychological and Physiological Acoustics
Session 2pPPb: Speech, Attention, and Impairment (Poster Session)

2pPPb3. Speech intelligibility of hearing impaired participants in long-term training of bone-conducted ultrasonic hearing aid

Toshie Matsui*, Ryota Shimokura, Tadashi Nishimura, Hiroshi Hosoi and Seiji Nakagawa

*Corresponding author's address: Department of Otorhinolaryngology - Head and Neck Surgery, Nara Medical University, Shijocho 8, Kashihara, Nara, Japan, tomatsui@naramed-u.ac.jp

The bone-conducted ultrasonic hearing aid (BCUHA) is a unique device that provides auditory sensation to profoundly hearing impaired persons without any surgical operation. To clarify the effects of long-term hearing training with this device, two deaf participants engaged in BCUHA training for more than 9 months. They were trained to use the BCUHA through repetition of sentences read aloud, free conversation, and singing, and they participated in word recognition tests and monosyllable identification tests. Both participants could recognize words above chance using only the auditory sensation provided by the BCUHA when alternatives or context were presented to them. In addition, the monosyllable intelligibility score with combined auditory and visual cues increased more over the training period than the scores with the auditory cue alone or the visual cue alone. The results suggest that long-term training with the BCUHA achieves efficient integration of the auditory and visual cues of speech, as cochlear implant users have shown in previous studies.

Published by the Acoustical Society of America through the American Institute of Physics. © 2013 Acoustical Society of America [DOI: / ]. Received 22 Jan 2013; published 2 Jun 2013. Proceedings of Meetings on Acoustics, Vol. 19, 588 (2013) Page 1

INTRODUCTION

Cochlear implants have recently become the most common device for restoring hearing sensation to the profoundly hearing impaired. A surgical operation is necessary to use a cochlear implant, and it has been reported that reoperation is very difficult because the initial operation perforates the cochlear duct to insert an electrode. The development of a hearing aid that does not require a surgical operation would therefore greatly benefit hearing impaired people for whom surgical treatment is not an option. The bone-conducted ultrasonic hearing aid (BCUHA) is currently a unique method to fill this requirement.

Gavreau first discovered that ultrasound could be perceived through bone conduction (Gavreau, 1948). Pumphrey subsequently confirmed that bone-conducted ultrasound up to khz was perceivable (Pumphrey, 1950). It was also found that bone-conducted ultrasound modulated by speech signals was perceivable not only by normal-hearing listeners but also by the profoundly hearing impaired (Lenhardt et al., 1991). It has further been demonstrated that bone-conducted ultrasonic speech sound is processed in the auditory cortex of both normal-hearing and profoundly hearing impaired listeners (Hosoi et al., 1998). Based on these results, bone-conducted ultrasonic hearing aids for the profoundly hearing impaired have been developed (Nakagawa et al., 2006); however, most investigations of speech intelligibility (Okamoto et al., 2005; Kagomiya and Nakagawa, 2010) and of the perception mechanism of bone-conducted ultrasound (Nishimura et al., 2003; Nishimura et al., 2011) are currently limited to normal-hearing listeners.

This study carried out long-term training for two profoundly hearing impaired persons using BCUHAs. A speech sound recognition task and a monosyllable identification task were performed as indices of the effects of training.

METHODS AND MATERIALS

Participants

Two profoundly hearing-impaired listeners participated in the training.
Participant 1 (P1) used the HiSonic (Hearing Innovations, Inc.), which had originally been developed as a clinical device for tinnitus suppression. Participant 2 (P2) used the AIST-BCUHA-3 (National Institute of Advanced Industrial Science and Technology). The output level was adjusted by the participants themselves for the best hearing of speech in each session.

The training schedules were decided individually depending on each participant's condition, and each training session was carried out individually. P1 participated in 35 sessions conducted every other week from December 2010 to July 2012. P2 participated in 43 sessions conducted once or twice a week from August 2011 to July 2012. The training sessions consisted of word recognition tasks (multiple-choice paradigm), Japanese monosyllable identification tasks, word recognition tasks (open questions), free conversation, and singing. Each session was completed within one and a half hours. The results from the word recognition tasks and the Japanese monosyllable identification tasks were analyzed.

TABLE 1. All trials of the word recognition task were carried out without any visual cue. AFC: alternative forced choice paradigm; Diff: different.

Task                          | Difference between alternatives (Consonant / Vowel / Number of morae) | Paradigm | Session No. (P1 / P2)
Word recognition 1            | Different / Different / Diff-Same                                      | 2AFC     | 1- / NA
Word recognition 2            | Different / Diff-Same / Same                                           | 3AFC     |
Word recognition 3            | Different / Same / Same                                                | 4AFC     |
Monosyllable identification (audio only; visual only; audio and visual) | NA                           |          | 5-43

Word recognition task

Tasks were carried out face to face. A speech therapist presented all of the options to the participant both visually and aurally, and then uttered one word from among the options with the mouth covered. The therapist repeatedly uttered the word until the participant responded orally.

The word recognition task was divided into three types to observe the effects of the factors below. The number of morae was limited to 2-6 in all types of tasks. P1 participated in all three types; P2 participated in types 2 and 3. Tasks were assigned randomly during each session in the period indicated in Table 1. Feedback on the correct choice was given after the response in each trial.

1) Number of morae of words (2AFC). This condition was set to clarify whether the number of morae was available as a cue for recognition. The task therefore included two conditions: a same-number-of-morae condition and a different-number-of-morae condition. In the same-number-of-morae condition, the options consisted of Japanese words with the same number of morae, such as zou (elephant, 2 morae) and wani (crocodile, 2 morae). In the different-number-of-morae condition, the options had different numbers of morae, such as isu (chair, 2 morae) and tsukue (table, 3 morae). The trials were presented in a two-alternative forced choice (2AFC) paradigm.

2) Vowel pattern of words (3AFC). This condition was set to clarify whether the vowel pattern of a word was available as a cue for recognition. Similarly to the above, the task included two conditions: a same-vowel-pattern condition and a different-vowel-pattern condition. In the same-vowel-pattern condition, the options consisted of words with the same vowel pattern, such as narawashi (custom, 4 morae), sawagani (freshwater crab, 4 morae) and kawahagi (threadsail filefish, 4 morae). In the different-vowel-pattern condition, the options consisted of words with different vowel patterns, such as kusayabu (tuft of grass, 4 morae), yomibito (author of verse, 4 morae) and setoyaki (Seto pottery, 4 morae). This task was presented in a 3AFC paradigm.

3) Vowel pattern of words (4AFC). This type consisted only of the same-vowel-pattern condition of task 2, but was presented in a 4AFC paradigm.
Monosyllable identification task

The task was carried out face to face in a manner similar to the word recognition tasks. Participants were required to identify 50 Japanese monosyllables, consisting of consonant-vowel monosyllables and vowel-only monosyllables; Table 2 shows their phonemes and phonetic values. The task consisted of three conditions. In the audio condition, only the auditory cue through the BCUHA was given for identification; in the visual condition, only the visual cue (lip reading) was given; in the audio-visual condition, both auditory and visual cues were provided.

TABLE 2. Phonemes and phonetic values of the 50 Japanese syllables for the monosyllable identification task. nc: no consonant.

     nc /k/ /s/ /t/ /n/ /h/ /m/ /y/ /r/ /w/ /g/ /z/ /d/ /b/
/a/: a, ka, sa, ta, na, ha, ma, ja, ɾa, wa, ga, da, ba
/i/: i, kʲi, ʃi, tʃi, ɲi, çi, mi, ɾʲi, ʤi
/u/: ɯ, kɯ, sɯ, tsɯ, ɸu, mɯ, jɯ, ɾɯ, dzɯ
/e/: e, ke, se, te, ne, he, me, de
/o/: o, ko, so, to, no, ho, mo, jo, ɾo, go, do

P1 participated in tasks with the visual and audio-visual conditions. P2 participated in tasks featuring all three conditions. The order of monosyllables was quasi-randomized in each trial. Participants were provided with the results after all of the trials. The tasks were carried out consecutively during the period indicated in Table 1.
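The quasi-randomized presentation order can be sketched as follows. This is a minimal illustration, not the study's exact procedure: the romanized syllable labels and the constraint chosen here (the first item of a list must differ from the last item of the previous list) are assumptions.

```python
import random

# Illustrative subset of syllable labels (the study used 50; see Table 2).
SYLLABLES = ["a", "ka", "sa", "ta", "na", "ha", "ma", "ja", "ra", "wa"]

def quasi_random_order(items, prev_last=None, rng=random):
    """Return a shuffled copy of items whose first element differs from
    the last element of the previously presented list."""
    order = list(items)
    rng.shuffle(order)
    if prev_last is not None and len(order) > 1 and order[0] == prev_last:
        # Break the immediate repeat by swapping with the final item.
        order[0], order[-1] = order[-1], order[0]
    return order

first = quasi_random_order(SYLLABLES)
second = quasi_random_order(SYLLABLES, prev_last=first[-1])
```

Any similar constraint (e.g., balancing consonant classes across the session) would also qualify as "quasi-random"; the point is that the order is shuffled but not fully unconstrained.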

RESULTS

Word recognition task

The correct response rate and the mean number of times an utterance was repeated before each participant responded were analyzed. Figure 1 illustrates the results from P1: Fig. 1(a) shows the correct response rate and Fig. 1(b) shows the mean number of repeated utterances for each condition and type of task.

FIGURE 1. The correct response rate and mean of utterance repetition for the word recognition tasks performed by P1. Numbers in parentheses indicate the number of trials. Error bar: standard error.

The results from all tasks and conditions for P1 showed a correct response rate above chance (.5, .33, and .25, respectively; chi-square test, p < .01 for each task and condition). However, the number of morae of the words did not affect the correct response rate significantly (Pearson's chi-square test, χ²(1) = 0.336, p = .562). The vowel pattern of the words did not provide effective cues for discriminating the words either (Pearson's chi-square test, χ²(1) = 2.44, p = .118). Conversely, the number of repetitions before P1's response was significantly influenced by both the number of morae and the vowel pattern of the words. In the same-number-of-morae condition of task type 1, the number of repetitions before P1's response was significantly smaller than in the different-number-of-morae condition (two-tailed t-test, t(243) = -2.3, p = .02). In the same-vowel-pattern condition, the mean number of utterances was significantly smaller than in the different-vowel-pattern condition (two-tailed t-test, t(212) = 5.41, p < .001).

Figure 2 illustrates the results from P2: Fig. 2(a) shows the correct response rate and Fig. 2(b) shows the mean number of repetitions for each condition and type of task that P2 participated in.
FIGURE 2. The correct response rate and mean of utterance repetition for the word recognition tasks performed by P2. Numbers in parentheses indicate the number of trials. Error bar: standard error.

The results from all tasks and conditions for P2 showed a correct response rate above chance (.33 and .25, respectively; chi-square test, p < .01 for each task and each condition). In the same-vowel-pattern condition, the correct response rate was significantly higher than in the different-vowel-pattern condition (Pearson's chi-square test, χ²(1) = 9.51, p < .005). The number of repetitions before P2's response did not differ significantly between the same-vowel-pattern condition and the different one (two-tailed t-test, t(16) = -1.45, p = .15).
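The chi-square tests against chance used in this section compare an observed correct/incorrect split with the split expected at the chance level of the forced-choice paradigm. A minimal stdlib sketch, with illustrative counts rather than the participants' data:

```python
from math import erfc, sqrt

def chi_square_vs_chance(n_correct, n_trials, chance):
    """One-degree-of-freedom chi-square goodness-of-fit test of an
    observed correct/incorrect split against a chance proportion."""
    exp_correct = n_trials * chance
    exp_wrong = n_trials - exp_correct
    obs_wrong = n_trials - n_correct
    chi2 = ((n_correct - exp_correct) ** 2 / exp_correct
            + (obs_wrong - exp_wrong) ** 2 / exp_wrong)
    # With 1 df, the chi-square survival function reduces to erfc(sqrt(x/2)).
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# Illustrative numbers only: 90 correct responses out of 136 trials in a
# 2AFC task, chance level .5.
chi2, p = chi_square_vs_chance(90, 136, 0.5)
```

With these made-up counts the test comes out clearly above chance (p < .001), mirroring the pattern of results reported for both participants.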

Monosyllable identification task

Although not in every session, the monosyllable identification task was routinely conducted during the training periods indicated in Table 1. Therefore, both the correct response rate over all sessions and the day-by-day transition of the correct response rate were analyzed for each participant.

Figure 3(a) illustrates the correct response rate for the visual and audio-visual conditions over all sessions of P1. The correct response rate was calculated for the consonant part, the vowel part, and the syllable unit, and the rates of the visual condition and the audio-visual condition were compared. The correct response rates of the consonant part and of the syllable unit differed significantly between the two conditions (two-tailed t-test, consonant, t(26) = -2.88, p = .0079; syllable, t(26) = -3.28, p = .0029).

FIGURE 3. The correct response rate for the monosyllable identification task. The left panel (a) illustrates the results of P1, and the right panel (b) those of P2. Each bar corresponds to a condition.

Figure 4 shows the relationship between the correct response rate and the number of training days, counted from the first day on which the monosyllable identification task began. Each panel corresponds to one of the bars in Figure 3(a). Solid lines, broken lines and equations represent regression lines calculated by the least-squares method. The regression coefficients of the consonant part and the syllable unit in the audio-visual condition were marginally significantly higher than zero (two-tailed t-test, consonant, t(13) = 1.81, p = .0931; syllable, t(13) = 2.0, p = .0667).

FIGURE 4.
The relationship between the correct response rate and training days for P1. The training days count from the first day of the monosyllable identification task.

Figure 3(b) illustrates the correct response rate for each condition over all sessions performed by P2. The correct response rate was calculated for the consonant part, the vowel part and the syllable unit. A two-way ANOVA revealed a significant effect of condition on the consonant part, the vowel part, and the syllable unit (consonant, F(2,54) = 54.3, p < .001; vowel, F(2,54) = 425.6, p < .001; syllable, F(2,54) = 19.9, p < .001). Tukey-Kramer HSD post hoc analysis showed significant differences in the correct response rate of the consonant part among all of the conditions (p < .01 for each). In the vowel part, a significant difference between the correct response rate of the audio-visual condition and that of each of the other conditions was observed (p < .01 for each). In the syllable unit, a significant difference among all of the conditions was apparent (p < .01 for each).

A regression analysis was conducted on the correct response rates of all conditions for the consonant part, the vowel part, and the syllable unit. The regression coefficient of the syllable unit in the visual condition deviated significantly from zero (two-tailed t-test, t(17) = 2.18, p = .0428). The coefficients of the consonant part and the syllable unit in the audio-visual condition deviated significantly from zero as well (two-tailed t-test, consonant, t(23) = 6.11, p < .001; syllable, t(23) = 4.67, p < .001). Furthermore, the difference between the regression coefficient of the syllable unit in the visual condition and that in the audio-visual condition was significant (two-tailed t-test, t(42) = 21.4, p < .001).

FIGURE 5. The relationship between the correct response rate and training days for P2. The training days count from the first day of the monosyllable identification task.

DISCUSSION

The data from the word recognition task for P1 include part of the data from a previous study by Shimokura et al. (2012). Trials of the 5-alternative forced choice task in the previous study were removed from the analysis because neither the number of morae nor the vowel pattern of the options was controlled. For P1, the correct response rate of every condition was above chance.
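The regression analyses behind Figures 4 and 5 amount to fitting a least-squares line of score on training day and testing whether its slope differs from zero with a t statistic on n - 2 degrees of freedom. A minimal stdlib sketch with hypothetical scores (made-up values, not the participants' data):

```python
import math

def slope_t(days, scores):
    """Least-squares slope of score on training day, plus the t statistic
    for testing whether the slope differs from zero (df = n - 2)."""
    n = len(days)
    mx = sum(days) / n
    my = sum(scores) / n
    sxx = sum((x - mx) ** 2 for x in days)
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, scores))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual sum of squares gives the slope's standard error.
    rss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(days, scores))
    se = math.sqrt(rss / (n - 2) / sxx)
    return slope, slope / se

# Hypothetical correct-response rates drifting upward over training days.
days = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [0.20, 0.22, 0.25, 0.24, 0.28, 0.30, 0.29, 0.33]
slope, t_stat = slope_t(days, scores)
```

Comparing two conditions' slopes, as done for P2's visual versus audio-visual syllable scores, additionally requires the standard error of the difference between the two slope estimates.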
The above-chance rates mean that the differences between words could be distinguished when there were four or fewer options and the options were exhibited beforehand. On the other hand, the lack of a significant difference in the correct response rate between the same and different conditions implies that differences in the number of morae and in the vowel pattern did not contribute to the discrimination of the words. A larger number of cues may not make discrimination of bone-conducted ultrasonic speech easier. The mean number of repeated utterances before a response rather suggests that fewer cues facilitate the discrimination of words. It is speculated that the participant perceived bone-conducted ultrasonic speech in a totally different manner from the original air-conducted speech; many differences between options might therefore make it difficult to memorize the words and compare them.

The results of P2 showed a correct response rate well above chance, as with P1. Furthermore, the correct response rate of the same-vowel-pattern condition was significantly higher than that of the different-vowel-pattern condition. It is considered that an increase in the number of features to be memorized and compared made discrimination more complicated, in the same way as for P1. However, it should be added that P2 was able to hold a conversation comfortably with the BCUHA. In short, these results suggest that the two profoundly hearing impaired participants could discriminate bone-conducted ultrasonic speech with the auditory cue alone if prior options or context were given.

The results from the monosyllable identification task show the transmission efficiency of each modality and of their combination. The results of both participants were similar: monosyllables could be identified more precisely when audio and visual cues were used simultaneously. The results depicted in Fig. 3 show that the auditory cue from the hearing aid was largely effective for consonant identification, while the visual cue was basically sufficient for vowel identification.
However, the correct response rate for the syllable unit in the audio-visual condition remained under % for P1 and around 50% for P2. In a previous study, monosyllable identification through a BCUHA by a normal

listener showed an approximately % correct response rate at the best S/N ratio in the audio-visual condition (Yamashita et al., 2010). In the audio-only condition, the mean correct response rate of the normal-hearing listeners was around %, whereas that of P2 was around 10% in the same condition. Even for P2, who could hold a comfortable conversation with the BCUHA, monosyllable identification was hard because of the poor redundancy available for judgment. Only in the visual condition did P1 and P2 show a higher correct response rate than the normal-hearing listeners tested by Yamashita et al. (approximately %). Although lip reading was not specifically trained in our sessions, the demands of everyday life may have raised the participants' precision in lip reading.

Figures 4 and 5 show the change in the results of the monosyllable identification task over the course of BCUHA training. As with the word recognition task, the results of both participants were similar. Based on the statistical test of the difference between regression coefficients on P2's data, it was confirmed that the presentation of multiple modalities was efficient for speech sound discrimination. Reports on the rehabilitation of cochlear implant users likewise find that sensation from multiple modalities (audition and vision) can achieve effective learning of speech sounds (Rouger et al., 2007; Kim et al., 2009). Note, however, that the improvement of auditory-visual integration is limited, as reported in research on cochlear implants and rehabilitation (Massida et al., 2011). It is necessary to conduct similar investigations for hearing impaired people with different conditions to estimate the limits of training and to create a more effective rehabilitation program.

SUMMARY

Two types of experiment were performed with two profoundly hearing impaired persons to confirm the effects of the bone-conducted ultrasonic hearing aid and hearing training.
It was verified that even profoundly hearing impaired persons could discriminate words using only a bone-conducted ultrasonic hearing aid if context was provided. In addition, it was confirmed that, as with cochlear implant users, training with both auditory and visual cues was effective for monosyllable identification.

ACKNOWLEDGMENTS

This research was supported by Grant-in-Aid for Scientific Research (B) No. and Grant-in-Aid for Young Scientists (B) No. of the Japan Society for the Promotion of Science.

REFERENCES

Gavreau, V. (1948). Audibilité de sons de fréquence élevée, Compt. Rend. 226.
Hosoi, H., Imaizumi, S., Sakaguchi, T., Tonoike, M., and Murata, K. (1998). Activation of the auditory cortex by ultrasound, Lancet, 351.
Kagomiya, T., and Nakagawa, S. (2010). An evaluation of bone-conducted ultrasonic hearing-aid regarding transmission of Japanese prosodic phonemes, Proc. ICA 2010, 972, 1-5.
Kim, J., Davis, C., and Groot, C. (2009). Speech identification in noise: Contribution of temporal, spectral, and visual speech cues, J. Acoust. Soc. Am., 126.
Lenhardt, M. L., Skellett, R., Wang, P., and Clarke, A. M. (1991). Human ultrasonic speech perception, Science, 253.
Massida, Z., Belin, P., James, C., Rouger, J., Fraysse, B., Barone, P., and Deguine, O. (2011). Voice discrimination in cochlear-implanted deaf subjects, Hearing Research, 275.
Nakagawa, S., Okamoto, Y., and Fujisaka, Y. (2006). Development of a bone-conducted ultrasonic hearing aid for the profoundly sensorineural deaf, Trans. Jpn. Soc. Med. Biol. Eng., 44.
Nishimura, T., Nakagawa, S., Sakaguchi, T., and Hosoi, H. (2003). Ultrasonic masker clarifies ultrasonic perception in man, Hearing Research, 175.
Nishimura, T., Okayasu, T., Uratani, Y., Fukuda, H., Saito, O., and Hosoi, H. (2011). Peripheral perception mechanism of ultrasonic hearing, Hearing Research, 277.
Okamoto, Y., Nakagawa, S., Fujimoto, K., and Tonoike, M. (2005). Intelligibility of bone-conducted ultrasonic speech, Hearing Research, 8.
Pumphrey, R. J. (1950).
Upper limit of frequency for human hearing, Nature, 166, 571.
Rouger, J., Deguine, O., Barone, P., Lagleyre, S., Fraysse, B., and Deneve, S. (2007). Evidence that cochlear-implanted deaf patients are better multisensory integrators, PNAS, 104.
Shimokura, R., Fukuda, F., and Hosoi, H. (2012). A case study of auditory rehabilitation in a profoundly deaf participant using a bone-conducted ultrasonic hearing aid, Behavioral Science Research, 5.

Yamashita, A., Nishimura, T., Nagatani, Y., Sakaguchi, T., Okayasu, T., Yanai, S., and Hosoi, H. (2010). The effect of visual information in speech signals by bone-conducted ultrasound, NeuroReport, 21.


MEDICAL POLICY SUBJECT: COCHLEAR IMPLANTS AND AUDITORY BRAINSTEM IMPLANTS. POLICY NUMBER: CATEGORY: Technology Assessment MEDICAL POLICY PAGE: 1 OF: 5 If the member's subscriber contract excludes coverage for a specific service it is not covered under that contract. In such cases, medical policy criteria are not applied.

More information

Localization Abilities after Cochlear Implantation in Cases of Single-Sided Deafness

Localization Abilities after Cochlear Implantation in Cases of Single-Sided Deafness Localization Abilities after Cochlear Implantation in Cases of Single-Sided Deafness Harold C. Pillsbury, MD Professor and Chair Department of Otolaryngology/Head and Neck Surgery University of North Carolina

More information

Combination of Bone-Conducted Speech with Air-Conducted Speech Changing Cut-Off Frequency

Combination of Bone-Conducted Speech with Air-Conducted Speech Changing Cut-Off Frequency Combination of Bone-Conducted Speech with Air-Conducted Speech Changing Cut-Off Frequency Tetsuya Shimamura and Fumiya Kato Graduate School of Science and Engineering Saitama University 255 Shimo-Okubo,

More information

Bone-conducted ultrasonic hearing assessed by tympanic membrane vibration in living human beings

Bone-conducted ultrasonic hearing assessed by tympanic membrane vibration in living human beings Acoust. Sci. & Tech. 34, 6 (213) PAPER #213 The Acoustical Society of Japan Bone-conducted ultrasonic hearing assessed by tympanic membrane vibration in living human beings Kazuhito Ito and Seiji Nakagawa

More information

HCS 7367 Speech Perception

HCS 7367 Speech Perception Long-term spectrum of speech HCS 7367 Speech Perception Connected speech Absolute threshold Males Dr. Peter Assmann Fall 212 Females Long-term spectrum of speech Vowels Males Females 2) Absolute threshold

More information

Variability in Word Recognition by Adults with Cochlear Implants: The Role of Language Knowledge

Variability in Word Recognition by Adults with Cochlear Implants: The Role of Language Knowledge Variability in Word Recognition by Adults with Cochlear Implants: The Role of Language Knowledge Aaron C. Moberly, M.D. CI2015 Washington, D.C. Disclosures ASA and ASHFoundation Speech Science Research

More information

Speech, Hearing and Language: work in progress. Volume 11

Speech, Hearing and Language: work in progress. Volume 11 Speech, Hearing and Language: work in progress Volume 11 Other Publications in Speech and Hearing Science: 1998 to 1999. Department of Phonetics and Linguistics UNIVERSITY COLLEGE LONDON 201 Other Publications

More information

Audiology. (2003) Hernades,Monreal,Orza

Audiology. (2003) Hernades,Monreal,Orza 1 3 2 1 : drgmovallali@gmail.com -009821-22180072.3.1.2 :... _ : : 16 11 9... Z. 8 60 16. ) : (P< 0/01) (P< 0/05) ( ) 2 ((P 0/01) ( ) 2 :.. : ( 93/5/6 : ).(1) (2007) Klop.(2).(2) Audiology.(1).(3).

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information

Hearing Impaired K 12

Hearing Impaired K 12 Hearing Impaired K 12 Section 20 1 Knowledge of philosophical, historical, and legal foundations and their impact on the education of students who are deaf or hard of hearing 1. Identify federal and Florida

More information

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception ISCA Archive VOQUAL'03, Geneva, August 27-29, 2003 Jitter, Shimmer, and Noise in Pathological Voice Quality Perception Jody Kreiman and Bruce R. Gerratt Division of Head and Neck Surgery, School of Medicine

More information

WIDEXPRESS. no.30. Background

WIDEXPRESS. no.30. Background WIDEXPRESS no. january 12 By Marie Sonne Kristensen Petri Korhonen Using the WidexLink technology to improve speech perception Background For most hearing aid users, the primary motivation for using hearing

More information

Cochlear Implant The only hope for severely Deaf

Cochlear Implant The only hope for severely Deaf Cochlear Implant The only hope for severely Deaf By: Dr. M. Sohail Awan, FCPS (ENT) Aga Khan University Hospital, Karachi - Pakistan For centuries, people believed that only a miracle could restore hearing

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Noise Session 3aNSa: Wind Turbine Noise I 3aNSa5. Can wind turbine sound

More information

Cochlear Implant Corporate Medical Policy

Cochlear Implant Corporate Medical Policy Cochlear Implant Corporate Medical Policy File Name: Cochlear Implant & Aural Rehabilitation File Code: UM.REHAB.06 Origination: 03/2015 Last Review: 01/2019 Next Review: 01/2020 Effective Date: 04/01/2019

More information

Who are cochlear implants for?

Who are cochlear implants for? Who are cochlear implants for? People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work best in adults who

More information

Simulating cartilage conduction sound to estimate the sound pressure level in

Simulating cartilage conduction sound to estimate the sound pressure level in *Manuscript Click here to download Manuscript: JSV_Cartilage conduction simulation_resubmission_text5.doc Click here to view linked References Original paper Simulating cartilage conduction sound to estimate

More information

Language Speech. Speech is the preferred modality for language.

Language Speech. Speech is the preferred modality for language. Language Speech Speech is the preferred modality for language. Outer ear Collects sound waves. The configuration of the outer ear serves to amplify sound, particularly at 2000-5000 Hz, a frequency range

More information

Diagnosis and Management of ANSD: Outcomes of Cochlear Implants versus Hearing Aids

Diagnosis and Management of ANSD: Outcomes of Cochlear Implants versus Hearing Aids Diagnosis and Management of ANSD: Outcomes of Cochlear Implants versus Hearing Aids Gary Rance PhD The University of Melbourne International Paediatric Conference, Shanghai, April 214 Auditory Neuropathy

More information

Audio-Visual Integration: Generalization Across Talkers. A Senior Honors Thesis

Audio-Visual Integration: Generalization Across Talkers. A Senior Honors Thesis Audio-Visual Integration: Generalization Across Talkers A Senior Honors Thesis Presented in Partial Fulfillment of the Requirements for graduation with research distinction in Speech and Hearing Science

More information

Cued Speech and Cochlear Implants: Powerful Partners. Jane Smith Communication Specialist Montgomery County Public Schools

Cued Speech and Cochlear Implants: Powerful Partners. Jane Smith Communication Specialist Montgomery County Public Schools Cued Speech and Cochlear Implants: Powerful Partners Jane Smith Communication Specialist Montgomery County Public Schools Jane_B_Smith@mcpsmd.org Agenda: Welcome and remarks Cochlear implants how they

More information

EXECUTIVE SUMMARY Academic in Confidence data removed

EXECUTIVE SUMMARY Academic in Confidence data removed EXECUTIVE SUMMARY Academic in Confidence data removed Cochlear Europe Limited supports this appraisal into the provision of cochlear implants (CIs) in England and Wales. Inequity of access to CIs is a

More information

Hearing the Universal Language: Music and Cochlear Implants

Hearing the Universal Language: Music and Cochlear Implants Hearing the Universal Language: Music and Cochlear Implants Professor Hugh McDermott Deputy Director (Research) The Bionics Institute of Australia, Professorial Fellow The University of Melbourne Overview?

More information

Speech perception of hearing aid users versus cochlear implantees

Speech perception of hearing aid users versus cochlear implantees Speech perception of hearing aid users versus cochlear implantees SYDNEY '97 OtorhinolaIYngology M. FLYNN, R. DOWELL and G. CLARK Department ofotolaryngology, The University ofmelbourne (A US) SUMMARY

More information

ADVANCES in NATURAL and APPLIED SCIENCES

ADVANCES in NATURAL and APPLIED SCIENCES ADVANCES in NATURAL and APPLIED SCIENCES ISSN: 1995-0772 Published BYAENSI Publication EISSN: 1998-1090 http://www.aensiweb.com/anas 2016 December10(17):pages 275-280 Open Access Journal Improvements in

More information

Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching

Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching Marietta M. Paterson, Ed. D. Program Coordinator & Associate Professor University of Hartford ACE-DHH 2011 Preparation

More information

Speech, Language, and Hearing Sciences. Discovery with delivery as WE BUILD OUR FUTURE

Speech, Language, and Hearing Sciences. Discovery with delivery as WE BUILD OUR FUTURE Speech, Language, and Hearing Sciences Discovery with delivery as WE BUILD OUR FUTURE It began with Dr. Mack Steer.. SLHS celebrates 75 years at Purdue since its beginning in the basement of University

More information

FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED

FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED Francisco J. Fraga, Alan M. Marotta National Institute of Telecommunications, Santa Rita do Sapucaí - MG, Brazil Abstract A considerable

More information

Providing Effective Communication Access

Providing Effective Communication Access Providing Effective Communication Access 2 nd International Hearing Loop Conference June 19 th, 2011 Matthew H. Bakke, Ph.D., CCC A Gallaudet University Outline of the Presentation Factors Affecting Communication

More information

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED International Conference on Systemics, Cybernetics and Informatics, February 12 15, 2004 BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED Alice N. Cheeran Biomedical

More information

SPEECH PERCEPTION IN A 3-D WORLD

SPEECH PERCEPTION IN A 3-D WORLD SPEECH PERCEPTION IN A 3-D WORLD A line on an audiogram is far from answering the question How well can this child hear speech? In this section a variety of ways will be presented to further the teacher/therapist

More information

Kaylah Lalonde, Ph.D. 555 N. 30 th Street Omaha, NE (531)

Kaylah Lalonde, Ph.D. 555 N. 30 th Street Omaha, NE (531) Kaylah Lalonde, Ph.D. kaylah.lalonde@boystown.org 555 N. 30 th Street Omaha, NE 68131 (531)-355-5631 EDUCATION 2014 Ph.D., Speech and Hearing Sciences Indiana University minor: Psychological and Brain

More information

Prosody Rule for Time Structure of Finger Braille

Prosody Rule for Time Structure of Finger Braille Prosody Rule for Time Structure of Finger Braille Manabi Miyagi 1-33 Yayoi-cho, Inage-ku, +81-43-251-1111 (ext. 3307) miyagi@graduate.chiba-u.jp Yasuo Horiuchi 1-33 Yayoi-cho, Inage-ku +81-43-290-3300

More information

Auditory-Visual Speech Perception Laboratory

Auditory-Visual Speech Perception Laboratory Auditory-Visual Speech Perception Laboratory Research Focus: Identify perceptual processes involved in auditory-visual speech perception Determine the abilities of individual patients to carry out these

More information

Kaitlin MacKay M.Cl.Sc. (AUD.) Candidate University of Western Ontario: School of Communication Sciences and Disorders

Kaitlin MacKay M.Cl.Sc. (AUD.) Candidate University of Western Ontario: School of Communication Sciences and Disorders 1 C ritical Review: Do adult cochlear implant (C I) recipients over 70 years of age experience similar speech perception/recognition gains postoperatively in comparison with adult C I recipients under

More information

Determination of filtering parameters for dichotic-listening binaural hearing aids

Determination of filtering parameters for dichotic-listening binaural hearing aids Determination of filtering parameters for dichotic-listening binaural hearing aids Yôiti Suzuki a, Atsunobu Murase b, Motokuni Itoh c and Shuichi Sakamoto a a R.I.E.C., Tohoku University, 2-1, Katahira,

More information

The effect of wearing conventional and level-dependent hearing protectors on speech production in noise and quiet

The effect of wearing conventional and level-dependent hearing protectors on speech production in noise and quiet The effect of wearing conventional and level-dependent hearing protectors on speech production in noise and quiet Ghazaleh Vaziri Christian Giguère Hilmi R. Dajani Nicolas Ellaham Annual National Hearing

More information

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for What you re in for Speech processing schemes for cochlear implants Stuart Rosen Professor of Speech and Hearing Science Speech, Hearing and Phonetic Sciences Division of Psychology & Language Sciences

More information

Psychosocial Determinants of Quality of Life and CI Outcome in Older Adults

Psychosocial Determinants of Quality of Life and CI Outcome in Older Adults April 1 23, 2018 Psychosocial Determinants of Quality of Life and CI Outcome in Older Adults Howard W. Francis, MD, MBA, FACS Professor and Chief Duke Head and Neck Surgery & Communication Sciences Presented

More information

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag JAIST Reposi https://dspace.j Title Effects of speaker's and listener's environments on speech intelligibili annoyance Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag Citation Inter-noise 2016: 171-176 Issue

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information

BORDERLINE PATIENTS AND THE BRIDGE BETWEEN HEARING AIDS AND COCHLEAR IMPLANTS

BORDERLINE PATIENTS AND THE BRIDGE BETWEEN HEARING AIDS AND COCHLEAR IMPLANTS BORDERLINE PATIENTS AND THE BRIDGE BETWEEN HEARING AIDS AND COCHLEAR IMPLANTS Richard C Dowell Graeme Clark Chair in Audiology and Speech Science The University of Melbourne, Australia Hearing Aid Developers

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 3aPP: Auditory Physiology

More information

A PROPOSED MODEL OF SPEECH PERCEPTION SCORES IN CHILDREN WITH IMPAIRED HEARING

A PROPOSED MODEL OF SPEECH PERCEPTION SCORES IN CHILDREN WITH IMPAIRED HEARING A PROPOSED MODEL OF SPEECH PERCEPTION SCORES IN CHILDREN WITH IMPAIRED HEARING Louise Paatsch 1, Peter Blamey 1, Catherine Bow 1, Julia Sarant 2, Lois Martin 2 1 Dept. of Otolaryngology, The University

More information

Speech conveys not only linguistic content but. Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users

Speech conveys not only linguistic content but. Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users Cochlear Implants Special Issue Article Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users Trends in Amplification Volume 11 Number 4 December 2007 301-315 2007 Sage Publications

More information

Effects of noise and filtering on the intelligibility of speech produced during simultaneous communication

Effects of noise and filtering on the intelligibility of speech produced during simultaneous communication Journal of Communication Disorders 37 (2004) 505 515 Effects of noise and filtering on the intelligibility of speech produced during simultaneous communication Douglas J. MacKenzie a,*, Nicholas Schiavetti

More information

Demonstration of a Novel Speech-Coding Method for Single-Channel Cochlear Stimulation

Demonstration of a Novel Speech-Coding Method for Single-Channel Cochlear Stimulation THE HARRIS SCIENCE REVIEW OF DOSHISHA UNIVERSITY, VOL. 58, NO. 4 January 2018 Demonstration of a Novel Speech-Coding Method for Single-Channel Cochlear Stimulation Yuta TAMAI*, Shizuko HIRYU*, and Kohta

More information

Kristina M. Blaiser, PhD, CCC-SLP Assistant Professor, Utah State University Director, Sound Beginnings

Kristina M. Blaiser, PhD, CCC-SLP Assistant Professor, Utah State University Director, Sound Beginnings Kristina M. Blaiser, PhD, CCC-SLP Assistant Professor, Utah State University Director, Sound Beginnings Objectives Discuss changes in population of children with hearing loss: Impacts on speech production

More information

[5]. Our research showed that two deafblind subjects using this system could control their voice pitch with as much accuracy as hearing children while

[5]. Our research showed that two deafblind subjects using this system could control their voice pitch with as much accuracy as hearing children while NTUT Education of Disabilities Vol.12 2014 Evaluation of voice pitch control in songs with different melodies using tactile voice pitch feedback display SAKAJIRI Masatsugu 1), MIYOSHI Shigeki 2), FUKUSHIMA

More information

Cochlear Implantation for Single-Sided Deafness in Children and Adolescents

Cochlear Implantation for Single-Sided Deafness in Children and Adolescents Cochlear Implantation for Single-Sided Deafness in Children and Adolescents Douglas Sladen, PhD Dept of Communication Sciences and Disorders Western Washington University Daniel M. Zeitler MD, Virginia

More information

HCS 7367 Speech Perception

HCS 7367 Speech Perception Babies 'cry in mother's tongue' HCS 7367 Speech Perception Dr. Peter Assmann Fall 212 Babies' cries imitate their mother tongue as early as three days old German researchers say babies begin to pick up

More information

Perceptual Effects of Nasal Cue Modification

Perceptual Effects of Nasal Cue Modification Send Orders for Reprints to reprints@benthamscience.ae The Open Electrical & Electronic Engineering Journal, 2015, 9, 399-407 399 Perceptual Effects of Nasal Cue Modification Open Access Fan Bai 1,2,*

More information

A Structured Language Approach to Teach Language and Literacy to Hearing and Visually Impaired Pupils with Autism

A Structured Language Approach to Teach Language and Literacy to Hearing and Visually Impaired Pupils with Autism A Structured Language Approach to Teach Language and Literacy to Hearing and Visually Impaired Pupils with Autism Enid Wolf-Schein Rhonda Bachmann Christine Polys Ruth Rogge Purpose of Presentation This

More information

Studying the time course of sensory substitution mechanisms (CSAIL, 2014)

Studying the time course of sensory substitution mechanisms (CSAIL, 2014) Studying the time course of sensory substitution mechanisms (CSAIL, 2014) Christian Graulty, Orestis Papaioannou, Phoebe Bauer, Michael Pitts & Enriqueta Canseco-Gonzalez, Reed College. Funded by the Murdoch

More information

Best Practice Protocols

Best Practice Protocols Best Practice Protocols SoundRecover for children What is SoundRecover? SoundRecover (non-linear frequency compression) seeks to give greater audibility of high-frequency everyday sounds by compressing

More information

This position is also supported by the following consensus statements:

This position is also supported by the following consensus statements: The Action Group on Adult Cochlear Implants welcomes the invitation to comment on the proposal to conduct a review of Section 1.5 of the NICE guideline TA166; Cochlear implants for children and adults

More information

TitleSimulation of Cochlear Implant Usin. Citation 音声科学研究 = Studia phonologica (1990),

TitleSimulation of Cochlear Implant Usin. Citation 音声科学研究 = Studia phonologica (1990), TitleSimulation of Cochlear Implant Usin Author(s) Sakakihara, Junji; Takeuchi, Mariko Juichi; Honjo, Iwao Citation 音声科学研究 = Studia phonologica (1990), Issue Date 1990 URL http://hdl.handle.net/2433/52483

More information

PATTERN ELEMENT HEARING AIDS AND SPEECH ASSESSMENT AND TRAINING Adrian FOURCIN and Evelyn ABBERTON

PATTERN ELEMENT HEARING AIDS AND SPEECH ASSESSMENT AND TRAINING Adrian FOURCIN and Evelyn ABBERTON PATTERN ELEMENT HEARING AIDS AND SPEECH ASSESSMENT AND TRAINING Adrian FOURCIN and Evelyn ABBERTON Summary This paper has been prepared for a meeting (in Beijing 9-10 IX 1996) organised jointly by the

More information

McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS)

McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS) Psychon Bull Rev (2017) 24:863 872 DOI 10.3758/s13423-016-1148-9 BRIEF REPORT McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech

More information

Preliminary Results of Adult Patients with Digisonic SP Cohlear Implant System

Preliminary Results of Adult Patients with Digisonic SP Cohlear Implant System Int. Adv. Otol. 2009; 5:(1) 93-99 ORIGINAL ARTICLE Maria-Fotini Grekou, Stavros Mavroidakos, Maria Economides, Xrisa Lira, John Vathilakis Red Cross Hospital of Athens, Greece, Department of Audiology-Neurootology,

More information

A Brief (very brief) Overview of Biostatistics. Jody Kreiman, PhD Bureau of Glottal Affairs

A Brief (very brief) Overview of Biostatistics. Jody Kreiman, PhD Bureau of Glottal Affairs A Brief (very brief) Overview of Biostatistics Jody Kreiman, PhD Bureau of Glottal Affairs What We ll Cover Fundamentals of measurement Parametric versus nonparametric tests Descriptive versus inferential

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 2pPPb: Speech. Attention,

More information

The REAL Story on Spectral Resolution How Does Spectral Resolution Impact Everyday Hearing?

The REAL Story on Spectral Resolution How Does Spectral Resolution Impact Everyday Hearing? The REAL Story on Spectral Resolution How Does Spectral Resolution Impact Everyday Hearing? Harmony HiResolution Bionic Ear System by Advanced Bionics what it means and why it matters Choosing a cochlear

More information

Bilateral cochlear implantation in children identified in newborn hearing screening: Why the rush?

Bilateral cochlear implantation in children identified in newborn hearing screening: Why the rush? Bilateral cochlear implantation in children identified in newborn hearing screening: Why the rush? 7 th Australasian Newborn Hearing Screening Conference Rendezous Grand Hotel 17 th 18 th May 2013 Maree

More information

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors

We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists. International authors and editors We are IntechOpen, the world s leading publisher of Open Access books Built by scientists, for scientists 3,800 116,000 120M Open access books available International authors and editors Downloads Our

More information

Masking release and the contribution of obstruent consonants on speech recognition in noise by cochlear implant users

Masking release and the contribution of obstruent consonants on speech recognition in noise by cochlear implant users Masking release and the contribution of obstruent consonants on speech recognition in noise by cochlear implant users Ning Li and Philipos C. Loizou a Department of Electrical Engineering, University of

More information

Brad May, PhD Johns Hopkins University

Brad May, PhD Johns Hopkins University Brad May, PhD Johns Hopkins University When the ear cannot function normally, the brain changes. Brain deafness contributes to poor speech comprehension, problems listening in noise, abnormal loudness

More information

Role of F0 differences in source segregation

Role of F0 differences in source segregation Role of F0 differences in source segregation Andrew J. Oxenham Research Laboratory of Electronics, MIT and Harvard-MIT Speech and Hearing Bioscience and Technology Program Rationale Many aspects of segregation

More information

Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information

Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, S. Narayanan Emotion

More information

A Senior Honors Thesis. Brandie Andrews

A Senior Honors Thesis. Brandie Andrews Auditory and Visual Information Facilitating Speech Integration A Senior Honors Thesis Presented in Partial Fulfillment of the Requirements for graduation with distinction in Speech and Hearing Science

More information

Study on Effect of Voice Analysis Applying on Tone Training Software for Hearing Impaired People

Study on Effect of Voice Analysis Applying on Tone Training Software for Hearing Impaired People International Journal of Education and Information Technology Vol. 3, No. 3, 2018, pp. 53-59 http://www.aiscience.org/journal/ijeit ISSN: 2381-7410 (Print); ISSN: 2381-7429 (Online) Study on Effect of

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

ACOUSTIC ANALYSIS AND PERCEPTION OF CANTONESE VOWELS PRODUCED BY PROFOUNDLY HEARING IMPAIRED ADOLESCENTS

ACOUSTIC ANALYSIS AND PERCEPTION OF CANTONESE VOWELS PRODUCED BY PROFOUNDLY HEARING IMPAIRED ADOLESCENTS ACOUSTIC ANALYSIS AND PERCEPTION OF CANTONESE VOWELS PRODUCED BY PROFOUNDLY HEARING IMPAIRED ADOLESCENTS Edward Khouw, & Valter Ciocca Dept. of Speech and Hearing Sciences, The University of Hong Kong

More information