Speech enhancement based on harmonic estimation combined with MMSE to improve speech intelligibility for cochlear implant recipients


INTERSPEECH 2017, August 20-24, 2017, Stockholm, Sweden

Speech enhancement based on harmonic estimation combined with MMSE to improve speech intelligibility for cochlear implant recipients

Dongmei Wang, John H. L. Hansen

CRSS Lab, Dept. of Electrical Engineering, The University of Texas at Dallas, Richardson, Texas 75080
dongmei.wang@utdallas.edu, john.hansen@utdallas.edu

Abstract

In this paper, a speech enhancement algorithm is proposed to improve speech intelligibility for cochlear implant recipients. Our method combines harmonic estimation with a traditional statistical approach. Traditional statistical speech enhancement is effective only for stationary noise suppression, not for non-stationary noise. To address more complex noise scenarios, we exploit the harmonic structure of the target speech to obtain a more accurate noise estimate. The estimated noise is then employed in the MMSE framework to obtain the gain function for recovering the target speech. Listening test experiments show a substantial speech intelligibility improvement for cochlear implant recipients in noisy environments.

Index Terms: speech enhancement, cochlear implant, harmonic structure, noise estimation, MMSE

1. Introduction

Cochlear implant (CI) devices are able to help individuals with severe hearing loss recover some level of hearing ability. Currently, CI recipients can achieve a relatively high level of speech intelligibility in quiet environments. However, in noisy backgrounds their speech intelligibility drops dramatically. Previous research has shown that the speech reception threshold (SRT) of CI listeners is typically 15 to 25 dB higher than that of normal-hearing listeners in noisy backgrounds [1, 2, 3]. Therefore, developing effective speech enhancement algorithms is essential for improving speech perception in noise for CI recipients.

Speech is a highly redundant signal. Normal-hearing listeners usually have no difficulty understanding speech in noise, even at negative signal-to-noise ratio (SNR) levels, depending on the noise type. In contrast, CI recipients are only able to decode limited amounts of spectral and temporal information from the speech delivered by a few CI encoding channels (16 or 22) [4, 5]. This leaves CI recipients unable to discriminate target speech from noise interference.

To improve speech perception in noise for CI recipients, both single- and multiple-microphone speech enhancement algorithms have been developed in previous research [6, 7]. Multiple-microphone algorithms can significantly increase speech intelligibility for CI recipients in both stationary and non-stationary noise. However, strict assumptions, such as spatially separated speech and noise sources, have to be satisfied. In real scenarios such as a diffuse sound field, target speech and noise arriving from the same direction, or the presence of reverberation, multi-channel algorithms provide very limited benefit. Thus, single-microphone speech enhancement to improve speech perception for CI recipients remains an open problem. In addition, single-microphone algorithms can be combined with multiple-microphone algorithms to further improve speech intelligibility in noise [8].

For single-microphone methods, existing research has focused on i) developing noise reduction algorithms applied before CI encoding [1, 9, 10, 11, 12], and ii) optimizing the CI channel selection to deliver higher-SNR speech components [13, 14, 15, 16, 17].
In the former case, noise reduction is performed as a pre-processing step to improve the speech representation. Such approaches include spectral subtraction [1, 9], Wiener filtering [18], and subspace methods [10]. In the latter case, strategies are designed to manipulate the CI encoding channels so that the speech signal is delivered efficiently to the electrode array for improved auditory neural stimulation. Both binary-mask [14] and soft-mask [13, 15, 16] studies have demonstrated that SNR-dependent channel selection is more desirable than the current default n-of-m channel selection strategy [19]. A non-negative matrix factorization (NMF) method has also been used for channel selection in CI devices, exploiting the sparsity of speech [17].

Traditional statistical speech enhancement algorithms are attractive for CI devices because they allow real-time processing [20]. However, these methods are only effective for stationary noise; their benefits are reduced or disappear in fluctuating noise. In this study, we propose a single-microphone speech enhancement algorithm based on harmonic estimation combined with MMSE to improve speech intelligibility for CI subjects. On one hand, the MMSE method efficiently reduces stationary noise along the time dimension. On the other hand, we exploit the harmonic structure of the target speech to accurately estimate the fluctuating noise along the frequency dimension.

Figure 1: Algorithm overview

The noise estimates from both the time and frequency dimensions are employed in the MMSE framework for speech enhancement [21]. Moreover, the harmonic structure estimation builds on the noise-robust pitch estimation from our previous studies [22, 23].

2. The Proposed Speech Enhancement Algorithm

2.1. Algorithm overview

The overall algorithm is shown in Fig. 1. Two types of noise estimation are included: i) noise estimation along the frequency dimension in voiced speech segments, and ii) noise estimation along the time dimension in both voiced and unvoiced segments. Specifically, along the time dimension, the noise is assumed to be stationary during noise tracking. Along the frequency dimension, the noise estimation is based on exploiting the harmonic structure. The estimated noise variances from the time and frequency dimensions (λ̂_d^T and λ̂_d^F) are input to the MMSE framework to obtain the gain functions (Ĝ_T and Ĝ_F). Finally, these two gain functions are fused into a single gain (Ĝ) for the target speech estimation based on the MMSE criterion [21].

2.2. Harmonic structure estimation

The spectrum of voiced speech consists of a series of harmonic partials whose frequencies are integer multiples of the fundamental frequency (F0). Given the estimated F0 of the target speech, the harmonic structure can be approximated by selecting the noisy-spectrum peaks that lie near the ideal harmonic frequencies (kF0). In practice, the observed harmonic partials usually deviate from the ideal frequencies due to the instability of the glottal pulse sequence/shape during speech production. Therefore, we set the deviation threshold Δf_H differently depending on the frequency band:

Δf_H = { …, f < 500 Hz;  30 Hz, 500 Hz ≤ f < 2000 Hz;  45 Hz, f ≥ 2000 Hz }.    (1)

The F0 estimation for noisy speech is based on harmonic-feature classification methods [22, 23].

2.3. Noise estimation based on harmonic structure

Based on the harmonic model, we infer that: i) the noisy spectrum between the harmonic partials is usually dominated by noise, and ii) the noisy spectrum within the harmonic partials is dominated by speech. Accordingly, we perform the noise estimation between harmonics (BH) and within harmonics (WH) separately. First, the harmonic spectrum of the target speech is generated by convolving the harmonic peak vector with the spectrum of the short-term analysis Hamming window:

S_H(f) = S_win(f) ∗ Σ_{k=1}^{K} a_H^k δ(f − f_H^k),    (2)

where a_H^k and f_H^k are the amplitude and frequency of the k-th order harmonic peak, S_win is the spectrum of the Hamming window, and δ(f) is a delta function. Next, the generated harmonic spectrum amplitude S_H is subtracted from the noisy speech spectrum to obtain the initial noise spectrum estimate Â_n:

Â_n = max(S_n − S_H, 0).    (3)

Here, we call the noise spectrum within the harmonic main lobes the WH noise, and the noise spectrum between the harmonic main lobes the BH noise. The bandwidth of the main lobe for each harmonic is set to 2/3 of that of the corresponding short-term Hamming window spectrum. In the BH frequency range, noise is the dominant component and seldom overlaps with speech. Thus, the initial noise estimate Â_n in this frequency range is taken directly as the estimated noise spectrum:

Â_BH(f) = Â_n(f),    (4)

where f ∈ [kF0 + f_mb/2, (k+1)F0 − f_mb/2] and f_mb is the bandwidth of the harmonic main lobe. However, in the WH frequency range, speech has the dominant energy.
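The harmonic-structure noise estimation of Eqs. (2)-(4) can be pictured with a short NumPy sketch. The code below is illustrative only and not the authors' implementation: the function name harmonic_noise_estimate, the default Δf_H thresholds (the low-band value is not recoverable from the text), and the approximation of the Hamming main-lobe width by four DFT bins are all assumptions made for this example.

```python
import numpy as np

def harmonic_noise_estimate(noisy_mag, f0, fs, n_fft, delta_f=(20.0, 30.0, 45.0)):
    """Sketch of Eqs. (2)-(4): estimate the between-harmonic (BH) noise of one
    voiced frame from its noisy magnitude spectrum.
    noisy_mag : magnitude spectrum |S_n(f)|, length n_fft//2 + 1
    f0        : estimated fundamental frequency of the frame (Hz)
    delta_f   : assumed deviation thresholds for the three bands of Eq. (1)
    """
    bin_hz = fs / n_fft
    # Spectrum of the analysis Hamming window (S_win in Eq. (2)), peak-normalized.
    win_spec = np.abs(np.fft.rfft(np.hamming(n_fft)))
    win_spec /= win_spec.max()
    main_lobe_hz = 4.0 * bin_hz            # approx. full main-lobe width of a Hamming window
    f_mb = (2.0 / 3.0) * main_lobe_hz      # harmonic main-lobe bandwidth (2/3 of window main lobe)

    harm_spec = np.zeros_like(noisy_mag)   # S_H in Eq. (2)
    k = 1
    while k * f0 < fs / 2:
        ideal = k * f0
        # frequency-dependent deviation threshold (Eq. (1), assumed values)
        dev = delta_f[0] if ideal < 500.0 else (delta_f[1] if ideal < 2000.0 else delta_f[2])
        # pick the largest noisy-spectrum peak within +/- dev of k*F0
        lo = int(max(0, np.floor((ideal - dev) / bin_hz)))
        hi = int(min(len(noisy_mag) - 1, np.ceil((ideal + dev) / bin_hz)))
        peak_bin = lo + int(np.argmax(noisy_mag[lo:hi + 1]))
        a_k = noisy_mag[peak_bin]
        # place a scaled, shifted window main lobe at the peak (the convolution
        # with a delta in Eq. (2), restricted to the main lobe)
        half = int(round(f_mb / 2.0 / bin_hz))
        seg = np.arange(max(0, peak_bin - half), min(len(noisy_mag), peak_bin + half + 1))
        harm_spec[seg] = np.maximum(harm_spec[seg], a_k * win_spec[np.abs(seg - peak_bin)])
        k += 1

    noise_init = np.maximum(noisy_mag - harm_spec, 0.0)   # Eq. (3)
    bh_mask = harm_spec == 0.0                             # BH bins, outside the main lobes (Eq. (4))
    return noise_init, bh_mask
```

Within the BH bins the returned noise_init is used directly; the WH bins are re-estimated as described next.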
The initial noise estimate in the WH frequency range is not as accurate and therefore requires re-estimation. To estimate the WH noise, we assume that the noise spectrum is continuously distributed along the frequency dimension: given the noise variance in the neighboring frequency bands, the noise variance in the current frequency band can be approximated by interpolation.

Fig. 2 presents the histogram of the logarithmic energy ratio between neighboring frequency bands, with both the bandwidth and the frequency shift set to 100 Hz. We include two frequency ranges (0-2 kHz and 2-4 kHz) and two types of noise (speech-shaped noise and babble noise). The mean and standard deviation values are shown along with the histograms. From Fig. 2 we see that the mean values of the logarithmic energy ratio between neighboring bands are around 0 dB in all four cases, and the maximum spread for both noise types is less than 5 dB. This analysis indicates the continuity of the noise energy between adjacent frequency bands.

Figure 2: Histogram of the energy ratio between neighboring frequency bands. SS: speech-shaped noise, BB: babble noise.

Therefore, we propose to estimate the WH noise variance from the estimated BH noise variance by linear interpolation:

Â_WH(f) = â_BH^L + (â_BH^R − â_BH^L) (f − f_BH^L) / (f_BH^R − f_BH^L),    (5)

where f ∈ [kF0 − f_mb/2, kF0 + f_mb/2], f_BH^L and f_BH^R are the edge frequencies of the left and right neighboring bands around the current harmonic partial, and â_BH^L and â_BH^R are the average amplitudes of the estimated BH noise spectrum in the left and right adjacent frequency bands.

Fig. 3 shows an example of the noise estimation based on harmonic structure, for speech-shaped noise (Fig. 3a) and babble noise (Fig. 3b). It can be seen that both the BH and WH noise estimates closely follow the original noise spectrum.

Figure 3: Noise estimation based on harmonic structure: (a) speech-shaped noise, (b) babble noise.
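To make Eq. (5) concrete, the sketch below fills the WH bins of each harmonic by linear interpolation between the average BH noise levels of the adjacent bands. It is an illustrative reading of the procedure rather than the authors' implementation; the function name interpolate_wh_noise and the use of the innermost BH bin frequencies as the band edges f_BH^L and f_BH^R are assumptions.

```python
import numpy as np

def interpolate_wh_noise(noise_init, bh_mask, freqs, f0, f_mb):
    """Sketch of Eq. (5): re-estimate the within-harmonic (WH) noise by linear
    interpolation of the average between-harmonic (BH) noise on either side.
    noise_init : initial noise magnitude estimate from Eq. (3)
    bh_mask    : True for bins lying between the harmonic main lobes
    freqs      : center frequency of each FFT bin (Hz)
    """
    noise_est = noise_init.copy()
    k = 1
    while k * f0 < freqs[-1]:
        fk = k * f0
        wh = (freqs >= fk - f_mb / 2) & (freqs <= fk + f_mb / 2)           # WH range of harmonic k
        left = bh_mask & (freqs >= (k - 1) * f0 + f_mb / 2) & (freqs < fk - f_mb / 2)
        right = bh_mask & (freqs > fk + f_mb / 2) & (freqs <= (k + 1) * f0 - f_mb / 2)
        if wh.any() and left.any() and right.any():
            a_l, a_r = noise_init[left].mean(), noise_init[right].mean()   # average BH amplitudes
            f_l, f_r = freqs[left].max(), freqs[right].min()               # band edges next to the harmonic
            # Eq. (5): linear interpolation across the harmonic main lobe
            noise_est[wh] = a_l + (a_r - a_l) * (freqs[wh] - f_l) / (f_r - f_l)
        k += 1
    return noise_est
```

Together with the BH estimate from the previous sketch, noise_est would serve as the frequency-dimension noise spectrum from which λ̂_d^F is derived.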

2.4. Noise tracking along the time dimension

The noise tracking along the time dimension is performed with a minimum-statistics algorithm with optimal smoothing [24], under the assumption that the noise is stationary. In practice, the noise variance is estimated from an initial quiet section and updated during later unvoiced segments.

2.5. Gain function estimation

The noise variance estimated along the frequency dimension is employed in the MMSE framework to generate the harmonic-based gain function Ĝ_F [25, 20]. Likewise, the noise estimated along the time dimension is used to generate a gain function Ĝ_T. Then Ĝ_F and Ĝ_T are combined with weights to obtain the optimal gain function for the target speech [21]:

Ĝ = (λ̂_F Ĝ_F + λ̂_T Ĝ_T) / (λ̂_F + λ̂_T),    (6)

where λ̂_F and λ̂_T are the frequency- and time-dimension noise variances computed in Sec. 2.3 and Sec. 2.4, respectively.
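As a rough illustration of Sec. 2.5, the sketch below derives one gain per noise estimate and fuses them following Eq. (6). For brevity it substitutes a simple Wiener-type gain for the MMSE short-time spectral amplitude gain of [25] used in the paper, and the spectral floor g_min and the function name fuse_gains are assumptions introduced for this example.

```python
import numpy as np

def fuse_gains(noisy_power, lam_f, lam_t, g_min=0.1):
    """Sketch of Sec. 2.5: per-dimension gains fused as in Eq. (6).
    noisy_power : |S_n(f)|^2 of the current frame
    lam_f, lam_t: frequency- and time-dimension noise variance estimates
    """
    eps = 1e-12
    # a priori SNR per noise estimate via spectral subtraction
    # (decision-directed smoothing over frames would normally be added)
    snr_f = np.maximum(noisy_power - lam_f, eps) / (lam_f + eps)
    snr_t = np.maximum(noisy_power - lam_t, eps) / (lam_t + eps)
    g_f = snr_f / (1.0 + snr_f)          # harmonic-based gain G_F (Wiener form here)
    g_t = snr_t / (1.0 + snr_t)          # time-tracked gain G_T (Wiener form here)
    # Eq. (6): noise-variance-weighted fusion of the two gains
    g = (lam_f * g_f + lam_t * g_t) / (lam_f + lam_t + eps)
    return np.maximum(g, g_min)          # spectral floor to limit speech distortion

# usage sketch: enhanced_frame = gain * noisy_frame_spectrum, then inverse STFT
```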

3. Experiments and Results

3.1. Experiment setting

A listening test was carried out with CI subjects to evaluate the performance of the proposed algorithm. The target speech materials are sentences from the IEEE database [26]. Babble noise was used to corrupt the sentences at SNRs of 0 dB, 5 dB and 10 dB. Six postlingually deafened cochlear implant users, with a mean of .5 years of implant use, participated in the listening test. Subjects were paid an hourly wage for their participation.

The listening task involved sentence recognition by CI subjects seated in a soundproof room (Acoustic Systems, Inc.). The sentences were played to the subjects through a loudspeaker placed 80 cm in front of the subject. The sound pressure level of the speech from the loudspeaker was fixed at 65 dB throughout the test. The subjects used their daily clinical strategy during the test. Each subject completed a total of 9 test conditions (3 SNR levels × 3 processing conditions). Two IEEE sentence lists were used per test condition, and each IEEE list includes 10 sentences. The order of test conditions was randomized across subjects. Subjects were given a 5-min break every 30 min during the test sessions to avoid listening fatigue.

3.2. Results

The speech intelligibility performance of the CI subjects is measured in terms of the average word recognition rate (WRR). The average WRR results for the different conditions are presented in Fig. 4, together with the standard error of the mean (SEM).

Figure 4: Average word recognition rate (WRR, %) for the original noisy, MMSE, and Harmonic + MMSE conditions as a function of SNR.

From Fig. 4 we see that the Harmonic + MMSE approach shows benefits for CI subjects at all SNR levels in the babble noise condition. In contrast, the MMSE processing only shows a minor benefit at the SNR of 0 dB. We also carried out a two-way ANOVA on the WRR results to investigate the significance of the processing conditions at the different SNR levels. The ANOVA results for 0 dB and 5 dB are [F(2, 10) = 1.6, p < .3] and [F(2, 10) = 16.1, p < .], respectively, indicating a significant effect of processing condition. However, the ANOVA result for 10 dB is [F(2, 10) = .6, p < .1181], indicating no significant difference between the processing conditions.

Post hoc comparisons between the processed conditions were carried out at 0 dB and 5 dB, where the ANOVA analysis showed significant effects. Specifically, at 0 dB SNR, the post hoc results show a significant difference between the Harmonic + MMSE condition and the original noisy condition, as well as between the Harmonic + MMSE condition and the MMSE-only condition (p < . and p < .3). However, there is no significant difference between the MMSE condition and the original unprocessed condition (p < .8). At 5 dB SNR, similar statistical results are found as at 0 dB.

Fig. 5 shows example electrodograms for the different processing conditions. An electrodogram represents the output of the cochlear implant device. Comparing the electrodograms across processing conditions, we see that much more residual noise remains in the MMSE-processed speech than in the Harmonic + MMSE-processed speech. The harmonic structure estimation provides more accurate noise tracking and thus removes the fluctuating noise, whereas the MMSE method is not as effective.

Figure 5: Electrodograms for babble noise at SNR = 0 dB: (a) clean, (b) noisy, (c) MMSE, (d) Harmonic + MMSE.

4. Conclusion and discussion

A speech enhancement algorithm based on harmonic estimation combined with the MMSE framework is proposed in this paper to improve speech intelligibility for CI recipients. The proposed Harmonic + MMSE processing is able to distinguish speech harmonics from noise interference.
In this way, the non-stationary noise can be estimated and removed accurately. The listening test results demonstrate the potential benefit of the proposed method to CI subjects in terms of WRR performance. They further indicate that F0 is a substantial cue for speech perception in fluctuating noise for CI recipients. For future research, an investigation of the trade-off between noise reduction and speech distortion under different noise conditions, with respect to speech intelligibility for CI recipients, is warranted.

5. References

[1] I. Hochberg, A. Boothroyd, and M. Weiss, "Effects of noise and noise suppression on speech perception by cochlear implant users," Ear and Hearing, vol. 13, no. 4, pp. 263-271, Aug. 1992.

[2] J. Wouters and J. Vanden Berghe, "Speech recognition in noise for cochlear implantees with a two-microphone monaural adaptive noise reduction system," Ear and Hearing, vol. 22, no. 5, pp. 420-430, Oct. 2001.

[3] A. Spriet, L. Van Deun, K. Eftaxiadis, J. Laneau, M. Moonen, B. van Dijk, A. van Wieringen, and J. Wouters, "Speech understanding in background noise with the two-microphone adaptive beamformer BEAM in the Nucleus Freedom cochlear implant system," Ear and Hearing, vol. 28, no. 1, pp. 62-72, Feb. 2007.

[4] J. F. Patrick, P. A. Busby, and P. J. Gibson, "The development of the Nucleus Freedom cochlear implant system," Trends in Amplification, vol. 10, no. 4, pp. 175-200, Dec. 2006.

[5] P. C. Loizou, "Speech processing in vocoder-centric cochlear implants," in Cochlear and Brainstem Implants, Adv. Otorhinolaryngol., A. R. Moller, Ed., vol. 64, Karger, Basel, 2006.

[6] K. Kokkinakis, B. Azimi, Y. Hu, and D. R. Friedland, "Single and multiple microphone noise reduction strategies in cochlear implants," Trends in Amplification, vol. 16, no. 2, Jun. 2012.

[7] R. Koning, "Speech enhancement in cochlear implants," Ph.D. thesis, Dept. of Neurosciences, KU Leuven, 2014.

[8] A. A. Hersbach, K. Arora, S. J. Mauger, and P. W. Dawson, "Combining directional microphone and single-channel noise reduction algorithms: a clinical evaluation in difficult listening conditions with cochlear implant users," Ear and Hearing, vol. 33, no. 4, pp. e13-e19, Jul.-Aug. 2012.

[9] L. P. Yang and Q. J. Fu, "Spectral subtraction-based speech enhancement for cochlear implant patients in background noise," J. Acoust. Soc. Am., vol. 117, no. 3, pt. 1, 2005.

[10] P. C. Loizou, A. Lobo, and Y. Hu, "Subspace algorithms for noise reduction in cochlear implants," J. Acoust. Soc. Am., vol. 118, no. 5, Nov. 2005.

[11] J. Li, Q. J. Fu, H. Jiang, and M. Akagi, "Psychoacoustically-motivated adaptive beta-order generalized spectral subtraction for cochlear implant patients," in Proc. ICASSP, Taipei, Taiwan, Apr. 2009.

[12] F. Toledo, P. C. Loizou, and A. Lobo, "Subspace and envelope subtraction algorithms for noise reduction in cochlear implants," in Proc. 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2003.

[13] Y. Hu, P. C. Loizou, N. Li, and K. Kasturi, "Use of a sigmoid-shaped function for noise attenuation in cochlear implants," J. Acoust. Soc. Am., vol. 122, no. 4, pp. EL128-EL134, Oct. 2007.

[14] Y. Hu and P. C. Loizou, "Environment-specific noise suppression for improved speech intelligibility by cochlear implant users," J. Acoust. Soc. Am., vol. 127, no. 6, June 2010.

[15] P. W. Dawson, S. J. Mauger, and A. A. Hersbach, "Clinical evaluation of signal-to-noise ratio based noise reduction in Nucleus cochlear implant recipients," Ear and Hearing, vol. 32, no. 3, May-June 2011.

[16] S. J. Mauger, K. Arora, and P. W. Dawson, "Cochlear implant optimized noise reduction," J. Neural Eng., vol. 9, no. 6, pp. 1-9, Dec. 2012.

[17] H. Hu, M. E. Lutman, S. D. Ewert, G. Li, and S. Bleeck, "Sparse nonnegative matrix factorization strategy for cochlear implants," Trends in Hearing, vol. 19, pp. 1-16, Dec. 2015.

[18] F. Chen, Y. Hu, and M. Yuan, "Evaluation of noise reduction methods for sentence recognition by Mandarin-speaking cochlear implant listeners," Ear and Hearing, vol. 36, no. 1, pp. 61-71, Jan. 2015.

[19] Y. Hu and P. C.
Loizou, "A new sound coding strategy for suppressing noise in cochlear implants," J. Acoust. Soc. Am., vol. 124, no. 1, July 2008.

[20] P. C. Loizou, Speech Enhancement: Theory and Practice, CRC Press, Boca Raton, FL, USA, 1st edition, 2007.

[21] M. Krawczyk-Becker and T. Gerkmann, "MMSE-optimal combination of Wiener filtering and harmonic model based speech enhancement in a general framework," in Proc. WASPAA, New Paltz, NY, 2015.

[22] D. Wang, P. C. Loizou, and J. H. L. Hansen, "F0 estimation in noisy speech based on long-term harmonic feature analysis combined with neural network classification," in Proc. INTERSPEECH, Singapore, Sep. 2014.

[23] D. Wang, C. Yu, and J. H. L. Hansen, "Robust harmonic features for classification-based pitch estimation," IEEE/ACM Trans. Audio, Speech, and Language Processing, vol. 25, no. 5, May 2017.

[24] R. Martin, "Noise power spectral density estimation based on optimal smoothing and minimum statistics," IEEE Trans. Audio, Speech, and Language Processing, vol. 9, no. 5, pp. 504-512, Jul. 2001.

[25] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. ASSP-32, no. 6, pp. 1109-1121, Dec. 1984.

[26] IEEE, "IEEE recommended practice for speech quality measurements," IEEE Trans. Audio and Electroacoustics, vol. 17, no. 3, pp. 225-246, Sep. 1969.
