EEL 6586, Project - Hearing Aids algorithms

Yan Yang, Jiang Lu, and Ming Xue

I. PROBLEM STATEMENT

We studied hearing aid algorithms in this project. Conductive hearing loss is due to a problem conducting sound through the ear canal and can be treated with a linear amplifier. The other kind of hearing loss, sensorineural hearing loss, results from disorders of the inner ear or auditory nerve and cannot be corrected merely by a linear amplifier. Sensorineural hearing loss is the focus of this project; for convenience we refer to it simply as hearing loss or sensorineural impairment in the remainder of the report.

The symptoms of sensorineural hearing loss are relatively complicated. First, the high-frequency power of the perceived sound is diminished. This makes some phonemes inaudible or causes them to be heard incorrectly, because the second and higher-order formants cannot be detected effectively. This problem can be alleviated by frequency compensation, in which the high-frequency components are boosted by different gains according to the level of hearing impairment (see, e.g., [1][2]).

Another symptom of sensorineural hearing loss is that the dynamic range between the weakest sound that can be heard and the most intense sound that can be tolerated becomes much narrower: the threshold of the weakest audible sound rises considerably, while the level of the most intense tolerable sound does not rise by nearly as much. This narrowed dynamic range prevents normal sound from being perceived as continuous and smooth. A linear amplifier always leaves some sound levels outside the usable range, whereas non-linear amplification, namely dynamic range compression, can map all sound levels into the range the patient can perceive (see, e.g., [1][3]).

These are the two most important aspects of hearing loss, one in the frequency domain and the other in the time domain; the methods we use to address them are discussed in Section II and Section III respectively. Additional symptoms of hearing loss include poor frequency resolution and poor temporal resolution, but they are beyond the scope of this project because of the complexity involved.

II. FREQUENCY COMPENSATION

Patients with hearing loss cannot hear some sounds, or cannot hear them correctly, because the hearing impairment is not the same across frequency bands: impairment in the high-frequency bands is typically much more severe than in the low-frequency bands. To restore the sound to a normal level, we apply a different gain to each frequency band according to the hearing impairment level in that band.

This is implemented with a filter-bank approach. For each frame of the speech signal, we first compute the N-point FFT coefficients. The signal is then split into five 1/3-octave bands (five channels) using triangle windows, as depicted in Figure 2, similar to the mel filter bank used for MFCCs. The center frequencies are 250 Hz, 500 Hz, 1 kHz, 2 kHz, and 4 kHz respectively. We use triangle windows rather than rectangular windows so that the magnitude spectrum of the signal remains continuous after frequency compensation. In each band the coefficients are amplified by a pre-specified gain; these gains should be determined for each patient. We then take an inverse FFT of the modified coefficients to obtain the time-domain signal. Figure 1 shows the magnitude spectrum of a speech frame before and after frequency compensation: the high-frequency bands receive much larger gains than the low-frequency bands, and in the demo sound the frequency compensated speech clearly has more power at high frequencies. After frequency compensation we apply dynamic range compression to map the sound level into a range that the patient can perceive effectively; this is discussed in the next section.
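As a rough, minimal sketch of this filter-bank idea (not the exact implementation used in the project), the Python snippet below applies per-band triangular weighting to the FFT of a single frame. The sampling rate, frame length, band-edge construction, and gain values are assumptions chosen for illustration; only the center frequencies follow the text, and in practice the gains would come from the patient's audiogram.

```python
import numpy as np

def frequency_compensate(frame, fs, centers_hz, gains_db):
    """Boost one frame band-by-band with triangular weights (illustrative sketch)."""
    n = len(frame)
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Assumed band edges: geometric midpoints between neighboring centers.
    centers = np.asarray(centers_hz, dtype=float)
    edges = np.sqrt(centers[:-1] * centers[1:])
    lower = np.concatenate(([centers[0] / 2.0], edges))
    upper = np.concatenate((edges, [min(2.0 * centers[-1], fs / 2.0)]))

    total_gain = np.ones_like(freqs)
    for fc, lo, hi, g_db in zip(centers, lower, upper, gains_db):
        g = 10.0 ** (g_db / 20.0)
        # Triangular weight: 1 at the center frequency, 0 at the band edges.
        rise = np.clip((freqs - lo) / (fc - lo), 0.0, 1.0)
        fall = np.clip((hi - freqs) / (hi - fc), 0.0, 1.0)
        total_gain += (g - 1.0) * np.minimum(rise, fall)

    return np.fft.irfft(spectrum * total_gain, n)

# Example: one 32 ms frame at an assumed 16 kHz sampling rate.
fs = 16000
frame = np.random.randn(512)                # stand-in for a speech frame
centers = [250, 500, 1000, 2000, 4000]      # Hz, as in Fig. 2
gains = [0, 3, 10, 20, 25]                  # dB per band, illustrative only
compensated = frequency_compensate(frame, fs, centers, gains)
```

Because neighboring triangles overlap and each contributes (g - 1) times its weight, the overall gain varies smoothly with frequency, which is the continuity property the triangle windows are meant to provide.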

III. DYNAMIC RANGE COMPRESSION

A patient with hearing impairment commonly has a much higher threshold for the weakest audible sound, but the level of the most intense sound that can be tolerated does not rise by the same amount. With this narrowed dynamic range the patient cannot hear input sounds that are too weak and may perceive a discontinuous, volatile sound signal. The problem cannot be addressed by a linear amplifier alone: no matter how much the sound is amplified, some sound levels always fall outside the usable range, as illustrated in Figure 3. To let the patient hear a comfortable and smooth sound, we compress the input sound levels into the patient's dynamic range.

Dynamic range compression can be viewed as an input-output mapping strategy, the I-O curve shown in Figure 4 (see, e.g., [1][3]). The curve in Figure 4 is generated with the function

\frac{1}{1 + e^{\alpha \tan(d(x - \beta))}},    (1)

which we found to work better than linear wide dynamic range compression; here \alpha, d, and \beta are user parameters that need to be tuned. Because patients have different levels of hearing impairment, the I-O curve should also be determined for each patient. From the I-O curve we can compute the gain values for different input SPLs (Figure 5).

When implementing dynamic range compression, the input sound level should be estimated from the past several frames rather than from the current frame alone; otherwise, the noise between voiced phonemes is amplified and the envelopes of the phonemes are distorted. When determining the gain for the current frame, we therefore calculate the input sound level \hat{P} by averaging the power of the past N frames (including the current one), \{P(n)\}_{n=0}^{N-1}, with exponential weights:

\hat{P} = \frac{\sum_{n=0}^{N-1} e^{a n} P(n)}{\sum_{n=0}^{N-1} e^{a n}},    (2)

where a = 0.1 and n = N - 1 indexes the current frame. The gain for the current frame is then obtained by taking \hat{P} as the input SPL and looking it up on the gain-versus-input-SPL curve in Figure 5.
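A minimal sketch of the I-O mapping of Eq. (1), and of the gain curve derived from it, is shown below. Equation (1) only fixes the shape of the curve, so the scaling onto an output range of roughly 60-95 dB SPL (read off Fig. 4) and the particular values of \alpha, d, and \beta are assumptions chosen to give a plausible compressive curve, not the values tuned in the project.

```python
import numpy as np

def io_curve(x_db, alpha=-3.0, d=0.03, beta=70.0, out_min=60.0, out_max=95.0):
    """Output level (dB SPL) versus input level, using the sigmoid-like shape
    of Eq. (1) scaled onto an assumed 60-95 dB SPL output range (cf. Fig. 4).
    d*(x - beta) must stay within (-pi/2, pi/2) for tan() to be monotonic."""
    s = 1.0 / (1.0 + np.exp(alpha * np.tan(d * (x_db - beta))))
    return out_min + (out_max - out_min) * s

def gain_curve(x_db):
    """Gain (dB) versus input SPL, i.e. the kind of curve plotted in Fig. 5."""
    return io_curve(x_db) - x_db

inputs = np.arange(40.0, 101.0, 10.0)        # input levels, dB SPL
for x, g in zip(inputs, gain_curve(inputs)):
    print(f"{x:5.1f} dB SPL -> gain {g:+5.1f} dB")
```

With these illustrative parameters the gain falls from roughly +20 dB for weak inputs to slightly negative values for intense inputs, which is the qualitative behavior shown in Fig. 5.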

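The frame-level smoothing of Eq. (2) and the gain application might then look like the sketch below. The frame length, the stand-in gain table (used in place of the actual Fig. 5 curve), and the treatment of frame power in dB relative to full scale rather than calibrated SPL are all assumptions; the constant a = 0.1 follows the text, and N = 10 frames matches the smoothing window mentioned later.

```python
import numpy as np

def smoothed_level_db(frame_powers, N=10, a=0.1):
    """Eq. (2): exponentially weighted average of the power of the past N
    frames (the most recent frame gets the largest weight), returned in dB."""
    recent = np.asarray(frame_powers[-N:], dtype=float)
    w = np.exp(a * np.arange(len(recent)))     # e^{a n}, n = 0..N-1
    p_hat = np.sum(w * recent) / np.sum(w)
    return 10.0 * np.log10(p_hat + 1e-12)      # dB re full scale, not calibrated SPL

def compress(signal, frame_len=512, gain_table=None):
    """Frame-by-frame dynamic range compression (illustrative sketch)."""
    if gain_table is None:
        # Placeholder gain-versus-input-level table standing in for Fig. 5.
        gain_table = (np.array([-60.0, -40.0, -20.0, 0.0]),   # input level, dB
                      np.array([ 25.0,  15.0,   5.0, -5.0]))  # gain, dB
    out = np.zeros_like(signal, dtype=float)
    powers = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        powers.append(np.mean(frame ** 2))
        level = smoothed_level_db(powers)
        gain_db = np.interp(level, gain_table[0], gain_table[1])
        out[start:start + frame_len] = frame * 10.0 ** (gain_db / 20.0)
    return out
```

Averaging over the previous frames keeps the gain from jumping on the low-level gaps between voiced phonemes, at the cost of the overshoot discussed below.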
Because the speech signal is processed frame by frame, the output may be discontinuous at the frame boundaries. A Chebyshev Type II lowpass filter with a cutoff frequency of 8 kHz and 30 dB of stopband attenuation is used to smooth the output signal and remove high-frequency noise.

Figure 6 shows the envelopes of the original sound, the frequency compensated sound, and the dynamic range compressed sound, together with the gain values applied by the compressor. In the original sound, the signal power from 2 s to 3.5 s is 12 dB higher than the rest. After dynamic range compression the intense sound in the middle is suppressed and the weaker sound is amplified, so the resulting speech sounds smoother and more comfortable. We also notice some noise caused by overshoot in the final sound, because the gain for the current frame is determined by the weighted average of the past 10 frames. Looking ahead a few frames would reduce this overshoot, but it would also introduce a delay for the patient, which we want to avoid.

IV. CONCLUSION

Throughout this project we often wished we knew more about hearing loss and hearing aid practice, as many details require information or feedback from the patient. We tried our best to make the project realizable in the real world, and we sincerely hope that something in it proves useful for improving hearing aid performance.

V. ACKNOWLEDGEMENT

We thank Dr. John Harris and Qing Yang for their guidance on this project.

REFERENCES

[1] H. Dillon, Hearing Aids. Sydney: Boomerang Press, 2001.
[2] B. W. Edwards, "Signal processing techniques for a DSP hearing aid," in Proceedings of the 1998 IEEE International Symposium on Circuits and Systems, Monterey, CA, USA, vol. 6, pp. 586-589, 1998.
[3] M. A. Stone, B. C. J. Moore, J. I. Alcántara, and B. R. Glasberg, "Comparison of different forms of compression using wearable digital hearing aids," Journal of the Acoustical Society of America, vol. 106, pp. 3603-3619, Dec. 1999.

Fig. 1. Estimated power spectra (dB) of a speech frame before and after frequency compensation.

Fig. 2. Frequency bands (Channels 1-5) for the filter-bank approach, built from triangle windows over 1/3-octave bands. The center frequencies are 250 Hz, 500 Hz, 1 kHz, 2 kHz, and 4 kHz respectively.

Fig. 3. Loudness dynamic range of normal-hearing listeners and patients with hearing impairment. In each of the three parts, the left box indicates the full range of sound levels and the right box indicates the dynamic range the patient can perceive. Without a linear amplifier the patient cannot hear the weak sounds, while with a linear amplifier the patient cannot tolerate the intense sounds.

Fig. 4. I-O curve of dynamic range compression (output level versus input level, dB SPL). Green curve: high-level compression; yellow curve: low-level compression; black curve: wide dynamic range compression.

Fig. 5. Gain versus input SPL corresponding to the I-O curve in Figure 4.

Fig. 6. Waveform profiles of the original sound, the frequency compensated sound, and the dynamic range compressed sound.