Simulation of an electro-acoustic implant (EAS) with a hybrid vocoder


Fabien Seldran (a), Eric Truy (b), Stéphane Gallégo (a), Christian Berger-Vachon (a), Lionel Collet (a) and Hung Thai-Van (a)

(a) Univ. Lyon 1 - Lab. Neurosciences, Service Pr Collet, Pavillon U, Hôpital Edouard Herriot, F-69003 Lyon, France
(b) Hôpital Edouard Herriot, Service ORL, Place d'Arsonval, 69003 Lyon, France
fseldran@yahoo.fr

Electric-acoustic stimulation (EAS) is indicated for hearing-impaired patients who retain sufficient residual hearing in the low frequencies but have a severe hearing loss in the high frequencies. We aimed at simulating the speech intelligibility provided by EAS with a hybrid vocoder model, using the French Fournier word set, and tested several parameters on 24 normal-hearing adults. First, the boundary between the acoustic and electric stimulation frequency regions (Fc) was set at 500, 700, 1000 and 1414 Hz. Second, we assessed the effect of the number of electrical stimulation channels (1 to 4). Third, we tested the effect of background noise, using a cocktail-party noise at -6 dB SNR. The condition with 3 electrical channels and Fc = 700 Hz produced results comparable to normal hearing, at least in quiet. In a noisy auditory scene, 4 electrical channels with Fc = 500 Hz could still produce fair speech intelligibility. [Work supported by CNRS, Lyon 1 University and Med-El.]

1 Introduction

1.1 Audition in normal-hearing people

The incoming sound wave vibrates the eardrum, and the vibration is transmitted to the cochlear fluid of the inner ear through the impedance matching performed by the ossicular chain of the middle ear. The cochlea houses the sensory hair cells that give rise to the neural message. This message is then conveyed along the auditory pathways, crossing several relays of specialized neurons in the brainstem on its way to the auditory cortex, where the sound information is integrated and the listener becomes aware of it.

Audition relies on two main ways of coding frequency information [1]. Spatial (place) coding results from the tonotopic organization of the cochlea and of the cortex: each location encodes a specific frequency, and the site of maximum displacement of the basilar membrane leads to a particular sensation of pitch [2]. The second is temporal coding.
Some neurons can synchronize with the periodic component of the stimulus up to a given frequency, which decreases at the more central processing stages, and thereby provide additional information about pitch. Intensity coding is linked to the amplitude of the eardrum's vibration: a larger vibration produces a stronger stimulation of the cochlea and a higher discharge rate of the auditory-nerve fibers. In addition, the outer hair cells, which lie parallel to the inner hair cells, implement an active mechanism that amplifies the vibration of the basilar membrane, so that sound intensity is perceived over a wider dynamic range.

1.2 The classical hearing aid

A hearing aid compensates for mild to profound hearing loss. It can be an In-The-Ear (ITE) device, placed inside the ear canal, or a Behind-The-Ear (BTE) device, which can be more powerful. The sound is captured by one (or two) microphone(s) and digitized. The signal then passes through a processor that gives more or less weight to certain frequencies and intensities, depending on the patient's hearing loss. Finally, the signal is amplified and a loudspeaker converts it back into sound delivered into the ear canal.

1.3 The cochlear implant (CI)

The cochlear implant (figure 1) is a device for hearing-impaired listeners with a profound to total deafness who cannot be helped by high-powered hearing aids. It stimulates the auditory pathways through an electrode array surgically inserted into the cochlea, which transmits the sound as electrical pulses. The CI is composed of two parts, internal and external. The external part looks like a BTE device; it contains the microphone which captures and digitizes the sound. The signal is analyzed by the speech processor (housed in the same BTE) and the message is sent to the internal part, placed under the scalp.
The internal part selectively activates the different electrodes, which deliver electrical pulses directly to the auditory nerve, each electrode stimulating a particular frequency band.

Fig. 1. Principle of the cochlear implant (source: Med-El). The implant is composed of two parts: an external part (microphone, processor) and an internal part (implanted electrode array). The two parts are linked at the level of the scalp by a magnetic holder, through which the information from the external part is transmitted by radio-frequency coupling. The sound signal captured by the microphone is processed by the implant's processor; the message is transformed into a succession of electrical pulses sent to the different electrodes, which directly stimulate the auditory nerve.

1.4 The electric-acoustic implant

Conventional cochlear implantation remains a dilemma for people at the limit of the indication, i.e. those whose hearing loss is too severe to be compensated by classical hearing aids but not severe enough to justify a cochlear implant. Cochlear implantation, which used to be proposed only to patients with a severe to profound hearing loss, has recently become accessible to hearing-impaired persons who retain residual hearing in the low frequencies. New minimally invasive ("soft surgery") techniques [3,4], combined with new electrode arrays, minimize the trauma induced by the surgical act and can avoid destroying the residual hearing, as a conventional implantation might [5,6]. In parallel, implant manufacturers now integrate bimodal stimulation into a single device (Med-El DUET): the sound is still captured by the BTE microphone, but the low-frequency part of the message is processed by an acoustic unit, as a classical BTE hearing aid would do, while the high-frequency part is processed by another unit which sends an electrical message to the implanted electrodes.

2 Acoustic simulation of the Electric-Acoustic Stimulation (EAS)

2.1 Aim of the study

EAS combines the cochlear implant function for the electrical stimulation of the high frequencies with a hearing-aid unit which amplifies the low frequencies (250-1500 Hz). In this study, we evaluated the number of electrical channels necessary to restore the speech intelligibility lost by a hearing-impaired patient implanted with EAS, for several cut-off frequencies between acoustic and electrical stimulation.
Recent studies [7,8] have shown that using residual low-frequency hearing brings a real benefit for speech understanding. Here we extend these studies to show that, for a person with residual hearing up to a certain frequency Fc, N electrical channels in the implanted part are sufficient to restore intelligibility, and to estimate N as a function of Fc. For the moment our study is limited to a normal-hearing population; once enough patients have been implanted with the EAS system, we will try to confirm the results obtained with this EAS simulator.

2.2 Material & Methods

The study was carried out on 24 normal-hearing subjects (hearing thresholds within normal audiometric limits), aged 18 to 34, tested on the right ear only and without prior practice on the test. To evaluate speech intelligibility as a function of the simulated hearing loss, the subjects listened to lists of words and were asked to repeat what they had understood. The phonetic material was the French Fournier word set (lists of ten disyllabic words each), pronounced by a single male talker. Since the words are disyllabic, the scoring unit is the number of syllables correctly repeated; each list contains ten disyllabic words, so each correct syllable is worth 5% recognition. To simulate the different hearing losses, the speech sounds were modified according to the following parameters: the cut-off frequency Fc between acoustic and electrical stimulation (Fc = 500 Hz, 700 Hz, 1000 Hz or 1414 Hz), the type of stimulation, and the number of channels of the simulated implant (1 to 4). The tests were run in quiet and in a noisy environment at -6 dB SNR.
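Given this scoring rule, the percentage score can be computed as in the short sketch below; the ten-word list length is inferred from the 5%-per-syllable statement rather than stated explicitly, so it is exposed as a parameter.

```python
def syllable_score(correct_syllables: int, n_words: int = 10) -> float:
    """Fournier-style scoring: each list has n_words disyllabic words,
    i.e. 2 * n_words syllables, so one correct syllable is worth
    100 / (2 * n_words) percent (5% for a ten-word list)."""
    if not 0 <= correct_syllables <= 2 * n_words:
        raise ValueError("correct_syllables out of range")
    return 100.0 * correct_syllables / (2 * n_words)
```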
To generate the different types of stimulation, we built a vocoder inspired by earlier studies [9,10,11]: a computer program which receives as input the acoustic signal to be transformed and outputs a sound reproducing that signal as it could be heard by an EAS implantee, depending on the situation to be tested. This output signal is what the listeners heard.

Regarding the signal processing, the residual hearing of a hearing-impaired patient, amplified by a hearing aid, was reproduced by low-pass filtering the signal at the frequency Fc. This method is not entirely faithful to reality, because it does not exactly reflect the auditory perception of a deaf person, but it lets us reproduce the frequency limitation we are interested in. The high-frequency part of the signal was encoded so as to reproduce the functioning of a CI. We considered that the spectral information above 4 kHz was not essential for speech intelligibility, which is why we limited the simulation of the implanted part to the [Fc-4 kHz] range. Cochlear tonotopy follows a lin-log scale (linear in the low frequencies, logarithmic in the high frequencies); since the simulated implant encodes only the high frequencies, the cut-off frequencies between channels were chosen to respect this logarithmic scale. All filtering was done with Cool Edit Pro (Adobe Audition), using the FFT filter function, with coefficients set to pass the passband unattenuated and to strongly attenuate the stopband. To generate the electrical channels, we divided the [Fc-4 kHz] range into N bands, N being the number of channels of the simulated CI (N = 1, 2, 3 or 4).
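As an illustration of the logarithmic channel spacing described above, the sketch below computes band edges equally spaced on a log axis between Fc and 4 kHz. The paper does not list its exact boundary values, so the numbers produced here are only indicative.

```python
def channel_edges(fc_hz: float, n_channels: int, f_max_hz: float = 4000.0) -> list:
    """Split [fc_hz, f_max_hz] into n_channels bands whose cut-off
    frequencies are equally spaced on a logarithmic axis, mirroring the
    roughly logarithmic tonotopy of the cochlea at high frequencies."""
    ratio = (f_max_hz / fc_hz) ** (1.0 / n_channels)
    return [fc_hz * ratio ** i for i in range(n_channels + 1)]

# Example: 4 simulated electrical channels above Fc = 500 Hz
edges = channel_edges(500.0, 4)  # 500, ~841, ~1414, ~2378, 4000 Hz
```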
For each channel, the signal is selected by a band-pass filter (call Fmin and Fmax the limits of the band), and its envelope is extracted by full-wave rectification followed by low-pass filtering at 500 Hz. Each envelope is then used to modulate a white noise filtered into the same frequency region but with a narrower bandwidth: an envelope extracted from the band [Fmin-Fmax] modulates a white noise band-pass filtered between [(Fmin + 75 Hz)-(Fmax - 75 Hz)], so as to avoid spectral spillover outside the band caused by the modulation of the narrow-band noise by the envelope. Once the signal of each channel is built, an amplification step is then necessary.
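The per-channel chain just described (band-pass selection, envelope extraction by full-wave rectification and low-pass filtering, modulation of a noise carrier narrowed by 75 Hz on each side, then energy matching) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses a brick-wall FFT filter in the spirit of the Cool Edit Pro FFT filter mentioned in the text, and the 500 Hz envelope cut-off is assumed.

```python
import numpy as np

def fft_bandpass(x, fs, f_lo, f_hi):
    """Brick-wall band-pass via the FFT: zero every bin outside
    [f_lo, f_hi] and transform back to the time domain."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def simulate_channel(x, fs, f_min, f_max, env_cut=500.0, guard=75.0, seed=0):
    """One simulated electrical channel: band-pass the signal, extract
    its envelope (full-wave rectification + low-pass at env_cut),
    modulate a white-noise carrier band-limited to
    [f_min + guard, f_max - guard], then rescale the result so it keeps
    the energy of the band-passed input."""
    band = fft_bandpass(x, fs, f_min, f_max)           # analysis band
    env = fft_bandpass(np.abs(band), fs, 0.0, env_cut) # rectify + low-pass
    noise = np.random.default_rng(seed).standard_normal(len(x))
    carrier = fft_bandpass(noise, fs, f_min + guard, f_max - guard)
    chan = env * carrier                               # modulated carrier
    rms_ref = np.sqrt(np.mean(band ** 2))
    rms_ch = np.sqrt(np.mean(chan ** 2))
    return chan * (rms_ref / rms_ch) if rms_ch > 0 else chan
```

Summing the low-pass acoustic part with the N channels produced this way yields the EAS test signal.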

The modulation of the narrow-band noise by the envelope reduces the energy initially contained in the signal, so we must ensure that the energy contained in each simulated electrical channel equals the energy of the initial acoustic signal filtered by the band-pass corresponding to that channel. Finally, once all the electrical channels have been amplified, the acoustic and electrical channels are summed to obtain the test signal (figure 2).

Among the various types of stimulation tested, the acoustic-only stimulation reproduces the audition of a deaf person who has residual hearing in the low frequencies, the electric-only stimulation simulates the audition of a cochlear implantee, and the acoustic + electric situation reproduces the sound perceived by a person implanted with an EAS system. All tracks were randomized and played the same number of times for every subject, and the word lists were spread over the situations to test, which prevents any learning of the lists through repetition. Figure 2 shows how the spectral resolution is degraded in the part encoding the simulated implant.

Fig. 2. Spectrogram of the French word « le bouchon » encoded for the situation Fc = 500 Hz with 4 electrical channels. The brackets show the different bands: A = acoustic stimulation; 1, 2, 3 and 4 = numbers of the simulated electrical channels.

2.3 Preliminary results

For the interpretation of the results, we considered that speech intelligibility was restored when the subject's syllable-recognition score exceeded 90%. The first results (figures 3 and 4) suggest that, in quiet, one channel in the part coded by the CI is sufficient to restore the loss of a deaf person who has residual hearing up to 1414 Hz; likewise, 2 channels are sufficient with residual hearing up to 1000 Hz, 3 channels up to 700 Hz, and 4 channels up to 500 Hz. Studies [12], [13], [14], carried out with EAS-implanted patients (figure 5), report recognition scores for monosyllabic words of up to 13% with the acoustic part only, between 45% and 61% with the electrical part only, and between 50% and 75% for the EAS situation. For the EAS and cochlear-implant-only situations, the implantees' results are quite consistent with those obtained with our simulation. On the other hand, for the acoustic-part-only situation, we cannot quantify exactly the cut-off frequency of the patients' residual hearing, so we cannot claim a perfect match between our model and the implantees' results. Figure 6 shows that even in a very noisy environment (cocktail-party noise), with residual hearing up to 1000 Hz and 4 added electrical channels, scores of about 50% can be reached.

3 Conclusion

After further analysis of the results, we expect to show that, for the same delivered information, the electrical channels of the implant contribute more for an implantee who has residual hearing than for one who has none. We also wish to compare the results of this simulation with the performance of EAS-implanted patients in real situations, for the same conditions (hearing aid only, CI only and EAS), once enough EAS implantees are available.

Finally, our model has some limits. On the one hand, the envelopes of our electrical signals are low-pass filtered at 500 Hz, whereas CI stimulation allows envelope encoding at higher rates, so the results we obtain are under-evaluated compared with reality. On the other hand, our simulation suppresses the interactions between the electrical channels, which leads to an over-evaluation of the results.

Fig. 3. Syllable recognition (%) as a function of the cut-off frequency between acoustic and electrical stimulation, in quiet. Every point is the mean result for the 24 normal-hearing subjects; vertical bars represent the standard error. The curves represent, from bottom to top: acoustic stimulation only, then acoustic stimulation plus 1, 2, 3 and 4 electrical channels. Intelligibility is restored (> 90%) with 1 channel if the hearing-impaired person has residual hearing up to 1414 Hz, with 2 channels up to 1000 Hz, 3 channels up to 700 Hz, and 4 channels up to 500 Hz.

Fig. 4. Syllable recognition (%) as a function of the cut-off frequency between acoustic and electrical stimulation, in quiet (mean results for the 24 normal-hearing subjects), for the electrical part only. The curves represent, from bottom to top: 1, 2, 3 and 4 electrical channels.

Fig. 5. Intelligibility of monosyllabic words (%) obtained in the studies [12], [13], [14] for the situations hearing aid only (HA), cochlear implant only (CI), and cochlear implant combined with a hearing aid (EAS).

Fig. 6. Syllable recognition (%) as a function of the cut-off frequency between acoustic and electrical stimulation, in a noisy environment at -6 dB SNR. Every point is the mean result for the 24 normal-hearing subjects; vertical bars represent the standard error.

References

[1] J. Eggermont, "Between sound and perception: reviewing the search for a neural code", Hear. Res., vol. 157(1-2), pp. 1-42, 2001.
[2] G. von Békésy, Experiments in Hearing, New York: McGraw-Hill, 1960.
[3] R.J.S. Briggs et al., "Comparison of round window and cochleostomy approaches with a prototype hearing preservation electrode", Audiol. Neurotol., vol. 11(suppl 1), pp. 42-48, 2006.
[4] T. Lenarz et al., "Temporal bone results and hearing preservation with a new straight electrode", Audiol. Neurotol., vol. 11(suppl 1), pp. 34-41, 2006.
[5] C.J. James et al., "Combined electroacoustic stimulation in conventional candidates for cochlear implantation", Audiol. Neurotol., vol. 11(suppl 1), pp. 57-62, 2006.
[6] B.J. Gantz and C.W. Turner, "Combining acoustic and electrical hearing", Laryngoscope, vol. 113, pp. 1726-1730, 2003.
[7] M.F. Dorman, A.J. Spahr, P.C. Loizou, C.J. Dana and J.S. Schmidt, "Acoustic simulation of combined electric and acoustic hearing (EAS)", Ear & Hearing, vol. 26(4), pp. 371-380, 2005.
[8] C.W. Turner, B.J. Gantz, C. Vidal and A. Behrens, "Speech recognition in noise for cochlear implant listeners: benefits of residual acoustic hearing", J. Acoust. Soc. Am., vol. 115, pp. 1729-1735, 2004.
[9] R.V. Shannon, F.G. Zeng, V. Kamath, J. Wygonski and M. Ekelid, "Speech recognition with primarily temporal cues", Science, vol. 270, pp. 303-304, 1995.
[10] R.V. Shannon, F.G. Zeng and J. Wygonski, "Speech recognition with altered spectral distribution of envelope cues", J. Acoust. Soc. Am., vol. 104, pp. 2467-2476, 1998.
[11] C.W. Turner, P.E. Souza and L.N. Forget, "Use of temporal envelope cues in speech recognition by normal and hearing-impaired listeners", J. Acoust. Soc. Am., vol. 97, pp. 2568-2576, 1995.
[12] C. von Ilberg et al., "Electric-acoustic stimulation of the auditory system", ORL, vol. 61, pp. 334-340, 1999.
[13] J. Kiefer et al., "Combined electric and acoustic stimulation of the auditory system: results of a clinical study", Audiol. Neurotol., vol. 10, pp. 134-144, 2005.
[14] W.K. Gstoettner et al., "Ipsilateral electric acoustic stimulation of the auditory system: results of long-term hearing preservation", Audiol. Neurotol., vol. 11(suppl 1), pp. 49-56, 2006.