Who are cochlear implants for?

Essential feature: people with little or no hearing, little conductive component to the loss, and little or no benefit from a hearing aid. Implants seem to work best in adults who had a significant period of relatively good hearing before becoming profoundly deaf and who developed good language, and in children who are young enough to develop language through an implant.

The implant substitutes for faulty or missing inner hair cells by direct electrical stimulation of residual auditory nerve fibres (brain stem implants are also being used). At a minimum, a system needs: a microphone plus a processor, electrodes in the cochlea, and a way to connect them (radio transmission).

How the signal travels:
1. Sound is received by the microphone of the speech processor.
2. The sound is digitised, analysed and transformed into coded signals.
3. Coded signals are sent to the transmitter.
4. The transmitter sends the code across the skin to the internal implant, where it is converted to electric signals.
5. Electric signals are sent to the electrode array to stimulate the residual auditory nerve fibres in the cochlea.
6. Signals travel to the brain, carrying information about sound.

The implant in place: the implanted radio receiver, with the electrode array inserted in the inner ear.

What are the essential purposes of a speech processor?
- To transduce acoustical signals into an electrical form.
- To process the acoustic signal in various ways (e.g., filter, compress).
- To convert (or code) the resulting electrical signals into a form appropriate for stimulation of the auditory nerve.

What other functions can and might be implemented in a speech processor?
- Minimising the effects of background noise.
- The possibility of different processing schemes for different situations.
- Enhancing speech features that contribute most to speech intelligibility.

What should an implant do? Mimic the most important functions of the normal ear. So what does a normal ear do? Frequency analysis, amplitude compression, and preservation of temporal features, both slow and fast (e.g., through envelope following and phase locking).

Common elements in speech processing:
- A microphone to transduce acoustic signals into electrical ones.
- Amplitude compression to address the very limited dynamic range of electrocochlear stimulation (a small compression sketch follows below).
- Use of the place principle for multiple electrodes (mapping low to high frequency components onto apical to basal cochlear places).

But speech processing schemes vary significantly in other ways:
- Pulsatile vs. continuously varying ('wavy') stimulation. Not to be confused with analogue vs. digital implementations: all electrical stimulation is analogue.
- Simultaneous vs. non-simultaneous presentation of currents to different electrodes. Non-simultaneous stimulation requires pulsatile stimulation.
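To make the compression idea concrete, here is a minimal sketch in Python of power-law compression of a channel envelope. The function name, the 0.3 exponent and the 60 dB example are illustrative assumptions, not values from any particular device.

```python
import numpy as np

def compress_envelope(env, exponent=0.3, eps=1e-6):
    """Power-law compression of a channel envelope: squeezes the wide acoustic
    range into a much narrower range before it is converted to current."""
    return np.maximum(env, eps) ** exponent   # avoid zeros, then compress

# Example: a 60 dB change in input level becomes a much smaller output change
quiet, loud = 0.001, 1.0                      # amplitudes 60 dB apart
print(compress_envelope(loud) / compress_envelope(quiet))   # ~8x instead of 1000x
```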

Multi-channel systems: all contemporary systems present different waveforms to different electrodes to mimic the frequency analysis of the normal mammalian cochlea. Think of the peripheral auditory system as analogous to a filter bank.

The filter bank analogy: imagine each afferent auditory nerve fibre has a bandpass filter attached to its input, with centre frequencies decreasing from base to apex. [Figure: overlapping bandpass filter responses, gain in dB against frequency, roughly 100 Hz to 10 kHz.]

The no-brainer cochlear implant speech processing strategy, a simple speech processing scheme for a cochlear implant: Compressed Analogue (CA). Use an electronic filter bank to substitute for the auditory filter bank (the mechanics of the basilar membrane). The acoustic signal is split into bands (100-400 Hz, 400-1000 Hz, 1000-2000 Hz, 2000-3500 Hz, 3500-5000 Hz, 5000-8000 Hz), and each band's output becomes the electrical signal for one electrode.
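A minimal sketch of the filter bank analogy, assuming the band edges listed above and standard SciPy Butterworth filters; the filter order, sampling rate and the use of white noise as a stand-in for speech are illustrative choices, not part of the original material.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def make_filterbank(band_edges, fs, order=4):
    """One Butterworth bandpass filter per channel/electrode."""
    return [butter(order, (lo, hi), btype='bandpass', fs=fs, output='sos')
            for lo, hi in band_edges]

def analyse(signal, filterbank):
    """Split the acoustic signal into one band-limited waveform per channel."""
    return [sosfiltfilt(sos, signal) for sos in filterbank]

fs = 32000
bands = [(100, 400), (400, 1000), (1000, 2000),
         (2000, 3500), (3500, 5000), (5000, 8000)]    # the bands on the slide
x = np.random.randn(fs)                               # 1 s of noise as a stand-in for speech
channels = analyse(x, make_filterbank(bands, fs))
print(len(channels), channels[0].shape)               # 6 channels, each the length of x
```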

The most common current method: Continuous Interleaved Sampling (CIS). Use a filter bank approach to represent spectral shape, with non-simultaneous pulsatile stimulation to minimise electrode interactions, and with pulse amplitudes modulated by the envelope of the bandpass filter outputs.

[Figures from Philipos Loizou's tutorial, http://www.utdallas.edu/~loizou/cimplants/tutorial/ : CIS block diagram and "CIS in detail", with channels centred at roughly 0.5, 1, 2 and 4 kHz.]
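A rough Python sketch of the core CIS steps just described: bandpass filtering, envelope extraction by rectification and lowpass smoothing, and sampling of each channel's envelope at interleaved pulse times. Band edges, filter orders, the envelope cutoff and the pulse rate are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def channel_envelope(x, lo, hi, fs, env_cutoff=400.0):
    """One CIS channel: bandpass, half-wave rectify, lowpass to get the envelope."""
    bp = butter(4, (lo, hi), btype='bandpass', fs=fs, output='sos')
    lp = butter(2, env_cutoff, btype='lowpass', fs=fs, output='sos')
    return sosfilt(lp, np.maximum(sosfilt(bp, x), 0.0))

def cis_pulses(x, bands, fs, pulse_rate=900.0):
    """Sample each channel's envelope at interleaved (non-simultaneous) pulse times.
    Returns (sample index, channel, pulse amplitude) triples in time order."""
    envs = [channel_envelope(x, lo, hi, fs) for lo, hi in bands]
    period = fs / pulse_rate                  # samples between pulses on one channel
    pulses = []
    for ch, env in enumerate(envs):
        # stagger channels so that no two electrodes are pulsed at the same instant
        idx = (np.arange(0, len(x), period) + ch * period / len(bands)).astype(int)
        pulses.extend((i, ch, env[i]) for i in idx if i < len(x))
    return sorted(pulses)

fs = 16000
bands = [(100, 400), (400, 1000), (1000, 2000), (2000, 4000)]
x = np.random.randn(fs)                       # stand-in for 1 s of speech
print(cis_pulses(x, bands, fs)[:4])
```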

Simulations can give us some idea of what an implant user might experience. But... caveat perceptor!

Noise-excited vocoding: these simulations are not exactly what an implant sounds like, but you can get some idea of the kind of information that gets through. Note the important variants in rectification, lowpass filter cutoffs, etc.
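A minimal sketch of a noise-excited vocoder of the kind used in these simulations: extract each band's envelope, use it to modulate noise filtered into the same band, and sum the channels. The band edges, envelope cutoff (one of the "important variants" just mentioned) and the random stand-in signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def noise_vocode(x, fs, band_edges, env_cutoff=160.0, seed=0):
    """Noise-excited vocoder: per band, extract the envelope (rectify + lowpass)
    and use it to modulate band-limited noise; sum the channels to resynthesise."""
    rng = np.random.default_rng(seed)
    lp = butter(2, env_cutoff, btype='lowpass', fs=fs, output='sos')
    out = np.zeros(len(x))
    for lo, hi in band_edges:
        bp = butter(4, (lo, hi), btype='bandpass', fs=fs, output='sos')
        env = sosfilt(lp, np.abs(sosfilt(bp, x)))            # full-wave rectified envelope
        carrier = sosfilt(bp, rng.standard_normal(len(x)))   # band-limited noise carrier
        out += env * carrier
    return out / np.max(np.abs(out))                         # normalise the sum

fs = 16000
bands = [(100, 400), (400, 1000), (1000, 2000), (2000, 4000), (4000, 7000)]
speech = np.random.randn(fs)          # replace with a real speech waveform
vocoded = noise_vocode(speech, fs, bands)
```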

Note the similarity to CIS (and normal cochlear) processing. [Figure: analysis and output filter-bank responses, gain in dB against frequency, roughly 100 Hz to 10 kHz.]

Separate channels in a 6-channel simulation... and when summed together ("Children like strawberries").

Never mind the quality... feel the intelligibility: effects of channel number ("Children like strawberries" processed with 1, 2, 4, 8 and 16 channels).

Other schemes: necessity is the mother of invention. The Spectral Peak strategy, SPEAK (an "n of m" strategy).

The problem (historically): how could devices running at relatively slow rates be used for CIS, which required high rates of pulsatile stimulation? The solution: pick and present pulses only at the significant peaks in the spectrum. [Figure: the acoustic signal passes through 20 programmable filters spanning 100 Hz to 8 kHz (e.g. 100-200 Hz, 200-350 Hz, 350-500 Hz, 500-800 Hz, ... 7500-8000 Hz) and a spectral peak extractor, whose output forms the electrical signal.]
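A sketch of the "n of m" selection idea for a single analysis frame, assuming 20 filter outputs and 6 selected maxima; the numbers and function names are illustrative, not a description of the actual SPEAK firmware.

```python
import numpy as np

def pick_n_of_m(channel_envelopes, n=6):
    """n-of-m selection: for one analysis frame, keep only the n channels with
    the largest envelope amplitudes and silence the rest."""
    env = np.asarray(channel_envelopes, dtype=float)
    keep = np.argsort(env)[-n:]                 # indices of the n largest peaks
    stimulated = np.zeros_like(env)
    stimulated[keep] = env[keep]
    return stimulated

# Example frame: 20 filter outputs, only the 6 biggest drive electrodes
frame = np.abs(np.random.randn(20))
print(pick_n_of_m(frame, n=6))
```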

SPEAK stimulation pattern. [Figure: example SPEAK stimulation pattern.]

Restricted dynamic range means compression is crucial: absolute thresholds and maximum acceptable loudness levels (Nelson et al., 1996, JASA). (A sketch of mapping acoustic level onto the electrical dynamic range follows below.)

Intensity jnds in electrical (as opposed to acoustic) stimulation 1) deviate more from Weber's Law and 2) are typically smaller, but not by enough to offset the reduced dynamic range: the CI users here had 7-45 discriminable steps across the total dynamic range, compared to 83 in normal hearing (Nelson et al., 1996, JASA).

Acoustic/electrical loudness matches: four different stimulation frequencies vs. a contralateral 1.5 kHz tone (Eddington et al., 1978, Ann Otol Rhinol Laryngol).
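A sketch of how a compressed acoustic envelope might be mapped onto the narrow electrical range between threshold (T) and maximum comfortable level (C); the logarithmic mapping, the 60 dB input range and the T/C values are illustrative assumptions, not a specific manufacturer's map.

```python
import numpy as np

def map_to_electric(envelope, T, C, acoustic_floor=1e-3):
    """Map an acoustic envelope (0..1) onto the narrow electrical dynamic range
    between threshold (T) and maximum comfortable level (C), in arbitrary
    device units, using a logarithmic mapping."""
    env = np.clip(envelope, acoustic_floor, 1.0)
    floor_db = 20 * np.log10(acoustic_floor)            # e.g. -60 dB
    frac = (20 * np.log10(env) - floor_db) / (-floor_db)  # 0 at floor, 1 at maximum
    return T + frac * (C - T)

env = np.array([0.001, 0.01, 0.1, 1.0])       # 60 dB of acoustic range
print(map_to_electric(env, T=100, C=200))     # squeezed into the T..C range
```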

Loudness grows much faster in electrical stimulation (hyper-recruitment!).

Temporal resolution: gap detection (Shannon, 1993).

Temporal resolution: modulation detection at 100 Hz is more dependent on level, as for intensity jnds (Fu, 2002, NeuroReport).

Temporal resolution: TMTFs are likewise more dependent on level, but otherwise similar to those of normal listeners, shown as dashed lines in the original figure (Shannon, 1992, J Acoust Soc Am).
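For concreteness, a sketch of the kind of sinusoidally amplitude-modulated noise used to measure modulation detection thresholds and TMTFs; the duration, modulation rate and modulation depth are illustrative assumptions.

```python
import numpy as np

def sam_noise(duration, fs, mod_rate=100.0, mod_depth=0.5, seed=0):
    """Sinusoidally amplitude-modulated noise: modulator is (1 + m*sin), where
    m is the modulation depth varied to find the detection threshold."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration * fs)) / fs
    carrier = rng.standard_normal(t.size)
    modulator = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate * t)
    return modulator * carrier

# 500 ms of noise modulated at 100 Hz, 50% depth (20*log10(0.5) is about -6 dB)
x = sam_noise(0.5, fs=16000, mod_rate=100.0, mod_depth=0.5)
```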

Relationships to performance with speech: modulation detection thresholds measured at 100 Hz, at a number of levels (previous slide; Fu, 2002, NeuroReport).

Perceiving variations in the amount of activity across electrodes is essential for the signalling of spectral shape. Spectral shape is encoded by relatively slow level changes across electrodes. Striking fact: preservation of fast modulation rates is not necessary for intelligibility in noise-vocoded speech.

Restricting the modulation rates allowable in noise-excited vocoding: slow level changes across channels ("Th-ee-z d-ay-s a ch-i-ck-en l-e-g is a r-a-re d-i-sh").

Discrimination of rippled noise: find the maximum ripple density at which it is possible to discriminate standard rippled noise from its spectrally inverted version. This test is hypothesised to provide a direct measure of listeners' ability to perceive the frequency locations of spectral peaks in a broadband acoustic signal (Henry et al., 2005, J Acoust Soc Am). (A sketch of how such a stimulus can be generated follows below.)

Relationships to performance with speech in quiet, and a statistical interlude on the effect of outliers (Henry et al., 2005, J Acoust Soc Am): vowel identification (12 hVd words by 20 talkers) and consonant identification (16 VCVs by 4 talkers), with r² = 0.28 (p < 0.01) and r² = 0.09 (p > 0.15).
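A sketch of how standard and inverted rippled-noise stimuli of this general kind can be generated, by imposing a sinusoidal (in log frequency) spectral ripple on broadband noise; the ripple depth, density, bandwidth and FFT-based method are illustrative assumptions rather than the exact stimuli of Henry et al.

```python
import numpy as np

def rippled_noise(duration, fs, ripples_per_octave=1.0, depth_db=20.0,
                  f_lo=100.0, f_hi=5000.0, inverted=False, seed=0):
    """Broadband noise with a sinusoidal spectral ripple on a log-frequency axis.
    The inverted version shifts the ripple by half a cycle, swapping peaks and valleys."""
    rng = np.random.default_rng(seed)
    n = int(duration * fs)
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    shape_db = np.zeros_like(freqs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    phase = np.pi if inverted else 0.0
    octaves = np.log2(freqs[band] / f_lo)
    shape_db[band] = (depth_db / 2) * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    out = np.fft.irfft(spectrum * 10 ** (shape_db / 20), n)
    return out / np.max(np.abs(out))

standard = rippled_noise(0.5, 16000, ripples_per_octave=2.0)
inverted = rippled_noise(0.5, 16000, ripples_per_octave=2.0, inverted=True)
```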

Statistical interlude: the effect of outliers.

Relationships to performance with speech in noise: SRT determined for selection of one of 12 spondees, with lower SRTs indicating better performance; r² = 0.37 (p < 0.005) and r² = 0.33 (p < 0.006) (Won et al., 2005, JARO).

Why is speech melody (voice pitch) important to hear?
- It contributes to speech intelligibility in all languages.
- It is a good supplement to lipread information.
- It may play an important role in separating speech from background noises.
- It appears to play a more crucial role for the young child developing language.
- It is crucial in so-called tone languages.

Pitch based on a purely temporal code appears to be limited to 300 Hz or so (Merzenich et al., 1973; Shannon, 1993).

Pitch based on a purely temporal code (Merzenich et al., 1973): best performance for normal listeners is about 0.2% over the entire range.

Melody recognition: 12 songs familiar to most people, synthesised with and without natural rhythm (Kong et al., 2004).

CI users classifying rise/fall contours on diphthongs (Green et al., 2004, J Acoust Soc Am): melody is coded as periodicity in rapid within-channel patterns ("Th-ee-z d-ay-s a ch-i-ck-en l-e-g is a r-a-re d-i-sh"). (A sketch of within-channel periodicity carrying pitch follows below.)
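A toy illustration of "melody coded as periodicity in rapid within-channel patterns": a high-rate pulse train amplitude-modulated at F0, with the modulation period then recovered from the autocorrelation. The rates, depth and autocorrelation method are illustrative assumptions, not the stimuli of the studies cited above.

```python
import numpy as np

def modulated_pulse_train(f0, duration, fs=16000, pulse_rate=1000.0, depth=0.8):
    """High-rate pulse train whose amplitude is modulated at F0: a crude stand-in
    for the within-channel periodicity that carries voice pitch to a CI user."""
    t = np.arange(int(duration * fs)) / fs
    pulses = (np.floor(t * pulse_rate) != np.floor((t - 1 / fs) * pulse_rate)).astype(float)
    return pulses * (1.0 + depth * np.sin(2 * np.pi * f0 * t))

def estimate_modulation_rate(x, fs, f_min=60.0, f_max=400.0):
    """Recover the modulation rate from the autocorrelation peak within a plausible F0 range."""
    xc = x - x.mean()
    ac = np.correlate(xc, xc, mode='full')[x.size - 1:]      # non-negative lags only
    lo, hi = int(fs / f_max), int(fs / f_min)
    return fs / (lo + np.argmax(ac[lo:hi]))

x = modulated_pulse_train(f0=125.0, duration=0.2)
print(estimate_modulation_rate(x, 16000))    # should come out close to 125 Hz
```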

The representation of melody can be messy! Perception of the fundamental pitch of complex waves is very poor: lower harmonics cannot be resolved as in normal hearing, phase-locking seems different, and the mismatch between the place of excitation and the temporal pattern may be important.

What happens when an electrode array is incompletely inserted? Simulations of incomplete insertions: 0 mm, 2.2 mm, 4.3 mm and 6.5 mm. (A sketch of such a frequency-shifted simulation follows below.)
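A sketch of how an incomplete-insertion simulation can be built as a frequency-shifted noise vocoder: analysis bands as usual, but noise carriers placed in higher-frequency (more basal) bands. The particular band edges and the size of the shift are illustrative and do not correspond to the 0-6.5 mm conditions above.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def shifted_noise_vocoder(x, fs, analysis_bands, carrier_bands, env_cutoff=160.0, seed=0):
    """Noise vocoder in which the carrier bands are shifted upwards in frequency
    relative to the analysis bands, mimicking the mismatch produced by an
    incompletely inserted electrode array."""
    rng = np.random.default_rng(seed)
    lp = butter(2, env_cutoff, btype='lowpass', fs=fs, output='sos')
    out = np.zeros(len(x))
    for (alo, ahi), (clo, chi) in zip(analysis_bands, carrier_bands):
        a_sos = butter(4, (alo, ahi), btype='bandpass', fs=fs, output='sos')
        c_sos = butter(4, (clo, chi), btype='bandpass', fs=fs, output='sos')
        env = sosfilt(lp, np.abs(sosfilt(a_sos, x)))              # envelope from analysis band
        out += env * sosfilt(c_sos, rng.standard_normal(len(x)))  # carrier in shifted band
    return out / np.max(np.abs(out))

fs = 16000
analysis = [(100, 300), (300, 700), (700, 1500), (1500, 3000)]
shifted  = [(300, 700), (700, 1500), (1500, 3000), (3000, 6000)]  # basalward shift
speech = np.random.randn(fs)              # replace with real speech
sim = shifted_noise_vocoder(speech, fs, analysis, shifted)
```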

Can the deleterious effects of spectral shifting be overcome over time? In normal listeners tested with simulations, scores for words in sentences improved from pre-training to post-training over about 3 hours of experience using CDT (Rosen et al., 1999, J Acoust Soc Am).

Hair cell substitution? [Figure, base to apex, from Lynne Werner: http://depts.washington.edu/sphsc461/ci_notes.htm ]

Why is a CI not as good as normal hearing?
- It is a damaged auditory system, presumably with accompanying neural degeneration (e.g., dead regions).
- Electrodes may not extend fully along the length of the basilar membrane (BM), so there is mis-matched tuning and restricted access to apical regions (where nerve survival is typically greatest).
- 3000 IHCs vs. a couple of dozen electrodes, hence poorer frequency selectivity.
- Current spreads across the BM, hence poorer frequency selectivity.
- Less independence of firing across nerve fibres, which appears to affect temporal coding.
- Small dynamic ranges, but intensity jnds are not correspondingly smaller, hence fewer discriminable steps in loudness.
- But good temporal and intensity resolution.