Acoustics, signals & systems for audiology. Psychoacoustics of hearing impairment


Three main types of hearing impairment:
- Conductive: sound is not properly transmitted from the outer to the inner ear
- Sensorineural: damage to the inner ear
- Retrocochlear: damage to the auditory nerve and beyond

What do we know about physiological reflections of sensorineural hearing loss? Focus on hair cell damage.

Auditory Nerve Structure and Function: tuning curves. [Figure: cochlear frequency map from apex to base, with auditory nerve tracer and single-unit recording electrode; slide courtesy of Chris Brown, Mass Eye & Ear; Liberman (1982)]

Outer hair cells are relatively vulnerable to damage, leading to:
- decreases in basilar membrane movement and hence increased thresholds to sound: hearing loss
- a loss of cochlear compression (a linearised input/output function, as sketched below): reduced dynamic range and loudness recruitment
- loss of frequency tuning (analogous to widened filters in an auditory filter bank): degraded frequency selectivity
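The loss of compression can be pictured with a toy input/output calculation. The following is a minimal sketch, not data from the lecture: the knee point, cochlear gain and compressive slope are assumed, illustrative values only.

```python
# Illustrative sketch of basilar-membrane input/output functions in dB terms.
# All parameter values (knee point, gain, compressive slope) are assumptions
# chosen only to show the shape of the curves, not measured values.

def bm_output_normal(input_db, knee_db=30.0, slope=0.2, gain_db=40.0):
    """Schematic normal I/O: full cochlear gain below the knee point,
    compressive growth (slope dB per dB) above it."""
    if input_db < knee_db:
        return input_db + gain_db
    return knee_db + gain_db + slope * (input_db - knee_db)

def bm_output_impaired(input_db):
    """Schematic impaired I/O: cochlear amplifier lost, so no gain and
    no compression (growth of 1 dB per dB)."""
    return float(input_db)

for level in range(0, 101, 20):
    print(f"{level:3d} dB SPL in -> normal {bm_output_normal(level):5.1f}, "
          f"impaired {bm_output_impaired(level):5.1f} (arbitrary dB out)")
```

Note how the impaired function is linear: a wide range of input levels is no longer squeezed into a smaller range of basilar-membrane response, which is the basis of the reduced dynamic range and loudness recruitment listed above.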

Input/output functions on the basilar membrane near CF in an impaired ear

Frequency response of a single place on the BM in an impaired ear (furosemide) Ruggero and Rich (1991)

Inner hair cell (IHC) damage leads to a sparser representation of all auditory information passed on to higher auditory centres. There may even be regions of the cochlea without any IHCs, so-called dead regions. Hence there may be a degradation of a wide variety of auditory abilities (e.g. temporal resolution).

Relation of Hair Cell loss to audiogram

Auditory Nerve Fiber Responses From Damaged Cochleae. [Figure panels: normal OHCs and IHCs with intact stereocilia; IHC stereocilia damage (missing tallest stereocilia, fused stereocilia); OHC damage; massive IHC stereocilia damage; total OHC loss. Slide courtesy of Chris Brown, Mass Eye & Ear. Liberman and Kiang (1978); Liberman and Beil (1979); Liberman and Dodds (1984)]

Effects of OHC damage and total loss on tuning in the auditory nerve

Psychoacoustic consequences of sensorineural (cochlear) hearing loss:
- raised thresholds
- reduction of dynamic range and abnormal loudness growth
- impaired frequency selectivity
What is the impact on speech perception?

A normal auditory area. [Figure: the region between the threshold curve and the uncomfortable loudness level (ULL); its vertical extent is the dynamic range]

An auditory area in sensorineural loss. [Figure: the threshold curve is raised toward the ULL, shrinking the auditory area]

Hearing Loss & Speech Perception. Words recognised from simple sentences in quiet by aided hearing-impaired adults, as a function of average hearing loss at 0.5, 1 and 2 kHz. (After Boothroyd, 1990)

The Role of Audibility. Much of the impact of hearing loss is thought of in terms of audibility: how much of the information in speech is audible, over frequency and over intensity? Consider the audible area of frequency and intensity in relation to the range of frequencies and intensities in speech.

Speech energy and audibility. Blue: the energy range of speech across frequency, relative to the normal threshold of hearing. Red: the range of audible levels across frequency for a typical moderate sloping hearing loss. Intelligibility can be predicted from the portion of the speech range that is audible. Hearing aids can be set to increase audibility by overall amplification and by shaping of the frequency response.

Speech energy and audibility. Blue: the energy range of speech after amplification.

Articulation Index (AI) or Speech Intelligibility Index (SII):
- an attempt to quantify the role of audibility in speech perception
- related to standard rules for setting HA frequency responses
- intelligibility is assumed to relate to a simple sum of the contributions from different frequency bands
- some frequency bands are more important than others

Frequency importance weightings: AI. The Articulation Index A (predicted intelligibility) is determined by summing W × I over frequency bands, A = Σ I(f) × W(f), where I is the band importance weight and W is the proportion of a 30 dB dynamic range of speech in that band that is audible (in the illustrated example, W at 2000 Hz is approximately 0.6).
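The band-by-band sum can be written out directly. This is a minimal sketch of the calculation as described on the slide; the band importance weights and speech/threshold levels below are illustrative placeholders, not the standardised ANSI values.

```python
# Sketch of the Articulation Index sum A = sum over bands of I x W.
# W follows the rule above: the proportion of a 30 dB speech dynamic range
# in each band that lies above the listener's threshold.
# Band weights and levels are made-up illustrative values.

def band_audibility(speech_peak_db, threshold_db, dynamic_range_db=30.0):
    """Proportion W of the speech dynamic range in one band that is audible."""
    floor_db = speech_peak_db - dynamic_range_db
    audible_db = speech_peak_db - max(threshold_db, floor_db)
    return min(max(audible_db / dynamic_range_db, 0.0), 1.0)

def articulation_index(bands):
    """A = sum of (importance I) x (audibility W) across frequency bands."""
    return sum(importance * band_audibility(peak, threshold)
               for importance, peak, threshold in bands)

# (importance weight, speech peak dB SPL, hearing threshold dB SPL) per band
example_bands = [
    (0.10, 55, 20),   # 250 Hz
    (0.20, 60, 30),   # 500 Hz
    (0.25, 55, 45),   # 1 kHz
    (0.25, 50, 60),   # 2 kHz: partly inaudible for this sloping loss
    (0.20, 45, 75),   # 4 kHz: inaudible
]
print(f"AI = {articulation_index(example_bands):.2f}")  # 0 to 1; higher = more audible speech
```

With a flat mild loss the sum approaches 1; as high-frequency thresholds rise, the heavily weighted mid and high bands stop contributing and the predicted intelligibility falls.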

AI can predict, for a given audiogram, the hearing aid frequency response that should maximise intelligibility. This is similar to standard HA fitting rules, although these generally recommend less gain than AI where losses are more severe.

AI predictions. AI predictions are reasonable for mild and moderate hearing losses, but audibility alone is not enough to explain the limits to speech recognition in severe and profound losses.

Dead regions: an extreme case of increased threshold.
- Regions in the inner ear with absent or non-functioning inner hair cells (IHCs).
- No BM vibration in such regions is sensed directly.
- But spread of BM vibration means that tones can be detected off-place, by auditory nerve fibres typically sensitive to a different frequency region.
- Most clearly seen when measuring PTCs, which are directly interpretable.

Psychophysical tuning curves (PTCs). [Figure: schematic probe and masker tones (around 800-1200 Hz, probe near 20 dB SPL), and the resulting plot of masker level (dB SPL) against masker frequency (Hz)]

Psychophysical tuning curves (PTCs). Determine the minimum level of a narrow-band masker, at a wide variety of masker frequencies, that will just mask a fixed low-level sinusoidal probe of set level and frequency.
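One way to picture the measurement is as an adaptive tracking procedure, sketched below. This is only a schematic, assuming a simple up-down staircase; `listener_detects_probe` is a hypothetical stand-in for the listener's yes/no response and is not part of any real clinical test.

```python
# Hedged sketch of tracking a single PTC point with a simple up-down staircase.
# `listener_detects_probe` is a hypothetical callable returning True while the
# fixed low-level probe is still audible in the presence of the masker.

def measure_ptc_point(masker_freq_hz, listener_detects_probe,
                      start_level_db=40.0, step_db=2.0, n_reversals=8):
    """Estimate the masker level (dB SPL) at masker_freq_hz that just masks the
    probe: raise the masker while the probe is heard, lower it once the probe
    is masked, and average the levels at which the direction reverses."""
    level, direction, reversals = start_level_db, +1, []
    while len(reversals) < n_reversals:
        heard = listener_detects_probe(masker_freq_hz, level)
        new_direction = +1 if heard else -1   # masker up if probe still audible
        if new_direction != direction:
            reversals.append(level)
        direction = new_direction
        level += direction * step_db
    return sum(reversals) / len(reversals)
```

Repeating this across masker frequencies traces out the PTC; normally its tip lies at the probe frequency, but with a dead region the tip shifts toward the healthy place that actually detects the probe.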

Physiological TCs for a range of auditory nerve fibres: normal hearing. [Figure: tuning curves, level (dB SPL) against log frequency, 100 Hz to 10 kHz]

Hearing loss without a dead region. [Figure: tuning curves, level (dB SPL) against log frequency, 100 Hz to 10 kHz]

Hearing loss with a dead region. [Figure: tuning curves, level (dB SPL) against log frequency, 100 Hz to 10 kHz]

SNHL without dead region: PTCs. [Figure: PTCs, with probe level and frequency marked]

SNHL with dead region: PTCs

Diagnosing dead regions:
- PTCs: perhaps clinically impractical
- the TEN test (threshold equalizing noise), whose decision rule is sketched below
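The TEN decision rule itself is simple enough to sketch in a few lines. The 10 dB criterion below follows the commonly quoted rule of thumb (masked threshold well above both the TEN level and the absolute threshold); treat it as an assumption and consult the published TEN(HL) instructions for the exact clinical criteria.

```python
# Sketch of a TEN-style dead-region check. The 10 dB criterion is the commonly
# quoted rule of thumb; this is not a substitute for the published TEN(HL) manual.

def suggests_dead_region(absolute_threshold_db, ten_level_db,
                         masked_threshold_db, criterion_db=10.0):
    """True when the threshold measured in the TEN exceeds both the TEN level
    and the absolute threshold by at least criterion_db."""
    return (masked_threshold_db >= ten_level_db + criterion_db and
            masked_threshold_db >= absolute_threshold_db + criterion_db)

# Example: absolute threshold 70 dB, TEN presented at 70 dB/ERB,
# masked threshold 85 dB -> dead region suspected at this frequency.
print(suggests_dead_region(70, 70, 85))   # True
```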

Audibility accounts don't explain everything. Good predictions of speech intelligibility from audibility hold only for mild to moderate hearing losses; with more severe losses, complete restoration of audibility cannot restore intelligibility. And these predictions hold only for speech in quiet.

Reduced dynamic range in sensorineural hearing loss. [Figure: the raised threshold lies much closer to the ULL, leaving only a narrow range of usable levels]

Because of recruitment, compression as well as amplification is required to maximise audibility (see the sketch below).
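The standard hearing-aid response is wide dynamic range compression (WDRC): more gain for soft sounds and less for loud ones, so that a wide range of everyday input levels fits into the listener's reduced dynamic range. A minimal static sketch, with purely illustrative parameter values rather than any prescription formula:

```python
# Static WDRC sketch: linear gain below the compression kneepoint, then output
# grows at 1/compression_ratio dB per input dB above it.
# Kneepoint, gain and ratio are illustrative assumptions, not a fitting rule.

def wdrc_gain_db(input_db, kneepoint_db=45.0,
                 gain_below_knee_db=25.0, compression_ratio=2.5):
    """Gain applied to a given input level under simple static compression."""
    if input_db <= kneepoint_db:
        return gain_below_knee_db
    excess = input_db - kneepoint_db
    return gain_below_knee_db - excess * (1.0 - 1.0 / compression_ratio)

for level in (30, 45, 60, 75, 90):
    gain = wdrc_gain_db(level)
    print(f"input {level:2d} dB SPL -> gain {gain:5.1f} dB, output {level + gain:5.1f} dB SPL")
```

Here a 60 dB span of inputs (30 to 90 dB SPL) comes out spanning roughly 33 dB, which is the point: soft speech is made audible without loud sounds becoming uncomfortably loud.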

Categorical scaling of loudness ACALOS (adaptive categorical loudness scaling)

Changes in frequency selectivity reflect loss of nonlinearity Rosen & Baker (2002)
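Loss of the nonlinearity shows up as broadened auditory filters. As a rough numerical illustration: the normal equivalent rectangular bandwidth (ERB) below uses the Glasberg and Moore (1990) formula, while the broadening factor for the impaired ear is an assumed, purely illustrative value (reported broadening varies widely with the degree of loss).

```python
# Normal auditory-filter ERB from Glasberg & Moore (1990); the impaired-ear
# broadening factor is an illustrative assumption, not a measured value.

def erb_normal_hz(centre_freq_hz):
    """Equivalent rectangular bandwidth of the normal auditory filter (Hz)."""
    return 24.7 * (4.37 * centre_freq_hz / 1000.0 + 1.0)

def erb_impaired_hz(centre_freq_hz, broadening_factor=3.0):
    """Assumed broadened filter for a cochlear-impaired ear."""
    return broadening_factor * erb_normal_hz(centre_freq_hz)

for f in (500, 1000, 2000, 4000):
    print(f"{f:5d} Hz: normal ERB {erb_normal_hz(f):6.1f} Hz, "
          f"impaired ~ {erb_impaired_hz(f):7.1f} Hz")
```

Filters several times wider than normal smear spectral detail, which is why the impaired excitation patterns on the next slides lose formant structure, especially in noise.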

Normal compared to impaired excitation patterns in quiet. The impaired excitation pattern retains much of the formant structure in quiet.

Normal compared to impaired excitation patterns in noise (SNR = +6 dB). The normal excitation pattern retains much of the formant structure in noise; the impaired excitation pattern has reduced formant structure in noise.

What can current hearing aids do for...
- hearing loss?
- reduced dynamic range & loudness recruitment?
- degraded frequency selectivity?
- dead regions?