Hearing Lectures. Acoustics of Speech and Hearing. Auditory Lighthouse. Facts about Timbre. Analysis of Complex Sounds


Hearing Lectures: Acoustics of Speech and Hearing, Weeks 2-10
Hearing 3: Auditory Filtering

1. Loudness - of sinusoids mainly (see Web tutorial for more)
2. Pitch - of sinusoids mainly (see Web tutorial for more)
3. Timbre - of complex sounds

Facts about Timbre

Timbre is defined as all the sound differences that are not due to loudness or pitch - a "wastebasket" category. For example: two musical instruments playing the same note, or two different vowels spoken on the same pitch.

Auditory Lighthouse (Mark's patented lighthouse analogy)

- Loudness: the brightness of the flashes
- Pitch: the repetition rate of the flashes
- Timbre: the colour of the flashes

Although... there is more to timbre than colour. Timbre also has a temporal dimension: a musical note has a different timbre when played backwards. How a sound starts and stops is also important - the amplitude envelope of attack, sustain and decay over time (sketched in code below).

Analysis of Complex Sounds

So far we have concentrated on the processing of single sinusoids in the cochlea. But to study timbre we need to consider how the cochlea deals with complex sounds made up from many sinusoids. So instead of asking "how does a single sinusoid excite all parts of the basilar membrane?", we need to ask "how does a single part of the basilar membrane respond to all the sinusoids in a complex sound?"
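As a rough illustration of the temporal dimension of timbre, here is a minimal numpy sketch (the durations and the 440 Hz note are assumptions, not values from the lecture). It builds an attack-sustain-decay envelope, applies it to a tone, and time-reverses the result: the reversed note contains the same frequencies overall, but its temporal shape, and so its timbre, is different.

```python
# Minimal sketch (assumed parameter values): an attack-sustain-decay amplitude
# envelope applied to a tone, and the same note reversed in time.
import numpy as np

fs = 16000                                         # sample rate in Hz (assumed)
f0 = 440.0                                         # note frequency in Hz (assumed)

attack = np.linspace(0.0, 1.0, int(0.02 * fs))     # 20 ms rise
sustain = np.ones(int(0.20 * fs))                  # 200 ms steady portion
decay = np.linspace(1.0, 0.0, int(0.30 * fs))      # 300 ms fall
envelope = np.concatenate([attack, sustain, decay])

t = np.arange(len(envelope)) / fs
note = envelope * np.sin(2 * np.pi * f0 * t)

reversed_note = note[::-1]    # played backwards: the decay becomes the attack
```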

Change of View

From "the excitation of all places by different sinusoids" to "the excitation of different places by all sinusoids".

Audiogram of a Single Fibre

By putting a fine electrode into a single nerve fibre, we can find the threshold of audibility for a single region on the basilar membrane. This is called a Physiological Tuning Curve, or PTC.

BM Region = Band-Pass Filter

From the PTC we can see that each region of the basilar membrane is excited by a range of frequencies, but is most sensitive to frequencies that match its preferred frequency of vibration. Just like a band-pass filter! (Figure: excitation of a single fibre as a function of place, from low to high frequency.)

Basilar Membrane = Filterbank

Hair cells on each region of the BM fire when that part of the BM vibrates. But each sinusoid vibrates a number of regions; or, conversely, each region is excited by a range of frequencies (but to different degrees).

Filterbank analysis

In filterbank analysis, a sound signal is input to a bank of band-pass filters, each sensitive to a different frequency region; the output encodes how much energy falls in each band (see the code sketch below). In the cochlea, vibrations enter and the nerve impulses from each BM region encode the energy in the signal that falls within that region's band of frequencies.

Just like colour sensation!

There are three types of colour-sensitive cell in the eye: red (580 nm), green (540 nm) and blue (450 nm). Any given colour excites these three band-pass filters to different degrees.
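A minimal sketch of the filterbank analysis described above, assuming Python with numpy and scipy. The band edges, filter order and the test signal are illustrative choices, not the real auditory filter shapes: the signal is passed through a bank of band-pass filters and the energy at each filter's output is measured.

```python
# Minimal sketch of filterbank analysis: band-pass the signal in each band
# and report the energy (RMS level) falling in that band.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                                     # sample rate in Hz (assumed)

def band_energy(signal, low_hz, high_hz, fs):
    """Band-pass filter the signal and return the RMS level in that band."""
    sos = butter(4, [low_hz / (fs / 2), high_hz / (fs / 2)],
                 btype="band", output="sos")
    filtered = sosfilt(sos, signal)
    return np.sqrt(np.mean(filtered ** 2))

# A complex sound: a 200 Hz fundamental plus a few higher harmonics (assumed example).
t = np.arange(0, 0.5, 1 / fs)
sound = sum(np.sin(2 * np.pi * f * t) for f in (200, 400, 600, 800, 1600))

# Illustrative band edges, not real auditory filter bandwidths.
bands = [(100, 300), (300, 500), (500, 700), (700, 900), (1400, 1800)]
for lo, hi in bands:
    print(f"{lo:4d}-{hi:4d} Hz: {band_energy(sound, lo, hi, fs):.3f}")
```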

How a filterbank analyses a sound

A possibly helpful mental picture (figure: the basilar membrane from base, around 10000 Hz, to apex, around 100 Hz): treat each 1 mm section of basilar membrane as a separate resonator. Each has its own resonant frequency = the centre frequency of a band-pass filter. But because of the logarithmic mapping from Hertz to place, 1 mm at the apex responds to a narrow region of low frequencies, while 1 mm at the base responds to a wide region of high frequencies. So the resonators are both spaced logarithmically and have increasing bandwidth with frequency (see the centre-frequency sketch below).

Characteristics of Auditory Filters

Bandwidth of filters increases as centre frequency increases: narrow filters at low frequency, wide filters at high frequency.

Compare Cochlear Analysis to Spectrographic Analysis

Spectrography:
- Filter bandwidth is the same at all frequencies, either narrow (45 Hz) or wide (300 Hz).
- Filters are spaced linearly in Hertz along the frequency scale.

Cochlear analysis:
- Filter bandwidth is narrow (100 Hz) at low frequencies and wide (500 Hz) at high frequencies.
- Filters are spaced logarithmically in Hertz along the basilar membrane.

Types of Spectrogram

Wide-band, narrow-band and auditory-band. An auditory-band spectrogram looks like a narrow-band spectrogram at low frequencies and a wide-band spectrogram at high frequencies (see the spectrogram sketch below).

Back to Timbre

The excitation pattern for a vowel reflects auditory filtering: at low frequencies harmonics are resolved; at high frequencies the excitation pattern follows the spectral envelope.
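A sketch of how the centre frequencies and bandwidths of such a filterbank might be laid out: centre frequencies spaced logarithmically between assumed limits of 100 Hz and 10 kHz, with bandwidths that grow with centre frequency. The bandwidth rule used here is the Glasberg and Moore (1990) ERB approximation, a standard choice but not one quoted in the lecture; the 30-filter count is also just illustrative.

```python
# Centre frequencies spaced logarithmically (as along the basilar membrane),
# with bandwidth growing with centre frequency (ERB approximation, assumed).
import numpy as np

n_filters = 30
centre_freqs = np.logspace(np.log10(100), np.log10(10000), n_filters)

def erb_bandwidth(f_hz):
    """Equivalent rectangular bandwidth (Hz) at centre frequency f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for cf in centre_freqs[::6]:        # print a few example filters
    print(f"centre {cf:7.1f} Hz   bandwidth {erb_bandwidth(cf):6.1f} Hz")
```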

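To connect this to spectrography, a small sketch using scipy's spectrogram function: the only difference between a narrow-band and a wide-band spectrogram is the analysis window length, which fixes one bandwidth for all frequencies, whereas an auditory-band analysis lets the bandwidth grow with frequency. The window lengths and the 200 Hz test signal are assumptions chosen for illustration.

```python
# Narrow-band vs wide-band spectrograms of the same harmonic complex:
# only the analysis window length (nperseg) differs.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
harmonic_complex = sum(np.sin(2 * np.pi * k * 200 * t) for k in range(1, 20))

# Long window -> fine frequency resolution (narrow-band style analysis).
f_nb, t_nb, S_nb = spectrogram(harmonic_complex, fs=fs, nperseg=512)
# Short window -> coarse frequency resolution (wide-band style analysis).
f_wb, t_wb, S_wb = spectrogram(harmonic_complex, fs=fs, nperseg=64)
```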
Temporal aspects of filtering

Low-frequency filters pass a single harmonic, while high-frequency filters pass multiple harmonics. Mixtures of harmonics maintain the periodicity of the original signal. In the auditory spectrogram we can see that firing at low frequencies matches the harmonic frequency, while firing at high frequencies follows the fundamental frequency of the input signal. This explains why the pitch of a signal is perceivable even when the individual harmonics are not resolved (see the harmonic-filtering sketch below).

Filterbank Summary

- As far as processing complex sounds is concerned, it is best to treat the cochlea as a filterbank with, say, 30 filters rather than as 30,000 hair cells.
- The filters in the cochlear filterbank are logarithmically spaced and have a bandwidth that increases with frequency.
- The excitation pattern on the auditory nerve reflects this filtering, with harmonics being resolved only at low frequencies, and with the spectral envelope being encoded at high frequencies.

Importance of Filterbank Model - 1

- The combination of narrow-band and wide-band analysis explains how we can be sensitive to pitch and timbre simultaneously.
- The model explains why formant frequencies are so important: they affect the spectral envelope at mid to high frequencies.
- The model explains auditory masking.
- The model explains why speech is less intelligible in noisy situations.

Filterbank explanation of masking

(Figure: amplitude against frequency for a tone and a band of noise.) If the tone and noise occupy different frequencies, they fall into separate filters and are distinguished. If the tone and noise occupy similar frequencies, they fall into the same filter and are not distinguished (see the masking sketch below).

Importance of Filterbank Model - 2

- The model explains why listeners with sensorineural deafness have difficulty discriminating sounds: they have an increase in auditory analysis bandwidths, which makes the spectral envelope smoother and worsens the masking caused by noise.
- The model has influenced the design of hearing aids and cochlear implants.
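Harmonic-filtering sketch: the temporal aspect of filtering, under assumed filter widths and a 200 Hz fundamental. A narrow low-frequency band passes a single harmonic, so its output oscillates at that harmonic's frequency; a wider high-frequency band passes several harmonics, so its output is modulated at the fundamental period.

```python
# Minimal sketch (illustrative filter widths): filter a harmonic complex with
# a narrow low-frequency band and a wide high-frequency band.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
f0 = 200.0                                    # fundamental frequency (assumed)
t = np.arange(0, 0.1, 1 / fs)
complex_tone = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 16))

def bandpass(x, low_hz, high_hz):
    sos = butter(4, [low_hz / (fs / 2), high_hz / (fs / 2)],
                 btype="band", output="sos")
    return sosfilt(sos, x)

low_out = bandpass(complex_tone, 150, 250)     # passes only the 200 Hz harmonic
high_out = bandpass(complex_tone, 2400, 3200)  # passes several harmonics near 2.4-3 kHz

# low_out oscillates at about 200 Hz; the envelope of high_out repeats
# every 1/f0 = 5 ms, carrying the periodicity of the original signal.
```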

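Masking sketch: the filterbank account of masking, with illustrative frequencies and levels. The signal-to-noise ratio is measured at the output of a single filter centred on a 1 kHz tone, once with a noise band remote from the tone and once with a noise band overlapping it; only in the second case does the noise reach the filter that carries the tone.

```python
# Minimal sketch of masking in a single auditory-like filter (all values assumed).
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)

def bandpass(x, low_hz, high_hz):
    sos = butter(4, [low_hz / (fs / 2), high_hz / (fs / 2)],
                 btype="band", output="sos")
    return sosfilt(sos, x)

tone = 0.1 * np.sin(2 * np.pi * 1000 * t)
noise_far = bandpass(rng.standard_normal(len(t)), 2800, 3200)   # remote from the tone
noise_near = bandpass(rng.standard_normal(len(t)), 900, 1100)   # overlapping the tone

def snr_at_filter(noise, low_hz, high_hz):
    """SNR in dB at the output of one filter centred on the 1 kHz tone."""
    s = bandpass(tone, low_hz, high_hz)
    n = bandpass(noise, low_hz, high_hz)
    return 10 * np.log10(np.mean(s ** 2) / np.mean(n ** 2))

print("noise far from the tone:", round(snr_at_filter(noise_far, 900, 1100), 1), "dB")
print("noise at the tone      :", round(snr_at_filter(noise_near, 900, 1100), 1), "dB")
```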
The Final Picture

Summer Term

Revision Day: Tuesday 23rd April
  9-10   Review lecture (118)
  10-12  Review tutorials
  12-1   Examination advice

Examination: XXXday YYth ZZZZZ (check!)

Hand in coursework folder for moderation of marks.