SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03

Problem 1:

a) Where in the cochlea would you say the process of "Fourier decomposition" of the incoming sound energy occurs? Please explain your reasoning.

Fourier decomposition in this context refers to the spatial separation of the sound energy entering the fluid-filled cochlea, which occurs along the length of the basilar membrane. It is the basilar membrane, and to some degree the hair cells themselves, that use mechanical tuning and resonance to distribute the sound energy as a function of frequency over space. Structures in the middle ear, including the tympanic membrane and ossicular system, serve only to increase the efficiency with which the time-varying pressure (sound) in the air is conducted into the fluid of the inner ear. No time-domain to frequency-domain organization of the sound energy occurs in the middle ear.

b) Describe the important physical characteristics of the structure which you identified in part a) which give it the ability to effectively "Fourier transform" the sound. Explain how each of these characteristics contributes to this frequency separation process.

The basilar membrane has several important mechanical characteristics which endow it with what is effectively a continuum of natural mechanical resonances, from very high frequencies at the base (entrance) to low frequencies at the apex (end). These are briefly described below, and in more detail in the text:

i. Overall basilar membrane geometry: The basilar membrane is a filmy structure which is attached along its transverse edges to bony projections from the inner walls of the cochlear cavity. It separates the cochlea longitudinally into the scala tympani and scala media, and therefore vibrates with incoming sound energy. The bony projections to which the edges of the basilar membrane are attached extend furthest at the base of the cochlea and least at the apex. Thus, the width of the basilar membrane progressively increases from base to apex, while the width of the cochlear cavity decreases.

ii. Transverse basilar fibers: The filmy basilar membrane has 20,000 to 30,000 transverse basilar fibers embedded within it, anchored at one end in the bony projection of the cochlear "modiolus" but free at their other ends. These basilar fibers set the local stiffness (rigidity) of the membrane, and thus contribute to the local resonant characteristic of the basilar membrane in their vicinity. The transverse fibers are approximately 40 microns (0.04 mm) in length near the entrance (base) of the cochlea, and increase in length progressively towards the apex, reaching 500 microns (0.5 mm) at the deepest part of the membrane (apex). In addition, the diameters of the shorter fibers are greater, making them stiffer, while the diameters of the longer fibers are smaller, making them more flexible. Thus, the short, stiff basilar fibers near the base exhibit a high mechanical resonant frequency, while the longer, more flexible fibers at the apex exhibit a low-frequency resonance.

iii. Contractile outer hair cells: In addition to the single row of inner hair cells, which are distributed along the length of the basilar membrane and convert the mechanical vibration of the membrane into small electrical potentials for transmission to the brain, there are 3-4 rows of outer hair cells which respond to signals from the brain by mechanically contracting. This allows the brain to use a feedback path to locally adjust or tune the stiffness of the basilar membrane, and is thought to provide a means of local mechanical tuning of the resonance characteristics of the membrane that improves the sharpness of the frequency separation.

iv. Hair cell stereocilia: The stereocilia which project from the tops of the inner hair cells exhibit a position-dependent variation similar to that of the transverse basilar fibers. At the base (front) of the cochlea they are shorter and stiffer, and they become progressively longer and more flexible as one approaches the apex. This helps the stereocilia respond more specifically to the frequencies which they are intended to sense.

c) Imagine that you could uncoil the cochlea and you are now "watching" the basilar membrane vibrate in response to sound energy coming from my voice. Explain how what you see relates to the frequency domain representation of my voice that you saw displayed on the spectrum analyzer screen during the class demo. In other words, compare what you would look for in the vibration of the basilar membrane to acquire information about the frequency content of my voice (and the relative power of each frequency) with what you would look at in a frequency domain representation provided by the spectrum analyzer. Do these seem similar to you? Do you think it is reasonable to say that the cochlea performs a Fourier transform of the sound energy before this information is sent to your brain for further processing?

As discussed in part b), the basilar membrane exhibits a progressive range of natural resonant frequencies from high (at the oval/round windows) to low (at the apex of the cochlea, where the fluids of the scala vestibuli and scala tympani "communicate"). Sound energy (a pressure which varies as a function of time) travels along the basilar membrane, and in the process the energy associated with the various frequencies that compose the traveling sound wave tends to be trapped or dissipated at the place along the membrane which has the same natural resonant frequency. Thus, it is the amplitude of the vibration of the basilar membrane as a function of location, or place, that represents the frequency composition of the incoming sound energy.

If you were watching the Fourier decomposition of my voice on the spectrum analyzer, as we did in class, you would see the relative amplitude of each of the frequency components displayed on the screen at any given moment as a function of position along the frequency axis; that axis could be linear or logarithmic. Similarly, if you were "watching" the movement of the basilar membrane in response to my voice, you would see the relative amplitude of each of the frequency components represented by the amplitude of the membrane vibration at different points, or places, along its length.

In the case of the basilar membrane, the distribution of the frequency response is not linear but logarithmic, as discussed in class and in the text. It is also reversed from the usual frequency representation, in that the highest frequencies appear at the entrance to the cochlea and the lowest frequencies at the far (apical) end. Nevertheless, it is quite reasonable to say that the basilar membrane effectively performs a Fourier transform of the incoming time-varying sound pressure, utilizing the mechanical resonance of the various membrane and hair cell structures.
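To make this analogy concrete, here is a minimal Python sketch that computes the magnitude spectrum of a short sound snippet (what the spectrum analyzer displays) and converts a few frequencies into an approximate place of maximal vibration along the membrane. The helper functions are purely illustrative, and the Greenwood-style mapping constants and 35 mm cochlear length are commonly quoted approximations assumed here, not values taken from the course text.

```python
# Sketch of the spectrum-analyzer / basilar-membrane analogy.
# Assumed Greenwood-style constants; place is measured from the apex, so low
# frequencies map near the apex and high frequencies near the base (entrance).

import numpy as np

def voice_spectrum(signal, fs):
    """Magnitude spectrum of a sound snippet, as a spectrum analyzer would show it."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum

def place_from_frequency(freq_hz, length_mm=35.0):
    """Approximate distance from the apex (mm) where a pure tone peaks.
    Greenwood-style map f = A*(10**(a*x/L) - k), inverted for x (assumed constants)."""
    A, a, k = 165.4, 2.1, 0.88
    return length_mm / a * np.log10(freq_hz / A + k)

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.1, 1.0 / fs)
    # Toy "voice": a 200 Hz fundamental with two harmonics of decreasing power.
    signal = np.sin(2*np.pi*200*t) + 0.5*np.sin(2*np.pi*400*t) + 0.25*np.sin(2*np.pi*600*t)
    freqs, spectrum = voice_spectrum(signal, fs)
    # The analyzer shows amplitude vs. frequency; the cochlea shows the same
    # information as vibration amplitude vs. place along the membrane.
    for f in (200, 400, 600):
        i = np.argmin(np.abs(freqs - f))
        print(f"{f} Hz: relative amplitude {spectrum[i]:.1f}, "
              f"peaks ~{place_from_frequency(f):.1f} mm from the apex")
```

The printout mirrors the comparison above: each frequency component has a relative amplitude (the analyzer display) and a corresponding place along the membrane where that amplitude would appear as vibration.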

2. You are examining a person who exhibits a hearing impairment of unknown origin.

(a) What are the two major classes of hearing impairment? Identify the basic structures which are responsible for the impairment in each class.

Conduction system of the middle ear: Any condition leading to restriction of movement of the tympanic membrane and ossicular system will result in increased thresholds (decreased sensitivity). This is often associated with fibrosis (in-growth of fibrous tissue), which may be caused by chronic inflammation accompanying infection or by a response to trauma. Particularly in chronic conditions, the fibrosis may also be accompanied by degeneration of the ossicular bone tissue. Generally, decreased sensitivity due to conduction impairment is comparatively greater in response to low-frequency acoustic stimuli, since these require relatively greater excursions (displacement) of the ossicular system for adequate conduction to the inner ear.

Sensorineural (inner ear): Any condition leading to the degradation or destruction of the hair cells along the organ of Corti will result in increased thresholds. Hair cell damage or loss may be due to ototoxic action of antibiotics in certain individuals, trauma, exposure to excessive acute or long-term noise, etc. Less frequently, sensorineural impairment may be attributed to damage to the cochlear nerve bundle or to higher-level auditory processing regions in the CNS (e.g., following tumor removal). Animal models have suggested that destruction of the hair cells by either acute noise exposure or antibiotic reaction leads to subsequent degeneration of the associated ganglia and cochlear nerve fibers over a period of months or years. This is a somewhat discouraging observation in the context of the design of implantable cochlear electrodes, which depend on surviving nerve fibers through which to deliver speech information.

As noted in lecture, these two major classes of hearing sensitivity impairment may occur alone, or in combination, in different individuals. The degree to which a given amount of sensitivity loss (due to any physiological impairment) translates to a functional impairment (e.g., in speech discrimination) varies greatly among individuals; certain people appear to exhibit an exceptional capability to extract vital speech information despite very limited acoustic input.

(b) What tests would you perform to distinguish the origin of this person's impairment? Explain the procedure, using representative sketches, and how you would interpret the results.

One of the basic tests compares the threshold (sensitivity) to pure tones as a function of frequency when the tones are delivered through the air versus through bone conduction. Reference sensitivities across the audio range have been established, and these are used as a comparison to the sensitivity exhibited by the person being tested. The results are plotted and are generally referred to as audiograms. Examples of two audiograms are shown in Chapter 52 of the text: one consistent with fibrosis of the middle ear, and one consistent with the moderate sensorineural impairment typically exhibited as people age. The degree of loss of sensitivity (and its frequency dependence) may vary greatly depending on the individual, and in the case of the profoundly deaf may be essentially total; such persons may be candidates for a cochlear implant if they have retained some viable nerve fibers. As noted in class, if the frequency-dependent sensitivity is significantly different for bone conduction than for air conduction, this result suggests a conduction problem. If the decreased sensitivity is similar for both air and bone conduction, this suggests sensorineural pathology.
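The air-versus-bone comparison above lends itself to a simple decision rule. The sketch below uses assumed threshold values, a commonly quoted 15 dB air-bone-gap cutoff, and a 25 dB "normal" limit purely for illustration; real audiometric interpretation is done frequency by frequency by an audiologist.

```python
# Sketch of classifying an audiogram as conductive vs. sensorineural.
# Threshold numbers and cutoffs are illustrative assumptions, not clinical values
# taken from the text.

from statistics import mean

def classify_loss(air_db_hl, bone_db_hl, gap_cutoff_db=15, normal_limit_db=25):
    """air_db_hl, bone_db_hl: dicts of {frequency_Hz: threshold_dB_HL}."""
    air = mean(air_db_hl.values())
    bone = mean(bone_db_hl.values())
    gap = air - bone                       # the "air-bone gap"
    if air <= normal_limit_db:
        return "thresholds within normal limits"
    if gap >= gap_cutoff_db and bone <= normal_limit_db:
        return "pattern consistent with a conductive (middle-ear) impairment"
    if gap < gap_cutoff_db:
        return "pattern consistent with a sensorineural (inner-ear) impairment"
    return "mixed pattern: both conductive and sensorineural components"

if __name__ == "__main__":
    freqs = [250, 500, 1000, 2000, 4000]
    # Illustrative audiogram: elevated air-conduction thresholds, near-normal bone.
    air = dict(zip(freqs, [55, 50, 45, 40, 40]))
    bone = dict(zip(freqs, [10, 10, 15, 15, 20]))
    print(classify_loss(air, bone))        # -> conductive pattern
```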

3. You are contemplating the design of a cochlear implant device with the goal of restoring human speech perception in people with sensorineural (loss of hair cell function) impairment.

(a) Sketch a block diagram, identifying the major components of the system, and describe the basic function of each.

The overall function of this implant is to restore hearing to those who have lost the ability to transform sound energy into stimulation of the auditory nerve fibers which carry the encoded sound information to the brain for processing. This stimulation may be achieved by appropriate direct electrical stimulation of the auditory nerve fibers by an array of implanted electrodes. As we have discussed in lecture, one of the basic means by which pitch discrimination occurs is the so-called "place principle". This refers to the ability of the cochlea, and the basilar membrane in particular, to spread incoming sound energy out along the length of the membrane as a function of frequency. Thus the cochlea first performs a mechanical Fourier transform, effectively "mapping" the basilar membrane as a function of frequency. Greater displacement amplitudes of the membrane in any one area translate to greater stimulation of the hair cells and associated auditory neurons, which carry this information to the auditory processing areas of the brain. The frequency is interpreted by the brain largely (but not exclusively, as we shall see later) by virtue of the mapping of each neuron to a specific region on the basilar membrane.

In general, then, we can identify the most basic attributes of a cochlear implant as follows:

i. Transformation of sound energy into an electrical signal which can be further processed. We would utilize some form of microphone or dynamic pressure transducer which converts variations in sound pressure into a time-varying voltage.

ii. Frequency analysis/decomposition. The electrical signal from the microphone must be separated into its frequency components so that the individual components can be used to direct the electrical stimulation to the auditory neurons in the proper region of the basilar membrane. This is performed in the speech processor.

iii. An implanted electrode array which can deliver electrical stimuli to the auditory neurons as a function of the output of the frequency analysis of the incoming sound.

iv. A source of energy for the functioning of the device. This is achieved using an inductive link across the skin, i.e., electrical energy is transferred between two adjacent coils: one inside the body, and one located on the skin adjacent to it.

v. An electrode signal receiver/stimulator, which directs the energy to the appropriate electrodes.

This system may be represented in block form as follows:

External microphone -> Speech processor -> External coil -> Internal coil -> Electrode signal receiver/stimulator -> Cochlear electrode array

Below is an illustration of the structure of the human ear with an implant called the Nucleus 24, made by the company Cochlear:

[Figure: the human ear with the implanted Nucleus 24 cochlear implant system]
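To make the speech-processor block more concrete, the sketch below splits a test signal into a few bandpass channels and extracts each channel's envelope, which would set the stimulation level delivered to the corresponding electrode. This is a generic filter-bank illustration; the band edges, filter orders, and smoothing cutoff are assumptions, not the actual coding strategy of the Nucleus 24.

```python
# Simplified filter-bank sketch of the speech-processor stage: one bandpass
# filter and one smoothed envelope per electrode channel (assumed parameters).

import numpy as np
from scipy.signal import butter, sosfiltfilt

def channel_envelopes(signal, fs, band_edges_hz):
    """Return one smoothed envelope per (low, high) band in band_edges_hz."""
    envelopes = []
    smooth = butter(2, 200, btype="low", fs=fs, output="sos")   # ~200 Hz envelope smoothing
    for low, high in band_edges_hz:
        band = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        filtered = sosfiltfilt(band, signal)
        env = sosfiltfilt(smooth, np.abs(filtered))              # rectify, then low-pass
        envelopes.append(env)
    return envelopes

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 0.2, 1.0 / fs)
    test = np.sin(2*np.pi*800*t) + 0.3*np.sin(2*np.pi*3000*t)    # toy two-tone input
    bands = [(500, 1000), (1000, 2000), (2000, 4000)]            # 3 of the N channels
    for (low, high), env in zip(bands, channel_envelopes(test, fs, bands)):
        print(f"{low}-{high} Hz channel: mean envelope {np.mean(env):.2f}")
```

Running this, the 500-1000 Hz and 2000-4000 Hz channels carry most of the energy of the two test tones, while the middle channel stays near zero, which is exactly the place-coded pattern the electrode array is meant to reproduce in the cochlea.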

(b) Given that your goal is to restore speech perception, identify the region of the basilar membrane that you will target with the electrode array.

Studies of the frequency content of the human voice have indicated that most of the important components of speech lie in the range of 500 Hz to 5 kHz, and thus we would like our array to target the nerves which are arranged along the portion of the basilar membrane corresponding to this frequency range. Based on Fig. 52-6 of the text, this corresponds to the region roughly 8-27 mm from the stapes.

(c) How many electrodes would you use along this region? Comment on the tradeoffs of using more or fewer electrodes in terms of performance, complexity, safety, etc.

Along the basilar membrane there are approximately 1,500 inner hair cells, which stimulate approximately 25,000 nerve fibers that in turn transfer auditory information to the brain. Are 1,500, or perhaps 25,000, individual frequency channels (and hence individual electrodes) necessary? Hopefully not, since given the space constraints it would not be feasible to implant such an array. Studies using simulations, such as those demonstrated in class, have indicated that a minimum of about 8-10 channels (electrodes) is required for reliable speech perception. Using more channels may increase the quality of the sound up to a point, but as the number of electrodes increases beyond about 25, the relative improvement is outweighed by added complications, such as high current density (due to the smaller electrode area) and increased manufacturing complexity. The high current density may damage the tissue directly, and tends to form free radicals at the electrode surface by processes similar to electrolysis (the production of hydrogen and oxygen at the surface of electrodes in water). These free radicals are very unstable and reactive, and may damage not only the tissue but the electrode surface itself. Current clinical devices use between 8 and 22 electrodes, with 22 being the most common.

(d) How would you space the electrodes: uniformly or non-uniformly? Please explain your reasoning.

Most of the devices implanted so far have uniform spacing of electrodes, so that the currents produced in the cochlear fluid around the nerves you wish to stimulate are as uniform as possible. More recently, the trend has been toward a non-uniform spacing which attempts to match the logarithmic spacing of frequencies along the basilar membrane, with closer spacing of the low-frequency electrodes relative to the higher-frequency electrodes. Both approaches have merit, although the non-uniform approach is currently thought to yield better results. For the non-uniform spacing, the idea is to use roughly the same number of electrodes per octave (a doubling of frequency). Thus you would have approximately 7 or 8 electrodes covering the 5 kHz to 2.5 kHz range, 7-8 over the 2.5 kHz to 1.25 kHz range, and 7-8 over the 1.25 kHz to 625 Hz range.

Here is an illustration of the current 22-electrode array used in the Cochlear system; note the non-uniform spacing of the electrodes:

[Figure: the 22-electrode intracochlear array used in the Cochlear (Nucleus) system, showing non-uniform electrode spacing]
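As a quick check of the per-octave arithmetic in part (d), the sketch below computes geometrically spaced band edges over the 625 Hz to 5 kHz span used above (three octaves). The 22-electrode count and the equal-electrodes-per-octave allocation are illustrative assumptions, not a manufacturer's published frequency map.

```python
# Illustrative arithmetic only: geometrically spaced band edges give roughly the
# same number of electrodes per octave, matching the logarithmic frequency-to-
# place map of the basilar membrane.

import numpy as np

def electrode_band_edges(f_low=625.0, f_high=5000.0, n_electrodes=22):
    """Band edges (Hz) for n_electrodes channels of equal width on a log scale."""
    return np.geomspace(f_low, f_high, n_electrodes + 1)

if __name__ == "__main__":
    edges = electrode_band_edges()
    octaves = np.log2(edges[-1] / edges[0])
    print(f"{len(edges) - 1} electrodes over {octaves:.1f} octaves "
          f"-> about {(len(edges) - 1) / octaves:.1f} electrodes per octave")
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:]), start=1):
        print(f"electrode {i:2d}: {lo:6.0f} - {hi:6.0f} Hz")
```

With 22 electrodes over the three octaves from 625 Hz to 5 kHz, this works out to a little over 7 electrodes per octave, consistent with the rough 7-8 per octave estimate above.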