FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED

Francisco J. Fraga, Alan M. Marotta
National Institute of Telecommunications, Santa Rita do Sapucaí - MG, Brazil

Abstract
A considerable percentage of listeners with severe hearing loss have audiograms where the losses are high at high frequencies and low at low frequencies. For these patients, lowering the speech spectrum to the frequencies where there is some residual hearing could be a good solution to implement in digital hearing aids. In this paper we present two different frequency-lowering algorithms: frequency compression and frequency shifting. Results of subjective intelligibility tests show a slightly better performance of the frequency shifting method relative to the frequency compression method, although the performance of both depends markedly on which specific phonemes are being processed by the two algorithms.

Keywords: digital hearing aids, frequency lowering

1. Introduction
There are several kinds of hearing impairment. Sensorineural hearing losses can be caused by defects in the cochlea, in the auditory nerve, or in both. These problems reduce the dynamic range of hearing: the threshold of hearing is elevated, but the threshold of discomfort (at which loudness becomes uncomfortable) is almost the same as for normal-hearing listeners, or may even be lower. Over some range of frequencies the threshold of hearing can be so high that it equals the threshold of discomfort, i.e., it is impossible for the listener to hear any sound at those frequencies.

Hearing loss is more common for high-frequency and mid-frequency sounds (1 to 3 kHz) than for low frequencies. Frequently there are only small losses at low frequencies (below 1 kHz) but almost total deafness above 1.5 or 2 kHz. These facts have led researchers to lower the spectrum of speech in order to match the residual low-frequency hearing of listeners with high-frequency impairments. Slow playback, vocoding and zero-crossing-rate division are some of the methods that have been employed in past decades. All of these methods introduce signal distortion, more or less noticeable, generally depending on the amount of frequency lowering. Many of the lowering schemes have altered perceptually important characteristics of speech, such as temporal and rhythmic patterns, pitch and the durations of segmental elements.

Hicks et al. [1] carried out one of the most remarkable investigations of frequency lowering. Their technique involves pitch-synchronous, monotonic compression of the short-term spectral envelope, while at the same time avoiding some of the problems observed in the other methods. Reed et al. [2] conducted consonant discrimination experiments with normal-hearing listeners. They observed that Hicks' frequency-lowering scheme performed better for fricative and affricate sounds than low-pass filtering to an equivalent bandwidth. On the other hand, low-pass filtering performed better for vowels, semivowels and nasal sounds; for plosive sounds both methods gave similar results. In general, performance in the best frequency-lowering conditions was almost the same as that obtained with low-pass filtering to an equivalent bandwidth. Later, Reed et al. [3] extended the results of Hicks et al.'s system to listeners with high-frequency impairment.
In general, the performance of the impaired subjects was inferior to that of the normal subjects. A few years ago, Nelson and Revoile [4] found that, relative to normal-hearing listeners, listeners with moderate to severe hearing loss required approximately double the peak-to-valley depth to detect spectral peaks in bands of noise when the signals had a high number of peaks per octave. Their findings revealed that the detection of spectral peaks in noise is significantly related to consonant identification ability in listeners with moderate to severe hearing loss. All of the previously mentioned frequency-lowering schemes compress the speech spectrum into a narrower band of frequencies, increasing the number of peaks per octave while maintaining the peak-to-valley depth. According to Nelson and Revoile's investigation, applying sharpening processing to frequency-lowered speech may therefore allow better detection of spectral peaks and better consonant identification. Recently, Muñoz et al. [5] combined sharpening (i.e., increasing the peak-to-valley depth) with frequency compression. They demonstrated that the processed speech improved the understanding of fricative and affricate sounds, while producing no significant change in the identification of vowels and other sounds by listeners with severe high-frequency hearing loss.

Based on Nelson and Revoile's investigation, we hypothesize that the relatively poor performance of Hicks' and Muñoz's frequency-lowering schemes is due to the increase in the number of peaks per octave, which is inherent to the frequency compression method used in those systems. In this paper we propose a new frequency-lowering algorithm that does not increase the number of peaks per octave, because it uses frequency shifting instead of frequency compression. Furthermore, the frequency shifting is applied only to fricative and affricate sounds, leaving all other types of sounds untouched, because it is only for fricative sounds that frequency lowering brings real benefits, as demonstrated by all the previously mentioned works. We have also implemented, for comparison, a frequency compression algorithm based on the ideas of Hicks [1] and Muñoz [5].

Preliminary results of subjective preference (considering only the qualitative aspect of the processed speech) confirmed our hypothesis that the frequency shifting method outperforms the frequency compression method. However, further subjective intelligibility tests with 20 subjects clearly showed that the performance of the two algorithms (now considering only intelligibility) depends remarkably on which specific phonemes are being processed.

2. Methodology

A. Audiometric data acquisition and processing
The first step of both frequency-lowering algorithms is the acquisition of the audiometric data of the impaired subject. The audiometric exam measures the degree of hearing impairment of a given patient. In this exam, the listener undergoes a perception test in which the sound pressure level (SPL) of a pure sinusoidal tone is varied continuously over a discrete frequency scale. The frequency values most frequently used are 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 6 kHz and 8 kHz. For each of these frequencies, the minimum SPL in dB at which the patient is able to perceive the sound is registered in a graph. The audiogram is the result of the audiometric exam: a graph of the values in dB SPL at each of the discrete frequencies, plotted separately for each ear of the subject.

Since 0 dB SPL is taken as the minimum sound pressure level for normal hearing, the positive dB values registered on the vertical axis of the audiogram can be read as the hearing losses of the patient's ear. If the losses are equal to or less than 20 dB, the subject is considered to have normal hearing. From 21 to 40 dB, the losses are classified as mild. Moderate losses are those greater than 40 dB but less than 70 dB. From 71 to 90 dB we consider the patient to have a severe hearing loss, and more than 95 dB of loss is classified as profound [6]. The threshold of discomfort, for normal or impaired listeners, is always below 120 dB SPL; indeed, the threshold of discomfort of impaired subjects is commonly lower than that of normal-hearing subjects. Although less common, some audiograms show both the threshold of discomfort and the threshold of hearing [7], as can be observed in Fig. 1. In this figure, the audiogram points corresponding to the right ear are marked with a circle and those corresponding to the left ear with an X; these marks are used in this way by audiologists worldwide [6]. The dynamic range of hearing at each frequency is the threshold of discomfort minus the threshold of hearing.

Fig. 1: Ski-slope losses case

Based on the acquired audiometric data, the algorithm analyses the range of frequencies where there is still some residual hearing. The criterion is the following: first, it is verified whether the patient has a ski-slope type of loss, i.e., whether the losses increase with frequency. Only patients with this type of impairment can be aided by any frequency-lowering method. After that, the first frequency at which there is a profound loss is determined. If this frequency lies between 1.2 kHz and 3.4 kHz, a destination frequency, to which the high-frequency part of the spectrum will be shifted, is calculated. Otherwise, no frequency shifting is needed (residual hearing above 3.4 kHz) or profitable (residual hearing below 1.2 kHz). The destination frequency is taken as the geometric mean of 900 Hz and the highest frequency at which there is still some residual hearing. The geometric mean was chosen empirically because it provides a good trade-off between minimum spectrum distortion and maximum use of the residual hearing. To obtain more accurate loss thresholds, the points of the audiogram are linearly interpolated.
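As a rough illustration of this audiogram analysis, the sketch below (in Python) checks for a ski-slope loss, locates the first profoundly lost frequency on a linearly interpolated audiogram, and computes the destination frequency as the geometric mean described above. The function names, the 10 Hz interpolation grid and the exact handling of the 95 dB "profound" boundary are our own illustrative choices; the paper does not give the procedure at this level of detail.

```python
import numpy as np

# Illustrative sketch (not the authors' code): audiogram analysis and
# destination-frequency computation as described in Section 2.A.

AUDIOGRAM_FREQS = np.array([250, 500, 1000, 2000, 4000, 6000, 8000])  # Hz

def destination_frequency(losses_db, profound_db=95.0, grid_step_hz=10.0):
    """Return the destination frequency in Hz, or None when frequency
    shifting is not needed (or not profitable) for this audiogram."""
    losses = np.asarray(losses_db, dtype=float)

    # Ski-slope check: losses must grow with frequency.
    if not np.all(np.diff(losses) >= 0):
        return None  # this patient cannot be aided by frequency lowering

    # Linear interpolation of the audiogram for more accurate loss thresholds.
    grid = np.arange(AUDIOGRAM_FREQS[0], AUDIOGRAM_FREQS[-1] + 1, grid_step_hz)
    loss_curve = np.interp(grid, AUDIOGRAM_FREQS, losses)

    # First frequency where the loss becomes profound (> 95 dB, see above).
    profound = grid[loss_curve > profound_db]
    if profound.size == 0:
        return None                       # no profound loss: no shifting needed
    f_profound = float(profound[0])
    if not (1200.0 <= f_profound <= 3400.0):
        return None   # shifting not needed (> 3.4 kHz) or not profitable (< 1.2 kHz)

    # Highest frequency with residual hearing, just below the profound region.
    f_residual = f_profound - grid_step_hz
    # Destination frequency: geometric mean of 900 Hz and f_residual.
    return float(np.sqrt(900.0 * f_residual))

# Example: ski-slope audiogram with profound loss from about 2 kHz upward;
# prints a destination frequency of roughly 1.3 kHz.
print(destination_frequency([15, 20, 35, 100, 105, 110, 115]))
```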

B. Speech data acquisition and processing
The speech signal is sampled at 16 kHz and Hamming-windowed with 25 ms windows. The windows are 50% overlapped, which means that the signal is analyzed at a frame rate equal to the inverse of 12.5 ms. A 1024-point FFT is used to represent the high-resolution short-time speech spectrum in the frequency domain.

If a ski-slope type of loss was detected in the audiometric data analysis and the frequency-shifting criterion was matched, a destination frequency has already been determined. We then have to find out, on a frame-by-frame basis, whether the short-time speech spectrum contains enough high-frequency information to justify the frequency-shifting operation. The criterion used to decide whether or not to shift the short-time spectrum of each speech frame is based on a threshold: when the signal has high energy at high frequencies, the algorithm shifts this high-frequency information to lower frequencies. The threshold is set so as to suppress the processing of all vowels, nasals and semivowels, while activating the frequency transposition for fricatives and affricates.

To decide which part of the spectrum will be shifted, the energies of 500 Hz-wide windows spaced every 100 Hz, from 1 kHz to 8 kHz, are calculated in order to find an origin frequency. The origin frequency is the frequency 100 Hz below the beginning of the 500 Hz window with maximum energy. The part of the spectrum that is transposed corresponds to all frequencies above the origin frequency. This empirical criterion guarantees that the unavoidable distortion caused by the frequency-lowering operation pays off, because the most important part of the high-energy information is shifted into the limited range of frequencies above 1 kHz (thus leaving the low-frequency information untouched) but still below the highest frequency at which the patient has residual hearing.

For comparison, Hicks' frequency compression scheme was also implemented, but applied only when the same frequency-lowering criterion (the high/low-frequency energy ratio) used for transposition was matched, i.e., only for fricatives and affricates. The frequency compression was done by means of an equation defined in [2]; in practice it is more useful to implement the inverse equation,

    f_IN = (f_S / pi) * arctan[ ((1 - a) / (1 + a)) * tan(K * pi * f_OUT / f_S) ]        (1)

where f_IN is the original frequency, f_OUT is the corresponding compressed frequency, K is the frequency compression factor, a is the warping parameter and f_S is the sampling rate. For minimum distortion at low frequencies, the warping parameter must be chosen as a = (K - 1)/(K + 1). The compression factor K was determined according to the degree of loss presented by the listener. Fig. 2 shows the curves of equation (1) for K = 2, 3 and 4; in this figure we can see that the low-frequency information (below 1000 Hz) is barely compressed.

After frequency shifting or compression (when it occurs), the FFT spectrum of each speech frame is multiplied by a gain factor, calculated for each frequency so as to fully compensate the hearing loss, unless the amplified sound pressure level would exceed the threshold of discomfort. In that case, the gain factor is limited to the amount required to keep the loudness below the threshold of discomfort. The way we implemented this spectral shaping process is similar to that described in [8].
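As a minimal sketch of how these frame-level steps could be implemented, the Python code below follows the description above. The 2.5 kHz split frequency, the energy-ratio threshold of 1.0, the simple bin-copy transposition and all function names are our own assumptions; the paper only states that a high/low-frequency energy-ratio threshold is used and that the spectrum above the origin frequency is moved down to the destination frequency.

```python
import numpy as np

# Minimal sketch (our assumptions, not the paper's code) of the frame-level
# fricative/affricate detection, origin-frequency search, frequency shifting
# and gain shaping described above.

FS = 16000                      # sampling rate (Hz)
NFFT = 1024                     # FFT length
FRAME = int(0.025 * FS)         # 25 ms analysis window
HOP = FRAME // 2                # 50% overlap -> one frame every 12.5 ms
FREQS = np.fft.rfftfreq(NFFT, 1.0 / FS)

def frame_spectrum(frame):
    """High-resolution short-time spectrum of one Hamming-windowed frame."""
    return np.fft.rfft(frame * np.hamming(len(frame)), NFFT)

def is_fricative_like(spectrum, split_hz=2500.0, ratio_threshold=1.0):
    """Threshold on the high/low-frequency energy ratio; the split frequency
    and the threshold value are placeholders (the paper does not give them)."""
    power = np.abs(spectrum) ** 2
    high = power[FREQS >= split_hz].sum()
    low = power[FREQS < split_hz].sum() + 1e-12
    return high / low > ratio_threshold

def origin_frequency(spectrum):
    """Frequency 100 Hz below the start of the 500 Hz-wide window
    (1-8 kHz, 100 Hz steps) having maximum energy."""
    power = np.abs(spectrum) ** 2
    starts = np.arange(1000, 7600, 100)      # last window covers 7.5-8 kHz
    energies = [power[(FREQS >= s) & (FREQS < s + 500)].sum() for s in starts]
    return float(starts[int(np.argmax(energies))] - 100)

def shift_spectrum(spectrum, f_origin, f_dest):
    """Transpose the band above f_origin so that it starts at f_dest
    (a simplified bin-copy version of the operation described above)."""
    bin_hz = FS / NFFT
    i_orig = int(round(f_origin / bin_hz))
    i_dest = int(round(f_dest / bin_hz))
    out = spectrum.copy()
    n = min(len(spectrum) - i_orig, len(spectrum) - i_dest)
    out[i_dest:i_dest + n] = spectrum[i_orig:i_orig + n]
    out[i_dest + n:] = 0.0
    return out

def shape_gain(spectrum, loss_db, discomfort_db, input_level_db):
    """Per-bin gain that fully compensates the loss but is limited so the
    amplified level never exceeds the threshold of discomfort (cf. [8])."""
    gain_db = np.minimum(loss_db, discomfort_db - input_level_db)
    return spectrum * 10.0 ** (gain_db / 20.0)
```

In a complete system, loss_db, discomfort_db and input_level_db in shape_gain would be per-bin curves obtained from the interpolated audiogram and from the level of the current frame spectrum.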
This last step, the spectral shaping, was still under development in our digital hearing aid system.

Fig. 2: Input vs. output frequency curves

Fig. 3: Comparison of frequency lowering schemes
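As a side note, the mapping of Eq. (1) as reconstructed above can be checked numerically. The short sketch below (our own illustration, with the function name and printed values as assumptions) evaluates the kind of curves that Fig. 2 plots for K = 2, 3 and 4, and shows that with a = (K - 1)/(K + 1) the factor (1 - a)/(1 + a) reduces to 1/K, so that low frequencies are barely compressed.

```python
import numpy as np

# Sketch of Eq. (1) as reconstructed above, checked against the stated
# choice a = (K - 1)/(K + 1).

FS = 16000.0  # sampling rate used in the paper (Hz)

def original_frequency(f_out, K, fs=FS):
    """Input (original) frequency that Eq. (1) maps to the compressed
    output frequency f_out, for compression factor K."""
    a = (K - 1.0) / (K + 1.0)                 # then (1 - a) / (1 + a) == 1 / K
    return (fs / np.pi) * np.arctan(((1.0 - a) / (1.0 + a))
                                    * np.tan(K * np.pi * f_out / fs))

# Reproduce the behaviour shown in Fig. 2: for K = 4 the output band 0-2 kHz
# covers the whole 0-8 kHz input band, while frequencies below about 1 kHz
# are almost unchanged.
f_out = np.array([250.0, 500.0, 1000.0, 1500.0, 2000.0])
for K in (2, 3, 4):
    print(K, np.round(original_frequency(f_out, K)))
```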

Part (a) of Fig. 3 illustrates the original FFT spectrum of a speech frame, part (b) shows the same frame compressed by a factor K = 4, and part (c) presents the frame after frequency shifting. It is important to observe that in the last case (frequency shifting) the shape of the spectrum is preserved, which does not happen with frequency compression, where a large amount of shape distortion is clearly visible, although the low-frequency information is still preserved.

3. Results

A. Preliminary Qualitative Tests
The two frequency-lowering algorithms have not yet been tested with hearing-impaired subjects because their final spectral shaping stage is not completely developed, as mentioned in the last paragraph of the previous section. But we obtained some preliminary results with normal listeners, considering first only the qualitative aspect of the processed speech. In this case, a simple low-pass filtering process simulates the losses above the frequency at which there is no more residual hearing; in this preliminary qualitative test the cutoff frequency was fixed at 2 kHz.

The experiment consisted of submitting the speech signal to the two frequency-lowering algorithms. The resulting signals were then presented to two normal-hearing subjects, one man and one woman. The listeners knew nothing about the origin of the signals and were asked to rank them according to their intelligibility. In this preliminary test, only two speech signals were submitted to the algorithms. The original and processed spectrograms of one of these signals (the pronunciation of the words "loose management") are shown in Fig. 4, where the visual difference between the two frequency-lowering algorithms can again be appreciated. As expected, only the fricative speech sounds were frequency lowered by both algorithms. The only exception is the phone [l], which is not a fricative but a lateral approximant; in this case, however, its pronunciation had high-frequency energy, as can be observed in the spectrogram of the original speech signal.

The preferences of the listeners are listed in Table 1, where Signal 1 is the Portuguese word "pensando" (which means "thinking") and Signal 2 is the English words "loose management".

Table 1: Listeners' preferences

    Speech signal          Man    Woman
    Signal 1, low pass     1st    3rd
    Signal 1, compressed   3rd    2nd
    Signal 1, shifted      2nd    1st
    Signal 2, low pass     2nd    2nd
    Signal 2, compressed   3rd    3rd
    Signal 2, shifted      1st    1st

These preliminary results indicate that the frequency shifting method was preferred by the listeners over the frequency compression method. It is important to remark, however, that the subjective difference between the low-pass filtered, the frequency compressed and the frequency-shifted signals is very slight, as perceived by normal listeners.

Fig. 4: Spectrograms of "loose management"

B. Detailed Intelligibility Tests
The intelligibility test was performed with 20 listeners, 15 male and 5 female. Each of them heard 36 syllables randomly chosen from a database formed by the utterances of 6 speakers, 3 female and 3 male. The original database was formed by 21 different CV phonetic syllables, each composed of one of the 7 most commonly used fricative sounds of the Portuguese language ([ ], [ ], [ ], [ ], [ ], [ ], [ ]) and one of these 3 vowels: [ ], [ ] or [ ]. These syllables were pronounced once by each of the 6 speakers, so the original database was formed by 126 utterances.
Each of these utterances generated 9 different processed WAVE files: the original, the frequency compressed and the frequency shifted versions of the syllable, each passed through 3 different low-pass filters with cutoff frequencies of 1.5, 2 and 2.5 kHz, forming a final speech database composed of 1134 WAVE files.
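For illustration only, the composition of this test database can be enumerated with a small sketch; the file names, labels and helper structure are ours, not the paper's.

```python
import itertools

# Illustrative enumeration of the processed test database described above.
# File names and condition labels are placeholders, not the paper's.

PROCESSINGS = ("none", "compressed", "shifted")   # applied before low-pass filtering
CUTOFFS_KHZ = (1.5, 2.0, 2.5)                     # low-pass cutoff frequencies

# 6 speakers x 21 CV syllables = 126 original utterances.
utterances = [f"spk{s}_syl{c:02d}.wav" for s in range(1, 7) for c in range(1, 22)]
assert len(utterances) == 126

# 3 processings x 3 cutoffs = 9 conditions per utterance -> 1134 files.
conditions = list(itertools.product(PROCESSINGS, CUTOFFS_KHZ))
database = [(utt, proc, fc) for utt in utterances for proc, fc in conditions]
assert len(database) == 1134
```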

After hearing a randomly chosen phonetic syllable from the final database 3 times (with no information other than its sound), the listener had to choose one syllable from a list of 7 possibilities. The vowel is the correct one in all 7 alternatives, which means that the decision is made based only on the acoustic properties of the processed fricative sound. The results of this test are shown in Table 2, where the column "None" means no processing other than low-pass filtering, "Compression" means frequency compression and "Shifting" means frequency shifting. The first column lists all the possible fricatives for each of the 3 filter cutoff frequencies. The numbers in boldface correspond to the greatest percentage of correct decisions for each type of processing. Because the syllables presented to the listeners were chosen at random, some syllables were heard less often than others; even so, each of the 63 different processed fricatives corresponding to the cells of Table 2 was presented at least 5 times, and none was presented more than 15 times.

Table 2: Listeners' correct decisions (%)

    Processed syllable    None    Compression    Shifting
    [ ]                   ,5      72,7           62,5
    [ ]                   ,0      44,4           69,2
    [ ]                   ,7      53,3           58,3
    [ ]                   ,6      80,0           81,8
    [ ]                   ,0      71,4           66,7
    [ ]                   ,8      61,5           90,9
    [ ]                   ,0      28,6           33,3
    [ ]                   ,0      81,8           86,7
    [ ]                   ,2      62,5           77,8
    [ ]                   ,0      20,0           45,5
    [ ]                   ,4      62,5           55,6
    [ ]                   ,8      100,0          84,6
    [ ]                   ,8      50,0           8,3
    [ ]                   ,3      36,4           33,3
    [ ]                   ,9      41,7           25,0
    [ ]                   ,1      46,7           75,0
    [ ]                   ,0      60,0           40,0
    [ ]                   ,4      60,0           33,3
    [ ]                   ,7      40,0           12,5
    [ ]                   ,2      21,4           38,5
    [ ]                   ,6      38,5           75,0

These results are difficult to analyze if the set of syllables is considered as a whole, but it is interesting to analyze each fricative sound individually. For example, we can conclude from the results that for the phone [ ] the best option is to apply no further processing, whereas for the phone [ ] we conclude just the opposite: no processing leads to 0.0% intelligibility when the highest audible frequency is 1.5 kHz. In the case of the fricative sound [ ], the best solution is to apply our proposed frequency shifting algorithm. For all other situations, the optimal solution depends on the specific phone and the cutoff frequency considered.

4. Conclusion
It is necessary to finish the spectral shaping part of the system in order to submit the processed signals to hearing-impaired listeners. The slight difference in quality observed among the processed signals may be due to the fact that the difference between the original signal (with frequencies up to 8 kHz) and the low-pass filtered (2 kHz) signals is large. For an impaired subject, however, who has never had any perception of sounds with frequencies above 2 kHz, the difference between the processed signals may not be so slight.

Regarding the results of the intelligibility test, we can conclude that if a simple automatic phoneme classifier is incorporated into the system, it becomes possible to choose the best frequency-lowering algorithm to apply for each specific phone, given the maximum frequency at which there is some residual hearing. This is not difficult to do, considering the advances in the performance of automatic phoneme recognition algorithms in recent years. Finally, it is important to remark that, with all the processing being done in the frequency domain, both algorithms have proved to be fast enough for use in real-time applications.

References
[1] B. L. Hicks, L. D. Braida, and N. I. Durlach, "Pitch invariant frequency lowering with nonuniform spectral compression," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, New York), pp. -.
[2] C. M. Reed, B. L. Hicks, L. D. Braida, and N. I. Durlach, "Discrimination of speech processed by low-pass filtering and pitch-invariant frequency lowering," J. Acoust. Soc. Am., vol. 74, pp. -.
[3] C. M. Reed, K. I. Schultz, L. D. Braida, and N. I. Durlach, "Discrimination and identification of frequency-lowered speech in listeners with high-frequency hearing impairment," J. Acoust. Soc. Am., vol. 78, pp. -.
[4] P. Nelson and S. Revoile, "Detection of spectral peaks in noise: Effects of hearing loss and frequency regions," J. Acoust. Soc. Am.
[5] C. M. Aguilera Muñoz, P. B. Nelson, J. C. Rutledge, and A. Gago, "Frequency lowering processing for listeners with significant hearing loss," IEEE, pp. -.
[6] S. Frota, Fundamentos em Fonoaudiologia, 1st ed., vol. 1. Guanabara Koogan, 2001, pp. -.
[7] Y. A. Alsaka and B. McLean, "Spectral shaping for the hearing impaired," IEEE, pp. -, 1996.
[8] J. C. Tejero-Calado, P. B. Nelson, and J. C. Rutledge, "Combination compression and linear gain processing for digital hearing aids," IEEE, pp. -, 1998.
