FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED


Francisco J. Fraga, Alan M. Marotta
National Institute of Telecommunications, Santa Rita do Sapucaí - MG, Brazil

Abstract

A considerable percentage of listeners with severe hearing loss have audiograms in which the losses are high at high frequencies and low at low frequencies. For these patients, lowering the speech spectrum to the frequencies where there is some residual hearing could be a good solution to implement in digital hearing aids. In this paper we present two different frequency-lowering algorithms: frequency compression and frequency shifting. Results of subjective intelligibility tests have shown a slightly better performance of the frequency shifting method relative to the frequency compression method, although their performance depends markedly on which specific phonemes are being processed by the two algorithms.

Key Words: digital hearing aids, frequency lowering

1. Introduction

There are several kinds of hearing impairment. Sensorineural hearing losses can originate from defects in the cochlea, in the auditory nerve, or in both. These problems reduce the dynamic range of hearing: the threshold of hearing is elevated, but the threshold of discomfort (at which loudness becomes uncomfortable) is almost the same as for normal hearing listeners, or may even be lower. For some range of frequencies, the threshold of hearing can be so high that it equals the threshold of discomfort, i.e., it is impossible for the listener to hear any sound at those frequencies.

Hearing loss is more common for high frequency and mid frequency sounds (1 to 3 kHz) than for low frequency sounds. Frequently, there are only small losses at low frequencies (below 1 kHz) but almost absolute deafness above 1.5 or 2 kHz. These facts have led researchers to lower the spectrum of speech in order to match the residual low frequency hearing of listeners with high frequency impairments. Slow playback, vocoding, and zero crossing rate division are some of the methods that have been employed in the last decades. All of these methods involve signal distortion, more or less noticeable, generally depending on the amount of frequency shifting. Many of the lowering schemes have altered perceptually important characteristics of speech, such as temporal and rhythmic patterns, pitch, and durations of segmental elements.

Hicks et al. [1] carried out one of the most remarkable investigations of frequency lowering. Their technique involves pitch-synchronous, monotonic compression of the short-term spectral envelope, while at the same time avoiding some of the above-described problems observed in the other methods. Reed et al. [2] conducted consonant discrimination experiments on normal hearing listeners. They observed that Hicks's frequency lowering scheme performed better for fricative and affricate sounds than low pass filtering to an equivalent bandwidth. On the other hand, the performance of low pass filtering was better for vowels, semivowels and nasal sounds. For plosive sounds, both methods showed similar results. In general, the performance in the best frequency lowering conditions was almost the same as that obtained with low pass filtering to an equivalent bandwidth. Further, Reed et al. [3] extended the results of Hicks et al.'s system to listeners with high frequency impairment.
In general, the performance of the impaired subjects was inferior to that obtained by normal subjects. A few years ago, Nelson and Revoile [4] found that, relative to normal hearing listeners, those with moderate to severe hearing loss required approximately double the peak-to-valley depth for detection of spectral peaks in bands of noise when the signals have a high number of peaks per octave. Their findings revealed that detection of spectral peaks in noise is significantly related to consonant identification ability in listeners with moderate to severe hearing loss. All previously mentioned frequency-lowering schemes compress the speech spectrum into a narrower band of frequencies, increasing the number of peaks per octave while maintaining the peak-to-valley depth. According to Nelson and Revoile's investigation, applying sharpening processing to frequency-lowered speech may allow better detection of spectral peaks and better consonant identification. Recently, Muñoz et al. [5] combined sharpening (i.e., increasing the peak-to-valley depth) with frequency compression. They demonstrated that the processed speech improved the understanding of fricative and affricate sounds, while producing no significant change in the identification of vowels and other sounds by listeners with severe high frequency hearing loss.

Based on Nelson and Revoile's investigation, we hypothesize that the relatively poor performance of Hicks's and Muñoz's frequency lowering schemes is due to the increase in the number of peaks per octave, which is inherent to the frequency compression method used in those systems. In this paper, we propose a new frequency-lowering algorithm that does not increase the number of peaks per octave because it uses frequency shifting instead of frequency compression. Furthermore, the frequency shifting is applied only to fricative and affricate sounds, leaving all other types of sounds untouched, because it is only for fricative sounds that the frequency lowering technique brings real benefits, as has been demonstrated by all the previously mentioned works. We have also implemented a frequency compression algorithm based on Hicks's [1] and Muñoz's [5] ideas. Preliminary results of subjective preference (considering only the qualitative aspect of the processed speech) confirmed our hypothesis about the better performance of the frequency shifting method compared to the frequency compression method. But further subjective intelligibility tests with 20 subjects have clearly shown that their performance (now considering only the intelligibility of the speech) depends markedly on which specific phonemes are being processed by the two algorithms.

2. Methodology

A. Audiometric data acquisition and processing

The first step of both frequency-lowering algorithms consists in the audiometric data acquisition of the impaired subject. The audiometric exam is employed to measure the degree of hearing impairment of a given patient. In this exam, the listener is submitted to a perception test by continuously varying the sound pressure level (SPL) of a pure sinusoidal tone on a discrete frequency scale. The frequency values most frequently used are 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 6 kHz and 8 kHz. For each of these frequencies, the minimum SPL in dB at which the patient is capable of perceiving the sound is registered in a graph. The audiogram is the result of the audiometric exam, presented as a graph with the values in dB SPL for each of the discrete frequencies. This graph is done separately for each ear of the subject.

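For the processing steps that follow, it is convenient to keep the audiogram in a simple per-ear structure. The sketch below is only illustrative (the names and example threshold values are assumptions, not data from this study); it pairs each standard audiometric frequency with the measured loss in dB for each ear.

```python
# Illustrative audiogram container; names and sample values are assumptions.
AUDIOMETRIC_FREQS_HZ = [250, 500, 1000, 2000, 4000, 6000, 8000]

def make_audiogram(right_ear_db, left_ear_db):
    """Pair each standard audiometric frequency with the measured loss (dB) per ear."""
    return {
        "right": dict(zip(AUDIOMETRIC_FREQS_HZ, right_ear_db)),
        "left": dict(zip(AUDIOMETRIC_FREQS_HZ, left_ear_db)),
    }

# Hypothetical ski-slope loss: mild at low frequencies, profound above 2 kHz.
audiogram = make_audiogram(
    right_ear_db=[15, 20, 40, 95, 110, 115, 120],
    left_ear_db=[10, 25, 45, 90, 105, 115, 120],
)
```
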
Since the level of 0 dB SPL is considered the minimum sound pressure level for normal hearing, the positive values in dB registered on the vertical axis of the audiogram can be considered the hearing losses of the patient's ear. If the losses are equal to or lower than 20 dB, the subject is considered to have normal hearing. From 21 to 40 dB, the losses are classified as mild. Moderate losses are those greater than 40 dB but not exceeding 70 dB. From 71 to 90 dB we consider that the patient has severe hearing losses, and more than 95 dB of loss is classified as profound [6]. The threshold of discomfort, for normal or impaired listeners, is always below 120 dB SPL. Indeed, the threshold of discomfort for impaired subjects is commonly lower than for normal hearing subjects. Although less common, some audiograms show both the threshold of discomfort and the threshold of hearing [7], as can be observed in Fig. 1. In this figure, the points of the audiogram corresponding to the right ear are marked with a round symbol and those corresponding to the left ear with an X; these marks are used this way by audiologists worldwide [6]. The dynamic range of listening at each frequency is the threshold of discomfort minus the threshold of hearing.

Fig. 1: Ski-slope losses case

Based on the acquired audiometric data, the algorithm analyses the range of frequencies where there is still some residual hearing. The criterion used is the following: first, it is verified whether the patient has a ski-slope kind of loss, i.e., whether the losses increase with frequency. Only patients with this type of impairment can be aided by any frequency lowering method. After that, the first frequency at which there is a profound loss is determined. If this frequency is between 1.2 kHz and 3.4 kHz, a destination frequency, to which the high frequency spectrum will be shifted, is calculated. Otherwise, no frequency shifting is needed (residual hearing above 3.4 kHz) or profitable (residual hearing below 1.2 kHz). The destination frequency is taken as the geometric mean between 900 Hz and the highest frequency where there is still some residual hearing. The geometric mean was chosen empirically because it provides a good tradeoff between minimum spectrum distortion and maximum residual hearing profit. In order to obtain more accuracy in the loss thresholds, the points of the audiogram are linearly interpolated.

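A minimal sketch of this destination-frequency rule, reusing the audiogram structure sketched earlier. The linear interpolation, the 1.2-3.4 kHz window, the 900 Hz lower bound and the geometric mean follow the description above; the 95 dB profound-loss limit, the interpolation step and the function names are assumptions made here for illustration.

```python
import numpy as np

PROFOUND_LOSS_DB = 95.0   # assumed limit above which hearing is treated as profound

def destination_frequency(freqs_hz, losses_db, step_hz=10.0):
    """Return the destination frequency (Hz), or None when frequency shifting
    is not needed or not profitable, following the ski-slope criterion above."""
    freqs = np.asarray(freqs_hz, dtype=float)
    losses = np.asarray(losses_db, dtype=float)

    # Ski-slope check: losses must grow with frequency.
    if not np.all(np.diff(losses) >= 0):
        return None

    # Linear interpolation of the audiogram on a fine grid for better accuracy.
    grid = np.arange(freqs[0], freqs[-1] + step_hz, step_hz)
    interp_losses = np.interp(grid, freqs, losses)

    # First frequency where the loss becomes profound.
    profound = grid[interp_losses >= PROFOUND_LOSS_DB]
    if profound.size == 0:
        return None                        # residual hearing everywhere
    f_profound = float(profound[0])

    # Shifting only when this frequency lies between 1.2 kHz and 3.4 kHz.
    if not (1200.0 <= f_profound <= 3400.0):
        return None

    # Highest frequency with residual hearing (taken here as just below the
    # first profound-loss frequency), then geometric mean with 900 Hz.
    f_residual = f_profound - step_hz
    return float(np.sqrt(900.0 * f_residual))

# With the hypothetical right-ear audiogram above this gives about 1338 Hz:
# destination_frequency(AUDIOMETRIC_FREQS_HZ, list(audiogram["right"].values()))
```
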
B. Speech data acquisition and processing

The speech signal is sampled at a 16 kHz rate and Hamming windowed with 25 ms windows. These windows are 50% overlapped, which means that the signal is analyzed at a frame rate equal to the inverse of 12.5 ms. A 1024-point FFT is used to represent the high resolution short-time speech spectrum in the frequency domain.

If a ski-slope kind of loss was detected in the previous audiometric data analysis and the frequency-shifting criterion was matched, a destination frequency has already been determined. Then we have to find out, on a frame-by-frame basis, whether the short-time speech spectrum presents significant information at high frequencies that justifies the frequency shifting operation. The criterion used for shifting or not the short-time spectrum of each speech frame is based on a threshold: when the signal has high energy at high frequencies, the algorithm shifts this high frequency information to lower frequencies. The threshold is set so as to suppress the processing of all vowels, nasals and semivowels, while activating the frequency transposition for fricatives and affricates.

To decide which part of the spectrum will be shifted, the energies of 500 Hz bandwidth windows are calculated with 100 Hz spacing, from 1 kHz to 8 kHz, with the aim of finding an origin frequency. The origin frequency is the frequency 100 Hz below the beginning of the 500 Hz bandwidth window that has maximum energy. The part of the spectrum that will be transposed corresponds to the range of all frequencies above the origin frequency. This empirical criterion guarantees that the unavoidable distortion due to the frequency lowering operation will be profitable, because the most important part of the high energy information will be shifted to the limited range of frequencies above 1 kHz (therefore leaving the low frequency information untouched) but still below the highest frequency where the patient presents residual hearing.

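A per-frame sketch of the shifting path just described, using the framing above (16 kHz sampling, 1024-point FFT, so one bin every 15.625 Hz). The 500 Hz search windows, the 100 Hz spacing, the 1-8 kHz search range and the origin-frequency rule follow the text; the value of the high/low energy-ratio threshold, the 1 kHz band split used for that ratio and the function names are assumptions, since the exact threshold is not given here.

```python
import numpy as np

FS = 16000                # sampling rate (Hz)
NFFT = 1024               # FFT length
FREQ_RES = FS / NFFT      # Hz per FFT bin (15.625 Hz)
RATIO_THRESHOLD = 1.0     # assumed high/low frequency energy ratio threshold

def should_lower(mag):
    """Frame-level decision: lower only frames whose high-frequency (>1 kHz)
    energy dominates the low-frequency energy (fricatives and affricates)."""
    split = int(1000 / FREQ_RES)
    low = np.sum(mag[:split] ** 2)
    high = np.sum(mag[split:] ** 2)
    return high > RATIO_THRESHOLD * low

def origin_frequency(mag):
    """Origin frequency: 100 Hz below the start of the 500 Hz window
    (100 Hz steps, 1 to 8 kHz) holding the maximum energy."""
    starts = np.arange(1000, 7501, 100)
    energies = [np.sum(mag[int(f0 / FREQ_RES):int((f0 + 500) / FREQ_RES)] ** 2)
                for f0 in starts]
    return int(starts[int(np.argmax(energies))]) - 100

def shift_frame(spectrum, f_origin, f_dest):
    """Move the band above f_origin down so that it starts at f_dest,
    leaving the spectrum below f_dest untouched."""
    out = spectrum.copy()
    b_origin = int(f_origin / FREQ_RES)
    b_dest = int(f_dest / FREQ_RES)
    length = len(spectrum) - b_origin
    out[b_dest:b_dest + length] = spectrum[b_origin:]
    out[b_dest + length:] = 0.0            # nothing remains above the moved band
    return out

def process_frame(spectrum, f_dest):
    """spectrum: positive-frequency half of the 1024-point FFT of one frame;
    f_dest: destination frequency from the audiometric analysis (or None)."""
    mag = np.abs(spectrum)
    if f_dest is None or not should_lower(mag):
        return spectrum                    # vowels, nasals, semivowels: untouched
    return shift_frame(spectrum, origin_frequency(mag), f_dest)
```
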
For comparison, Hicks's frequency compression scheme [1] was also implemented, but applied only when the same frequency lowering criterion (high/low frequency energy ratio) used for transposition was matched, i.e., only for fricatives and affricates. The frequency compression was done by means of an equation defined in [2]. In practice, however, it is more useful to implement the inverse equation,

f_IN / f_S = (1/π) arctan[ ((1 - a)/(1 + a)) tan(K π f_OUT / f_S) ]    (1)

where f_IN is the original frequency, f_OUT is the corresponding compressed frequency, K is the frequency compression factor, a is the warping parameter and f_S is the sampling rate. For minimum distortion at low frequencies, the warping parameter must be chosen as a = (K - 1)/(K + 1). The compression factor K was determined according to the degree of loss presented by the listener. Fig. 2 shows the curves of equation (1) for K = 2, 3 and 4; in this figure we can see that the low frequency information (below 1000 Hz) is barely compressed.

After frequency shifting or compression (if it occurs), the FFT spectrum of each speech frame is multiplied by a gain factor, which is calculated for each frequency in order to fully compensate the hearing loss, unless the amplified sound pressure level would exceed the threshold of discomfort. In this case, the gain factor is limited to the amount required to keep the loudness below the threshold of discomfort. The way we implemented this spectral shaping process is similar to that described in [8]. This last step was still under development in our digital hearing aid system.

Fig. 2: Input vs. output frequency curves

Fig. 3: Comparison of frequency lowering schemes

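As an illustration, the inverse warping in equation (1) can be implemented as a bin-remapping rule: for each output bin below f_S/(2K), compute the corresponding input frequency and copy the nearest input bin. This is only one possible reading of equation (1); the nearest-bin lookup and the function names are assumptions.

```python
import numpy as np

def warped_input_frequency(f_out, k, fs=16000):
    """Equation (1): input frequency that maps to f_out under compression by
    factor k, with the warping parameter a = (k - 1) / (k + 1)."""
    a = (k - 1.0) / (k + 1.0)
    return (fs / np.pi) * np.arctan(((1.0 - a) / (1.0 + a))
                                    * np.tan(np.pi * k * f_out / fs))

def compress_spectrum(spectrum, k, fs=16000):
    """Fill each output bin below fs/(2k) from the input bin given by eq. (1);
    bins above fs/(2k) stay empty, since the compressed band ends there."""
    half = len(spectrum)                   # positive-frequency half of the FFT
    nfft = 2 * (half - 1)
    out = np.zeros_like(spectrum)
    for b_out in range(half):
        f_out = b_out * fs / nfft
        if f_out >= fs / (2.0 * k):        # beyond the compressed bandwidth
            break
        f_in = warped_input_frequency(f_out, k, fs)
        b_in = min(int(round(f_in * nfft / fs)), half - 1)
        out[b_out] = spectrum[b_in]
    return out

# With K = 4: an output bin at 250 Hz is read from about 253 Hz (barely moved),
# while an output bin at 1.5 kHz is read from about 2.77 kHz.
```

At low frequencies the mapping is close to the identity (the choice a = (K - 1)/(K + 1) makes its slope equal to one at 0 Hz), which is why the curves of Fig. 2 are almost straight below 1 kHz.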

Part (a) of Fig. 3 illustrates the original FFT spectrum of a speech frame, part (b) shows the same frame compressed by a factor K = 4, and part (c) presents the frame after frequency shifting. It is important to observe that in the last case (frequency shifting) the shape of the spectrum is preserved, which does not occur in the case of frequency compression, where we can clearly note a great amount of shape distortion, although the low frequency information is still preserved. These preliminary results indicate that the frequency shifting method was preferred by the listeners when compared with the frequency compression method. But it is important to remark that the subjective difference between the low pass filtered signal, the frequency compressed signal and the frequency-shifted signal is very slight, as perceived by normal listeners.

3. Results

A. Preliminary Qualitative Tests

The two frequency lowering algorithms have not yet been tested with hearing impaired subjects because their final spectral shaping part is not completely developed, as mentioned in the last paragraph of the previous section. But we obtained some preliminary results with normal listeners, considering first only the qualitative aspect of the processed speech. In this case, a simple low pass filtering process simulates the losses above the frequency where there is no more residual hearing. In this preliminary qualitative test, this cutoff frequency was fixed at 2 kHz.

The experiment we carried out consists of submitting the speech signal to the two frequency lowering algorithms. After that, the resulting signals were presented to two normal hearing subjects, one man and one woman. The listeners did not know anything about the origin of the signals and were asked to rank the signals according to their intelligibility. In this preliminary test, only two speech signals were submitted to the algorithms. The original and processed spectrograms of one of these speech signals (pronunciation of the words "loose management") are shown in Fig. 4, where we can again appreciate the visual difference between the two frequency lowering algorithms. As expected, only the fricative speech sounds were frequency lowered by both algorithms. The only exception is the phone [l], which is not a fricative but a lateral approximant; in this case, however, its pronunciation had high frequency energy, as can be observed in the spectrogram of the original speech signal. The preferences of the listeners are listed in Table 1, where Signal 1 is the Portuguese word "pensando" (which means "thinking") and Signal 2 is the English words "loose management".

Table 1: Listener's preferences

Speech signal       Man    Woman
Signal 1 low pass   1st    3rd
Signal 1 compr.     3rd    2nd
Signal 1 shifted    2nd    1st
Signal 2 low pass   2nd    2nd
Signal 2 compr.     3rd    3rd
Signal 2 shifted    1st    1st

Fig. 4: Spectrograms of "loose management"

B. Detailed Intelligibility Tests

The intelligibility test was performed with 20 listeners, 15 male and 5 female. Each of them heard 36 syllables randomly chosen from a database formed by the utterances of 6 speakers, 3 female and 3 male. The original database was formed by 21 different CV phonetic syllables, each composed of one of the 7 most commonly used fricative sounds of the Portuguese language ( [ ], [ ], [ ], [ ], [ ], [ ], [ ] ) and one of these 3 vowels: [ ], [ ] or [ ]. These syllables were pronounced once by each of the 6 speakers, therefore the original database was formed by 126 utterances.
Each of these utterances generates 9 different processed WAVE files: the original syllable, the frequency compressed syllable and the frequency shifted syllable, each passed through 3 different low pass filters with cutoff frequencies of 1.5, 2 and 2.5 kHz, forming a final speech database composed of 1134 WAVE files. After hearing a randomly chosen phonetic syllable from the final database 3 times (with no information other than its sound), the listener had to choose one syllable from a list of 7 possibilities. The vowel is the correct one in all 7 of these syllables, which means that the decision is made based only on the acoustic properties of the processed fricative sounds.

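The size of the final database follows directly from these numbers: 21 syllables x 6 speakers = 126 utterances, and 3 processing conditions x 3 cutoff frequencies = 9 versions of each, giving 126 x 9 = 1134 files. A small illustrative enumeration (the labels and naming below are assumptions, not taken from the test material):

```python
from itertools import product

PROCESSINGS = ["none", "compression", "shifting"]   # "none" = low pass filtering only
CUTOFFS_KHZ = [1.5, 2.0, 2.5]

def stimulus_conditions(n_syllables=21, n_speakers=6):
    """Enumerate every (utterance, processing, cutoff) condition of the test."""
    utterances = [f"syl{s:02d}_spk{k}"
                  for s, k in product(range(1, n_syllables + 1),
                                      range(1, n_speakers + 1))]
    return list(product(utterances, PROCESSINGS, CUTOFFS_KHZ))

print(len(stimulus_conditions()))   # 1134 = 126 utterances x 9 processed versions
```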

The results of this test are shown in Table 2, where the column None means no processing other than low pass filtering, Compression means frequency compression and Shifting means frequency shifting. The first column lists all the possible fricatives for each of the 3 filter cutoff frequencies. In the table, the numbers marked in boldface correspond to the greatest percentage of correct decisions for each type of processing. Because the syllables presented to the listeners were chosen at random, some syllables were heard fewer times than others, but each of the 63 different processed fricatives corresponding to the cells of Table 2 was presented at least 5 times and none of them was presented more than 15 times.

Table 2: Listener's correct decisions (%)

Processed syllable   None    Compression   Shifting
[ ]                   ,5      72,7          62,5
[ ]                   ,0      44,4          69,2
[ ]                   ,7      53,3          58,3
[ ]                   ,6      80,0          81,8
[ ]                   ,0      71,4          66,7
[ ]                   ,8      61,5          90,9
[ ]                   ,0      28,6          33,3
[ ]                   ,0      81,8          86,7
[ ]                   ,2      62,5          77,8
[ ]                   ,0      20,0          45,5
[ ]                   ,4      62,5          55,6
[ ]                   ,8      100,0         84,6
[ ]                   ,8      50,0          8,3
[ ]                   ,3      36,4          33,3
[ ]                   ,9      41,7          25,0
[ ]                   ,1      46,7          75,0
[ ]                   ,0      60,0          40,0
[ ]                   ,4      60,0          33,3
[ ]                   ,7      40,0          12,5
[ ]                   ,2      21,4          38,5
[ ]                   ,6      38,5          75,0

These results are difficult to analyze if we consider the set of syllables as a whole, but it is interesting to analyze each fricative sound in particular. For example, we can conclude from the results that for the phone [ ] it is better to do no further processing, whereas for the phone [ ] we conclude just the opposite: no processing leads to 0.0% intelligibility when the highest audible frequency is 1.5 kHz. In the case of the fricative sound [ ], the better solution is to apply our proposed frequency shifting algorithm. For all other situations, the optimal solution depends on the specific phone and cutoff frequency considered.

4. Conclusion

It is necessary to finish the spectral shaping part of the system in order to submit the processed signals to hearing impaired listeners. The slight difference in quality observed among the processed signals may be due to the fact that the difference between the original signal (with frequencies up to 8 kHz) and the low pass filtered signal (2 kHz) is large for normal listeners; for an impaired subject who never had any perception of sounds at frequencies above 2 kHz, the difference between the processed signals may not be so slight.

Regarding the results of the intelligibility test, we can conclude that if we incorporate a simple automatic phoneme classifier in the system, it is possible to choose the better frequency lowering algorithm to be applied to each specific phone, given the maximum frequency where there is some residual hearing. This is not difficult to do, considering the advances observed in the performance of automatic phoneme recognition algorithms over the last years. Finally, it is important to remark that, with all the processing done in the frequency domain, both algorithms have proved fast enough to enable their use in real time applications.

References

[1] B. L. Hicks, L. D. Braida, and N. I. Durlach, "Pitch invariant frequency lowering with nonuniform spectral compression", in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, New York).
[2] C. M. Reed, B. L. Hicks, L. D. Braida, and N. I. Durlach, "Discrimination of speech processed by lowpass filtering and pitch-invariant frequency lowering", J. Acoust. Soc. Am., vol. 74.
[3] C. M. Reed, K. I. Schultz, L. D. Braida, and N. I. Durlach, "Discrimination and identification of frequency-lowered speech in listeners with high-frequency hearing impairment", J. Acoust. Soc. Am., vol. 78.
[4] P. Nelson and S. Revoile, "Detection of spectral peaks in noise: Effects of hearing loss and frequency regions", J. Acoust. Soc. Am.
[5] C. M. Aguilera Muñoz, P. B. Nelson, J. C. Rutledge, and A. Gago, "Frequency lowering processing for listeners with significant hearing loss", IEEE.
[6] S. Frota, Fundamentos em Fonoaudiologia, 1st ed., vol. 1. Guanabara Koogan, 2001.
[7] Y. A. Alsaka and B. McLean, "Spectral Shaping for the Hearing Impaired", IEEE, 1996.
[8] J. C. Tejero-Calado, P. B. Nelson, and J. C. Rutledge, "Combination compression and linear gain processing for digital hearing aids", IEEE.
