FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED

Francisco J. Fraga and Alan M. Marotta
National Institute of Telecommunications (Inatel), Santa Rita do Sapucaí - MG, Brazil

Abstract

A considerable percentage of listeners with severe hearing loss have audiograms where the losses are high at high frequencies and low at low frequencies. For these patients, lowering the speech spectrum to the frequencies where there is some residual hearing could be a good solution to implement in digital hearing aids. In this paper we present two different frequency-lowering algorithms: frequency compression and frequency shifting. Results of subjective intelligibility tests show a slightly better performance of the frequency shifting method relative to the frequency compression method, although the performance of both depends markedly on which specific phonemes are being processed.

Keywords: digital hearing aids, frequency lowering

1. Introduction

There are several kinds of hearing impairment. Sensorineural hearing losses can originate from defects in the cochlea, in the auditory nerve, or in both. These problems reduce the dynamic range of hearing: the threshold of hearing is elevated, but the threshold of discomfort (at which loudness becomes uncomfortable) is almost the same as for normal hearing listeners, or may even be lower. For some range of frequencies, the threshold of hearing can be so high that it equals the threshold of discomfort, i.e., it is impossible for the listener to hear any sound at those frequencies.

Hearing loss is more common for high-frequency and mid-frequency sounds (1 to 3 kHz) than for low-frequency sounds. Frequently, there are only small losses at low frequencies (below 1 kHz) but almost absolute deafness above 1.5 or 2 kHz. These facts have led researchers to lower the spectrum of speech in order to match the residual low-frequency hearing of listeners with high-frequency impairments. Slow playback, vocoding and zero-crossing-rate division are some of the methods employed over the last decades. All of these methods introduce signal distortion, more or less noticeable, generally depending on the amount of frequency shifting. Many of the lowering schemes have altered perceptually important characteristics of speech, such as temporal and rhythmic patterns, pitch and the durations of segmental elements.

Hicks et al. [1] carried out one of the most remarkable investigations of frequency lowering. Their technique involves pitch-synchronous, monotonic compression of the short-term spectral envelope, while avoiding some of the above-described problems observed in the other methods. Reed et al. [2] conducted consonant discrimination experiments with normal hearing listeners. They observed that Hicks' frequency-lowering scheme performed better for fricative and affricate sounds than low-pass filtering to an equivalent bandwidth. On the other hand, low-pass filtering performed better for vowels, semivowels and nasal sounds. For plosive sounds, both methods gave similar results. In general, performance in the best frequency-lowering conditions was almost the same as that obtained with low-pass filtering to an equivalent bandwidth. Later, Reed et al. [3] extended the evaluation of Hicks et al.'s system to listeners with high-frequency impairment.
In general, the performance of the impaired subjects was inferior to that obtained by the normal-hearing subjects. A few years ago, Nelson and Revoile [4] found that, relative to normal hearing listeners, listeners with moderate to severe hearing loss required approximately double the peak-to-valley depth to detect spectral peaks in bands of noise when the signals had a high number of peaks per octave. Their findings revealed that the detection of spectral peaks in noise is significantly related to consonant identification ability in listeners with moderate to severe hearing loss. All of the previously mentioned frequency-lowering schemes compress the speech spectrum into a narrower band of frequencies, increasing the number of peaks per octave while maintaining the peak-to-valley depth. According to Nelson and Revoile's investigation, applying sharpening processing to frequency-lowered speech may therefore allow better detection of spectral peaks and better consonant identification. Recently, Muñoz et al. [5] combined sharpening (i.e., increasing the peak-to-valley depth) with frequency compression.

They demonstrated that the processed speech improved the understanding of fricative and affricate sounds, while producing no significant change in the identification of vowels and other sounds by listeners with severe high-frequency hearing loss. Based on Nelson and Revoile's investigation, we hypothesize that the relatively poor performance of Hicks' and Muñoz's frequency-lowering schemes is due to the increase in the number of peaks per octave, which is inherent to the frequency compression method used in those systems. In this paper, we propose a new frequency-lowering algorithm that does not increase the number of peaks per octave because it uses frequency shifting instead of frequency compression. Furthermore, the frequency shifting is applied only to fricative and affricate sounds, leaving all other types of sound untouched, because it is only for fricative sounds that frequency lowering brings real benefits, as demonstrated by all the previously mentioned works. For comparison, we have also implemented a frequency compression algorithm based on Hicks' [1] and Muñoz's [5] ideas.

Preliminary results of subjective preference (considering only the qualitative aspect of the processed speech) confirmed our hypothesis about the better performance of the frequency shifting method compared to the frequency compression method. However, further subjective intelligibility tests with 20 subjects clearly showed that the performance of the two algorithms (now considering only the intelligibility of the speech) depends markedly on which specific phonemes are being processed.

2. Methodology

A. Audiometric data acquisition and processing

The first step of both frequency-lowering algorithms is the acquisition of the audiometric data of the impaired subject. The audiometric exam measures the degree of hearing impairment of a given patient. In this exam, the listener is submitted to a perception test in which the sound pressure level (SPL) of a pure sinusoidal tone is continuously varied on a discrete frequency scale. The frequency values most frequently used are 250 Hz, 500 Hz, 1 kHz, 2 kHz, 4 kHz, 6 kHz and 8 kHz. For each of these frequencies, the minimum SPL in dB at which the patient is capable of perceiving the sound is registered. The audiogram is the result of the audiometric exam: a graph with these values in dB SPL at each of the discrete frequencies, drawn separately for each ear of the subject.
Since the level of 0 dB SPL is considered the minimum sound pressure level for normal hearing, the positive values in dB registered on the vertical axis of the audiogram can be read as the hearing losses of the patient's ear. If the losses are equal to or lower than 20 dB, the subject is considered to have normal hearing. From 21 to 40 dB, the losses are classified as mild. Moderate losses are those greater than 40 dB and not exceeding 70 dB. From 71 to 90 dB we consider that the patient has severe hearing losses, and more than 95 dB of loss is classified as profound [6]. The threshold of discomfort, for normal or impaired listeners, is always below 120 dB SPL; indeed, the threshold of discomfort of impaired subjects is commonly lower than that of normal hearing subjects. Although less common, some audiograms show both the threshold of discomfort and the threshold of hearing [7], as can be observed in Fig. 1. In this figure, the points of the audiogram corresponding to the right ear are marked with a circle and those corresponding to the left ear with an X; these marks are used by audiologists worldwide [6]. The dynamic range of hearing at each frequency is the threshold of discomfort minus the threshold of hearing.

Fig. 1: Ski-slope losses case

Based on the acquired audiometric data, the algorithm analyses the range of frequencies where there is still some residual hearing. The criterion is the following: first, it is verified whether the patient has a ski-slope kind of loss, i.e., whether the losses increase with frequency. Only patients with this type of impairment can be aided by a frequency lowering method. After that, the first frequency at which there is a profound loss is determined. If this frequency lies between 1.2 kHz and 3.4 kHz, a destination frequency, to which the high-frequency spectrum will be shifted, is calculated. Otherwise, no frequency shifting is needed (residual hearing above 3.4 kHz) or profitable (residual hearing below 1.2 kHz). The destination frequency is taken as the geometric mean between 900 Hz and the highest frequency at which there is still some residual hearing. The geometric mean was chosen empirically because it provides a good tradeoff between minimum spectrum distortion and maximum use of the residual hearing. To obtain more accuracy in the loss thresholds, the points of the audiogram are linearly interpolated.
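To make this criterion concrete, the short Python sketch below (ours, not from the paper; the audiogram values, the 10 Hz interpolation grid and the 95 dB profound-loss cutoff are illustrative assumptions) checks for a ski-slope configuration and computes the destination frequency as the geometric mean of 900 Hz and the highest frequency with residual hearing.

import math
import numpy as np

# Hypothetical ski-slope audiogram: hearing loss (dB) at the standard audiometric frequencies.
audiogram = {250: 15, 500: 20, 1000: 40, 2000: 90, 4000: 110, 6000: 115, 8000: 120}

PROFOUND_DB = 95.0   # losses beyond this value are treated as profound, following Sec. 2A

def destination_frequency(audiogram):
    """Return the destination frequency in Hz, or None if frequency lowering is not indicated."""
    freqs = sorted(audiogram)
    losses = [audiogram[f] for f in freqs]
    if any(nxt < cur for cur, nxt in zip(losses, losses[1:])):
        return None                      # not a ski-slope loss
    # Linear interpolation of the audiogram points for better accuracy (10 Hz grid, our choice).
    grid = np.arange(freqs[0], freqs[-1] + 1, 10.0)
    loss = np.interp(grid, freqs, losses)
    profound = grid[loss >= PROFOUND_DB]
    if profound.size == 0:
        return None                      # residual hearing everywhere: no lowering needed
    first_profound = profound[0]
    if not 1200.0 <= first_profound <= 3400.0:
        return None                      # shifting not needed (>3.4 kHz) or not profitable (<1.2 kHz)
    highest_residual = grid[loss < PROFOUND_DB][-1]
    return math.sqrt(900.0 * highest_residual)   # geometric mean of 900 Hz and the residual limit

print(destination_frequency(audiogram))          # about 1497 Hz for the audiogram above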

B. Speech data acquisition and processing

The speech signal is sampled at a 16 kHz rate and Hamming windowed with 25 ms windows. The windows are 50% overlapped, which means that the signal is analyzed at a frame rate equal to the inverse of 12.5 ms. A 1024-point FFT is used to represent the high-resolution short-time speech spectrum in the frequency domain. If a ski-slope kind of loss was detected in the audiometric data analysis and the frequency-shifting criterion was matched, a destination frequency has already been determined. We then have to find out, on a frame-by-frame basis, whether the short-time speech spectrum presents significant information at high frequencies that justifies the frequency shifting operation. The criterion used for shifting (or not) the short-time spectrum of each speech frame is based on a threshold: when the signal has high energy at high frequencies, the algorithm shifts this high-frequency information to lower frequencies. The threshold is set so as to suppress the processing of all vowels, nasals and semivowels, while activating the frequency transposition for fricatives and affricates. To decide which part of the spectrum will be shifted, the energies of 500 Hz bandwidth windows, spaced 100 Hz apart from 1 kHz to 8 kHz, are calculated in order to find an origin frequency. The origin frequency is the frequency 100 Hz below the beginning of the 500 Hz window that has maximum energy. The part of the spectrum that is transposed corresponds to all frequencies above the origin frequency. This empirical criterion guarantees that the unavoidable distortion due to the frequency lowering operation will pay off, because the most important part of the high-energy information is shifted to the limited range of frequencies above 1 kHz (therefore leaving the low-frequency information untouched) but still below the highest frequency at which the patient presents residual hearing.
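A minimal sketch of this frame-level decision follows (Python; the function names and the energy-ratio threshold of 1.0 are ours, since the paper does not give the numeric value of its threshold).

import numpy as np

FS = 16000                       # sampling rate (Hz)
FRAME_LEN = int(0.025 * FS)      # 25 ms Hamming window -> 400 samples
HOP = FRAME_LEN // 2             # 50% overlap -> one analysis frame every 12.5 ms
NFFT = 1024
RATIO_THRESHOLD = 1.0            # high/low energy ratio; illustrative value, not from the paper

def band_energy(power, f_lo, f_hi):
    """Energy of the power spectrum between f_lo and f_hi (Hz)."""
    lo = int(round(f_lo * NFFT / FS))
    hi = int(round(f_hi * NFFT / FS))
    return float(np.sum(power[lo:hi]))

def origin_frequency(frame):
    """Return the origin frequency (Hz) if the frame should be lowered, otherwise None."""
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)), NFFT)
    power = np.abs(spectrum) ** 2
    # Fricative/affricate detector: significant energy above 1 kHz relative to below 1 kHz.
    if band_energy(power, 1000, 8000) < RATIO_THRESHOLD * band_energy(power, 0, 1000):
        return None                            # vowel, nasal or semivowel: leave the frame untouched
    # Scan 500 Hz windows every 100 Hz, from 1 kHz up to 8 kHz, for the most energetic one.
    starts = np.arange(1000, 7501, 100)
    energies = [band_energy(power, f, f + 500) for f in starts]
    return int(starts[int(np.argmax(energies))]) - 100   # 100 Hz below the best window

# Toy usage: a frame of high-pass-like noise triggers the transposition branch.
rng = np.random.default_rng(0)
frame = np.diff(rng.standard_normal(FRAME_LEN + 1))
print(origin_frequency(frame))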
For comparison, Hicks' frequency compression scheme was also implemented, but applied only when the same frequency-lowering criterion (high/low frequency energy ratio) used for transposition was matched, i.e., only for fricatives and affricates. The frequency compression was done by means of an equation defined in [2]. In practice, it is more useful to implement the inverse equation, which is

    f_IN = (S / π) · arctan[ ((1 - a) / (1 + a)) · tan(π K f_OUT / S) ]        (1)

where f_IN is the original frequency, f_OUT is the corresponding compressed frequency, K is the frequency compression factor, a is the warping parameter and S is the sampling rate. For minimum distortion at low frequencies, the warping parameter must be chosen as a = (K - 1)/(K + 1). The compression factor K was determined according to the degree of loss presented by the listener. Fig. 2 shows the curves of equation (1) for K = 2, 3 and 4; in this figure we can see that the low-frequency information (below 1000 Hz) is barely compressed.

Fig. 2: Input vs. output frequency curves

After frequency shifting or compression (when it occurs), the FFT spectrum of each speech frame is multiplied by a gain factor, calculated for each frequency so as to fully compensate the hearing loss, unless the amplified sound pressure level would exceed the threshold of discomfort; in that case the gain factor is limited to the amount required to keep the loudness below the threshold of discomfort. The way we implemented this spectral shaping process is similar to that described in [8]. This last step was still under development in our digital hearing aid system.
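As we read equation (1), the inverse warping and the discomfort-limited gain can be sketched as follows (Python; the function names, the example frequencies and the 120 dB SPL default ceiling are ours, the actual threshold of discomfort being patient-specific). With S = 16 kHz and K = 4, output frequencies below about 1 kHz map back almost one-to-one, while output frequencies approaching S/(2K) = 2 kHz map back toward the 8 kHz Nyquist limit, i.e., the whole input band is squeezed into roughly the lowest 2 kHz.

import math

def warp_inverse(f_out, K, S=16000.0):
    """Equation (1): input frequency to read when building compressed output frequency f_out."""
    a = (K - 1.0) / (K + 1.0)            # warping parameter for minimum low-frequency distortion
    x = (1.0 - a) / (1.0 + a) * math.tan(math.pi * K * f_out / S)
    return (S / math.pi) * math.atan(x)

for f_out in (250, 500, 1000, 1990):
    print(f_out, round(warp_inverse(f_out, K=4)))
# Low output frequencies map back almost 1:1; near 2 kHz the mapping reaches
# toward the 8 kHz Nyquist frequency.

def limited_gain(loss_db, input_level_db, discomfort_db=120.0):
    """Per-frequency gain in dB: compensate the loss fully, unless the amplified
    level would exceed the threshold of discomfort (sketch of the spectral
    shaping step, cf. [8]; 120 dB SPL is only a stand-in default)."""
    return min(loss_db, discomfort_db - input_level_db)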

Fig. 3: Comparison of frequency lowering schemes

Part (a) of Fig. 3 shows the original FFT spectrum of a speech frame, part (b) shows the same frame compressed by a factor K = 4, and part (c) presents the frame after frequency shifting. It is important to observe that in the last case (frequency shifting) the shape of the spectrum is preserved, which does not happen with frequency compression, where a large amount of shape distortion is clearly visible, although the low-frequency information is still preserved.

3. Results

A. Preliminary Qualitative Tests

The two frequency lowering algorithms have not yet been tested with hearing impaired subjects because the final spectral shaping stage is not completely developed, as mentioned at the end of the previous section. However, we obtained some preliminary results with normal hearing listeners, considering first only the qualitative aspect of the processed speech. In this case, a simple low-pass filtering process simulates the losses above the frequency where there is no longer any residual hearing; in this preliminary qualitative test the cutoff frequency was fixed at 2 kHz.

The experiment consisted of submitting the speech signal to the two frequency lowering algorithms. The resulting signals were then played to two normal hearing subjects, one man and one woman. The listeners knew nothing about the origin of the signals and were asked to rank them according to their intelligibility. In this preliminary test, only two speech signals were submitted to the algorithms. The original and processed spectrograms of one of these signals (the pronunciation of the words "loose management") are shown in Fig. 4, where the visual difference between the two frequency lowering algorithms can again be appreciated. As expected, only the fricative speech sounds were frequency lowered by both algorithms. The only exception is the phone [l], which is not a fricative but a lateral approximant; in this case, however, its pronunciation had high-frequency energy, as can be observed in the spectrogram of the original speech signal. These preliminary results indicate that the frequency shifting method was preferred by the listeners over the frequency compression method. It is important to remark, however, that the subjective difference between the low-pass filtered signal, the frequency compressed signal and the frequency shifted signal is very slight, as perceived by normal listeners.

The preferences of the listeners are listed in Table 1, where Signal 1 is the Portuguese word "pensando" (which means "thinking") and Signal 2 is the English words "loose management".

Table 1: Listeners' preferences

Speech signal        Man    Woman
Signal 1, low pass   1st    3rd
Signal 1, compr.     3rd    2nd
Signal 1, shifted    2nd    1st
Signal 2, low pass   2nd    2nd
Signal 2, compr.     3rd    3rd
Signal 2, shifted    1st    1st

Fig. 4: Spectrograms of "loose management"

B. Detailed Intelligibility Tests

The intelligibility test was performed with 20 listeners, 15 male and 5 female. Each of them heard 36 syllables randomly chosen from a database formed by the utterances of 6 speakers, 3 female and 3 male. The original database was formed by 21 different CV phonetic syllables, each composed of one of the 7 most commonly used fricative sounds of the Portuguese language followed by one of 3 vowels. These syllables were pronounced once by each of the 6 speakers, so the original database was formed by 126 utterances.
Each of these utterances generated 9 different processed WAVE files: the original syllable, the frequency compressed syllable and the frequency shifted syllable, each passed through 3 different low-pass filters with cutoff frequencies of 1.5, 2 and 2.5 kHz, forming a final speech database of 1134 WAVE files. After hearing a randomly chosen phonetic syllable from the final database 3 times (with no additional information beyond its sound), the listener had to choose one syllable from a list of 7 possibilities. The vowel was the correct one in all 7 alternatives, which means that the decision was made based only on the acoustic properties of the processed fricative sound.
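The combinatorics of this test database can be sketched as follows (Python; the Butterworth low-pass filter and the index-based labelling are our stand-ins, since the paper does not describe its filter or file naming).

from itertools import product

import numpy as np
from scipy.signal import butter, lfilter

FS = 16000
CUTOFFS_HZ = (1500, 2000, 2500)
CONDITIONS = ("none", "compression", "shifting")

def low_pass(signal, cutoff_hz, fs=FS, order=6):
    """Simulate the absence of residual hearing above cutoff_hz (stand-in filter)."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return lfilter(b, a, signal)

# 7 fricatives x 3 vowels x 6 speakers = 126 original CV utterances (labelled by index).
utterances = list(product(range(7), range(3), range(6)))
# Each utterance appears in 3 processing conditions, each low-pass filtered at 3 cutoffs.
stimuli = list(product(utterances, CONDITIONS, CUTOFFS_HZ))
print(len(utterances), len(stimuli))     # 126 1134

# Example of the low-pass step applied to one second of noise.
filtered = low_pass(np.random.default_rng(0).standard_normal(FS), cutoff_hz=2000)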

The results of this test are shown in Table 2, where the column None means no processing other than low-pass filtering, Compression means frequency compression and Shifting means frequency shifting. The first two columns identify the fricative (numbered 1 to 7, in table order) and the filter cutoff frequency in Hz; in each row, the highest of the three percentages indicates the best-performing type of processing. Because the syllables presented to the listeners were chosen at random, some syllables were heard fewer times than others, but each of the 63 different processed fricatives corresponding to the cells of Table 2 was presented at least 5 times and none more than 15 times.

Table 2: Listeners' correct decisions (%)

Fricative  Cutoff (Hz)   None    Compression   Shifting
1          1500          61.5    72.7          62.5
1          2000          40.0    44.4          69.2
1          2500          85.7    53.3          58.3
2          1500          78.6    80.0          81.8
2          2000          100.0   71.4          66.7
2          2500          77.8    61.5          90.9
3          1500          25.0    28.6          33.3
3          2000          50.0    81.8          86.7
3          2500          69.2    62.5          77.8
4          1500          0.0     20.0          45.5
4          2000          44.4    62.5          55.6
4          2500          77.8    100.0         84.6
5          1500          53.8    50.0          8.3
5          2000          73.3    36.4          33.3
5          2500          76.9    41.7          25.0
6          1500          57.1    46.7          75.0
6          2000          70.0    60.0          40.0
6          2500          44.4    60.0          33.3
7          1500          66.7    40.0          12.5
7          2000          46.2    21.4          38.5
7          2500          55.6    38.5          75.0

These results are difficult to analyze if the set of syllables is considered as a whole, but it is interesting to analyze each fricative sound individually. For example, we can conclude from the results that for fricative 5 the best option is to apply no further processing, whereas for fricative 4 we reach just the opposite conclusion: no processing leads to 0.0 % intelligibility when the highest audible frequency is 1.5 kHz. In the case of fricative 3, the best solution is to apply the proposed frequency shifting algorithm. For all other situations, the optimal solution depends on the specific phone and cutoff frequency considered.
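This per-phone reading of Table 2 can be turned directly into a selection rule, anticipating the conclusion below. A minimal sketch (Python; fricatives are referred to by their row-group number 1 to 7, as in the table above) simply stores the scores and picks the best-performing processing for each fricative and cutoff frequency.

# Table 2 scores (%): (fricative number, cutoff Hz) -> (none, compression, shifting)
TABLE2 = {
    (1, 1500): (61.5, 72.7, 62.5), (1, 2000): (40.0, 44.4, 69.2), (1, 2500): (85.7, 53.3, 58.3),
    (2, 1500): (78.6, 80.0, 81.8), (2, 2000): (100.0, 71.4, 66.7), (2, 2500): (77.8, 61.5, 90.9),
    (3, 1500): (25.0, 28.6, 33.3), (3, 2000): (50.0, 81.8, 86.7), (3, 2500): (69.2, 62.5, 77.8),
    (4, 1500): (0.0, 20.0, 45.5),  (4, 2000): (44.4, 62.5, 55.6), (4, 2500): (77.8, 100.0, 84.6),
    (5, 1500): (53.8, 50.0, 8.3),  (5, 2000): (73.3, 36.4, 33.3), (5, 2500): (76.9, 41.7, 25.0),
    (6, 1500): (57.1, 46.7, 75.0), (6, 2000): (70.0, 60.0, 40.0), (6, 2500): (44.4, 60.0, 33.3),
    (7, 1500): (66.7, 40.0, 12.5), (7, 2000): (46.2, 21.4, 38.5), (7, 2500): (55.6, 38.5, 75.0),
}
METHODS = ("none", "compression", "shifting")

def best_method(fricative, cutoff_hz):
    """Pick the frequency-lowering strategy with the highest score in Table 2."""
    scores = TABLE2[(fricative, cutoff_hz)]
    return METHODS[max(range(3), key=lambda i: scores[i])]

print(best_method(4, 1500))   # 'shifting' (plain low-pass filtering scored 0 % here)
print(best_method(5, 2000))   # 'none'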
4. Conclusion

It is necessary to finish the spectral shaping part of the system in order to submit the processed signals to hearing impaired listeners. The slight difference in quality observed among the processed signals may be due to the fact that the difference between the original signal (with frequencies up to 8 kHz) and the low-pass filtered (2 kHz) signals is large; for an impaired subject who has never had any perception of sounds with frequencies above 2 kHz, the difference between the processed signals may not be so slight.

Regarding the results of the intelligibility test, we conclude that if a simple automatic phoneme classifier is incorporated into the system, it becomes possible to choose the better frequency lowering algorithm for each specific phone, given the maximum frequency at which there is some residual hearing. This is not difficult to do, considering the advances in the performance of automatic phoneme recognition algorithms over recent years. Finally, it is important to remark that, with all the processing being done in the frequency domain, both algorithms have proved fast enough for use in real-time applications.

References

[1] B. L. Hicks, L. D. Braida, and N. I. Durlach, "Pitch invariant frequency lowering with nonuniform spectral compression," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, New York), pp. 121-124, 1981.
[2] C. M. Reed, B. L. Hicks, L. D. Braida, and N. I. Durlach, "Discrimination of speech processed by low-pass filtering and pitch-invariant frequency lowering," J. Acoust. Soc. Am., vol. 74, pp. 409-419, 1983.
[3] C. M. Reed, K. I. Schultz, L. D. Braida, and N. I. Durlach, "Discrimination and identification of frequency-lowered speech in listeners with high-frequency hearing impairment," J. Acoust. Soc. Am., vol. 78, pp. 2139-2141, 1985.
[4] P. Nelson and S. Revoile, "Detection of spectral peaks in noise: Effects of hearing loss and frequency regions," J. Acoust. Soc. Am., 1998.
[5] C. M. Aguilera Muñoz, P. B. Nelson, J. C. Rutledge, and A. Gago, "Frequency lowering processing for listeners with significant hearing loss," IEEE, pp. 741-744, 1999.
[6] S. Frota, Fundamentos em Fonoaudiologia, 1st ed., vol. 1, Guanabara Koogan, 2001, pp. 40-59.
[7] Y. A. Alsaka and B. McLean, "Spectral shaping for the hearing impaired," IEEE, pp. 103-106, 1996.
[8] J. C. Tejero-Calado, P. B. Nelson, and J. C. Rutledge, "Combination compression and linear gain processing for digital hearing aids," IEEE, pp. 3140-3143, 1998.