Effects of the Number of Channels and Speech-to-Noise Ratio on Rate of Connected Discourse Tracking Through a Simulated Cochlear Implant Speech Processor

A. Faulkner, S. Rosen, and L. Wilkinson

Objective: To investigate the effects of number of channels and speech-to-noise ratio on connected discourse tracking (CDT) through simulations of cochlear implant speech processing. Previous studies have used citation-form vowel and consonant materials or simple sentences. CDT rates were expected to be less likely to be limited by ceiling effects and more representative of everyday speech communication.

Design: Four normal-hearing subjects were presented with speech processed through a real-time sine-excited vocoder having three, four, eight, or 12 channels. Amplitude envelopes extracted from each band modulated sinusoidal carrier signals placed at each band center frequency. Speech-spectrum shaped noise was added to the speech before vocoder processing at three signal-to-noise ratios (7, 12, and 17 dB) based on real-time measurements of speech level.

Results: CDT rates increased significantly with number of channels up to eight in both quiet and noise, and decreased significantly with each increase in noise level from quiet.

Conclusions: The effects on CDT rates of the number of channels and speech-to-noise ratio are highly correlated with intelligibility measures for Hearing in Noise Test (HINT) sentences, consonants, and vowels. However, HINT sentence scores even in noise show ceiling effects that obscure the advantages of processors with eight or more channels. Moderate levels of noise that have only slight effects on other measures significantly affected CDT rate. CDT rates with three or four bands of spectral information were much lower than asymptotic rates, especially in the presence of noise.
(Ear & Hearing 2001;22)

Department of Phonetics and Linguistics, University College London, London, United Kingdom.

One of the factors that limits the effectiveness of a cochlear implant is likely to be the relatively modest degree of spectral resolution that is available. Normal hearing offers some 18 cochlear place-defined channels of information over the frequency range 500 to 5000 Hz (as estimated from Moore & Glasberg, 1983). Although this is comparable with the numbers of stimulation places offered by modern 16- or 24-electrode cochlear implants, the effective number of place channels in electrical hearing is likely to be fewer as a result of current spread and interaction between channels. Two within-subject studies of implanted subjects have examined the effective number of channels by varying the number of allocated channels of the Nucleus SPEAK processing strategy. Fishman, Shannon, and Slattery (1997) found that 20 electrodes were no more effective than seven as measured by performance in vowel and monosyllabic word identification in quiet. For speech in noise, Zeng and Galvin (1999) found that phoneme recognition was similar for 10 and 20 electrodes, and significantly poorer for four electrodes. A number of recent studies have investigated the effect of the number of channels in simulated speech processors that present to normal-hearing listeners the spectro-temporal information provided by a cochlear implant. Important advantages of such simulations over studies in implanted patients arise from much greater control of the effective stimulation. It can be reasonably assumed that all normal-hearing subjects receive essentially identical stimulation. Furthermore, these listeners' capacity to process spectro-temporal variation is well understood.
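The figure of some 18 place-defined channels can be reproduced to a good approximation with the ERB-number (ERB-rate) scale. The sketch below uses the later Glasberg and Moore (1990) approximation, ERB-number(f) = 21.4 log10(0.00437f + 1), rather than the Moore and Glasberg (1983) formulae cited above, so its constants are an assumption of this illustration rather than the source of the figure in the text.

```python
import math

def erb_number(f_hz):
    # ERB-number (Cam) scale, Glasberg & Moore (1990) approximation
    return 21.4 * math.log10(0.00437 * f_hz + 1.0)

def place_channels(f_lo, f_hi):
    # Number of non-overlapping auditory-filter-wide bands between two frequencies
    return erb_number(f_hi) - erb_number(f_lo)

n = place_channels(500.0, 5000.0)
print(round(n, 1))  # about 18 channels over 500 to 5000 Hz
```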
These simulations can be considered as representative of the optimal performance that may be expected from users of current cochlear implants when the spectral resolution of a simulation is comparable with the number of effective place channels of electrical stimulation (Dorman & Loizou, 1998). These simulations have primarily examined the perception of citation-form speech presented both in quiet conditions (Dorman & Loizou, 1997; Shannon, Zeng, Kamath, Wygonski, & Ekelid, 1995) and in noise (Dorman, Loizou, Fitzke, & Tu, 1998; Fu, Shannon, & Wang, 1998). All of these studies have used vocoder-like processing. Here, as in continuous interleaved sampling cochlear implant speech processors (Wilson, Finley, Lawson, Wolford, Eddington, & Rabinowitz, 1991), speech is split into a limited number of frequency bands, each of which is represented by the time-varying amplitude envelope measured over the band. Within-band spectral information is thus discarded, and only time-varying between-band level differences are available to signal spectral structure. Purely temporal cues are preserved up to a modulation rate determined by details of the envelope extraction.

Figure 1. Block diagram of signal processing for a 3-channel sine-excited vocoder.

These simulations relate not only to cochlear implant speech processing, but also more generally to the effects of noise and the degree of spectral resolution on the auditory processing of speech information. With such processing applied to speech in quiet, consonant identification and the intelligibility of words in the relatively simple Hearing in Noise Test (HINT) sentences both show fairly high levels of performance with between four and six spectral bands (Dorman, Loizou, & Rainey, 1997; Shannon et al., 1995). Vowel identification in these same studies, and the intelligibility of more complex sentences from the TIMIT database (Loizou, Dorman, & Tu, 1999), are only slightly more demanding of spectral detail in the absence of noise. These data show asymptotic scores with between six and eight channels. In these studies of speech in quiet it is difficult to distinguish asymptotic performance from ceiling effects, and hence any effects of limiting spectral resolution may be obscured. For speech in noise, ceiling effects are less evident. The intelligibility of HINT sentences at a −2 dB speech-to-noise ratio continues to increase with number of channels up to at least eight (Dorman et al., 1998; Fishman et al., 1997). Consonant and vowel identification in noise show a similar pattern, with 16 channels leading to higher scores than eight channels (Fu et al., 1998).
In this study, we have examined the effects of the number of channels on speech perception in quiet and in noise using connected discourse tracking (CDT; DeFilippo & Scott, 1978). This task allows a measure of communication that is less likely than are intelligibility measures from HINT and similar sentences to be limited by ceiling effects. The use in CDT of extended and meaningful connected speech also makes it more representative of everyday communication than measures based on isolated citation-form words or single sentences. Furthermore, the rate at which accurate communication can occur is in itself an important parameter that has direct relevance to the benefit derived from a cochlear implant. CDT has recognized limitations both in its inherent nonrepeatability and variability, and in the possibility that the test talker adapts their speaking level and style to compensate for difficult communication conditions. It nevertheless remains an interpretable measure of performance when robust differences are found and appropriate controls are observed. One major difficulty in the use of CDT can be the lack of control of live speech levels. In this study this drawback was overcome, except in a quiet condition, by the presentation of background noise whose level was dynamically adapted according to real-time measurements of speech level, ensuring a constant speech-to-noise ratio regardless of speech level.

METHOD

Speech Processing

Speech processing was carried out in real time using the Aladdin Interactive DSP Workbench (v1.02, Hitech Development AB), and ran at an kHz sample rate on a Loughborough Sound Images DSP card with a Texas Instruments TMS320C31 processor. The technique, which is illustrated in Figure 1, was similar to that described by Dorman et al. (1997) in the use of a series of sinusoids as carriers for envelope modulations in each frequency band.* The input speech was first low-pass filtered and sampled (16 bits).
The signal was then passed through a bank of analysis filters (sixth-order elliptical IIR) with frequency responses that crossed 15 dB down from the passband peak. Envelope extraction occurred at the output of each analysis filter by half-wave rectification and first-order low-pass filtering at 30 Hz. These envelopes were then multiplied against sinusoids at the center frequency of the analyzing filter. The modulated sinusoids were summed and played out through a 16-bit digital-to-analogue converter. The center frequencies of each analysis filter and the 15 dB crossover frequencies of the filters are shown in Table 1. These are based on equal basilar membrane distance according to the formula given by Greenwood (1990). The lowest cut-off frequency was always 100 Hz, and the highest always 5 kHz. The frequency of the sinusoidal carrier for each channel was always the same as the analysis filter center frequency. An unprocessed condition was also employed, in which speech passed through the DSP card without alteration except for a gain adjustment to match the level of the processed conditions. The real-time processing also controlled the addition of noise. Speech-to-noise ratio was maintained constant through dynamic adaptation to the speech level. The spectral shape of the masking noise closely approximated the long-term average speech spectrum for male and female voices (Byrne et al., 1994). This spectral shaping was performed by a second-order Butterworth band-pass filter applied to white noise.

TABLE 1. Center frequency (c.f.) and 15 dB down crossover points of the analysis filters for the 3-, 4-, 8-, and 12-channel processors.

*Dorman, Loizou, and Rainey (1997) have shown that sinusoidal carriers result in similar performance to the noise-band carriers used in other studies. The generation of sine carriers requires less computation than the filtering of noise carriers, and enabled the real-time processing of 12 channels. The use of sinusoidal carriers makes it essential to use a low cut-off frequency for the extracted envelopes. Modulation of a sine carrier leads to spectral side-bands separated from the carrier frequency by integral multiples of the modulation frequency. If the modulation envelope were allowed to extend into the voice fundamental frequency range, F0 would often be a major component of the modulation envelope, and side-bands would be present at F0, 2F0, etc. above and below each carrier frequency. Such side-bands would make the presented spectral envelope very complex and dependent not only on the spectral envelope of the speech input but also on its F0.
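Since Table 1's numerical entries are not reproduced here, the sketch below recomputes Greenwood-spaced band edges and assembles a minimal sine-excited vocoder in that spirit. It is an illustration, not the study's implementation: simple biquad band-pass filters stand in for the sixth-order elliptic analysis filters, the filter Q and the sample rate are arbitrary choices, and each carrier is placed at the basilar-membrane midpoint of its band rather than at the study's tabulated center frequencies.

```python
import math

def greenwood_f(p):
    # Greenwood (1990) human map: frequency (Hz) at proportional cochlear position p (0..1)
    return 165.4 * (10.0 ** (2.1 * p) - 0.88)

def greenwood_pos(f):
    # Inverse map: proportional basilar-membrane position for frequency f (Hz)
    return math.log10(f / 165.4 + 0.88) / 2.1

def band_edges(n_channels, f_lo=100.0, f_hi=5000.0):
    # Band edges equally spaced in basilar-membrane distance between f_lo and f_hi
    p_lo, p_hi = greenwood_pos(f_lo), greenwood_pos(f_hi)
    return [greenwood_f(p_lo + (p_hi - p_lo) * i / n_channels)
            for i in range(n_channels + 1)]

def bandpass(x, fc, fs, q=4.0):
    # Biquad band-pass (0 dB peak gain); an illustrative stand-in for the elliptic filters
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0, b2 = alpha, -alpha
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = (b0 * xn + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x1, x2, y1, y2 = xn, x1, yn, y1
        y.append(yn)
    return y

def envelope(x, fs, fc=30.0):
    # Half-wave rectification followed by a first-order 30 Hz low-pass filter
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for xn in x:
        state += a * (max(xn, 0.0) - state)
        y.append(state)
    return y

def sine_vocoder(x, fs, n_channels):
    # Each band's envelope modulates a sinusoid at the band's (assumed) center frequency
    edges = band_edges(n_channels)
    out = [0.0] * len(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        fc = greenwood_f((greenwood_pos(lo) + greenwood_pos(hi)) / 2.0)
        env = envelope(bandpass(x, fc, fs), fs)
        for n in range(len(x)):
            out[n] += env[n] * math.sin(2.0 * math.pi * fc * n / fs)
    return out
```

Feeding a pure tone through this chain concentrates output energy in the channel nearest the tone, which is the between-band level cue described above.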
Adaptation of the noise level relative to that of the speech was controlled by a slow-moving amplitude envelope extracted from the speech input. A two-stage process extracted this envelope so that the decay of the envelope in response to speech was slower than its onset. The speech waveform was first full-wave rectified, and then passed through two cascaded first-order 1 Hz low-pass filters. This first-stage envelope was then further low-pass filtered by a second cascaded pair of first-order 1 Hz filters. The envelope used to modulate the noise was the sum of the first-stage envelope and the output of the second pair of low-pass filters. The response of this envelope extractor to an impulse is shown in Figure 2. The envelope reached its maximum 270 msec after the onset of the impulse, and decayed to 50% of the maximum 740 msec after onset. In addition, a constant low-level speech-spectrum shaped noise was present to mask responses to environmental sounds. This noise was 40 dB down from the speech-level related noise component at typical speech input levels, and was included in the measured speech-to-noise ratios. To time-align the processed and unprocessed speech with the adaptively controlled noise level, the speech input was delayed by 272 msec before being added to the noise. This combined signal was then fed to the analysis filters of the vocoder, or in the
unprocessed condition, was presented directly to the listener.

Figure 2. Response of the slow envelope extractor (lower panel) to a band-limited impulse (upper panel).

The signal-to-noise ratio (SNR) at the input to the vocoder was calibrated using triggered measurements with a real-time spectrum analyzer. Five sentences prerecorded by the CDT talker were used for these measurements. From pilot testing, SNRs of 17, 12, and 7 dB were selected to cover a range over which the noise caused difficulty in CDT ranging from mild to more extreme.

Figure 3. Connected discourse tracking (CDT) rates as a function of processing condition and signal-to-noise ratio (SNR). The boxes show the interquartile range, and the bars within each box represent the median. The whiskers show extreme values. Two outliers, both from the same subject, are shown by the open circles. These outliers deviate from the median by more than three times the interquartile range. Each data set contains eight samples (two CDT sessions from each subject).

Procedure

Texts were chosen from the Heinemann Guided Readers series, elementary level. These texts, designed for learners of English as a second language, are controlled in syntactic complexity and vocabulary. The CDT talker (author LW) had more than 20 hr of previous experience administering CDT. The talker and listener sat in adjacent sound-isolated rooms. A constant masking noise at 45 dBA was present in the listener's room to mask any unprocessed speech transmitted through the intervening wall. Processed speech was presented diotically to the listener at approximately 70 dBA over headphones (Sennheiser HD 475) after amplification (Revox A77). The talker was able to hear the listener's responses over an intercom. The talker read from the text in phrases, and the listener repeated back what she or he had heard. If the listener's response was completely correct, the speaker moved on to the next phrase.
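The two-stage envelope extractor used in the Method above to control the noise level can be sketched directly; the control sample rate and impulse amplitude below are arbitrary choices for illustration. Full-wave rectification feeds two cascaded first-order 1 Hz low-pass filters, that output is filtered again by a second cascaded pair, and the control envelope is the sum of the two stages, which yields the fast onset and slow decay shown in Figure 2.

```python
import math

def one_pole_lp(x, fc, fs):
    # First-order (single-pole) low-pass filter
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for xn in x:
        state += a * (xn - state)
        y.append(state)
    return y

def slow_envelope(x, fs):
    # Stage 1: full-wave rectification, then two cascaded 1 Hz low-pass filters
    stage1 = one_pole_lp(one_pole_lp([abs(v) for v in x], 1.0, fs), 1.0, fs)
    # Stage 2: the first-stage envelope low-pass filtered by a further cascaded pair
    stage2 = one_pole_lp(one_pole_lp(stage1, 1.0, fs), 1.0, fs)
    # Noise-control envelope: sum of the two stages, so onset is faster than decay
    return [s1 + s2 for s1, s2 in zip(stage1, stage2)]

fs = 1000                                  # illustrative control rate
impulse = [1.0] + [0.0] * (2 * fs - 1)     # 2 s of signal with an impulse at t = 0
env = slow_envelope(impulse, fs)
peak = max(range(len(env)), key=env.__getitem__)
half = next(n for n in range(peak, len(env)) if env[n] < 0.5 * env[peak])
print(peak / fs, half / fs)  # compare with the 270 msec and 740 msec values quoted for Figure 2
```

The printed peak and half-decay times come out close to the values reported for Figure 2, and the decay takes markedly longer than the rise, which is the point of summing the two stages.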
Where any word was not correctly repeated, the speaker and listener worked together until the phrase was repeated verbatim. Performance was measured by the average number of words per minute correctly repeated back by the listener during each 5-minute block of CDT. Four native English-speaking adults having audiometric thresholds within normal limits between 125 and 4000 Hz were paid for their participation. All subjects took part in the unprocessed condition first, to familiarize them with the testing procedures. After the first session, the number of channels of processing was fixed throughout each session. The order of presentation of the four processed speech conditions over sessions was determined by a Latin square design across subjects. Each testing session consisted of eight 5-minute blocks of CDT with a short break between blocks. The four noise conditions were presented in turn, in a random order for the first four of these blocks, and repeated in a different random order for the second four blocks of the session.

RESULTS

Raw CDT rates are presented in Figure 3. A repeated-measures analysis of variance (ANOVA) was performed using factors of subject, test run (first or second half of test session), SNR, and processing condition including the unprocessed condition. This showed significant effects (p < 0.001) of test run, noise level, and processor condition. There was also a significant interaction between SNR and processing condition. Because this interaction was likely to be due to the lack of an effect of noise for the unprocessed condition, a second repeated-measures ANOVA excluded that condition. This showed only main effects of number of channels (F[3,9] = 33.3, p < 0.001), SNR (F[3,9] = 106, p < 0.001), and test run (F[1,3] = 408, p < 0.001). A set of a priori contrasts showed that each successive increase in number of channels from three up to eight led to significant increases in CDT rate across all SNR conditions, whereas rates with eight and 12 channels did not differ significantly. A second series of a priori contrasts showed that across all of the processed speech conditions, each increment of noise from quiet to a 7 dB SNR led to a significant decrease in CDT rate. The significant effect of test run represented an average increase of 6.1 words per minute between the first and second test run in the equivalent condition. This increase is to be expected in CDT, and may also in part reflect continuing adaptation by the test subjects to the processed speech (Rosen, Faulkner, & Wilkinson, 1999). Because test run does not show significant interactions with other factors, the presence of this effect is not regarded as problematic. The question of whether CDT rates through the processors differed from those with unprocessed speech cannot be addressed from an ANOVA of the whole dataset because of the interaction between processor condition and noise level. Hence, this has been examined through planned contrasts based on subanalyses in each of the noise conditions to compare each processed condition with the unprocessed speech data. Because unprocessed CDT rates may be at ceiling level, it is not reasonable to take the lack of a significant difference between these rates and those through a processor as strong evidence for equivalence. However, the presence of a significant difference is readily interpretable. In quiet, the unprocessed condition differed only from the 3-channel processor. At the 17 dB SNR, unprocessed speech scores exceeded those from both the 3- and 4-channel processors. At 12 dB, unprocessed speech scores significantly exceeded those from the 3-, 4-, and 12- (but not 8-) channel processors.
At 7 dB, unprocessed speech scores exceeded those for three and four channels and were close to being significantly different from the 8-channel (p = 0.058) and 12-channel (p = 0.073) processors. In summary, these subanalyses indicate that CDT rates with unprocessed speech always exceeded those through 3- and 4-channel processors, and at poorer SNRs, tended also to exceed rates through 8- and 12-channel processors. The quantitative effects of number of channels and SNR were estimated through a regression that is illustrated in Figure 4. The regression equation is: rate = a + b(SNR) + 69.6(log10(nc)), where SNR is expressed in dB and nc is the number of channels. CDT rate showed a slightly higher correlation with the logarithm of the number of channels than with the untransformed number of channels. The fit of a regression of SNR in dB and log10(nc) to CDT rate is shown by the lines in Figure 4. Both log(nc) and SNR contributed significantly to the regression. The overall R2 is only slightly less than the sum of the squared correlations for the two factors, indicating that the fit is consistent with additive effects of the two variables.

Using Huynh-Feldt epsilon-corrected degrees of freedom, where epsilon was less than 1 in each case. Eta-squared indicates the proportion of variance accounted for by the independent variable. The observed power was 1.0 for each test. All of these analyses were carried out both with and without the two outlying points. There were no substantive differences in the outcomes.

Figure 4. Linear regression of signal-to-noise ratio (SNR) and log10(nc) with connected discourse tracking (CDT) rate for processed speech, excluding scores in quiet. The lines are calculated from the regression equation. The individual symbols are mean CDT rates for the four individual subjects at each of the three SNRs.
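The effect sizes implied by this regression can be checked with a short sketch. The log-channel coefficient 69.6 is taken from the regression above; the SNR slope of 3.0 words/minute per dB and the zero intercept are illustrative assumptions (the slope follows from the 18 words/minute per 6 dB effect reported in the text, while the fitted intercept is not reproduced here).

```python
import math

def cdt_rate(snr_db, n_channels, a=0.0):
    # Regression form: rate = a + b*SNR(dB) + 69.6*log10(nc)
    # a (intercept) and b = 3.0 are placeholder values for illustration
    return a + 3.0 * snr_db + 69.6 * math.log10(n_channels)

# Doubling the number of channels at a fixed SNR:
gain_doubling = cdt_rate(12, 8) - cdt_rate(12, 4)   # about 21 words/minute
# Improving SNR by 6 dB at a fixed number of channels:
gain_6db = cdt_rate(13, 4) - cdt_rate(7, 4)         # 18 words/minute
print(round(gain_doubling, 1), round(gain_6db, 1))
```

Note that the channel effect depends only on the ratio of channel counts, and the SNR effect only on the dB difference, which is what the additive log-linear form implies.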
The regression indicates that CDT rates increase by about 21 words/minute for each doubling of the number of channels, and by about 18 words/minute for a 6 dB improvement in SNR. (The two outliers were excluded from this regression, but once again these affected the analysis minimally.)

Comparison of CDT Rates with Other Speech Measures

CDT rate represents a measure of communication efficiency that is reasonably representative of normal spoken discourse. It is important to establish the degree to which CDT data are comparable with data from other studies using intelligibility measures for citation-form and simple sentence materials. In particular, simple speech materials that show dependencies similar to CDT might as a consequence be regarded as yielding intelligibility measures that are more representative of communication rate than are measures from other materials that do not show these dependencies.
TABLE 2. Pearson product-moment correlations between performance in four speech tasks (CDT, HINT sentences, vowels, and consonants; vowels vs. consonants: r = 0.85, N = 29). The data are average scores at each signal-to-noise ratio and number of channels, including scores in quiet but excluding those with unprocessed speech. CDT rates at signal-to-noise ratios of 7 and 17 dB are compared here with performance in other tasks at 6 and 18 dB, respectively. HINT = Hearing in Noise Test; CDT = connected discourse tracking.

Figure 5. Effect of signal-to-noise ratio (SNR) on connected discourse tracking (CDT) rate, Hearing in Noise Test sentences (Eddington et al., Reference Note 1), and vowel and consonant identification (Fu et al., 1998). Each panel shows data with a different number of channels. CDT rates have been multiplicatively scaled to words per 1.17 minutes to place the maximum at 100. Symbols and scaling of CDT rate are the same as Figure 6.

Figure 6. Effect of number of channels on connected discourse tracking (CDT) rate, Hearing in Noise Test sentences, and vowel and consonant identification. Each panel shows data from a different signal-to-noise ratio (SNR).

Figures 5 and 6 show CDT rates together with scores obtained using comparable processing in the intelligibility of HINT sentences (Eddington, Rabinowitz, Tierney, Noel, & Whearty, Reference Note 1) and consonants and vowels (Fu et al., 1998). The CDT rates have been multiplicatively scaled to place the maximum group mean at 100 measured in units of words per 1.17 minutes. This serves only to visually equalize a reasonable estimate of maximum CDT rate for unprocessed speech with 100% correct identification, and has no impact on the conclusions that follow. Scores for all measures increase in approximate proportion to the logarithm of the number of channels. However, the HINT sentence scores show ceiling levels of performance at 3 channels in quiet and at 8 channels at 6 dB SNR.
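The product-moment correlations in Table 2 are ordinary Pearson r statistics computed over the matched condition means. A minimal implementation, applied here to made-up illustrative vectors rather than the study's scores, is:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative only: two measures that rise together across four conditions
print(round(pearson_r([10, 20, 35, 50], [25, 38, 71, 90]), 3))
```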
The vowel and consonant data show improved performance with 16 compared with 8 channels (Fu et al., 1998), and the trend of the CDT data is consistent with continuing increases in performance with more than 8 channels, although there was no significant difference between rates with 8 and 12 channels. Apart from ceiling limits on HINT sentence scores, there is rather close correspondence between the effects of number of channels and SNR in these data sets. Table 2 shows correlations between scores for these measures at matching numbers of channels and SNR. The variation of CDT rate with number of channels and SNR shows high (> 0.8) and significant correlations with the variation of the other three measures over the SNR range used here (between 7 and 17 dB). (Both these studies also used speech-spectrum shaped noise. Fu et al. (1998) also used the same unweighted rms measures of speech and noise levels as here. Eddington et al. (Reference Note 1) do not state a method of SNR measurement.) With the exception of the
vowel and HINT sentence scores, the other measures also show significant inter-correlations. One notable feature of the CDT data is that even a modest level of noise (17 dB SNR) has a significant impact on scores through the processors. The consonant and vowel identification data from Fu et al.'s (1998) simulation study show similar trends. In the sentence data, however, performance at lower SNRs is at ceiling levels for 8 or 12 bands (see Fig. 6). Also striking is that at a relatively moderate SNR of 7 dB, CDT rate with the 3- and 4-channel processors has fallen to very low levels of around 5 to 10 words per minute. In comparable conditions, HINT sentence, vowel, and consonant scores are approximately 50% correct.

DISCUSSION

Other studies have examined the effects of spectral resolution on speech perception using speech processing that smears spectral detail rather than representing spectral shape in terms of a small number of channels, as in vocoding. One method of smearing spectral detail involves the computation of amplitude spectra representing the output of a series of band-pass filters. From these smeared amplitude spectra, processed speech is re-synthesized so as to discard spectral detail lost as a result of the limited spectral resolution of the filter bank. Using such techniques, as with vocoder processors, the intelligibility of sentences in quiet is relatively unaffected even when the filtering uses bandwidths up to six times broader than those of human auditory filters (Baer & Moore, 1993). For comparison with the vocoding results, a set of filters six times broader than the bandwidth of human auditory filters represents between four and five independent frequency channels over a frequency range of 200 to 5000 Hz. As in the vocoder studies, effects of spectral information loss are much more apparent in noise.
At a 3 dB speech-to-noise ratio, Baer and Moore found that sentence intelligibility did decline significantly as filter bandwidth was broadened from normal human auditory filter bandwidths to filters three and six times wider. Other studies using comparable methods show similar outcomes (Leek & Summers, 1996; ter Keurs, Festen, & Plomp, 1992, 1993). Baer and Moore found a significant interaction between the degree of smearing and SNR for sentence intelligibility scores. As in studies of sentence intelligibility through vocoder processors with four to six channels of spectral information (Dorman et al., 1997; Eddington et al., Reference Note 1), sentence scores in quiet were at or close to ceiling levels. A ceiling is not, however, evident for the effects of spectral resolution on Baer and Moore's data in noise, and it seems likely that the interaction they found is a consequence of the ceiling on scores in quiet. CDT rate, as measured here, proves to be free of the difficulties of interpretation that arise from sentence scores close to ceiling levels. CDT rates through the processors used here show no interaction between spectral resolution and SNR, and reduced spectral resolution has a clear effect both in quiet and in noise.

SUMMARY

The effects of number of channels and noise on CDT rate are highly correlated with accuracy in vowel and consonant identification (Fu et al., 1998) and with scores in sentence identification (Dorman et al., 1998; Eddington et al., Reference Note 1; Loizou et al., 1999). CDT rate is, however, shown here to be less subject to ceiling effects than are scores for the simple HINT sentences. It is also notable that CDT rates are rather low in conditions that give quite high levels of intelligibility for HINT sentences. Across the whole range of SNRs tested, there are significant increases of CDT rate with each increase in number of channels from three to four and from four to eight, but not between eight and 12 channels.
The measurement precision of the CDT data limits the conclusions that can be reached on the asymptotic number of channels in CDT. The data are, however, consistent with two other simulation studies, in which an asymptotic number of channels of at least eight has been found for vowel and consonant identification in noise (Fu et al., 1998) and for sentence-in-noise performance (Dorman et al., 1998). The present CDT data and these previous studies thus converge on similar estimates of the asymptotic number of channels for speech perception in noise. This is in contrast to the conclusion based on speech in quiet that between four and six channels are sufficient (Dorman et al., 1997; Shannon et al., 1995). The addition of noise up to a 7 dB speech-to-noise ratio had no effect on CDT rates for unprocessed speech. For the vocoder-processed speech, however, performance was significantly impaired compared with that in quiet even at a 17 dB speech-to-noise ratio, and declined further with each 5 dB increment in noise level to a 7 dB SNR. This contrasts with the sentence intelligibility data reviewed above, which approach ceiling performance levels at about a 6 dB SNR for 8- and 12-band processors. The effect of moderate noise on CDT was especially apparent with low numbers of channels, which may represent the situation for many cochlear implant users. Here, CDT rates at 7 to 12 dB SNRs were very low. SNRs of the order of 6 dB are relatively common in everyday situations (Pearsons, Bennett, & Fidell, 1977). Whereas 50% of words correct in sentences in such conditions may seem a reasonably high score in the context of speech intelligibility for a cochlear implant user, these very low CDT rates are perhaps more indicative of the impairment of speech communication that can be expected with a small number of channels in typical noisy environments.

ACKNOWLEDGMENTS: We are grateful to Don Eddington for permission to reproduce unpublished data, and to Mario Svirsky and the anonymous reviewers for their constructive comments on the manuscript. This paper is based on L. Wilkinson's B.Sc. Project.

Address for correspondence: Andrew Faulkner, D.Phil., Department of Phonetics and Linguistics, University College London, Wolfson House, 4 Stephenson Way, London NW1 2HE, United Kingdom.

Received May 12, 2000; accepted May 21, 2001

REFERENCES

Baer, T., & Moore, B. C. J. (1993). Effects of spectral smearing on the intelligibility of sentences in noise. Journal of the Acoustical Society of America, 94.
Byrne, D., Dillon, H., Tran, K., Arlinger, S., Wilbraham, K., Cox, R., Hagerman, B., Hetu, K., Kei, J., Lui, C., Kiessling, J., Kotby, M. N., Nasser, N. H. A., Elkholy, W. A. H., Nakanishi, Y., Oyer, H., Powell, R., Stephens, D., Meredith, R., Sirimanna, T., Tavartkiladze, G., Frolenkov, G. I., Westerman, S., & Ludvigsen, C. (1994). An international comparison of long-term average speech spectra. Journal of the Acoustical Society of America, 96.
DeFilippo, C. L., & Scott, B. L. (1978). A method for training and evaluation of the reception of on-going speech. Journal of the Acoustical Society of America, 63.
Dorman, M. F., & Loizou, P. C. (1997). Speech intelligibility as a function of the number of channels of stimulation for normal-hearing listeners and patients with cochlear implants.
American Journal of Otology, 18, S113-S114.
Dorman, M. F., & Loizou, P. C. (1998). The identification of consonants and vowels by cochlear implant patients using a 6-channel continuous interleaved sampling processor and by normal-hearing subjects using simulations of processors with two to nine channels. Ear and Hearing, 19.
Dorman, M. F., Loizou, P. C., Fitzke, J., & Tu, Z. (1998). The recognition of sentences in noise by normal-hearing listeners using simulations of cochlear-implant signal processors with 6-20 channels. Journal of the Acoustical Society of America, 104.
Dorman, M. F., Loizou, P. C., & Rainey, D. (1997). Speech intelligibility as a function of the number of channels for signal processors using sine-wave and noise-band outputs. Journal of the Acoustical Society of America, 102.
Fishman, K. E., Shannon, R. V., & Slattery, W. H. (1997). Speech recognition as a function of the number of electrodes used in the SPEAK cochlear implant speech processor. Journal of Speech, Language, and Hearing Research, 40.
Fu, Q.-J., Shannon, R. V., & Wang, X. (1998). Effects of noise and spectral resolution on vowel and consonant recognition: Acoustic and electric hearing. Journal of the Acoustical Society of America, 104.
Greenwood, D. D. (1990). A cochlear frequency-position function for several species: 29 years later. Journal of the Acoustical Society of America, 87.
Leek, M. R., & Summers, V. (1996). Reduced frequency selectivity and the preservation of spectral contrast in noise. Journal of the Acoustical Society of America, 100.
Loizou, P. C., Dorman, M., & Tu, Z. (1999). On the number of channels needed to understand speech. Journal of the Acoustical Society of America, 106.
Moore, B. C. J., & Glasberg, B. R. (1983). Suggested formulae for calculating auditory-filter bandwidths and excitation patterns. Journal of the Acoustical Society of America, 74.
Pearsons, K. S., Bennett, R. L., & Fidell, S. (1977). Speech Levels in Various Noise Environments (Rep. No.
EPA-600/ ). Washington, DC: US Environmental Protection Agency. Rosen, S., Faulkner, A., & Wilkinson, L. (1999). Perceptual adaptation by normal listeners to upward shifts of spectral information in speech and its relevance for users of cochlear implants. Journal of the Acoustical Society of America, 106, Shannon, R. V., Zeng, F.-G., Kamath, V., Wygonski, J., & Ekelid, M. (1995). Speech recognition with primarily temporal cues. Science, 270, ter Keurs, M., Festen, J. M., & Plomp, R. (1992). Effect of spectral envelope smearing on speech reception. I. Journal of the Acoustical Society of America, 91, ter Keurs, M., Festen, J. M., & Plomp, R. (1993). Limited resolution of spectral contrast and hearing loss for speech in noise. Journal of the Acoustical Society of America, 94, Wilson, B., Finley, C., Lawson, D., Wolford, R., Eddington, D., & Rabinowitz, W. (1991). Better speech recognition with cochlear implants. Nature, 352, 2. Zeng, F. G., & Galvin, J. J. (1999). Amplitude mapping and phoneme recognition cochlear implant listeners. Ear and Hearing, 20, REFERENCE NOTE 1 Eddington, D. K., Rabinowitz, W. R., Tierney, J., Noel, V., & Whearty, M. (1997). Eighth Quarterly Progress Report, October 1, 1997, through December 31, Speech Processors for Auditory Prostheses. NIH Contract N01-DC Cambridge, MA: MIT.
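For readers who wish to reproduce the kind of processing described in the abstract (band-limited amplitude envelopes modulating sinusoidal carriers, with speech-shaped noise mixed in at a fixed SNR before vocoding), a minimal Python sketch follows. This is an illustrative reconstruction, not the authors' implementation: the logarithmic band spacing, Butterworth filter orders, 160-Hz envelope cutoff, and geometric-mean carrier frequencies are all assumptions made for the example, and white rather than speech-shaped noise is used.

```python
# Illustrative sine-excited vocoder with noise mixing at a target SNR.
# This is a sketch under stated assumptions, not the processor used in the study.
import numpy as np
from scipy.signal import butter, sosfilt

def band_edges(n_channels, lo=100.0, hi=5000.0):
    # Logarithmically spaced analysis-band edges (an assumption; the study
    # may have used edges from a cochlear frequency-position map).
    return np.geomspace(lo, hi, n_channels + 1)

def sine_vocoder(x, fs, n_channels=4, env_cutoff=160.0):
    """Extract per-band amplitude envelopes and use them to modulate
    sinusoids placed at each band's (geometric) centre frequency."""
    edges = band_edges(n_channels)
    t = np.arange(len(x)) / fs
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(band_sos, x)
        env = sosfilt(env_sos, np.abs(band))      # rectify + smooth
        env = np.maximum(env, 0.0)                # clamp filter undershoot
        fc = np.sqrt(lo * hi)                     # geometric centre frequency
        out += env * np.sin(2 * np.pi * fc * t)
    return out

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so the speech-to-noise power ratio equals snr_db,
    # then add it to the speech (mixing happens before vocoding).
    ps = np.mean(speech ** 2)
    pn = np.mean(noise ** 2)
    noise = noise * np.sqrt(ps / (pn * 10.0 ** (snr_db / 10.0)))
    return speech + noise
```

A typical use would be `sine_vocoder(mix_at_snr(speech, noise, 12.0), fs, n_channels=8)`, i.e. noise is added at the chosen SNR first and the vocoder then discards spectral detail within each band, as in the simulations above.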