Auditory and Auditory-Visual Recognition of Clear and Conversational Speech by Older Adults
J Am Acad Audiol 9 : (1998)

Auditory and Auditory-Visual Recognition of Clear and Conversational Speech by Older Adults

Karen S. Helfer*

Abstract

Research has shown that speech articulated in a clear manner is easier to understand than conversationally spoken speech in both the auditory-only (A-only) and auditory-visual (AV) domains. Because this research has been conducted using younger adults, it is unknown whether age-related changes in auditory and/or visual processing affect older adults' ability to benefit when a talker speaks clearly. The present study examined how speaking mode (clear vs conversational) and presentation mode (A-only vs AV) influenced nonsense sentence recognition by older listeners. Results showed that neither age nor hearing loss limited the amount of benefit that older adults obtained from a talker speaking clearly. However, age was inversely correlated with identification of AV (but not A-only) conversational speech, even when pure-tone thresholds were controlled statistically.

Key Words : Aging, presbyacusis, speech perception, speechreading

Abbreviations : A-only = auditory-only, AV = auditory-visual, RCB = relative clear benefit, RVB = relative visual benefit, S/N = signal-to-noise ratio, SPIN = Speech Perception in Noise

Older adults often complain that their hearing problems stem from communication partners who fail to speak clearly. Although many audiologists believe that this comment represents denial of presbyacusis on the part of some of these older adults, studies have found that both younger listeners (Picheny et al, 1985 ; Gagne et al, 1994, 1995 ; Payton et al, 1994 ; Helfer, 1997) and older listeners (Schum, 1996) benefit from talkers who speak in an intentionally clear manner.
The clear-speech benefit (for younger adults) extends to auditory-visual (AV) speech perception (Gagne et al, 1994, 1995 ; Helfer, 1997), and it is especially salient for younger listeners in noisy (Payton et al, 1994 ; Uchanski et al, 1996 ; Helfer, 1997) or reverberant environments (Payton et al, 1994). However, most speakers do not talk in a deliberately clear manner ; rather, they use what is sometimes referred to as "functional adaptation," subconsciously adjusting their articulatory clarity depending upon available cues. For example, talkers speak less clearly when either context (Lieberman, 1963 ; Hunnicut, 1985 ; Lindblom, 1992) or visual speech cues (Anderson et al, 1997) are available. Research has shown that speechreading ability declines with increasing age even when visual acuity is controlled (e.g., Farrimond, 1959 ; Ewertsen and Birk-Nielsen, 1971 ; Pelson and Prather, 1974 ; Shoop and Binnie, 1979 ; Middelweerd and Plomp, 1987 ; Honnell et al, 1991 ; Lyxell and Ronnberg, 1991). This decrease in the ability to use visual speech cues may be related to the fact that many of the aging changes in the visual system are temporally related, and speech comprehension requires rapid on-line processing of information. For example, visual processing speed slows with increasing age (e.g., Cerella, 1985). An age-related increase in visual persistence (i.e., visual images appear for a longer duration for elders than for young adults) and an increased susceptibility to backward masking in the visual domain (Kline and Orme-Rogers, 1978 ; DiLollo et al, 1982) may limit the use of speechreading.

*Department of Communication Disorders, University of Massachusetts, Amherst, Massachusetts. Reprint requests : Karen S. Helfer, Department of Communication Disorders, University of Massachusetts, Arnold House, Amherst, MA
Vision is not the only system that undergoes temporally related changes in senescence. An almost universal finding in the corpus of research on aging is a slowing in sensorimotor and perceptual processing with increasing age (Birren, 1965). This slowing appears to affect auditory speech recognition : ample evidence exists to support the contention that older adults are especially susceptible to auditory distortions that are temporal in nature. For example, older adults have a more difficult time than younger adults in understanding speech that is time compressed (Sticht and Gray, 1969 ; Schon, 1970 ; Konkle et al, 1977 ; Schmitt, 1983 ; Wingfield et al, 1985 ; Stine et al, 1986 ; Rastatter et al, 1989 ; Tun et al, 1992 ; Gordon-Salant and Fitzgibbons, 1995) or reverberated (Nabelek and Robinson, 1982 ; Harris and Reitz, 1985 ; Helfer and Wilber, 1990 ; Gordon-Salant and Fitzgibbons, 1993 ; Divenyi and Haupt, 1997). Even when controlling for elevated auditory thresholds, some elders still have more difficulty understanding such temporally distorted stimuli, as compared to younger adults (Helfer and Wilber, 1990 ; Gordon-Salant and Fitzgibbons, 1993, 1995). Moreover, noise in combination with another distortion, such as reverberation (Harris and Reitz, 1985 ; Helfer and Wilber, 1990) or time compression (Stollman and Kapteyn, 1994 ; Gordon-Salant and Fitzgibbons, 1995), makes speech understanding especially problematic for older adults. One prominent characteristic of conversationally spoken speech is that it is substantially faster than clear speech (Picheny et al, 1986). The increased rate is caused by a shortening of certain phonemes and a decrease or elimination of pauses. Such temporal changes may be particularly detrimental to older adults, whose auditory and visual systems may have difficulty processing rapid stimuli.
Although clear speech differs acoustically from slow speech (Picheny et al, 1986 ; Uchanski et al, 1996), it is appropriate to consider whether older adults benefit from speech that has been time expanded. In general, studies have found that expansion does not appreciably improve older adults' performance, whether the expansion is achieved electronically (Luterman et al, 1966 ; Schon, 1970 ; Korabic et al, 1978 ; Schmitt, 1983) or whether it is produced simply by speaking more slowly (Schmitt and Carroll, 1985 ; Schmitt and Moore, 1989 ; but see Schmitt and McCroskey, 1981). This finding is surprising, in light of the fact that many older adults (and their communication partners) report that speaking with a slightly slowed pace is a helpful strategy. One possible reason for this apparent contradiction is that, in face-to-face communication, older adults are actually benefiting from slowing in both the auditory and visual (i.e., speechreading) domains. Two studies that are relevant to the question of clear-speech benefit in older adults have been conducted. Schum (1996) reported results of research on clear versus conversational (auditory-only [A-only]) speech perception by older hearing-impaired individuals. He found that older adults, like younger adults, do benefit from speech that is spoken clearly. Schum also found that the average amount of benefit from speaking clearly was about 19 percent, which is comparable to that demonstrated by younger listeners (Picheny et al, 1985 ; Payton et al, 1994 ; Uchanski et al, 1996). The second study, by Gordon-Salant and Fitzgibbons (1997), examined the effect of the length of interword pauses on speech comprehension by normal and hearing-impaired younger and older adults. 
The authors found that performance by all groups declined with an increase in interword interval, leading them to speculate that the algorithm (which changed interword interval by a standard amount between words) might have been detrimental to prosody or coarticulation. These results suggest that any clear speech effect found with older adults is not likely due to an increase in processing time that a slowed presentation speed allows. The present study examined older adults' ability to understand clearly and conversationally spoken nonsense sentences presented in both A-only and AV modes. It was of interest to determine the extent to which elders can take advantage of visual speech cues in conversational speech. The rapidity of conversational speech might limit older adults' ability to speechread. Conversely, older adults may be more reliant on visual speech cues in conversational speech, which is more difficult to understand than clear speech. Data analyses examined differences in percent correct performance between presentation modes (AV vs A-only) and speaking modes (clear vs conversational), as well as their interaction. Analyses of variance (ANOVAs) also were completed on the amount of clear-speech benefit and the amount of visual benefit with the stimuli aggregated by position in the sentence. These analyses permitted comparison of the older subjects' performance to that of young, normal-hearing listeners using identical stimuli (Helfer, 1997).
Journal of the American Academy of Audiology/Volume 9, Number 3, June 1998

Another focus of this study concerned the relations among age, degree of hearing loss, performance in the two speaking modes (clear and conversational) and in the two viewing conditions (A-only and AV), and the amount of benefit from clear speech and from the provision of visual cues. Of interest were the following questions : Do age-related limitations in auditory and/or visual processing restrict the amount of benefit from using clear-speech or visual speech cues? Is hearing loss and/or age related to performance, to the amount of benefit from speaking clearly, or to using visual cues? Is there a relation between one's ability to use visual cues and overall performance, or between the benefit from clear speech and overall performance? To answer these questions, correlations among amount of hearing loss, subject age, performance in the various conditions, and the amount of benefit from visual information and from clear speech were computed.

METHOD

Stimuli

Nonsense sentence stimuli were chosen for this study to eliminate the possible confounding factor of age-related changes in the ability to use semantic cues. Moreover, the majority of previous studies of clear-speech effects have used nonsense sentence stimuli. Stimuli for this study consisted of 200 nonsense sentences between five and seven words in length. Each sentence had three key words, which were one- and two-syllable nouns and verbs taken from the Thorndike-Lorge lists of most common words (Thorndike and Lorge, 1952). The sentences were constructed using one of the two following forms : article noun (auxiliary verb) verb (preposition) article noun or verb article noun preposition article noun, where italicized words are the key words and items in parentheses appear in some, but not all, sentences. Two lists of 100 sentences were created. Both lists contained the same 300 key words, but in different sentences.
One list was recorded in a conversational manner : the talker (a female with no discernible regional accent) was told to speak as she did in normal conversation. The other list was spoken clearly : the talker was instructed to speak as if communicating in a noisy environment or with a listener who has a hearing loss. She was directed to enunciate consonants more carefully and with greater effort than in conversational speech and to avoid slurring words together. The sentences were videotaped using a Panasonic WV V3 professional videocassette recorder (VCR). A lavalier microphone (Shure SM 83) was placed approximately 8.5 inches from the speaker's mouth. Lighting was directed on the speaker to reduce shadows on her face. The distance from the camera to the talker was adjusted to produce a life-size image of the speaker's face on a 19-inch television screen. The audio portion of the signal was routed to a VU meter to monitor input level. Sentences that were produced improperly (e.g., with missing or incorrect words, or not spoken clearly or conversationally) were re-recorded immediately. The videotapes were dubbed using a VCR (Panasonic GX4 AG-1950 Proline), in which the automatic gain control function adjusted the level of the stimuli to equalize the intensity among sentences. The level of the speech peaks of the key words (verified by playing the sentences to a graphic level recorder) fell within a 5-dB range. Subjects Fifteen older listeners (average age 72 years, range years) participated in this study. Their hearing thresholds were not restricted to ensure an adequate range of hearing loss for correlational analysis. Demographic data for these individuals can be found in Table 1. 
Conventional pure-tone averages ranged from -2 dB HL to 28 dB HL (average 14 dB HL); high-frequency pure-tone averages (the average of thresholds for 2-, 3-, 4-, and 6-kHz tones) ranged from 15 dB HL to 53 dB HL (average 32 dB HL) (re : ANSI, 1989). None of the participants had ever worn a hearing aid and all had self-reported normal or corrected-to-normal vision.

[Table 1 Age and Test Ear Threshold Data for Each Participant (subjects PR, RB, VD, GG, RH, MJ, JC, PM, DD, SB, LA, NM, MW, MC, BB). Thresholds are in dB HL re : ANSI (1989).]

Procedures

All subjects listened to the two sentence lists (one spoken clearly, one spoken conversationally) in both A-only and AV presentation modes. The first test session consisted of pure-tone audiometry, one 50-sentence practice list (containing sentences not used in the data collection phase of this study) and then two 100-sentence test lists (one clear, one conversational, in random order). Subjects were randomly assigned to receive the A-only or AV mode first. Each subject returned approximately 1 month later to complete testing in the other mode. Stimuli were presented at a +3 dB signal-to-noise (S/N) ratio with the sentences presented at 80 dB SPL (re : peak level in the sentences). The competing noise (the babble recorded with the Speech Perception in Noise [SPIN] test [Kalikow et al, 1977]) was played out of a cassette recorder, attenuated, and mixed with the speech signal, which originated from a videotape player. Stimuli were presented via an insert earphone (EAR 3-A). If interaural threshold differences were present, the better ear was used as the test ear. The right ear was used for subjects whose interaural thresholds were equivalent. Subjects were seated approximately 4 feet from the television screen displaying the videotaped stimuli. For the A-only condition, the television screen was covered with heavy posterboard. Participants were instructed that the sentences they were about to hear did not make sense, but each word within the sentence was a real word. All listeners repeated each sentence verbally and their responses were recorded on paper by the investigator.

RESULTS

Analysis of Variance Results

A-only and AV performance is summarized in Figure 1.
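The ANOVAs reported below were run on arcsine-transformed percent-correct scores to stabilize error variance. A minimal sketch of this step, assuming the common 2·arcsin(√p) form of the transform (the article does not specify which variant was used):

```python
import math

def arcsine_transform(percent_correct):
    """Variance-stabilizing transform for proportion data.

    Maps a percent-correct score (0-100) to 2 * arcsin(sqrt(p)), where p is
    the proportion correct; output ranges from 0 to pi radians.
    """
    p = percent_correct / 100.0
    return 2.0 * math.asin(math.sqrt(p))

# Floor and ceiling scores map to the ends of the transformed scale.
print(arcsine_transform(0.0))    # 0.0
print(arcsine_transform(100.0))  # pi (about 3.1416)
```

The transform stretches the ends of the percent-correct scale, where scores are compressed against floor or ceiling, so that error variance is more nearly constant across conditions.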
A repeated measures ANOVA was completed on these data to examine differences in the within-subject factors of speaking mode (clear vs conversational) and presentation mode (A-only vs AV), and their interaction. Percent correct scores were arcsine-transformed to stabilize error variance prior to analysis. As anticipated, words presented with AV cues were easier to understand than those presented in the A-only mode (F [1,14] = 83.66, p < .001), and clearly spoken speech was perceived more readily than conversational speech (F [1,14] = , p < .001). The interaction of these two variables was not significant (F [1,14] = .01, p = .992).

[Figure 1 Percent-correct performance for key word identification within nonsense sentences. AV represents auditory-visual presentation conditions, and A-only represents auditory-only presentation conditions. Error bars represent the standard error.]

A second set of analyses was completed on the arcsine-transformed word-level data aggregated by word position (initial, medial, or final), combined across subjects (Fig. 2). It should be kept in mind that although the same corpus of words was used for clearly and conversationally spoken sentences, a given word often occurred in different positions in the two speaking modes. An arbitrary decision was made to use word position in the clearly spoken sentences for these analyses. Two separate analyses were completed. The first ANOVA examined main and interaction effects of word position, speaking mode, and viewing mode.

[Figure 2 Percent-correct performance for key word identification with words aggregated by position within the sentence. Clear AV and Clear A represent scores for clearly spoken speech for auditory-visual and auditory-only presentation modes, respectively. Conv AV and Conv A represent scores for conversationally spoken speech for auditory-visual and auditory-only presentation modes, respectively. Error bars represent the standard error.]

This analysis (summarized in
Table 2) showed significant main effects of speaking mode and viewing mode in the predictable direction (clear better than conversational ; AV better than A-only). The main effect of word position was not significant. However, a significant interaction of speaking mode with word position was found. Post hoc Tukey HSD tests showed that the difference between clear and conversational speaking modes was significant only for words in the initial and medial sentence positions. Other interactions involving word position failed to reach statistical significance. ANOVAs also were conducted on the amount of benefit from visual cues and the amount of benefit from speaking clearly (Table 3). Because visual benefit may be limited by A-only performance (Sumby and Pollack, 1954), and because clear benefit may be related to the perception of conversational stimuli (Payton et al, 1994), relative benefit metrics were calculated, as suggested by Sumby and Pollack (1954).

[Table 2 Results of ANOVA on the Percent Correct Data Aggregated by Word Position in Sentences (Initial, Medial, or Final). Variable, df, F, p : Position, 2, ; Speaking mode, 1, < .001 ; Presentation mode, 1, < .001 ; Position x speaking mode, 2, < .001 ; Position x presentation mode, 1, ; Speaking mode x presentation mode, 1, ; Speaking mode x presentation mode x position, 2, . Data were arcsine-transformed prior to analysis. Speaking mode refers to clear vs conversational ; presentation mode refers to A vs AV.]

[Figure 3 Relative amount of benefit from providing visual cues (RVB) in clear and conversational speaking modes. RVB is defined as (AV score - A score)/(100 - A score). Error bars represent the standard error.]
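These relative benefit metrics, RVB = (AV score - A score)/(100 - A score) and the analogous RCB for clear speech, reduce to simple arithmetic on the percent-correct scores. A sketch with illustrative scores, not data from the study:

```python
def relative_visual_benefit(av_score, a_only_score):
    """RVB = (AV score - A-only score) / (100 - A-only score), in percent:
    the AV improvement expressed as a share of the room left above the
    A-only score, so benefit is not capped by already-high performance."""
    return 100.0 * (av_score - a_only_score) / (100.0 - a_only_score)

def relative_clear_benefit(clear_score, conversational_score):
    """RCB = (clear score - conversational score) / (100 - conversational score)."""
    return 100.0 * (clear_score - conversational_score) / (100.0 - conversational_score)

# E.g., moving from 50% correct (A-only) to 60% correct (AV) recovers
# 20 percent of the available headroom.
print(relative_visual_benefit(60.0, 50.0))  # 20.0
```

Normalizing by the remaining headroom is what allows benefit to be compared across listeners whose baseline scores differ.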
These two benefit measures were relative clear benefit (RCB) = (clear score - conversational score)/(100 - conversational score) and relative visual benefit (RVB) = (AV score - A-only score)/(100 - A-only score). Figures 3 and 4 illustrate RVB and RCB.

[Table 3 Results of ANOVA on the RVB and RCB Data. Variable, df, F, p : Visual benefit, 1, < .001 ; Position, 2, ; Visual benefit by position, 2, ; Clear benefit, 1, < .001 ; Position, 2, ; Clear benefit by position, 2, . RVB is derived using the formula (AV score - A score)/(100 - A score). RCB is derived with the formula (clear score - conversational score)/(100 - conversational score).]

[Figure 4 Relative amount of benefit from clear speech (RCB) in auditory-visual (AV) and auditory-only (A-only) presentation modes. RCB is defined as (clear score - conversational score)/(100 - conversational score). Error bars represent the standard error.]

Results of ANOVAs on the relative benefit metrics found a significant main effect for RVB, which was greater for words spoken conversationally than for words spoken clearly. The main effect for RCB also was significant : A-only presentation led to greater RCB than did AV presentation. No significant main or interaction effects of word position were found for RVB. However, a significant main effect of word position was shown for RCB. Post hoc Tukey HSD tests showed that RCB was greater for words in the initial or medial position than for words in the final position. The interaction of position with RCB failed to reach significance. The amount of RCB and RVB obtained by subjects in the present study was similar in magnitude to that shown by younger participants in a study by Helfer (1997) using the same stimuli (but at a +2 dB S/N). The amount of RCB, averaged across AV and A-only presentation modes, was approximately 13 percent in this
study and 14 percent in the Helfer (1997) investigation. The average RVB shown by subjects in the present investigation (about 18%) is identical to that obtained by the young, normal-hearing listeners in the Helfer (1997) study.

Correlation Results

To study relations among age, hearing loss, and performance, a Pearson product-moment correlation matrix was computed using the following variables : high-frequency pure-tone average of thresholds at 2, 3, 4, and 6 kHz ; percent correct scores in each condition (A-only and AV, clear and conversational) and the absolute (not relative) amount of clear benefit and visual benefit. Absolute (rather than relative) benefit figures were used because it was of interest to determine whether these values were connected to absolute performance. As will be discussed below, hearing loss was found to be strongly related to absolute scores. Therefore, a partial correlation analysis was also computed, controlling for high-frequency pure-tone average. The results of these analyses can be found in Table 4. Age was significantly (and negatively) correlated with the perception of conversational speech in the AV condition, and was positively correlated with the amount of clear-speech benefit in the AV mode. These correlations remained significant in the partial correlation analysis. Hence, the older the subject, the more poorly they recognized speech spoken conversationally with both auditory and visual speech cues, and the more benefit they obtained from the talker speaking clearly in this same viewing condition. Age was not correlated significantly with the amount of visual benefit obtained in either speaking mode, nor was it related to A-only speech recognition. Hearing loss was not related significantly to the amount of either clear-speech benefit or visual benefit, even though it was correlated strongly with absolute performance.
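The partial correlation analysis controlling for high-frequency pure-tone average can be sketched with the standard first-order formula; the helper below and its example r values are illustrative, not the study's coefficients:

```python
import math

def first_order_partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y with z (e.g., HFPTA) held constant:
    r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1.0 - r_xz ** 2) * (1.0 - r_yz ** 2))

# If the control variable is uncorrelated with both x and y,
# partialing it out leaves the correlation unchanged.
print(first_order_partial_corr(-0.6, 0.0, 0.0))  # -0.6
```

This is why correlations that survive partialing out HFPTA (as the age-AV correlations did) can be read as independent of degree of hearing loss.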
Performance in AV conditions was strongly related to performance in A-only conditions, and scores on clear lists were highly correlated with scores on conversational lists. Finally, absolute performance was unrelated to the amount of either visual benefit or clear benefit in the Pearson r correlation, but some significant correlations were found when high-frequency hearing loss was controlled.

[Table 4 Coefficients of Correlation for Age, Hearing Loss, and Performance Variables. The upper value in the matrix is the Pearson product-moment coefficient ; the lower value is the partial correlation coefficient obtained with high-frequency pure-tone average (HFPTA) partialed out. *Statistically significant at the .05 level ; **statistically significant at the .01 level.]

DISCUSSION

The older adults in this study benefited from the use of both clear speech and visual
speech information. In a previous study, Schum (1996) had demonstrated a large clear-speech advantage for older, hearing-impaired adults in an A-only presentation mode. The present study's results extend this finding to AV communication. The amount of clear-speech benefit shown by the present study's subjects, averaged across AV and A presentation modes, was similar to that found by Helfer (1997) in a study using the same stimuli with young, normal-hearing subjects. The clear-speech benefit value found in the A-only condition for these older subjects was 15 percent, which is slightly lower than that reported by other investigators (Picheny et al, 1985 ; Payton et al, 1994 ; Uchanski et al, 1996). It is likely that talker differences can account for a large portion of the discrepancy in clear-speech benefit values among studies (Gagne et al, 1994, 1995). Although the amount of clear-speech benefit appears similar in magnitude between younger and older subjects, some interesting qualitative differences occurred between the present study's results and those obtained with young, normal-hearing listeners in the Helfer (1997) study. Younger adults showed greater clear-speech benefit for AV stimuli, while older adults demonstrated a significantly larger clear-speech advantage for A-only stimuli. Also, older adults had greater visual benefit for conversational speech than for clear speech, whereas the opposite was true for younger adults. Given the present results, the relation between aging and the ability to take advantage of visual cues in clear and conversational speech remains ambiguous. The average amount of visual benefit shown by subjects in the present investigation (about 18%) is identical to that found with young, normal-hearing listeners in the Helfer (1997) study. Age was not correlated with the amount of visual benefit for either clear or conversational speech.
However, there is some indication that age does affect the integration of auditory and visual speech information. The only significant correlations with age were found in the AV data : age was correlated negatively with AV performance, accounting for about 31 percent of variance in AV scores for conversational speech. This connection persisted even when pure-tone thresholds were controlled (the amount of variance accounted for declined slightly, to 26%). Hearing loss was related strongly to performance, especially in AV conditions, but was not correlated with the amount of clear-speech benefit. This confirms results from previous studies (using young, hearing-impaired subjects), which showed no apparent relation between elevated thresholds and the amount of clear-speech benefit (Picheny et al, 1985 ; Payton et al, 1994 ; Uchanski et al, 1996). Hearing loss also was not correlated with the degree of visual benefit, concurring with results found in other studies of speechreading ability by older individuals (Farrimond, 1959 ; Lyxell and Ronnberg, 1991). The correlations between absolute performance and clear/visual benefit were significant only when hearing thresholds were partialed out, making interpretation problematic. In general, the present data suggest that neither the amount of clear-speech benefit nor the amount of visual benefit was restricted by absolute performance. Results of this study have some practical importance. First, as with younger adults, older individuals benefit from both speaking clearly and using visual speech cues, independently. The difference between A-only perception of conversational speech and AV perception of clear speech was approximately 30 percent, far greater than the benefit obtained from either speaking clearly or using visual speech cues in isolation.
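The variance-accounted-for figures quoted above are squared correlation coefficients. A minimal sketch (the r value used here is back-computed from the reported 31 percent, for illustration only):

```python
def variance_accounted_for(r):
    """Proportion of variance shared by two variables, given their
    correlation coefficient r (the coefficient of determination, r**2)."""
    return r ** 2

# A correlation of about -0.56 between age and AV conversational scores
# corresponds to roughly 31 percent of the variance.
print(round(100.0 * variance_accounted_for(-0.56)))  # 31
```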
The fact that hearing loss was not related to either clear benefit or visual benefit suggests that these modifications should be helpful for many older listeners, irrespective of their hearing status. It is apparent that instructing the communication partners of hearing-impaired listeners to speak clearly is a viable strategy in aiding communication. Schum (1996) found that, with minimal instruction, both younger and older adults can alter their vocal habits to produce speech that is significantly easier for older adults to understand. In the present study, the older the subject, the more poorly he/she performed in AV conditions, and the more benefit was obtained from clear speech. In light of the functional adaptation literature, which suggests that speakers talk less clearly when visual speech cues are available (Anderson et al, 1997), audiologists should be conscientious about counseling communication partners to speak clearly when conversing face to face with older adults. With the advent of digital hearing aids, the specification of new digital signal-processing schema is becoming a research area of increasing importance. One reason for studying the differences between clearly and conversationally spoken speech is in the hope that digital hearing aids may some day be able to transform conversational speech into clear speech. Because some differences were found in the magnitude
of the clear speech effect for AV versus A-only listening, research on such modifications needs to consider AV communication. Moreover, the fact that qualitative differences occurred between the performance of older subjects in this study and younger subjects in Helfer (1997) suggests that research needs to include older individuals. This is a particularly salient point in light of the prevalence of presbyacusis, as older adults will be the primary consumers of digital hearing aids. The use of a nonsense sentence task somewhat limits the application of these results to real-world settings. Nonsense sentence perception requires more bottom-up processing in that semantic context cannot be used to help decipher an utterance. Older adults tend to be especially adept at using semantic constraints to aid understanding (Cohen and Faulkner, 1983 ; Wingfield and Stine, 1986 ; Nittrouer and Boothroyd, 1990 ; Pichora-Fuller et al, 1995). However, the present results serve to illustrate that older adults do benefit from both speaking clearly and using visual speech cues when messages are difficult to understand.

Acknowledgment. Portions of this paper were presented at the American Speech-Language-Hearing Association annual convention, Seattle, WA, November

REFERENCES

American National Standards Institute. (1989). Specifications for Audiometers. (ANSI S ). New York : ANSI.

Anderson AH, Bard EG, Sotillo C, Newlands A, Doherty-Sneddon G. (1997). Limited visual control of the intelligibility of speech in face-to-face dialogue. Percept Psychophys 59 :

Birren JE. (1965). Age changes in speed of behavior : its central nature and physiological correlates. In : Welford AT, Birren JE, eds. Behavior, Aging and the Nervous System. Springfield, IL : Thomas,

Cerella J. (1985). Information processing rates in the elderly. Psychol Bull 98 :

Cohen G, Faulkner D. (1983).
Age differences in performance on two information processing tasks : strategy selection and processing efficiency. J Gerontol 38 :

DiLollo V, Arnett JL, Kruk RV. (1982). Age-related changes in rate of visual information processing. J Exp Psychol : Hum Percept Perform 8 :

Divenyi PL, Haupt KM. (1997). Audiologic correlates of speech understanding deficits in elderly listeners with mild-to-moderate hearing loss. I. Age and lateral asymmetry effects. Ear Hear 18 :

Ewertsen HW, Birk-Nielsen B. (1971). A comparative analysis of the audiovisual, auditive and visual perception of speech. Acta Otolaryngol 72 :

Farrimond T. (1959). Age differences in the ability to use visual cues in auditory communication. Lang Speech 2 :

Gagne JP, Masterson V, Munhall KG, Bilida N, Querengesser C. (1994). Across talker variability in auditory, visual, and audiovisual speech intelligibility for conversational and clear speech. J Acad Rehabil Audiol 27 :

Gagne JP, Querengesser C, Folkeard P, Munhall KG, Masterson VM. (1995). Auditory, visual, and audiovisual speech intelligibility for sentence-length stimuli : an investigation of conversational and clear speech. Volta Rev 97 :

Gordon-Salant S, Fitzgibbons PJ. (1993). Temporal factors and speech recognition performance in young and elderly listeners. J Speech Hear Res 36 :

Gordon-Salant S, Fitzgibbons PJ. (1995). Recognition of multiply degraded speech by young and elderly listeners. J Speech Hear Res 38 :

Gordon-Salant S, Fitzgibbons PJ. (1997). Selected cognitive factors and speech recognition performance among young and elderly listeners. J Speech Lang Hear Res 40 :

Harris RW, Reitz ML. (1985). Effects of room reverberation and noise on speech discrimination in the elderly. Audiology 24 :

Helfer KS. (1997). Auditory and auditory-visual perception of clear and conversational speech. J Speech Lang Hear Res 40 :

Helfer KS, Wilber LA. (1990). Hearing loss, age and speech perception in reverberation and noise.
J Speech Hear Res 33 : Honnell S, Dancer J, Gentry B. (1991). Age and speechreading performance in relation to percent correct, eyeblinks, and written responses. Volta Reu 93 : Hunnicut S. (1985). Intelligibility vs. redundancy-conditions of dependency. Lang Speech 28 : Kalikow DN, Stevens KN, Elliott LL. (1977). Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. J Acoust Soc Am 61 : Kline DW, Orme-Rogers C. (1978). Examination of stimulus persistence as the basis for superior visual identification performance among older adults. J Gerontol 33 : Konkle DF, Beasley DS, Bess FM. (1977). Intelligibility of time-altered speech in relation to chronological aging. J Speech Hear Res 20 : Korabic EW, Freeman BA, Church GT. (1978). Intelligibility of time-expanded speech with normally hearing and elderly subjects. Audiology 17: Lieberman P. (1963). Some effects of the semantic and grammatical context on the production and perception of speech. Lang Speech 6:
9 Journal of the American Academy of Audiology/Volume 9, Number 3, June 1998 Lindblom B. (1992). Explaining variation : a sketch of the H and H theory. In : Hardcastle W, Marchal A, eds. Speech Production and Speech Modelling. Dordrecht : Kluwer, Luterman DM, Welsh OL, Melrose J. (1966). Responses of aged males to time-altered speech stimuli. J Speech Hear Res 9: Lyxell B, Ronnberg J. (1991). Word discrimination and chronological age related to sentence-based speech-reading skill. Br JAudiol 25 :3-10. Middelweerd MJ, Plomp R. (1987). The effect of speechreading on the speech reception threshold of sentences in noise. JAcoust Soc Am 82: Nabelek AK, Robinson PK. (1982). Monaural and binaural speech perception in reverberation for listeners of various ages. JAcoust Soc Am 71: Nittrouer S, Boothroyd A. (1990). Context effects in phoneme and word recognition by young children and older adults. JAcoust Soc Am 87: Payton KL, Uchanski RM, Braida LD. (1994). Intelligibility of conversational and clear speech in noise and reverberation for listeners with normal and impaired hearing. J Acoust Soc Am 95 : Pelson RO, Prather WK (1974). Effects of visual message-related cues, age and hearing impairment on speech reading performance. J Speech Hear Res 17 : Picheny MA, Durlach NL, Braida LD. (1985). Speaking clearly for the hard of hearing I : intelligibility differences between clear and conversational speech. J Speech Hear Res 28 : Picheny MA, Durlach NL, Braida LD. (1986)r Speaking clearly for the hard of hearing II : acoustic characteristics of clear and conversational speech. J Speech Hear Res 29: Pichora-Fuller MK, Schneider BA, Daneman M. (1995). How young and old adults listen to and remember speech in noise. J Acoust Soc Am 97: Rastatter M, Watson M, Strauss-Simmons D. (1989). Effects of time-compression on feature and frequency discrimination in aged listeners. Percept Mot Skills 68 : Schmitt JF. (1983). 
The effects of time compression and time expansion on passage comprehension by elderly listeners. J Speech Hear Res 26 : Schmitt JF, Carroll MR. (1985). Older listeners' ability to comprehend speaker-generated rate alteration of passages. J Speech Hear Res 28 : Schmitt JF, McCroskey RL. (1981). Sentence comprehension in elderly listeners : the factor of rate. J Gerontol 36: Schmitt JF, Moore JR. (1989). Natural alteration of speaking rate : the effect on passage comprehension by listeners over 75 years of age J Speech Hear Res 32: Schon TD. (1970). The effects on speech intelligibility of time compression and expansion on normal-hearing, hard of hearing, and aged males. JAuditory Res 10 : Schum D. (1996). Intelligibility of clear and conversational speech of young and elderly talkers. J Am Acad Audiol 7: Shoop C, Binnie CA. (1979). The effects of age on the visual perception of speech. Scand Audiol 8:3-8. Sticht TG, Gray BB. (1969). The intelligibility of time compressed words as a function of age and hearing loss. J Speech Hear Res 12: Stine EAL, Wingfield A, Poon LW. (1986). How muchhow fast: rapid processing of spoken language in later adulthood. Psychol Aging 1 : Stollman MHP, Kapteyn TS. (1994). Effect of time scale modification of speech on the speech recognition threshold in noise for elderly listeners. Audiology 33 : Sumby W, Pollack 1. (1954). Visual contributions to speech intelligibility in noise. JAcoust Soc Am 26 : Thorndike KI, Lorge I. (1952). The Teacher's Word Book of 30,000 Words. New York : Columbia University. Tun PA, Wingfield A, Stine EAL, Mecsas C. (1992). Rapid speech processing and divided attention : processing rate versus processing resources as an explanation of age effects. Psychol Aging 4: Uchanski RM, Choi SS, Braida LD, Reed CM, Durlach NI. (1996). Speaking clearly for the hard of hearing IV: further studies of the role of speaking rate. J Speech Hear Res 39 : Wingfield A, Poon LW, Lombardi L, Lowe D. (1985). 
Speed of processing in normal aging : effects of speech rate, linguistic structure, and processing time. J Gerontol 40 : Wingfield A, Stine EL. (1986). Organizational strategies in immediate recall of rapid speech by young and elderly adults. Exp Aging Res 12 :79-83.
What can pupillometry tell us about listening effort and fatigue? Ronan McGarrigle Edwards (2007) Introduction Listening effort refers to the mental exertion required to attend to, and understand, an auditory
More informationCognitive and Auditory Factors Underlying Auditory Spatial Attention in Younger and Older Adults
Cognitive and Auditory Factors Underlying Auditory Spatial Attention in Younger and Older Adults by Gurjit Singh A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
More informationOptimal Filter Perception of Speech Sounds: Implications to Hearing Aid Fitting through Verbotonal Rehabilitation
Optimal Filter Perception of Speech Sounds: Implications to Hearing Aid Fitting through Verbotonal Rehabilitation Kazunari J. Koike, Ph.D., CCC-A Professor & Director of Audiology Department of Otolaryngology
More informationLocalization in speech mixtures by listeners with hearing loss
Localization in speech mixtures by listeners with hearing loss Virginia Best a) and Simon Carlile School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales 2006,
More information