The Effect of Hearing Loss on Identification of Asynchronous Double Vowels

Jennifer J. Lentz, Indiana University, Bloomington
Shavon L. Marsh, St. John's University, Jamaica, NY

This study determined whether listeners with hearing loss received reduced benefits due to an onset asynchrony between sounds. Seven normal-hearing listeners and 7 listeners with hearing impairment (HI) were presented with 2 synthetic, steady-state vowels. One vowel (the late-arriving vowel) was 250 ms in duration, and the other (the early-arriving vowel) varied in duration between 350 and 550 ms. The vowels had simultaneous offsets, and therefore the onset asynchrony between the 2 vowels ranged between 100 and 300 ms. The early-arriving and late-arriving vowels also had either the same or different fundamental frequencies. Increases in onset asynchrony and differences in fundamental frequency led to better vowel-identification performance for both groups, with listeners with HI benefiting less from onset asynchrony than normal-hearing listeners. The presence of fundamental frequency differences did not influence the benefit received from onset asynchrony for either group. Excitation pattern modeling indicated that the reduced benefit received from onset asynchrony was not easily predicted by the reduced audibility of the vowel sounds for listeners with HI. Therefore, suprathreshold factors such as loss of the cochlear nonlinearity, reduced temporal integration, and the perception of vowel dominance probably play a greater role in the reduced benefit received from onset asynchrony in listeners with HI.

KEY WORDS: hearing loss, onset asynchrony, vowel identification

Even though a large body of work exists on the detrimental effects that sensorineural hearing loss has on understanding speech in complex and noisy environments, relatively few studies have focused on the ability of listeners with hearing impairment (HI) to segregate a meaningful signal from an unwanted background.
Speech perception in noise might involve a complex segregation process that requires adequate representation of spectral and temporal differences between sounds (Bregman, 1990; Darwin & Carlyon, 1995). Because the encoding of these spectral and temporal differences is often altered by an impaired cochlea (see Moore, 1995, for a review; Fitzgibbons & Gordon-Salant, 1987; Nejime & Moore, 1997), sensorineural hearing loss might lead to difficulty separating a meaningful sound from a background (Arehart, Rossi-Katz, & Swensson-Prutsman, 2005; Mackersie, Prida, & Stiles, 2001). The goal of this study was to evaluate the effects of hearing loss on the identification of one sound in the presence of another based on two compelling segregation cues: temporal separation between sounds (i.e., onset asynchrony) and onset asynchrony in the presence of fundamental frequency differences between the two sounds.

Journal of Speech, Language, and Hearing Research, December 2006. © American Speech-Language-Hearing Association.

Unfortunately, for the millions of listeners with sensorineural hearing loss, distortion caused by cochlear damage leads to difficulties analyzing sounds in both the temporal and spectral domains (see Moore, 1995). Reduced sensation levels, a reduced audible frequency range, and loss of the cochlear amplifier contribute to difficulty following rapid amplitude changes in dynamic stimuli (cf. Bacon & Viemeister, 1985; Glasberg & Moore, 1992; Glasberg, Moore, & Bacon, 1987). In addition to difficulty analyzing sounds in the temporal domain, the impaired auditory system exhibits reduced frequency selectivity (Glasberg & Moore, 1986; Leek & Summers, 1993), leading to a spectrally smeared internal representation and loss of the rich spectral information that is available to listeners with normal hearing (Bacon & Brandt, 1982; Leek & Summers, 1996). Because spectral and temporal analyses are important precursors to sound segregation, the ability to segregate sounds based on spectral or temporal cues should be impaired in listeners with hearing loss.

Double-vowel experiments have commonly been used to study the sound segregation process in listeners with normal hearing and HI. In a typical double-vowel experiment, two synthetic vowels are presented to a listener, and the listener usually identifies both vowels (e.g., Arehart, King, & McLean-Mudgett, 1997; Assmann & Summerfield, 1989, 1990; Summerfield & Culling, 1992; Summers & Leek, 1998). The double-vowel paradigm is appealing because an experimenter can independently vary a number of stimulus parameters for each vowel (such as duration, formant frequencies, and fundamental frequency) and still use stimuli modeled after speech sounds.

For normal-hearing listeners, double-vowel studies that have directly manipulated onset differences between two vowels indicate that vowel identification is easier when the vowels have asynchronous onsets. Summerfield and Culling (1992) showed that the masked detection threshold of a target vowel was lower when the target and masker vowels had asynchronous onsets but shared offsets than when the two were fully simultaneous.
A nonlinear, spectral enhancement mechanism might contribute to the benefits received from onset asynchrony, in which the spectral contrast of a later-arriving stimulus is enhanced when it follows (or perhaps is added to) the early-arriving stimulus. Summerfield, Sidwell, and Nelson (1987) found support for this mechanism by showing that a precursor harmonic stimulus (one not overlapping in time with a later-arriving stimulus) enhanced the perception of the later-arriving harmonic stimulus. This finding was later replicated by Summerfield and Assmann (1989) using synthetic vowels.

To date, the effects of onset asynchrony have not been tested using listeners with HI on double-vowel tasks. Further, the data addressing whether hearing loss detrimentally affects the processing of onset asynchrony are equivocal. Grose and Hall (1996a) tested the effects of hearing loss on the processing of onset asynchrony using a comodulation-masking-release task in which an onset asynchrony between stimulus components degraded performance. Listeners with HI were as sensitive as normal-hearing listeners to onset differences across a wide frequency range. Lentz, Leek, and Molis (2004) used a profile-analysis task, in which an onset asynchrony between stimulus components also degraded performance, and found that when stimuli had a broad bandwidth, the effect of onset asynchrony was similar for normal-hearing listeners and listeners with HI. In contrast, data from across-frequency gap detection experiments suggest that listeners with HI might be less sensitive than normal-hearing listeners in processing across-frequency temporal changes (Grose & Hall, 1996b). A reduced frequency range of audibility might also influence the effect of onset asynchrony on a task.
The stimuli used in the aforementioned tasks consisted of tones widely separated in frequency and presented at levels clearly audible to the listeners; therefore, the stimuli had similar frequency ranges for normal-hearing listeners and listeners with HI. Lentz et al. (2004) showed that when stimulus bandwidth was reduced, sensitivity to onset asynchrony decreased. Because hearing impairment likewise diminishes the internal bandwidth of speech-like stimuli by attenuating sound levels at some frequencies, the impaired ear might not process onset differences as effectively as the normal ear. In addition, the attenuation of high-frequency components that is associated with a sloping hearing loss is particularly detrimental for listeners with HI when processing temporal changes across frequency (Bacon & Viemeister, 1985; Fitzgibbons & Gordon-Salant, 1987). Spectral contrast enhancement that might occur when a later-occurring vowel is added to an early-occurring vowel is also diminished in listeners with cochlear hearing loss (Thibodeau, 1991). The frequency dependence of many hearing losses could further degrade the representation of spectral enhancement across frequency.

For normal-hearing listeners, fundamental frequency differences between two vowels also provide improvements over conditions in which the fundamental frequencies of the two vowels are the same (Assmann & Summerfield, 1990; Culling & Darwin, 1993). Work that models the ability to take advantage of fundamental frequency differences capitalizes on the auditory system's ability to temporally and spectrally analyze sounds (Assmann & Summerfield, 1989, 1990; Meddis & Hewitt, 1992). The success of these models in accounting for double-vowel data implies that spectro-temporal processing might be a precursor to sound segregation based on fundamental frequency differences.
Because it has been suggested that spectro-temporal processing is impaired in listeners with hearing loss (Arehart et al., 1997; Grose & Hall, 1996b), it would be anticipated that listeners with HI would have a reduced ability to segregate vowels based on fundamental frequency differences.

However, many, but not all, listeners with hearing loss benefit from different fundamental frequencies on double-vowel tasks to the same extent as normal-hearing listeners (Arehart et al., 1997; Summers & Leek, 1998). Arehart et al. (1997) and Summers and Leek (1998) tested the abilities of listeners to identify both vowels of a double-vowel stimulus. Arehart et al. (1997) showed that even though listeners with hearing loss performed more poorly than listeners with normal hearing, a 2-semitone fundamental frequency difference led to benefits that were not significantly different between the two groups. Summers and Leek (1998) showed that only about half of their listeners benefited from fundamental frequency differences to the same extent as normal-hearing listeners.

One reason listeners with hearing loss might benefit from fundamental frequency differences to a similar extent as normal-hearing listeners is that the spectro-temporal analysis needed to benefit from fundamental frequency differences primarily occurs in the low-frequency, first formant region (Culling & Darwin, 1993). The reduced frequency selectivity associated with hearing loss might not disrupt the ability to take advantage of fundamental frequency differences because stimulus harmonics are widely spaced with respect to the bandwidth of auditory filters in the low frequencies. When fundamental frequency differences are small, additional cues, such as beating between adjacent harmonics, are also present (Culling & Darwin, 1994) and provide another basis for the benefit received from fundamental frequency differences.

As yet, no study has evaluated (a) the abilities of normal-hearing listeners or listeners with HI to take advantage of onset asynchrony in a double-vowel task or (b) whether fundamental frequency differences influence the ability to take advantage of onset asynchrony.
Different mechanisms are thought to underlie the processing of these two cues, but the influence of fundamental frequency differences on the processing of onset asynchrony should be evaluated to determine whether each cue is processed independently in the impaired auditory system. The following experiment tests whether hearing loss detrimentally affects the benefits received from onset asynchrony for conditions in which the fundamental frequencies of the vowels are either the same or different. It was anticipated that listeners with HI would show a reduced ability to process onset asynchrony and that fundamental frequency differences would not influence the benefits received from onset asynchrony.

In this experiment, listeners identified a single vowel of an asynchronous double-vowel stimulus. This approach differs from traditional double-vowel tasks in which listeners identify both vowels of a double-vowel stimulus (e.g., Assmann & Summerfield, 1990). The approach also contrasts with that used by de Cheveigné, McAdams, and Marin (1997), who showed larger effects of experimental manipulations when only one vowel was identified. In their experiment, the vowels were simultaneous, and listeners could respond with one or two vowels. Here, listeners identified only the later-occurring vowel.

Method

Observer Characteristics

Participants were 7 normal-hearing listeners, ranging in age from 18 to 51 years (M = 31.0 years), and 7 listeners with HI, who ranged in age from 25 to 61 years (M = 45.5 years). Normal-hearing listeners had pure-tone audiometric thresholds no greater than 20 dB HL (American National Standards Institute [ANSI], 1996) between 250 and 8000 Hz. Listeners with HI were selected so that mean pure-tone average thresholds at 2000 and 4000 Hz were greater than 35 dB HL and less than or equal to 70 dB HL in the test ear.
Hearing losses were moderate and bilateral; the site of lesion was presumed to be of cochlear origin based on air- and bone-conduction thresholds and normal immittance audiometry. For normal-hearing listeners, the right ear was tested, except for 3 listeners (NH3, NH4, and NH7), who had mild hearing losses (≤30 dB HL) in their right ears. The audiometric configurations for all test ears, together with the participants' ages, are reported in Table 1. Four normal-hearing participants and 2 participants with HI were naïve to psychoacoustic tasks, and none of the listeners had participated previously in a vowel identification study.

Stimuli

Steady-state versions of the vowels /ɑ, i, æ, ɝ, u/ were generated using an implementation of Klatt synthesis software (Klatt, 1980) by H. Timothy Bunnell. Two steady fundamental frequencies approximately 4 semitones apart (120 and 151 Hz) were used. Table 2 lists the formant frequencies for the vowels, which matched those described by Assmann and Summerfield (1994). Vowels differed in their three lowest formant frequencies (F1, F2, and F3), whereas the fourth and fifth formant frequencies (F4 and F5) were the same for all vowel tokens. In an attempt to ensure audibility of energy in the first and second formant regions, the total power of each vowel stimulus was calibrated to be 90 dB SPL.

Stimuli were double vowels, consisting of an early-arriving vowel and a late-arriving vowel (i.e., no conditions included synchronous vowels). The late-arriving vowel was always 250 ms in duration, and four different early-arriving vowel durations were tested: 350, 400, 450, and 550 ms. Late- and early-arriving vowels shared offsets, producing an onset asynchrony between the two vowels ranging between 100 and 300 ms. All stimuli had

Table 1. Audiometric thresholds (dB HL re: ANSI, 1996) of the test ear, along with age and test ear, for normal-hearing listeners (NH1–NH7) and listeners with hearing impairment (HI1–HI7). [Threshold values were not preserved in this transcription.]

raised cosine on/off ramps of 30 ms. Each early-arriving vowel was paired with each late-arriving vowel for a total of 25 combinations of early-arriving and late-arriving vowels. Note that the early-arriving and late-arriving vowels shared the same identity on 20% of the trials. The stimuli were generated digitally off-line at a sampling rate of 12000 Hz. The vowels were played using a 24-bit digital-to-analog converter (Tucker-Davis Technologies TDT RP2.1) at the nearest available hardware sampling rate.1 The resulting stimuli were fed into a programmable attenuator (TDT PA5) and a headphone buffer (TDT HB6), and then into one earphone of a Sennheiser HD 250 II Linear headset.

Procedure

Listeners sat in a double-walled, sound-attenuating room and listened to the synthesized vowel sounds. Before the onset of the experiment, listeners were given the opportunity to familiarize themselves with the identity of the synthesized vowels. On a computer monitor, listeners saw five boxes labeled a, i, ae, er, and u. Using a mouse, listeners selected the different boxes to listen to the practice vowels, which were 550 ms in duration. All listeners practiced by listening to the vowel tokens and began the baseline phase of the experiment when they felt ready.

1 The TDT system has a limited number of sampling rates, and the rate chosen was the one closest to the sampling rate used in the creation of the vowels (12000 Hz). The discrepant sampling rate leads to frequencies and durations that differ by a small multiplicative factor from those reported.
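The stimulus construction described above (a fixed 250-ms late-arriving vowel, a longer early-arriving vowel, shared offsets, and 30-ms raised-cosine ramps) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it substitutes flat harmonic complexes for the Klatt-synthesized vowels and omits level calibration.

```python
import numpy as np

FS = 12000  # Hz; sampling rate used to create the vowels (per Footnote 1)

def raised_cosine_ramps(x, ramp_ms=30.0, fs=FS):
    """Apply 30-ms raised-cosine on/off ramps to a waveform."""
    n = int(round(ramp_ms / 1000.0 * fs))
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / n))  # rises 0 -> 1
    y = x.copy()
    y[:n] *= ramp
    y[-n:] *= ramp[::-1]
    return y

def harmonic_vowel(f0, dur_ms, fs=FS):
    """Stand-in 'vowel': a flat harmonic complex at fundamental f0
    (the study used Klatt-synthesized steady-state vowels instead)."""
    t = np.arange(int(round(dur_ms / 1000.0 * fs))) / fs
    harmonics = np.arange(f0, fs / 2, f0)
    return sum(np.sin(2 * np.pi * h * t) for h in harmonics)

def double_vowel(early, late):
    """Offset-aligned double vowel: the late-arriving vowel is delayed so
    both waveforms end together, making the onset asynchrony equal to the
    difference in their durations."""
    asynchrony = len(early) - len(late)
    assert asynchrony >= 0
    return early + np.concatenate([np.zeros(asynchrony), late])

early = raised_cosine_ramps(harmonic_vowel(120, 450))  # early-arriving vowel
late = raised_cosine_ramps(harmonic_vowel(151, 250))   # late-arriving vowel
mix = double_vowel(early, late)
onset_asynchrony_ms = 1000 * (len(early) - len(late)) / FS  # 200 ms here
```

During the asynchronous portion the mixture is identical to the early-arriving vowel alone, which is the interval the authors suggest provides an unmasked look at that vowel.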
In the baseline phase of the experiment, listeners were tested on the 250-ms late-arriving vowels presented in isolation (without the early-arriving vowel). A single block consisted of each of the five 250-ms late-arriving vowels presented 5 times to the listener in random order, for a total of 25 vowel presentations. Two blocks were tested: a block at the 120-Hz fundamental frequency was tested first, and a block at the 151-Hz fundamental frequency was tested second. Listeners indicated the vowel heard by clicking on the appropriate box and received feedback when the indicated response was correct. Listeners who achieved 96% or better on both blocks continued to the double-vowel portion of the experiment. Listeners who did not achieve 96% correct identifications repeated testing on the vowels in isolation, for a maximum of 10 repetitions (i.e., 20 blocks) total. Once a listener received an average score of 92% or better on 4 consecutive repetitions (8 consecutive blocks), the listener continued to the double-vowel portion of the experiment.

Table 2. Formant frequencies (in Hz) for the vowel stimuli (Assmann & Summerfield, 1994). [Formant values for F1–F5 were not preserved in this transcription.]

Listeners unable to achieve this criterion were

excluded from the study. All normal-hearing listeners were able to achieve this criterion of performance, with some listening to only the first two blocks. In general, listeners with HI required more practice than the normal-hearing listeners, and 1 listener was excluded from the study due to an inability to achieve criterion performance.

On the double-vowel task, listeners sat in the sound-attenuated room and heard two synthetic vowels (early-arriving + late-arriving). Listeners were instructed to identify the vowel they heard second, thereby indicating the identity of the late-arriving vowel. Thus, the task involved a temporal order judgment (listeners had to determine which of the two vowels occurred later) and identification of the late-arriving vowel. As with the baseline task, the response boxes had labels a, i, ae, er, and u, and listeners were told when their response was correct.

Data were collected using a randomized block design. The fundamental frequencies of the late-arriving and early-arriving vowels were each selected at random. Next, the onset asynchrony to be tested was randomly selected. An experimental block for this double-vowel task consisted of a randomized order of the 25 different combinations of early-arriving/late-arriving vowel pairs at a particular fundamental frequency combination and onset asynchrony. After each experimental block, listeners were tested at one of the remaining onset asynchronies. In between blocks, listeners were allowed to listen to the vowels in isolation (at 550 ms) again as practice if they chose. Once the listener finished all four onset asynchronies, another fundamental frequency pair was tested, and the onset asynchronies were randomized again. This process was repeated 9 times, for a total of 10 replicates for each condition. Thus, 10 responses for each of the 25 vowel combinations were obtained (10 replicates × 4 f0 combinations × 4 onset asynchronies × 25 vowel pairs = 4,000 stimuli).
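The condition grid implied by this design can be enumerated directly; this quick sketch is illustrative only, and the specific asynchrony values are inferred from the stated early-vowel durations (350–550 ms) against the fixed 250-ms late vowel.

```python
from itertools import product

f0s = [120, 151]                      # Hz, approximately 4 semitones apart
asynchronies = [100, 150, 200, 300]   # ms, from early-vowel durations 350-550 ms
vowels = ["a", "i", "ae", "er", "u"]  # response-box labels used in the study
replicates = 10

# Every (early f0, late f0) pairing, including the equal-f0 conditions,
# and every early/late vowel pairing, including shared-identity pairs.
f0_combos = list(product(f0s, f0s))          # 4 combinations
vowel_pairs = list(product(vowels, vowels))  # 25 pairings
total = replicates * len(f0_combos) * len(asynchronies) * len(vowel_pairs)
print(total)  # 4000
```

The shared-identity pairs (5 of the 25) account for the 20% of trials on which both vowels had the same identity.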
Data points reflect the average of these 10 responses, collapsed across the different vowel pairs. Trials in which the early-arriving and late-arriving vowels shared identities are included in the percentage correct calculations.

Results

Percentage correct late-arriving vowel identifications are plotted as a function of the onset asynchrony between the early-arriving and the late-arriving vowels in Figure 1 for normal-hearing listeners (unfilled symbols) and listeners with HI (filled symbols). The left panel shows data obtained when the fundamental frequencies of the early- and late-arriving vowels were the same (f0same), whereas the right panel shows data obtained when the fundamental frequencies of the two vowels differed (f0different). In general, the listeners with HI performed more poorly than normal-hearing listeners: Scores for normal-hearing listeners averaged 11.3 percentage points higher than those for the listeners with HI. Both panels indicate that identification scores increased with increases in the onset asynchrony between the two vowels. For the f0same conditions, identification scores consistently increased with increasing onset asynchrony. However, for the f0different conditions, identification scores increased up to the 200-ms onset asynchrony and then either increased further or reached an asymptote.

Figure 1. Average percentage correct target identifications are plotted as a function of onset asynchrony for 7 normal-hearing (NH) listeners (unfilled symbols) and 7 listeners with hearing impairment (HI; filled symbols). Left and right panels indicate f0same and f0different conditions. Squares and triangles indicate the fundamental frequency of the late-arriving vowel, 120 and 151 Hz, respectively. The legend indicates the different fundamental frequency combinations in the order of early-arriving vowel f0/late-arriving vowel f0. Error bars indicate standard errors of the mean.

When comparing across the two f0 conditions (left vs. right panels), it can be seen that f0different scores were substantially higher than the f0same scores. For the normal-hearing listeners and listeners with HI, f0different scores were 11.7 and 11.1 percentage points higher than the f0same scores, respectively.

A repeated-measures analysis of variance (ANOVA) was performed on arcsine-transformed data (Myers, 1972), treating group membership as a between-subjects factor and onset asynchrony and fundamental frequency combination as within-subjects factors. Because the within-subjects factors had multiple levels, Mauchly's Test of Sphericity was also conducted; reported p values are those associated with the Greenhouse-Geisser adjusted degrees of freedom. A main effect of onset asynchrony, F(1.4, 16.4) = 57.69, p < .001, indicates that both groups of listeners benefited from increasing onset asynchrony between the early-arriving and late-arriving vowels.

In all conditions, the early-arriving and the late-arriving vowels overlapped for 250 ms, and the onset asynchrony was imposed by an increase in the duration of the early-arriving vowel. Thus, the increase in identification scores with increasing onset asynchrony might be due to the early-arriving vowel having a longer duration as the onset asynchrony was increased (e.g., McKeown & Patterson, 1995), the early-arriving vowel having a greater amount of time during which no masking was present, or improvement in the judgment of temporal order. Not surprisingly, the number of responses in which listeners in both groups mistakenly identified the early-arriving vowel decreased with increasing onset asynchrony, indicating that the increased duration of the early-arriving vowel led to less confusion between the early- and late-arriving vowels. The effects of vowel dominance may also play a role in the reduction in the number of early-arriving vowel errors.
In particular, when two vowels are presented at similar stimulus levels, one vowel often dominates the percept (McKeown & Patterson, 1995). If the early-arriving vowel dominated the percept, then listeners could have mistakenly identified this vowel as the late-arriving vowel. As onset asynchrony increased, the perception of two vowels became more distinct (i.e., the perception of multiplicity improved). As the perception of multiplicity improves, listeners might be less likely to mistakenly identify the dominant vowel as the target vowel.

An interaction between group membership and onset asynchrony, F(1.4, 16.4) = 4.76, p < .05, reflects the result that listeners with HI did not benefit as greatly from onset asynchrony as the normal-hearing listeners. This interaction is illustrated in Figure 2, which shows average scores, collapsed across the different fundamental frequency combinations, plotted as a function of onset asynchrony. Figure 2 indicates that normal-hearing listeners received greater benefit from onset asynchrony across the span of onset asynchronies tested: Their change in performance across the onset asynchronies (100–300 ms) was 13.4 percentage points, whereas the change in performance across the same span was only 8.8 percentage points for listeners with HI. This result reflects steeper slopes in the data obtained from normal-hearing listeners than in the data obtained from the listeners with HI, at least for onset asynchronies between 100 and 200 ms. For example, between 100 and 150 ms, the change in performance was 6.4 and 4.1 percentage points for normal-hearing listeners and listeners with HI, respectively.

Figure 2. Percentage correct identification scores are collapsed across fundamental frequency and plotted as a function of onset asynchrony. Unfilled and filled symbols denote data obtained from NH listeners and listeners with HI, respectively. Error bars indicate standard errors of the mean.
Between 200 and 300 ms, the change in scores was more similar for the two groups (2.6 and 2.8 percentage points). The similarity in these slopes might be due to the asymptote in the f0different conditions in the data obtained from normal-hearing listeners (see Figure 1). It should be noted that a synchronous condition (0-ms onset asynchrony) was not tested here, because listeners identified only the late-arriving vowel of the double-vowel pair.

The ANOVA also revealed a significant main effect of the f0 combination, F(1.9, 22.6) = 34.96, p < .001, which reflected better performance when the fundamental frequencies of the two vowels differed than when the f0s were the same. Listeners with HI and normal-hearing listeners received similar benefits from fundamental frequency differences (11.1 and 11.7 percentage points, respectively). No other significant interactions or main effects were revealed. The lack of a significant effect of group membership, F(1, 12) = 2.8, p = .12, was due to large variability in performance within the group with HI and the small group size. Note that there was no two-way interaction between onset asynchrony and fundamental frequency combination, F(3.3, 40) = 0.7, p = .57, indicating that the benefit received from onset asynchrony was similar at both fundamental frequency combinations. The relationship between onset-asynchrony benefit (performance at 200 ms minus performance at 100 ms) and fundamental frequency combination is plotted in Figure 3. Figure 3 indicates that although listeners with HI received less benefit from onset asynchrony (filled bars are shorter than unfilled bars), f0 combination had no effect on onset-asynchrony benefit.

Figure 3. Onset-asynchrony benefit (performance at 200 ms minus performance at 100 ms) is plotted as a function of fundamental frequency combination for NH listeners (unfilled bars) and listeners with HI (filled bars).

Discussion

The results of the current experiment indicate that, on average, listeners with HI receive less benefit from onset asynchrony than do listeners with normal hearing on a double-vowel task. Fundamental frequency differences in the presence of an onset asynchrony benefited both groups to similar extents, and whether the vowels had the same or different fundamental frequencies did not affect the magnitude of the benefit received from onset asynchrony. The following discussion addresses the mechanisms responsible for the reduced effect of onset asynchrony seen for listeners with HI, differences in those mechanisms, and the possible mechanisms responsible for the processing of fundamental frequency differences.

Benefits Due to Onset Asynchrony

The current double-vowel experiment illustrated that listeners with HI benefit less from the presence of an onset asynchrony than do normal-hearing listeners.
This result from a double-vowel identification task did not replicate past work using comodulation-masking-release (CMR) and profile-analysis experiments, in which the effects of onset asynchrony were similar for normal-hearing listeners and listeners with HI. In the CMR experiment by Grose and Hall (1996a), listeners detected a tone added to a band of noise (the target band), and additional bands of noise were placed distant in frequency from the tone (the flanking bands). As the onset asynchrony between the target and flanking bands was increased from 0 to 100 ms, thresholds increased, and the elevation in threshold with increasing onset asynchrony was similar for normal-hearing listeners and listeners with HI. In Lentz et al.'s (2004) experiment, listeners discriminated between a stimulus with equal-amplitude tones and a stimulus in which half the components were increased and half the components were decreased in level. An onset asynchrony was imposed between the components that increased in level and the components that decreased in level. Like the findings of Grose and Hall, Lentz et al.'s data revealed that the change in threshold with increasing onset asynchrony (0–200 ms) was similar for the two groups of listeners when the bandwidth of the stimuli was wide.

Three explanations are proposed to account for the differences across the various studies. First, the studies of Grose and Hall (1996a) and Lentz et al. (2004) evaluated onset asynchrony using paradigms in which an onset asynchrony degraded performance, whereas in the current experiment, an onset asynchrony improved performance. It is possible that the mechanisms responsible for the processing of onset asynchrony differ depending on whether sensitivity is degraded or improved by the onset asynchrony.
However, it is unclear how these mechanisms might differ between normal-hearing listeners and listeners with HI, especially given the paucity of data addressing the effects of HI on onset asynchrony processing. Second, the Grose and Hall and Lentz et al. studies used broadband stimuli with components that were all audible to the listeners, whereas the current study made no effort to ensure that all stimulus harmonics were audible. Thus, the listeners with HI in this study might have experienced deficits due to reduced audibility. Third, the Grose and Hall and Lentz et al. studies tested the ability to process an onset asynchrony that was present across frequency channels (i.e., the late-arriving components were processed by frequency channels that were not previously activated by the early-arriving components), but in the current experiment, the late-arriving stimulus was processed by frequency channels previously activated by the early-arriving stimulus. Thus, within-channel processes could play a role in the processing of onset asynchrony for double vowels. For example, poorer

8 spectral contrast enhancement in listeners with HI compared to normal-hearing listeners (Thibodeau, 1991) might result in the reduced benefit received from onset asynchrony in listeners with HI. Loss of spectralcontrast enhancement would not have played a role in the Grose and Hall and Lentz et al. studies, as the onset asynchrony was imposed strictly across channels. Psychophysical data on temporal processing support an interpretation that reduced audibility degrades the ability to take advantage of onset asynchrony. Temporal processing data suggest that attenuation of stimulus components (either by hearing loss or filtering) leads to poorer measures of temporal processing ability (Bacon & Viemeister, 1985; Eddins, Hall, & Grose, 1992; Shailer & Moore, 1983). In the current experiment, there were differences in the audible frequency range of normal-hearing listeners and listeners with HI, as some of the stimulus harmonics would have been inaudible to the listeners with HI. The sloping and flat losses used in this study attenuate the high-frequency harmonics of the already sloping synthetic vowel spectrum. If many harmonics are attenuated by hearing loss to levels below audibility, the auditory system has fewer channels available to process the onset asynchrony, and therefore a reduction in the benefit received by onset asynchrony might be observed. However, applying this logic (that a reduced audible frequency range leads to a smaller benefit due to onset asynchrony) to double-vowel stimuli might be oversimplifying the processing of onset asynchrony for speech-like stimuli. For example, the identification of steady-state vowel sounds requires broadband spectral integration, with the key spectral-shape features being the location of spectral peaks near the formant frequencies. 
While all frequency channels will indicate a spectral change concomitant with the arrival of a later-occurring stimulus, channels around the formants might carry the largest spectral changes. The coding of onset asynchrony might also be restricted to the frequency channels activated by harmonics in the formant regions. HI will reduce the number of formants that are audible and will change the bandwidth associated with the formants, possibly altering the representation of onset asynchrony. A model of the peripheral auditory system and the benefits of onset asynchrony for each double-vowel pair can be used to determine whether differences in the audible frequency ranges of the vowel pairs, or differences related to the representation of the formants, are related to the effects associated with onset asynchrony.

To simulate the normal auditory system, excitation patterns were generated for each single vowel and each double-vowel pair (excluding the double vowels in which both vowels had the same identity). Therefore, 10 single vowels (5 different vowels at 2 f0s) and 80 vowel pairs (20 vowel pairs at 4 f0 combinations) were processed by a filter bank containing 201 roex filters (Glasberg & Moore, 1990) with center frequencies ranging between 80 and 9000 Hz. To simulate the impaired auditory periphery, the filters were broadened to simulate the reduced frequency selectivity associated with hearing loss. The bandwidth of each auditory filter was assumed to be related to the average hearing level at the filter's center frequency, according to the function relating ERB to absolute threshold reported by Glasberg and Moore (1986), where ERB is the equivalent rectangular bandwidth of the filter in kHz and the absolute threshold is expressed in dB SPL (see Footnote 2).
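As a rough illustration of the excitation-pattern computation described above, the following sketch builds a log-spaced bank of symmetric roex(p) filters using the standard Glasberg and Moore (1990) ERB formula and sums component intensities through each filter. This is a minimal sketch, not the authors' implementation: the uniform `broadening` factor is a simplification, since the paper broadened each filter according to the hearing level at its center frequency, a function not reproduced here.

```python
import math

def erb_hz(fc_hz):
    # Glasberg & Moore (1990): ERB in Hz of a normal auditory filter
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

def roex_weight(f_hz, fc_hz, erb):
    # Symmetric roex(p) filter: W(g) = (1 + p*g) * exp(-p*g), g = |f - fc| / fc
    g = abs(f_hz - fc_hz) / fc_hz
    p = 4.0 * fc_hz / erb
    return (1.0 + p * g) * math.exp(-p * g)

def excitation_pattern(components, broadening=1.0, n_filters=201,
                       f_lo=80.0, f_hi=9000.0):
    """components: list of (frequency_hz, level_db) pairs.
    broadening > 1 widens every filter, a crude stand-in for the
    threshold-dependent broadening used in the paper."""
    ratio = (f_hi / f_lo) ** (1.0 / (n_filters - 1))
    centers = [f_lo * ratio ** i for i in range(n_filters)]
    pattern = []
    for fc in centers:
        erb = erb_hz(fc) * broadening
        # Sum component intensities weighted by the filter, then back to dB
        intensity = sum(10.0 ** (lev / 10.0) * roex_weight(f, fc, erb)
                        for f, lev in components)
        pattern.append((fc, 10.0 * math.log10(intensity) if intensity > 0 else -200.0))
    return pattern
```

Running this on a 120-Hz harmonic complex shows the expected behavior: the normal filter bank preserves harmonic ripple in the low frequencies, and broadened filters smooth that ripple away, analogous to the loss of spectral detail above about 1000 Hz in the impaired patterns of Figure 4.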
Because no direct measurements of auditory filters were made and the excitation patterns presented here are intended as an approximation of impaired auditory processing, the filters were assumed to have the same symmetry present in the healthy auditory system. Examples of excitation patterns generated for normal-hearing listeners and listeners with HI when the f0s of both vowels are 120 Hz are plotted in Figure 4, which illustrates the changes in the excitation pattern associated with the addition of a late-arriving /i, æ, i, or u/ to an early-arriving /P/. The average audiogram plus 15 dB (15 dB SL) is plotted in each panel for reference. The excitation pattern of the early-arriving vowel /P/ is shown as the dotted line in each panel, and the excitation patterns of the other four vowels added to the /P/ are shown as the solid lines; thus, the solid lines represent excitation patterns of a two-vowel pair. Figure 4 indicates that the excitation patterns for the normal-hearing listeners contain a representation of the spectral prominences (i.e., formants), both in the single-vowel excitation patterns (dotted lines) and in the double-vowel excitation patterns (solid lines), whereas the excitation patterns for the listeners with HI show pronounced formants in the low frequencies and a loss of spectral resolution at frequencies above about 1000 Hz. For both groups of listeners, large differences are evident between the single-vowel and double-vowel excitation patterns. These differences include an overall higher level of excitation for the double vowel compared with the single vowel and the presence of new formants in the double vowel.
For example, when an /u/ is added to an /P/ (bottom panels), the excitation patterns for normal-hearing listeners and listeners with HI indicate that the 250-Hz formant in the /u/ leads to a prominent, low-frequency change in excitation in the double-vowel pair, whereas the other two formants in the /u/ are not evident in the double-vowel pair. In other vowel combinations, such as when the /æ/ is added to the /P/ (second panels), a representation of all five unique formants is present for the normal-hearing listeners, with the highest-frequency formants being lost in the excitation pattern for the listeners with HI. Thus, the excitation patterns illustrate that when a late-arriving vowel is added to an early-arriving vowel, there are spectral changes unique to each vowel combination, and these changes differ between the normal and impaired auditory systems.

Footnote 2: The average hearing levels of the clinical audiograms were linearly interpolated so that a hearing level corresponding to each auditory filter center frequency was derived. Hearing levels corresponding to frequencies below 250 Hz were set equal to the hearing level at 250 Hz.

Figure 4. Excitation patterns representing the single vowel, /P/ (dotted lines), and double vowels, /P-i, P-æ, P-i, P-u/ (solid lines), for NH listeners and listeners with HI are plotted in the left and right panels, respectively. Audiograms at 15 dB SL also are plotted on each panel.

Correlation coefficients between excitation-pattern-based variables and the effect of onset asynchrony were calculated to determine whether changes in the excitation patterns are predictive of the benefits received from onset asynchrony. The total change in excitation was calculated to determine whether a large change in excitation is associated with a large effect of onset asynchrony. Also evaluated was the change in excitation at the newly added formants, to determine whether the effect of onset asynchrony is related to a change in excitation at those formant frequencies in the late-arriving vowel that differ from the formants in the early-arriving vowel. For example, when /u/ is added to /P/ (see bottom panel of Figure 4), the large increase in excitation associated with the first formant of the /u/ between the double vowel and the early-arriving vowel would be used. To establish these metrics, a formant peak was defined as all frequencies within the 3-dB bandwidth of the spectral peak (see Footnote 3). A newly added formant was defined as a formant in the late-arriving vowel that is at least 10% different in frequency from any formant in the early-arriving vowel. As discussed above, it was also hypothesized that reduced audibility plays a role in the reduced benefit received from onset asynchrony.
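The formant-peak and newly-added-formant definitions can be sketched as follows. This is an illustrative reconstruction, assuming the hypothetical helper names `find_peaks` and `newly_added_formants`; the peak-picking here is a simplified stand-in for the authors' procedure, which also handled formants too broad to define the 3-dB down points (see Footnote 3).

```python
def find_peaks(pattern):
    """pattern: list of (freq_hz, level_db) pairs ordered by frequency.
    Returns local maxima as (freq, level, f_lo, f_hi), where f_lo..f_hi
    spans the 3-dB bandwidth around the peak."""
    peaks = []
    for i in range(1, len(pattern) - 1):
        f, lev = pattern[i]
        if lev > pattern[i - 1][1] and lev >= pattern[i + 1][1]:
            # Walk outward on each side until the level falls 3 dB below the peak
            lo = i
            while lo > 0 and pattern[lo - 1][1] > lev - 3.0:
                lo -= 1
            hi = i
            while hi < len(pattern) - 1 and pattern[hi + 1][1] > lev - 3.0:
                hi += 1
            peaks.append((f, lev, pattern[lo][0], pattern[hi][0]))
    return peaks

def newly_added_formants(early_peaks, double_peaks, min_ratio=0.10):
    """A peak in the double-vowel pattern counts as newly added when its
    frequency differs by at least 10% from every early-vowel formant."""
    new = []
    for f, lev, f_lo, f_hi in double_peaks:
        if all(abs(f - fe) / fe >= min_ratio for fe, *_ in early_peaks):
            new.append((f, lev, f_lo, f_hi))
    return new
```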
Therefore, two variables related to the overall audibility of the stimuli were evaluated: the audible frequency range of the early-arriving vowel (to determine whether the number of early-activated frequency channels is related to onset-asynchrony benefit) and the audible frequency range of the double-vowel pair (to determine whether the final number of activated frequency channels is related to onset-asynchrony benefit). The audible frequency range was computed by determining the highest frequency at which the stimulus components are above 15 dB SL. Two other variables were included because vowel identification might not require the full stimulus bandwidth and might be restricted to the formant peaks. These variables were the total bandwidth of the formants of the early-arriving vowel (to determine whether the number of channels activated by the formants of the early-arriving vowel is related to onset-asynchrony benefit) and the total bandwidth of the newly added formants (to determine whether onset-asynchrony benefit is related to the channels activated by the formants in the double vowel). To give approximately equal contribution to low and high frequencies, the log of the bandwidth of each formant was taken, and the log bandwidths of the individual formants were summed to produce the log formant bandwidth. If a newly added formant did not introduce a change in excitation of at least 1 dB, it was not included in the analysis. An average onset-asynchrony benefit was obtained for each double-vowel combination by averaging 70 identification scores (10 replications × 7 listeners in each group) at 200 ms and then subtracting the average identification scores at 100 ms, separately for normal-hearing listeners and listeners with HI (see Footnote 4). Correlation coefficients were based on 80 unique vowel pairs (20 vowel combinations × 4 f0 combinations) for the data obtained from normal-hearing listeners and from listeners with HI.
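A minimal sketch of the two computations described above, under the assumption of hypothetical function names; the `threshold_db_spl` callable stands in for the interpolated audiogram:

```python
def audible_frequency_range(components, threshold_db_spl):
    """Highest component frequency whose level exceeds threshold + 15 dB,
    i.e., is at least 15 dB SL. components: (frequency_hz, level_db) pairs;
    threshold_db_spl: callable mapping frequency to absolute threshold.
    Returns 0.0 if no component is audible."""
    audible = [f for f, lev in components if lev > threshold_db_spl(f) + 15.0]
    return max(audible, default=0.0)

def onset_asynchrony_benefit(scores_200_ms, scores_100_ms):
    """Benefit = mean identification score at the 200-ms asynchrony minus
    the mean at 100 ms, with scores pooled over replications and listeners."""
    return (sum(scores_200_ms) / len(scores_200_ms)
            - sum(scores_100_ms) / len(scores_100_ms))
```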
A data set based on all listeners was also evaluated that had 160 samples (80 vowel pairs × 2 groups). Correlation coefficients are shown in Table 3. Notably, Table 3 illustrates that all model metrics for the individual groups and the pooled data were associated with relatively low correlation coefficients. Thus, all of the proposed hypotheses that might account for poorer onset-asynchrony benefits (such as a smaller change in excitation between the single and double vowels, a reduced audible frequency range, and altered formant bandwidths) have little support. In particular, the modeling suggests that the reduced audible frequency range of the impaired auditory system is not associated with a smaller benefit due to onset asynchrony (see row 2). Despite the overall low correlation coefficients, the normal-hearing group did have variables that were significantly correlated with onset-asynchrony benefit. The variable with the greatest predictive power for onset-asynchrony benefit was the bandwidth of the newly added formants (row 4: r = .42; t[78] = 4.09, p < .0001). However, this variable was not significantly correlated with onset-asynchrony benefit for the listeners with HI (r = .04). Thus, the relatively large correlation coefficient obtained for the pooled data (r = .32) was most likely driven by the difference in onset-asynchrony benefit between the two groups rather than by a true relationship to the bandwidth of the added formants. Because this weak relationship between the bandwidth of the newly added formants and onset-asynchrony benefit held only for normal-hearing listeners, other factors might be at play in the group with HI.

Footnote 3: In cases where the representation of the formant was too broad to define the 3-dB down points, the formant bandwidth was estimated from the frequency at which thresholds began to rise.
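The reported significance test is consistent with the standard t statistic for a Pearson correlation, t = r·sqrt(n − 2) / sqrt(1 − r²), with n − 2 = 78 degrees of freedom: plugging in r = .42 and n = 80 reproduces t ≈ 4.09. A self-contained sketch of that check:

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    # t test of H0: rho = 0, with n - 2 degrees of freedom
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
```

With the table's r = .42 over 80 vowel pairs, `t_statistic(0.42, 80)` evaluates to about 4.09, matching the value reported in the text.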
Footnote 4: The onset-asynchrony difference between 200 and 100 ms was used because of the asymptotic performance of many listeners at 200 ms in the f0-different conditions.

Table 3. Correlation coefficients between onset-asynchrony benefit scores and excitation-pattern model variables.

Model variable                                              NH      HI    Entire group
Change in excitation
  Total change in excitation between early-arriving
    vowel and two-vowel pair                               .27*   -.09      -.01
  Change in excitation at newly added formants             .12     .10       .21
Audible frequency range
  Audible frequency range of early-arriving vowel         -.15    -.03       .29*
  Audible frequency range of double vowel                  .06    -.04       .37*
Audible formant bandwidths
  Total bandwidth (log) of early-arriving vowel formants  -.32*
  Total bandwidth (log) of newly added formants            .42*    .04       .32*

*p < .05.

Only two model variables led to correlation coefficients with the same sign across all three data sets: the change in excitation at the newly added formants (.12, .10, and .21 for normal-hearing listeners, listeners with HI, and the entire group) and the total bandwidth of the new formants associated with the late-arriving vowel (.42, .04, and .32 for the same data sets). Both of these model variables were associated with the formants in the vowels, and not with the entire power spectrum, suggesting that the formants might have played a small role in the processing of onset asynchrony for double vowels. It should be noted, though, that the correlation coefficients obtained for the change in excitation at the newly added formants are not significantly different from zero. Thus, any conclusions about the relationship between the change in excitation at the formant frequencies and the benefit received from onset asynchrony would be tenuous at best. The small correlation coefficients obtained for all model variables suggest that suprathreshold deficits related to hearing loss might be more closely predictive of the reduced effects of onset asynchrony in listeners with HI (cf. Fitzgibbons & Gordon-Salant, 1987). A possible factor that cannot be easily tested here is whether the loss of the cochlear nonlinearity leads to reduced benefits from onset asynchrony.
Listeners with normal hearing, who have active cochleae, can benefit from a spectral enhancement that might occur with the addition of a second vowel to a first. Listeners with HI will not benefit from this spectral enhancement to the same extent as normal-hearing listeners; therefore, their reduced sensitivity to onset asynchrony may also be related to loss of the active mechanism. As mentioned in the Results section, vowel dominance might also influence the benefits received from onset asynchrony, especially in conditions in which the fundamental frequency of the two vowels is the same. Listeners with normal hearing typically report hearing one dominant vowel when two vowels are presented simultaneously, and which vowel is dominant in a two-vowel pair differs between listeners with normal hearing and listeners with hearing loss (Arehart et al., 2005). If listeners with hearing loss experienced a greater degree of one vowel being dominant over the other, then one might expect that they would mistakenly identify the early-arriving vowel more often than listeners with normal hearing. However, while listeners with hearing loss made more errors overall than listeners with normal hearing, listeners with normal hearing made proportionally more early-arriving vowel identification errors (64%) than listeners with hearing loss (48%). Thus, even though listeners were identifying only one vowel of the double-vowel stimulus, listeners with HI made more errors in which neither of the two stimulus vowels was selected. One possible interpretation of this result is that listeners with HI experienced greater masking of the late-arriving vowel by the early-arriving vowel. In general, the errors made by the listeners suggest that whereas listeners with normal hearing may have had difficulty determining which of the two vowels came first, listeners with HI had more difficulty determining the identity of the vowels.
For both groups of listeners, a reduction in early-arriving vowel identification errors was entirely responsible for the improvement in performance due to onset asynchrony. These results are also consistent with a decline in temporal integration, in which listeners experience less of a decrease in detection threshold with increasing signal duration (Carlyon, Buus, & Florentine, 1990; Hall & Fernandes, 1983). Temporal integration is particularly important for processing onset asynchrony because identification of the early-arriving information will be better at longer stimulus durations. McKeown and Patterson (1995) showed that increases in the duration of a synchronous double-vowel stimulus lead to better identification of one of the simultaneous vowels.

Relationship Between f0 and Onset Asynchrony Processing

The major results related to fundamental frequency differences are that (a) both normal-hearing listeners and listeners with HI received similar benefits from fundamental frequency differences, and (b) the presence of a fundamental frequency difference had no effect on the benefit received from onset asynchrony. Because the scoring method adopted in this experiment differs considerably from the methods adopted by other studies measuring the effects of onset asynchrony, direct comparisons of the magnitude of the f0-difference benefit cannot be made. However, Arehart et al. (1997) also found that normal-hearing listeners and listeners with HI received similar benefits from fundamental frequency differences. In contrast, Summers and Leek (1998) showed that only about half of their listeners benefited from fundamental frequency differences to the same extent as normal-hearing listeners. Thus, even though many listeners with hearing loss tend to experience benefits from fundamental frequency differences similar to those of normal-hearing listeners, individual differences can be present. The result that the benefits received from onset asynchrony were unaltered by the presence of fundamental frequency differences suggests that onset asynchrony and fundamental frequency differences are processed separately in the auditory system. Fundamental frequency differences are largely processed in the first-formant frequency region, with some contribution from higher frequency regions for double-vowel stimuli with large differences in fundamental frequency (>2 semitones; Culling & Darwin, 1993). Therefore, fewer channels, and primarily low-frequency channels, are required to receive benefit from differences in fundamental frequency.
Listeners with HI who have normal hearing in the low frequencies might retain the ability to take advantage of fundamental frequency differences, whereas hearing loss at any frequency might make processing of onset asynchrony difficult. Because there is large variability in the hearing loss configurations in the current study, individual hearing losses can be used to determine whether listeners with normal low-frequency hearing receive more benefit from fundamental frequency differences than listeners with higher low-frequency thresholds. The hearing levels of HI1, HI4, HI5, and HI6 were ≤25 dB HL at frequencies of 1000 Hz and lower, so it would be anticipated that these listeners should benefit from fundamental frequency differences as well as normal-hearing listeners do. In fact, these 4 listeners all received benefits from fundamental frequency differences of between 12 and 18 percentage points, comparable with the average benefit of 11.7 percentage points received by normal-hearing listeners. Three other listeners with HI (HI2, HI3, and HI7) had average audiometric thresholds at 250, 500, and 1000 Hz that were >25 dB HL. HI2 and HI3 benefited from the fundamental frequency difference by <3.5 percentage points. However, HI7 had audiometric thresholds >40 dB HL at frequencies below 1000 Hz, yet she received a 15-percentage-point benefit from the fundamental frequency difference. The data of Arehart et al. (1997) and Summers and Leek (1998) also did not show a strong relationship between hearing level and fundamental-frequency-difference benefit. Perhaps a better metric than hearing level is frequency selectivity, as the auditory system conducts a frequency analysis on the stimulus that might be important for extracting the pitch, and therefore the fundamental frequency, of the stimulus (Assmann & Summerfield, 1990).
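The per-listener pattern described above can be tabulated as a quick sanity check. Note that the exact benefit values for HI1 through HI6 below are illustrative assumptions chosen within the ranges reported in the text (12 to 18 points for HI1/HI4/HI5/HI6, under 3.5 points for HI2/HI3); only HI7's 15-point benefit is stated exactly.

```python
# Each entry: (low-frequency thresholds <= 25 dB HL?, f0-difference benefit
# in percentage points). Values for HI1-HI6 are illustrative assumptions
# within the ranges reported in the text; HI7's value is as reported.
listeners = {
    "HI1": (True, 12.0), "HI4": (True, 14.0), "HI5": (True, 16.0),
    "HI6": (True, 18.0), "HI2": (False, 3.0), "HI3": (False, 3.4),
    "HI7": (False, 15.0),  # thresholds > 40 dB HL, yet a 15-point benefit
}

NH_AVERAGE_BENEFIT = 11.7  # percentage points, as reported for NH listeners

normal_low_freq = [b for ok, b in listeners.values() if ok]
elevated_low_freq = [b for ok, b in listeners.values() if not ok]

# Listeners with near-normal low-frequency hearing all fall in the reported
# 12-18 point range, comparable to the NH average, but HI7 shows that
# elevated low-frequency thresholds do not guarantee a small benefit.
all_in_range = all(12.0 <= b <= 18.0 for b in normal_low_freq)
counterexample_exists = max(elevated_low_freq) >= NH_AVERAGE_BENEFIT
```

The `counterexample_exists` flag captures why the relationship between low-frequency hearing level and f0-difference benefit is not clean: HI7 alone breaks the pattern.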
Hearing levels are only mildly correlated with frequency selectivity (Patterson, Nimmo-Smith, Weber, & Milroy, 1982) and might not be the best metric with which to relate cochlear filtering to fundamental frequency benefits.

Summary and Conclusions

The current experiment independently manipulated onset differences and fundamental frequency differences between two synthetic vowel sounds. Listeners with hearing loss benefited from onset asynchrony by a lesser amount than listeners with normal hearing, and the presence of fundamental frequency differences between the two vowels did not alter the benefit received from onset asynchrony. Excitation-pattern analyses indicated that the reduced onset-asynchrony benefit received by listeners with HI is not due to their reduced audible frequency range. Suprathreshold factors such as loss of the cochlear nonlinearity, reduced temporal integration, and the perception of vowel dominance might play a role in the reduced benefit received from onset asynchrony in listeners with HI. The ability to take advantage of fundamental frequency differences is not compromised in the same way as the ability to take advantage of onset asynchrony.

Acknowledgments

This work was supported by Grant DC from the National Institute on Deafness and Other Communication Disorders and by a summer undergraduate research scholarship to Shavon Marsh by the McNair Scholars Program. We thank T. Beth Trainor-Hayes for assistance with data collection and Marjorie Leek, Larry Humes, and Michelle Molis for their valuable comments on this manuscript and assistance in interpreting the data.


More information

Acoustics, signals & systems for audiology. Psychoacoustics of hearing impairment

Acoustics, signals & systems for audiology. Psychoacoustics of hearing impairment Acoustics, signals & systems for audiology Psychoacoustics of hearing impairment Three main types of hearing impairment Conductive Sound is not properly transmitted from the outer to the inner ear Sensorineural

More information

Effects of Cochlear Hearing Loss on the Benefits of Ideal Binary Masking

Effects of Cochlear Hearing Loss on the Benefits of Ideal Binary Masking INTERSPEECH 2016 September 8 12, 2016, San Francisco, USA Effects of Cochlear Hearing Loss on the Benefits of Ideal Binary Masking Vahid Montazeri, Shaikat Hossain, Peter F. Assmann University of Texas

More information

Auditory scene analysis in humans: Implications for computational implementations.

Auditory scene analysis in humans: Implications for computational implementations. Auditory scene analysis in humans: Implications for computational implementations. Albert S. Bregman McGill University Introduction. The scene analysis problem. Two dimensions of grouping. Recognition

More information

functions grow at a higher rate than in normal{hearing subjects. In this chapter, the correlation

functions grow at a higher rate than in normal{hearing subjects. In this chapter, the correlation Chapter Categorical loudness scaling in hearing{impaired listeners Abstract Most sensorineural hearing{impaired subjects show the recruitment phenomenon, i.e., loudness functions grow at a higher rate

More information

Research Article The Acoustic and Peceptual Effects of Series and Parallel Processing

Research Article The Acoustic and Peceptual Effects of Series and Parallel Processing Hindawi Publishing Corporation EURASIP Journal on Advances in Signal Processing Volume 9, Article ID 6195, pages doi:1.1155/9/6195 Research Article The Acoustic and Peceptual Effects of Series and Parallel

More information

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception ISCA Archive VOQUAL'03, Geneva, August 27-29, 2003 Jitter, Shimmer, and Noise in Pathological Voice Quality Perception Jody Kreiman and Bruce R. Gerratt Division of Head and Neck Surgery, School of Medicine

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Effects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure

Effects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure Effects of Age and Hearing Loss on the Processing of Auditory Temporal Fine Structure Brian C. J. Moore Abstract Within the cochlea, broadband sounds like speech and music are filtered into a series of

More information

Effects of background noise level on behavioral estimates of basilar-membrane compression

Effects of background noise level on behavioral estimates of basilar-membrane compression Effects of background noise level on behavioral estimates of basilar-membrane compression Melanie J. Gregan a and Peggy B. Nelson Department of Speech-Language-Hearing Science, University of Minnesota,

More information

64 IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 61, NO. 1, JANUARY 2014

64 IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 61, NO. 1, JANUARY 2014 64 IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 61, NO. 1, JANUARY 2014 Signal-Processing Strategy for Restoration of Cross-Channel Suppression in Hearing-Impaired Listeners Daniel M. Rasetshwane,

More information

Although considerable work has been conducted on the speech

Although considerable work has been conducted on the speech Influence of Hearing Loss on the Perceptual Strategies of Children and Adults Andrea L. Pittman Patricia G. Stelmachowicz Dawna E. Lewis Brenda M. Hoover Boys Town National Research Hospital Omaha, NE

More information

Asynchronous glimpsing of speech: Spread of masking and task set-size

Asynchronous glimpsing of speech: Spread of masking and task set-size Asynchronous glimpsing of speech: Spread of masking and task set-size Erol J. Ozmeral, a) Emily Buss, and Joseph W. Hall III Department of Otolaryngology/Head and Neck Surgery, University of North Carolina

More information

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and

This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors institution

More information

Lateralized speech perception in normal-hearing and hearing-impaired listeners and its relationship to temporal processing

Lateralized speech perception in normal-hearing and hearing-impaired listeners and its relationship to temporal processing Lateralized speech perception in normal-hearing and hearing-impaired listeners and its relationship to temporal processing GUSZTÁV LŐCSEI,*, JULIE HEFTING PEDERSEN, SØREN LAUGESEN, SÉBASTIEN SANTURETTE,

More information

INTRODUCTION. Institute of Technology, Cambridge, MA Electronic mail:

INTRODUCTION. Institute of Technology, Cambridge, MA Electronic mail: Level discrimination of sinusoids as a function of duration and level for fixed-level, roving-level, and across-frequency conditions Andrew J. Oxenham a) Institute for Hearing, Speech, and Language, and

More information

HCS 7367 Speech Perception

HCS 7367 Speech Perception Babies 'cry in mother's tongue' HCS 7367 Speech Perception Dr. Peter Assmann Fall 212 Babies' cries imitate their mother tongue as early as three days old German researchers say babies begin to pick up

More information

Computational Perception /785. Auditory Scene Analysis

Computational Perception /785. Auditory Scene Analysis Computational Perception 15-485/785 Auditory Scene Analysis A framework for auditory scene analysis Auditory scene analysis involves low and high level cues Low level acoustic cues are often result in

More information

FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED

FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED Francisco J. Fraga, Alan M. Marotta National Institute of Telecommunications, Santa Rita do Sapucaí - MG, Brazil Abstract A considerable

More information

Christopher J. Plack Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, England

Christopher J. Plack Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, England Inter-relationship between different psychoacoustic measures assumed to be related to the cochlear active mechanism Brian C. J. Moore a) and Deborah A. Vickers Department of Experimental Psychology, University

More information

Lecture Outline. The GIN test and some clinical applications. Introduction. Temporal processing. Gap detection. Temporal resolution and discrimination

Lecture Outline. The GIN test and some clinical applications. Introduction. Temporal processing. Gap detection. Temporal resolution and discrimination Lecture Outline The GIN test and some clinical applications Dr. Doris-Eva Bamiou National Hospital for Neurology Neurosurgery and Institute of Child Health (UCL)/Great Ormond Street Children s Hospital

More information

Improving Audibility with Nonlinear Amplification for Listeners with High-Frequency Loss

Improving Audibility with Nonlinear Amplification for Listeners with High-Frequency Loss J Am Acad Audiol 11 : 214-223 (2000) Improving Audibility with Nonlinear Amplification for Listeners with High-Frequency Loss Pamela E. Souza* Robbi D. Bishop* Abstract In contrast to fitting strategies

More information

Topics in Linguistic Theory: Laboratory Phonology Spring 2007

Topics in Linguistic Theory: Laboratory Phonology Spring 2007 MIT OpenCourseWare http://ocw.mit.edu 24.91 Topics in Linguistic Theory: Laboratory Phonology Spring 27 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER ARCHIVES OF ACOUSTICS 29, 1, 25 34 (2004) INTELLIGIBILITY OF SPEECH PROCESSED BY A SPECTRAL CONTRAST ENHANCEMENT PROCEDURE AND A BINAURAL PROCEDURE A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER Institute

More information

Binaural Hearing. Why two ears? Definitions

Binaural Hearing. Why two ears? Definitions Binaural Hearing Why two ears? Locating sounds in space: acuity is poorer than in vision by up to two orders of magnitude, but extends in all directions. Role in alerting and orienting? Separating sound

More information

Temporal order discrimination of tonal sequences by younger and older adults: The role of duration and rate a)

Temporal order discrimination of tonal sequences by younger and older adults: The role of duration and rate a) Temporal order discrimination of tonal sequences by younger and older adults: The role of duration and rate a) Mini N. Shrivastav b Department of Communication Sciences and Disorders, 336 Dauer Hall, University

More information

Combination of binaural and harmonic. masking release effects in the detection of a. single component in complex tones

Combination of binaural and harmonic. masking release effects in the detection of a. single component in complex tones Combination of binaural and harmonic masking release effects in the detection of a single component in complex tones Martin Klein-Hennig, Mathias Dietz, and Volker Hohmann a) Medizinische Physik and Cluster

More information

Spatial processing in adults with hearing loss

Spatial processing in adults with hearing loss Spatial processing in adults with hearing loss Harvey Dillon Helen Glyde Sharon Cameron, Louise Hickson, Mark Seeto, Jörg Buchholz, Virginia Best creating sound value TM www.hearingcrc.org Spatial processing

More information

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED International Conference on Systemics, Cybernetics and Informatics, February 12 15, 2004 BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED Alice N. Cheeran Biomedical

More information

Representation of sound in the auditory nerve

Representation of sound in the auditory nerve Representation of sound in the auditory nerve Eric D. Young Department of Biomedical Engineering Johns Hopkins University Young, ED. Neural representation of spectral and temporal information in speech.

More information

Level dependence of auditory filters in nonsimultaneous masking as a function of frequency

Level dependence of auditory filters in nonsimultaneous masking as a function of frequency Level dependence of auditory filters in nonsimultaneous masking as a function of frequency Andrew J. Oxenham a and Andrea M. Simonson Research Laboratory of Electronics, Massachusetts Institute of Technology,

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 3aPP: Auditory Physiology

More information

The basic hearing abilities of absolute pitch possessors

The basic hearing abilities of absolute pitch possessors PAPER The basic hearing abilities of absolute pitch possessors Waka Fujisaki 1;2;* and Makio Kashino 2; { 1 Graduate School of Humanities and Sciences, Ochanomizu University, 2 1 1 Ootsuka, Bunkyo-ku,

More information

JARO. Estimates of Human Cochlear Tuning at Low Levels Using Forward and Simultaneous Masking ANDREW J. OXENHAM, 1 AND CHRISTOPHER A.

JARO. Estimates of Human Cochlear Tuning at Low Levels Using Forward and Simultaneous Masking ANDREW J. OXENHAM, 1 AND CHRISTOPHER A. JARO 4: 541 554 (2003) DOI: 10.1007/s10162-002-3058-y JARO Journal of the Association for Research in Otolaryngology Estimates of Human Cochlear Tuning at Low Levels Using Forward and Simultaneous Masking

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Speech Communication Session 4aSCb: Voice and F0 Across Tasks (Poster

More information

Acoustics Research Institute

Acoustics Research Institute Austrian Academy of Sciences Acoustics Research Institute Modeling Modelingof ofauditory AuditoryPerception Perception Bernhard BernhardLaback Labackand andpiotr PiotrMajdak Majdak http://www.kfs.oeaw.ac.at

More information

THRESHOLD PREDICTION USING THE ASSR AND THE TONE BURST CONFIGURATIONS

THRESHOLD PREDICTION USING THE ASSR AND THE TONE BURST CONFIGURATIONS THRESHOLD PREDICTION USING THE ASSR AND THE TONE BURST ABR IN DIFFERENT AUDIOMETRIC CONFIGURATIONS INTRODUCTION INTRODUCTION Evoked potential testing is critical in the determination of audiologic thresholds

More information

Christopher J. Plack and Ray Meddis Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom

Christopher J. Plack and Ray Meddis Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom Cochlear nonlinearity between 500 and 8000 Hz in listeners with normal hearing Enrique A. Lopez-Poveda a) Centro Regional de Investigaciones Biomédicas, Facultad de Medicina, Universidad de Castilla-La

More information

2/25/2013. Context Effect on Suprasegmental Cues. Supresegmental Cues. Pitch Contour Identification (PCI) Context Effect with Cochlear Implants

2/25/2013. Context Effect on Suprasegmental Cues. Supresegmental Cues. Pitch Contour Identification (PCI) Context Effect with Cochlear Implants Context Effect on Segmental and Supresegmental Cues Preceding context has been found to affect phoneme recognition Stop consonant recognition (Mann, 1980) A continuum from /da/ to /ga/ was preceded by

More information

EFFECTS OF SPECTRAL DISTORTION ON SPEECH INTELLIGIBILITY. Lara Georgis

EFFECTS OF SPECTRAL DISTORTION ON SPEECH INTELLIGIBILITY. Lara Georgis EFFECTS OF SPECTRAL DISTORTION ON SPEECH INTELLIGIBILITY by Lara Georgis Submitted in partial fulfilment of the requirements for the degree of Master of Science at Dalhousie University Halifax, Nova Scotia

More information

Intelligibility of narrow-band speech and its relation to auditory functions in hearing-impaired listeners

Intelligibility of narrow-band speech and its relation to auditory functions in hearing-impaired listeners Intelligibility of narrow-band speech and its relation to auditory functions in hearing-impaired listeners VRIJE UNIVERSITEIT Intelligibility of narrow-band speech and its relation to auditory functions

More information

THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE.

THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE. THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE. Michael J. Hautus, Daniel Shepherd, Mei Peng, Rebecca Philips and Veema Lodhia Department

More information

The masking of interaural delays

The masking of interaural delays The masking of interaural delays Andrew J. Kolarik and John F. Culling a School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3AT, United Kingdom Received 5 December 2006;

More information

Infant Hearing Development: Translating Research Findings into Clinical Practice. Auditory Development. Overview

Infant Hearing Development: Translating Research Findings into Clinical Practice. Auditory Development. Overview Infant Hearing Development: Translating Research Findings into Clinical Practice Lori J. Leibold Department of Allied Health Sciences The University of North Carolina at Chapel Hill Auditory Development

More information

Discrimination of temporal fine structure by birds and mammals

Discrimination of temporal fine structure by birds and mammals Auditory Signal Processing: Physiology, Psychoacoustics, and Models. Pressnitzer, D., de Cheveigné, A., McAdams, S.,and Collet, L. (Eds). Springer Verlag, 24. Discrimination of temporal fine structure

More information

Learning to detect a tone in unpredictable noise

Learning to detect a tone in unpredictable noise Learning to detect a tone in unpredictable noise Pete R. Jones and David R. Moore MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, United Kingdom p.r.jones@ucl.ac.uk, david.moore2@cchmc.org

More information

On the Interplay Between Cochlear Gain Loss and Temporal Envelope Coding Deficits

On the Interplay Between Cochlear Gain Loss and Temporal Envelope Coding Deficits On the Interplay Between Cochlear Gain Loss and Temporal Envelope Coding Deficits Sarah Verhulst, Patrycja Piktel, Anoop Jagadeesh and Manfred Mauermann Abstract Hearing impairment is characterized by

More information

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS 19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 27 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS PACS: 43.66.Pn Seeber, Bernhard U. Auditory Perception Lab, Dept.

More information

doi: /brain/awn308 Brain 2009: 132; Enhanced discrimination of low-frequency sounds for subjects with high-frequency dead regions

doi: /brain/awn308 Brain 2009: 132; Enhanced discrimination of low-frequency sounds for subjects with high-frequency dead regions doi:10.1093/brain/awn308 Brain 2009: 132; 524 536 524 BRAIN A JOURNAL OF NEUROLOGY Enhanced discrimination of low-frequency sounds for subjects with high-frequency dead regions Brian C. J. Moore 1 and

More information

Toward an objective measure for a stream segregation task

Toward an objective measure for a stream segregation task Toward an objective measure for a stream segregation task Virginia M. Richards, Eva Maria Carreira, and Yi Shen Department of Cognitive Sciences, University of California, Irvine, 3151 Social Science Plaza,

More information

Study of perceptual balance for binaural dichotic presentation

Study of perceptual balance for binaural dichotic presentation Paper No. 556 Proceedings of 20 th International Congress on Acoustics, ICA 2010 23-27 August 2010, Sydney, Australia Study of perceptual balance for binaural dichotic presentation Pandurangarao N. Kulkarni

More information

Age Effects on Measures of Auditory Duration Discrimination

Age Effects on Measures of Auditory Duration Discrimination Journal of Speech and Hearing Research, Volume 37, 662-670, June 1994 Age Effects on Measures of Auditory Duration Discrimination Peter J. Fitzgibbons Gallaudet University Washington, U Sandra Gordon-Salant

More information

The role of periodicity in the perception of masked speech with simulated and real cochlear implants

The role of periodicity in the perception of masked speech with simulated and real cochlear implants The role of periodicity in the perception of masked speech with simulated and real cochlear implants Kurt Steinmetzger and Stuart Rosen UCL Speech, Hearing and Phonetic Sciences Heidelberg, 09. November

More information

USING THE AUDITORY STEADY-STATE RESPONSE TO DIAGNOSE DEAD REGIONS IN THE COCHLEA. A thesis submitted to The University of Manchester for the degree of

USING THE AUDITORY STEADY-STATE RESPONSE TO DIAGNOSE DEAD REGIONS IN THE COCHLEA. A thesis submitted to The University of Manchester for the degree of USING THE AUDITORY STEADY-STATE RESPONSE TO DIAGNOSE DEAD REGIONS IN THE COCHLEA A thesis submitted to The University of Manchester for the degree of Doctor of Philosophy Faculty of Medical and Human Sciences

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information

Prelude Envelope and temporal fine. What's all the fuss? Modulating a wave. Decomposing waveforms. The psychophysics of cochlear

Prelude Envelope and temporal fine. What's all the fuss? Modulating a wave. Decomposing waveforms. The psychophysics of cochlear The psychophysics of cochlear implants Stuart Rosen Professor of Speech and Hearing Science Speech, Hearing and Phonetic Sciences Division of Psychology & Language Sciences Prelude Envelope and temporal

More information

Perceptual pitch shift for sounds with similar waveform autocorrelation

Perceptual pitch shift for sounds with similar waveform autocorrelation Pressnitzer et al.: Acoustics Research Letters Online [DOI./.4667] Published Online 4 October Perceptual pitch shift for sounds with similar waveform autocorrelation Daniel Pressnitzer, Alain de Cheveigné

More information

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds

Signals, systems, acoustics and the ear. Week 1. Laboratory session: Measuring thresholds Signals, systems, acoustics and the ear Week 1 Laboratory session: Measuring thresholds What s the most commonly used piece of electronic equipment in the audiological clinic? The Audiometer And what is

More information

Hello Old Friend the use of frequency specific speech phonemes in cortical and behavioural testing of infants

Hello Old Friend the use of frequency specific speech phonemes in cortical and behavioural testing of infants Hello Old Friend the use of frequency specific speech phonemes in cortical and behavioural testing of infants Andrea Kelly 1,3 Denice Bos 2 Suzanne Purdy 3 Michael Sanders 3 Daniel Kim 1 1. Auckland District

More information

Lecture 3: Perception

Lecture 3: Perception ELEN E4896 MUSIC SIGNAL PROCESSING Lecture 3: Perception 1. Ear Physiology 2. Auditory Psychophysics 3. Pitch Perception 4. Music Perception Dan Ellis Dept. Electrical Engineering, Columbia University

More information

Sound localization psychophysics

Sound localization psychophysics Sound localization psychophysics Eric Young A good reference: B.C.J. Moore An Introduction to the Psychology of Hearing Chapter 7, Space Perception. Elsevier, Amsterdam, pp. 233-267 (2004). Sound localization:

More information

The role of tone duration in dichotic temporal order judgment (TOJ)

The role of tone duration in dichotic temporal order judgment (TOJ) Babkoff, H., Fostick, L. (2013). The role of tone duration in dichotic temporal order judgment. Attention Perception and Psychophysics, 75(4):654-60 ***This is a self-archiving copy and does not fully

More information

Digital East Tennessee State University

Digital East Tennessee State University East Tennessee State University Digital Commons @ East Tennessee State University ETSU Faculty Works Faculty Works 10-1-2011 Effects of Degree and Configuration of Hearing Loss on the Contribution of High-

More information

Audibility, discrimination and hearing comfort at a new level: SoundRecover2

Audibility, discrimination and hearing comfort at a new level: SoundRecover2 Audibility, discrimination and hearing comfort at a new level: SoundRecover2 Julia Rehmann, Michael Boretzki, Sonova AG 5th European Pediatric Conference Current Developments and New Directions in Pediatric

More information

This study examines age-related changes in auditory sequential processing

This study examines age-related changes in auditory sequential processing 1052 JSLHR, Volume 41, 1052 1060, October 1998 Auditory Temporal Order Perception in Younger and Older Adults Peter J. Fitzgibbons Gallaudet University Washington, DC Sandra Gordon-Salant University of

More information

Precursor effects on behavioral estimates of frequency selectivity and gain in forward masking

Precursor effects on behavioral estimates of frequency selectivity and gain in forward masking Precursor effects on behavioral estimates of frequency selectivity and gain in forward masking Skyler G. Jennings, a Elizabeth A. Strickland, and Michael G. Heinz b Department of Speech, Language, and

More information

I. INTRODUCTION. J. Acoust. Soc. Am. 111 (1), Pt. 1, Jan /2002/111(1)/271/14/$ Acoustical Society of America

I. INTRODUCTION. J. Acoust. Soc. Am. 111 (1), Pt. 1, Jan /2002/111(1)/271/14/$ Acoustical Society of America The use of distortion product otoacoustic emission suppression as an estimate of response growth Michael P. Gorga, a) Stephen T. Neely, Patricia A. Dorn, and Dawn Konrad-Martin Boys Town National Research

More information

Masker-signal relationships and sound level

Masker-signal relationships and sound level Chapter 6: Masking Masking Masking: a process in which the threshold of one sound (signal) is raised by the presentation of another sound (masker). Masking represents the difference in decibels (db) between

More information

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics

! Can hear whistle? ! Where are we on course map? ! What we did in lab last week. ! Psychoacoustics 2/14/18 Can hear whistle? Lecture 5 Psychoacoustics Based on slides 2009--2018 DeHon, Koditschek Additional Material 2014 Farmer 1 2 There are sounds we cannot hear Depends on frequency Where are we on

More information

PERCEPTUAL MEASUREMENT OF BREATHY VOICE QUALITY

PERCEPTUAL MEASUREMENT OF BREATHY VOICE QUALITY PERCEPTUAL MEASUREMENT OF BREATHY VOICE QUALITY By SONA PATEL A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER

More information

Investigators have known for at least 125 years that auditory detection

Investigators have known for at least 125 years that auditory detection Temporal Integration of Sinusoidal Increments in the Absence of Absolute Energy Cues C. Formby Division of Otolaryngology HNS University of Maryland School of Medicine Baltimore M. G. Heinz* Speech and

More information