Music Processing in Deaf Adults with Cochlear Implants


Music Processing in Deaf Adults with Cochlear Implants

by

Mathieu R. Saindon

A thesis submitted in conformity with the requirements for the degree of Master of Arts
Graduate Department of Psychology
University of Toronto

Copyright by Mathieu R. Saindon 2010

Music Processing in Deaf Adults with Cochlear Implants

Abstract

Mathieu R. Saindon
Master of Arts
Graduate Department of Psychology
University of Toronto
2010

Cochlear implants (CIs) provide coarse representations of pitch, which are adequate for speech but not for music. Despite increasing interest in music processing by CI users, the available information is fragmentary. The present experiment attempted to fill this void by conducting a comprehensive assessment of music processing in adult CI users. CI users (n = 6) and normally hearing (NH) controls (n = 12) were tested on several tasks involving melody and rhythm perception, recognition of familiar music, and recognition of emotion in speech and music. CI performance was substantially poorer than NH performance and at chance levels on pitch processing tasks. Performance was highly variable, however, with one individual achieving NH performance levels on some tasks, probably because of low-frequency residual hearing in his unimplanted ear. Future research with a larger sample of CI users can shed light on factors associated with good and poor music processing in this population.

Acknowledgments

This thesis would not have been possible without the constant efforts, guidance, and dedication of my supervisors, Dr. Sandra Trehub and Dr. Glenn Schellenberg. I am very grateful for all of their help with this research project. I would also like to thank my parents and sister for their long-distance support, and my lovely wife Lauren for baking all of those muffins. Lastly, I would like to thank the health professionals and patients of the Sunnybrook Cochlear Implant Program. Without them, this project would not have been possible.

Table of Contents

Acknowledgments
Table of Contents
List of Tables
List of Figures
List of Appendices
1 Introduction
2 Method
   2.1 Participants
   2.2 Apparatus
   2.3 Test Battery
      Metric task
      Rhythm task
      Distorted Tunes Test
      Musical emotion test
      Diagnostic Analysis of Nonverbal Accuracy
      Open-set word recognition
      CAMP test
      Familiar music task
      Pitch- and interval matching
   2.4 Procedure
3 Results and Discussion
   3.1 Open-Set Word Recognition
   3.2 CAMP
   3.3 Distorted Tunes Test
   3.4 Familiar Music Task
   3.5 Metric Task & Modified MBEA Rhythm Task
   3.6 Music Emotion & DANVA
   3.7 Pitch- and Interval-Matching Task
4 Conclusion
References
Appendix

List of Tables

Table 1. Participant Characteristics
Table 2. List of CVC Words
Table 3. Music Emotion Arousal Scores

List of Figures

Figure 1. Syllable (CVC) Recognition
Figure 2. CAMP Pitch Threshold (semitones)
Figure 3. CAMP Melody Recognition (Percent Correct)
Figure 4. CAMP Timbre Recognition (Percent Correct)
Figure 5. Distorted Tunes Test
Figure 6. Familiar Music No-Rhythm Condition (Percent Correct)
Figure 7. Familiar Music Melody Condition (Percent Correct)
Figure 8. Familiar Music Instrumental Condition (Percent Correct)
Figure 9. Metric Task
Figure 10. Modified MBEA Rhythm Task
Figure 11. Music Emotion Task
Figure 12. DANVA 2: Adult Vocal Emotion Task
Figure 13. Average Deviations in Pitch (semitones)
Figure 14. Deviations in Interval Matching (semitones)

List of Appendices

Appendix A. Music and Cochlear Implants Questionnaire
Appendix B. Music Background Information Questionnaire (Adults)
Appendix C. Music and Cochlear Implants Interview
Appendix D. Semi-Structured Interview

1 Introduction

A cochlear implant (CI) is a prosthetic device designed to provide hearing sensations to deaf individuals. It is unquestionably the most successful neural prosthesis to date, judging by the number of individuals worldwide who have received one and derived great benefit from it (Wilson, 2004). Its external microphone and signal processor receive incoming sound, transform it into an electrical signal, and extract features that are important for speech perception. This information is then transmitted to electrodes implanted in the cochlea and, in turn, to the auditory nerve. Modern devices provide relatively coarse representations of spectral information, which are adequate for perceiving speech in ideal listening conditions (Shannon, Zeng, Kamath, Wygonski & Ekelid, 1995; Wilson, 2000), but they are inadequate for perceiving speech in noise (Fetterman & Domico, 2002; Firszt et al., 2004), identifying emotion from speech prosody (Hopyan-Misakyan, Gordon, Dennis & Papsin, 2009; Meister, Landwehr, Pyschny, Walger & Von Wedel, 2009), differentiating one speaker from another (Meister et al., 2009), identifying musical timbres or instruments (McDermott & Looi, 2004), and recognizing melodies from pitch cues alone (Kang et al., 2009; Kong, Cruz, Ackland-Jones & Zeng, 2004). Music perception is especially challenging for CI users. Coding strategies in implant processors extract the temporal envelope, discarding the temporal fine structure that is critical for music perception (Galvin, Fu & Shannon, 2009). Consequently, the music perceived by CI users is considerably degraded in sound quality and detail, especially as it pertains to pitch patterning. In fact, implant users often describe music as unpleasant, mechanical, and difficult to follow (Gfeller, Christ, Knutson, Woodworth, Witt & DeBus, 1998; Gfeller, Witt, Stordahl, Mehr & Woodworth, 2000; Lassaletta et al., 2007).
It comes as no surprise, then, that postlingually deafened adult CI users, who had access to rich auditory representations of music before their hearing loss, are often disappointed with music heard via their implant (Gfeller, Christ, Knutson, Witt, Murray & Tyler, 2000; Lassaletta et al., 2007; Looi & She, 2010; Veekmans, Ressel, Mueller, Vischer & Brockmeier, 2009). This is unfortunate because music is an important source of pleasure for many, if not most, hearing individuals (Laukka, 2006). Even for postlingually
deafened implant users, quality of music perception is associated positively with quality-of-life ratings (Lassaletta et al., 2007). As noted, limited temporal fine structure or spectral detail provides limited access to pitch patterning. Cooper, Tobey, and Loizou (2008) used a test battery designed for the diagnosis of amusia, or tone deafness, in individuals with normal audiological profiles. They found that CI users failed to discriminate two melodies that differed in pitch patterning even when the difference involved a change in pitch contour or key. In that sense, CI users performed much like amusic individuals, who are typically deficient in the perception of pitch patterns but not temporal patterns (Foxton, Nandy & Griffiths, 2006). In other research, CI users have exhibited difficulty determining which of two sounds is higher in pitch (also referred to as pitch ranking; see Kang et al., 2009; Looi, McDermott, McKay & Hickson, 2004, 2008), detecting the direction (higher or lower) of a pitch change in a melody (Gfeller et al., 2007; Leal et al., 2003), and differentiating melodies in the absence of rhythmic cues (Kang et al., 2009; Kong et al., 2004). In the context of these pitch perception difficulties, it is not surprising to find deficient pitch production as well. For example, child CI users preserve the rhythms but not the pitch contours (i.e., patterns of rising and falling pitches) when they sing familiar songs (Nakata, Trehub, Mitani & Kanda, 2006; Xu et al., 2009). Although this pattern is mirrored, to some extent, in the song production of amusic individuals, some individuals with severe pitch perception deficits manage to produce accurate contours and intervals when singing familiar songs with words, which reveals an unexpected dissociation between perception and action (Dalla Bella, Giguère & Peretz, 2009).
By contrast, tempo and rhythm perception in CI users are reportedly comparable to those of normally hearing (NH) listeners except when the stimuli or tasks are complex (Cooper et al., 2008; Gfeller, Woodworth, Robin, Witt & Knutson, 1997; Kong et al., 2004). Although we have learned much in recent years about the music perception skills of CI users, much remains to be learned. For example, the perceptual demands of differentiating simple rhythm or pitch patterns differ drastically from the demands of perceiving conventional music on the radio, on iPods, or in concert halls. Rhythm, pitch, and timbre are typically blended into a coherent whole. Discriminating two rhythms in isolation does not mean that a CI user would be able to hear a guitar solo when it is accompanied by a drum kit, bass, guitar, and vocals. He or she might also be unable to pick out the recurring cello melody in a Beethoven symphony. In short, there is little
understanding of CI users' ability to perceive music as they might hear it on a recording or at a concert. In addition to providing pleasure and contributing to quality of life, music perception skills underlie the perception of emotion in speech as well as music (Juslin & Laukka, 2003). Emotion in speech is conveyed primarily by musically relevant cues such as loudness, tempo or rate, rhythm, pitch height, pitch range, and pitch contours. For example, expressions of anger in speech and music typically involve rapid tempo and increased amplitude or loudness, in contrast to expressions of sadness, which typically involve slow tempo, low pitch, and decreased loudness. Although word recognition is obviously crucial for successful verbal communication, it is difficult to discern a speaker's true emotions and communicative intentions without access to paralinguistic and prosodic cues. To date, however, there has been little research on CI users' perception of emotion in speech and none on their perception of emotion in music. The goal of the present study was to provide a comprehensive assessment of the music perception skills of adult CI users who became deaf postlingually. The perception of rhythm and pitch was assessed, as were the perception of emotion conveyed through speech and music and pitch production. Rhythm perception was assessed in the context of simple rhythmic patterns as well as melodies with accompaniment. Adding accompaniment to a simple rhythm test used previously with adult CI users (Cooper et al., 2008) made it possible to determine whether normal rhythm perception skills remained evident in ecologically valid musical contexts. Melody perception was assessed by means of tasks that required comparisons of the musical input with long-term representations of music. The perception of emotion in speech was assessed with a task that has been used with child CI users (Hopyan-Misakyan et al., 2009).
Although child CI users were unsuccessful at differentiating vocal emotions, it is possible that adult CI users, by virtue of their previous access to acoustic information and their greater understanding of communicative conventions, might be more successful than children at this task. Finally, we tested open-set word recognition, using monosyllabic consonant-vowel-consonant words, as a check on CI users' use of bottom-up cues in speech. Large individual differences are pervasive in CI outcomes. Factors influencing outcomes among postlingually deafened adults include duration of near-total deafness (i.e., little or no benefit from hearing aids) before implantation, with shorter durations having more favorable
outcomes (Van Dijk, Van Olphen, Langereis, Mens, Brokx & Smoorenburg, 1999); cognitive abilities (Pisoni & Cleary, 2004); integrity of the auditory nerve and central auditory system (Hartman & Kral, 2004; Leake & Rebscher, 2004); and relevant experience or training (Fu, Nogaki, & Galvin, 2005; Galvin, Fu & Nogaki, 2007). Adults with residual hearing immediately prior to implantation perform better on subsequent recognition of speech and environmental sounds than those without usable residual hearing, even though implantation destroys the residual hearing (Van Dijk et al., 1999). Moreover, CI users with music training in high school, college, or later exhibit better music perception (Gfeller et al., 2008). Based on these findings and our own specific goals, we designed a questionnaire that could potentially shed light on individual differences in performance. Information was solicited about education, history of hearing loss and implantation, implant characteristics, music listening and music-making habits, and music training. We expected CI users to perform poorly compared to NH listeners except on the test of simple rhythm discrimination. We also expected performance to be affected by duration of deafness before implantation, musical exposure and training, and residual hearing, if any, in the unimplanted ear. Finally, we expected CI users to perform better on musical materials that were highly familiar to them than on those that were less familiar or unfamiliar.

2 Method

2.1 Participants

The target participants were adult CI users (n = 6), 46 to 76 years of age (M = 62.2, SD = 13.0; see Table 1), who were recruited from the Cochlear Implant Program of Sunnybrook Hospital in Toronto. All were postlingually deafened, communicated solely by auditory-oral means, and expressed some interest in music. Additionally, all reported progressive hearing losses that were gradual, except for one participant.
Although she experienced substantial hearing loss when she was very young, her bilateral hearing aids were very helpful until 6 years ago, when she experienced a precipitous loss of most of her residual hearing. One participant used a hearing aid in his unimplanted ear to amplify his residual hearing selectively at 500 and 250 Hz (90 and 70 dB thresholds, respectively). With respect to musical background, three CI users had taken music lessons in the past, but only two were still playing music.

Table 1. Participant Characteristics

Participant, M/F, Age, Device(s), Type of CI, Hearing loss onset (age), Progressive loss (yes/no), Hearing aid use (years), Implant use (years), Music lessons (years), Current instrument, Weekly music listening (hours)

CI-1 F 47 2 CIs Advanced Bionics 1 yes sudden No 7 10
CI-2 M 46 CI + HA Cochlear 5 yes gradual Yes 10 or more
CI-3 F 67 CI Advanced Bionics 57 yes sudden No 4 7
CI-4 F 74 CI Advanced Bionics 35 yes gradual Yes 1 4
CI-5 M 76 CI + HA Med-El 58 CI-6 F 63 2 CIs Cochlear 10 yes gradual yes gradual No No 1 4

The control group consisted of normally hearing (NH) listeners (n = 12; mean age 29.0 years, SD = 13.8) with no personal or family history of hearing problems. A few participants in the control group had received music lessons as children, but only two had substantial musical training. One of these was a professional musician.

2.2 Apparatus

Testing was conducted in a double-wall sound-attenuating chamber (Industrial Acoustics Co., Bronx, NY). A computer workstation and amplifier (Harman Kardon 3380, Stamford, CT) outside the booth interfaced with a 17-in. touch-screen monitor (Elo LCD TouchSystems, Berwyn, PA) and two wall-mounted loudspeakers (Electro-Medical Instrument Co., Mississauga, ON) inside the booth. The touch-screen monitor was used for presenting instructions for all tasks and for recording participants' responses. The loudspeakers were mounted at the corners of the sound booth, each located at 45 degrees azimuth to the participant, and the touch-screen monitor was placed at the midpoint. Sound files were presented at 60 to 65 dB, according to the preferences of each participant. One CI user (CI-2) requested sound levels up to 75 dB. CI participants were free to alter the settings on their processor in the course of the test session.

2.3 Test Battery

Trials for the Metric Task (from Hébert & Cuddy, 2002), the Rhythmic subtest of the Montreal Battery for Evaluation of Amusia (MBEA; Peretz, Champod & Hyde, 2003), the Distorted Tunes Test (DTT; Drayna, Manichaikul, de Lange, Snieder & Spector, 2001), the Music Emotion Task (Vieillard, Peretz, Gosselin, Khalfa, Gagnon & Bouchard, 2007), the Diagnostic Analysis of Nonverbal Accuracy Scale 2 (DANVA2; Nowicki & Duke, 1994; Baum & Nowicki, 1998), and the individualized Familiar Music Task were presented via a customized program created with Affect 4.0 (Hermans, Clarysse, Baeyens & Spruyt, 2005; Spruyt, Clarysse, Vansteenwegen, Baeyens & Hermans, 2010).
FLXLab 2.3 software (Haskell, 2009) was used to arrange the presentation of the Word Recognition, Pitch-Matching, and Interval-Matching tasks. The entire Clinical Assessment of Music Perception (CAMP) test, which was designed for cochlear implant users (Kang et al., 2009), was also administered.

Metric task

The rhythms comprising this task were the strong-meter rhythms from Hébert and Cuddy (2002). These rhythms were created with SoundEdit 16, version 2.0. A temporal interval was defined as the onset-to-onset time (IOI) of successive events, with all events consisting of the sound of a snare drum. The basic IOI was 200 ms, and IOIs varied in a 1:2:3:4 ratio (200, 400, 600, and 800 ms). Each standard rhythm consisted of a different permutation of nine IOIs (five of 200 ms, two of 400 ms, one of 600 ms, and one of 800 ms). All tones were of equal intensity (i.e., no amplitude accents) and duration (100 ms). To create strong metric patterns, longer IOIs occurred on the beat. There were 4 practice trials (2 same, 2 different) with visual feedback (correct, incorrect) provided on the monitor, followed by 20 test trials (10 same, 10 different) presented in random order with no feedback. On each trial, participants received a standard and a comparison drum pattern and judged whether they were the same or different. On same trials, the patterns were identical. On different trials, one 400-ms IOI from the standard pattern was replaced by an 800-ms IOI. Participants responded by touching "same" or "different" on the touch-sensitive monitor. They also touched the monitor to proceed to the following trial, at their own pace.

Rhythm task

The principal modification to the Rhythmic subtest of the MBEA (Peretz et al., 2003) was the addition of accompaniment, as described below. The test consisted of 31 trials without feedback, preceded by two training examples with feedback. Participants listened to two tonal melodies and judged whether they were the same or different. Differences consisted of alterations in the duration of two adjacent tones, which changed the rhythmic grouping but not the meter or number of tones. Rhythmic patterns varied across melodies.
The melodies spanned a total frequency range of 247 Hz (B3) to 988 Hz (B5), with the smallest range being 247 to 311 Hz (B3 to E-flat 4) and the largest 247 to 784 Hz (B3 to G5). Melodies had 7 to 21 notes and were 3.8 to 6.4 s in duration (M = 5.1 s), depending on the tempo (100, 120, 150, 180, and 200 bpm). Tone durations varied from 150 to 1800 ms depending on the rhythm and tempo of each melody. Synthesized piano versions of the melodies were used.
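A minimal Python sketch may help make the same/different manipulation of the Metric Task described earlier concrete. The random permutation is an illustrative assumption only: the actual rhythms were fixed permutations from Hébert and Cuddy (2002), constrained so that longer IOIs fell on the beat.

```python
import random

# IOI inventory for one standard rhythm: nine inter-onset intervals
# in a 1:2:3:4 ratio with a 200-ms base (five 200s, two 400s, one 600, one 800).
STANDARD_IOIS = [200] * 5 + [400] * 2 + [600, 800]  # milliseconds

def make_standard(rng: random.Random) -> list[int]:
    """Return one permutation of the nine IOIs (random here for illustration)."""
    pattern = STANDARD_IOIS.copy()
    rng.shuffle(pattern)
    return pattern

def make_comparison(standard: list[int]) -> list[int]:
    """'Different' trial: one 400-ms IOI is replaced by an 800-ms IOI."""
    comparison = standard.copy()
    comparison[comparison.index(400)] = 800
    return comparison

rng = random.Random(1)
std = make_standard(rng)
diff = make_comparison(std)
print(sum(std), sum(diff))  # -> 3200 3600: the change lengthens the pattern by 400 ms
```

Whatever the permutation, the substitution lengthens the comparison pattern by 400 ms, which is why the change is detectable from timing cues alone.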

For the present purposes, accompaniment consisting of sampled bass, guitar (strummed chords), and drum kit sounds created by means of Cakewalk Music Creator (Version ; Roland, Hamamatsu, Japan) was added to all of the melodies. Amplitude was standardized for each instrumental track across all melodies. Participants were told that accompaniment had been added to increase the difficulty of the task. They were asked to base their judgments of similarity or difference entirely on the piano melody. Participants called for trials by touching the monitor and entered their responses (same or different) on the monitor.

Distorted Tunes Test

This test (Drayna et al., 2001) required participants to judge whether synthesized piano performances of 26 short melodies (12-26 notes) that are well known in the U.K. and North America were correct (no pitch errors) or distorted (one or more pitch errors). Of the 26 tunes, 9 were played correctly, and 17 were distorted by pitch changes (i.e., errors) in 2-9 notes, within one or two semitones of the correct note but maintaining the melodic contour (rise and fall) of the normal melody. The errors resulted in out-of-key notes in all but one melody (stimulus no. 13). All melodies in the DTT were unaltered in rhythm. Although the majority of tunes (17 out of 26) were played incorrectly, there is no indication of performance differences on intact versus distorted versions (Drayna et al., 2001).

Musical emotion test

This task, from Vieillard et al. (2007), required participants to identify the predominant emotion conveyed by short musical excerpts as happy, sad, peaceful, or scary. The excerpts, representing five of the most readily identified excerpts from each emotion category, as determined in a preliminary study (Hunter, Schellenberg, & Stalinski, submitted), were MIDI files set to piano timbre.
The happy excerpts were in the major mode with a mean tempo of 137 beats per minute (bpm) and the melodic line in a medium-to-high pitch range. The sad excerpts were in the minor mode, with a mean tempo of 44 bpm, medium pitch range, and sustain pedal. The peaceful excerpts were in the major mode, with an intermediate tempo of 69 bpm, a medium pitch range, and also the sustain pedal. The scary excerpts had minor chords on the third and sixth degree, a mean tempo of 95 bpm, and a low-medium pitch range. Mean stimulus duration was 13.3 s for all emotional categories.

Diagnostic Analysis of Nonverbal Accuracy 2

The Adult Paralanguage subtest of the DANVA2 (Baum & Nowicki, 1998) assessed the ability to perceive emotion through non-verbal speech cues. In this test, a semantically neutral sentence ("I'm going out of the room now, but I'll be back later") was spoken with happy, sad, angry, or fearful intentions at two levels of emotional intensity by a male and a female actor.

Open-set word recognition

As a check on basic speech perception skills, CI users and NH listeners were required to repeat 20 isolated consonant-vowel-consonant (CVC) words (see Table 2) produced by a female speaker. This task, like others in the battery, was self-administered and self-paced. Each stimulus word was preceded by a visual warning signal on the monitor (+), and participants' responses were recorded.

Table 2. List of CVC words
back, beach, chain, cup, doll, fan, food, gum, jar, leg, love, map, meat, nut, pen, pig, run, sit, sun, talk

CAMP test

This music perception test (Kang et al., 2009) had subtests of pitch direction discrimination, melody recognition, and timbre recognition. The pitch subtest used an adaptive procedure (1-up 1-down) to determine the threshold for pitch direction discrimination within the range of 1 to 12 semitones. On each trial, listeners indicated whether the first or second of two tones was higher in pitch. The melody subtest assessed recognition of widely known melodies presented without rhythmic cues (i.e., all tones of equal duration). On each trial, listeners identified the melody
from a set of 12 alternatives. In the timbre subtest, listeners heard a five-note sequence (the same one on all trials) and were required to identify the instrument from a set of eight alternatives. Stimuli for the pitch direction and melody subtests consisted of synthesized, complex tones with uniform spectral envelopes to preclude temporal envelope cues. Stimuli for the timbre subtest consisted of recordings of professional musicians playing real instruments. The pitch subtest was preceded by four practice trials, and the melody and timbre subtests were preceded by training sessions in which participants were required to listen to each stimulus twice before beginning the test phase.

Familiar music task

Stimuli for this task were personalized for CI users and NH listeners based on music that was most familiar to them. Prior to their laboratory visit, participants provided a list of up to 10 musical selections (title, album, and recording artist) that they heard regularly. Five selections were included in the test, along with five unfamiliar selections from the same genre (as listed on iTunes) and with similar tempi. The familiar music task had three conditions: (1) no rhythm, (2) melody only, and (3) original instrumental. The original instrumental versions consisted of 10-s excerpts with salient melodic content from each selection, which were extracted with Audacity software (Version Beta). If the musical selection did not have 10 s without vocal content, the vocals were removed with the Vocal Remover Version 2 plugin for Audacity. Melodic content from all selections was transcribed to produce two monophonic WAV files per selection: a melody version and a no-rhythm version. These excerpts were produced with a synthesized flute timbre from Cakewalk Music Creator. In contrast to the melody version, which maintained the rhythm, the no-rhythm version was isochronous (i.e., all tones of equal duration).
The original pitch durations were maintained in the no-rhythm version by means of repeated tones at the pitches in question. On each trial, participants listened to the selection and identified it from a set of six alternatives, which consisted of the five familiar musical pieces and "none of the above." The conditions were administered in fixed order from most to least difficult: (1) no rhythm, (2) melody, and (3) original instrumental.
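The no-rhythm transformation described above can be sketched as follows: every tone gets the same duration, and a longer original note is represented by repeating its pitch. The 200-ms base tone duration and the pitch-name representation are illustrative assumptions; the thesis does not specify the tone duration used.

```python
BASE_MS = 200  # assumed duration of each isochronous tone

def to_isochronous(melody: list[tuple[str, int]]) -> list[str]:
    """Convert (pitch, duration_ms) notes into an isochronous tone sequence,
    preserving relative durations by repeating pitches."""
    tones: list[str] = []
    for pitch, duration_ms in melody:
        repeats = max(1, round(duration_ms / BASE_MS))  # whole tones per note
        tones.extend([pitch] * repeats)
    return tones

# A double-length, single-length, and triple-length note:
print(to_isochronous([("C4", 400), ("E4", 200), ("G4", 600)]))
# -> ['C4', 'C4', 'E4', 'G4', 'G4', 'G4']
```

The result has uniform tone durations (no rhythmic grouping cues) while total note lengths, and hence some duration information, survive through repetition.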

Pitch- and interval matching

The stimuli for this task consisted of eight pitches 1-2 s in duration sung by a man and a woman, and eight ascending intervals sung by the same individuals in a legato (continuous or uninterrupted) manner. The male stimuli ranged from B3 (247 Hz) to B4 (494 Hz), and the female stimuli from B4 (494 Hz) to B5 (988 Hz). Each pitch and interval stimulus was presented twice in a predetermined order, with the pitch-matching task presented first. Participants were required to sing back what they heard, and their responses were recorded by means of FLXLab. The intervals always began on the first degree of the scale (B3 for male stimuli and B4 for female stimuli). Only pitches from the key of B major were used, which resulted in the following intervals: unison, octave, major 2nd, 3rd, 6th, and 7th, and perfect 4th and 5th. Pitches and intervals of the imitations were calculated by means of Praat software (Version ; Boersma & Weenink, 2010).

2.4 Procedure

Prior to their laboratory visit, implant users completed a questionnaire (see Appendix A) that included information about demographic background (e.g., history of hearing loss, implant experience, education, languages spoken), musical background (e.g., musical training, music listening habits before and after their hearing loss, music enjoyment), and familiar musical selections. NH participants completed a questionnaire about their musical background and subjective experience of music (see Appendices B and C) just before the test session. Test sessions with CI users began with a semi-structured interview designed to elicit information about their subjective experience of music (see Appendix D). All interviews were recorded with a Sony Net MD Walkman (MZ-N707 model) and a Sony electret condenser microphone (ECM-DS70P model). Once the interview was completed, participants were escorted to the sound-attenuating booth for administration of the test battery.
The experimenter provided instructions before each component of the battery. These instructions were repeated on the touch-screen monitor prior to each task. Participants were also told that the sounds could be made louder or softer, according to their preference. Tasks were presented in fixed order. Participants were told that the pitch- and interval-matching tasks, which were the last tasks in the test battery, were strictly optional.
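Sung imitations analyzed with Praat yield fundamental frequencies in Hz, and deviations from the targets are then expressed in semitones. The thesis does not spell out the conversion, but the standard formula on a log-frequency scale (12 semitones per octave) is sketched below.

```python
import math

def semitone_deviation(f_produced: float, f_target: float) -> float:
    """Signed deviation of a produced frequency from a target frequency,
    in semitones: 12 * log2(f_produced / f_target)."""
    return 12 * math.log2(f_produced / f_target)

# Singing B4 (493.88 Hz) against a B3 (246.94 Hz) target is one octave sharp:
print(round(semitone_deviation(493.88, 246.94), 2))  # -> 12.0
```

Positive values indicate sharp productions and negative values flat ones, so averaging absolute deviations gives an overall accuracy measure in semitones.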

3 Results and Discussion

Due to the small sample size and large individual differences among CI users, we examined performance individually for each task, noting the CI users who performed within one SD of the mean for NH listeners, those who performed within two SDs, and so on. On the basis of previous research, CI users were expected to perform much better on tests of rhythm and meter and on other tasks based on timing cues than on those based on pitch cues (Cooper et al., 2008; Kang et al., 2009; Kong et al., 2004).

3.1 Open-Set Word Recognition

As one would expect, performance of the NH group was at ceiling (see Figure 1), such that there was no variance in the data. CI users' performance was more variable, with scores ranging from 100% correct (CI-2) to a low of 20% correct (CI-4). Performance on these isolated monosyllabic words sheds light on how well each CI user was using bottom-up cues that are relevant to speech perception. It should be emphasized, however, that the CI participants were uniformly excellent at perceiving speech in quiet backgrounds when contextual cues were available, as confirmed in lengthy, individual interviews. Variability on open-set recognition tasks has been reported in a number of other studies (Loizou, Poroy, & Dorman, 2000; Vandali, Whitford, Plant, & Clark, 2000). CI-4, who had the poorest performance on this task, had the longest delay from the time her hearing aids became ineffectual (i.e., no usable residual hearing) until implant surgery. The top performer, CI-2, had a number of advantages, including professional knowledge of hearing and assistive technologies as well as residual low-frequency hearing (at 250 and 500 Hz) in his unimplanted ear, which was selectively amplified. Zhang, Dorman, and Spahr (2010) have documented the contribution of low-frequency acoustic hearing to the recognition of monosyllabic words.
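The banding logic used throughout the results (how many NH standard deviations separate a CI user's score from the NH mean) can be sketched as follows; `sd_band` is a hypothetical helper for illustration, not code from the thesis.

```python
import math

def sd_band(score: float, nh_mean: float, nh_sd: float) -> int:
    """How many NH standard deviations away a score lies, rounded up:
    1 = within one SD of the NH mean, 2 = within two SDs, and so on."""
    return max(1, math.ceil(abs(score - nh_mean) / nh_sd))

# CAMP melody recognition (reported later): NH mean 88.0% (SD 10.0%);
# CI-2 scored 63.9%, which falls outside two SDs of the NH mean.
print(sd_band(63.9, 88.0, 10.0))  # -> 3
```

Note that this descriptive measure is undefined when the NH group has zero variance, as on the open-set word recognition task, where NH performance was at ceiling.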
3.2 CAMP

Pitch-direction discrimination thresholds are illustrated in Figure 2, whereas melody recognition and timbre recognition are illustrated in Figures 3 and 4, respectively. The mean threshold for pitch-direction identification for NH listeners was 1.3 semitones (SD = 0.8), whereas their average on the melody-recognition task was 88.0% (SD = 10.0%) and their average on the timbre-recognition task was 85.6% (SD = 16.0%). The means for the CI group were 4.6 semitones

Figure 1. Mean score and standard deviation on the Speech Perception Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.

Figure 2. Mean score and standard deviation on the CAMP Pitch Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.

Figure 3. Mean score and standard deviation on the CAMP Melody Recognition Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

Figure 4. Mean score and standard deviation on the CAMP Timbre Recognition Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

(SD = 2.7) for the pitch-ranking task, 25.7% correct for the melody task (SD = 25.5), and 45.8% for the timbre task (SD = 19.5). Results for both groups were very similar to those reported previously by the developers of the test (Kang et al., 2009). Because the CAMP tests examine the ability to perceive pitch and timbre cues, it is not surprising that most of the CI users did not do well on these tasks. Two CI users (CI-1 and CI-2) managed to perform particularly well on pitch-direction identification, falling within one SD of the mean of the NH group. The Melody task, which excluded all timing cues, proved to be more difficult. In fact, two CI users (CI-5 and CI-6) opted to discontinue the task because of its extreme difficulty. Moreover, no CI user was able to obtain a score within two SDs of the NH mean, although CI-2's performance was substantially better than that of the other CI users. His score was 63.9%, whereas the average of the scores of the three other CI users was 13.0%. CI-2 also scored much higher than other CI users on the timbre identification task, obtaining a score of 83.3% correct, which was near the NH mean. None of the other CI users had a score within two SDs of the NH mean, although CI-3 came close. The amplified residual hearing of CI-2 undoubtedly accounts for his success and for his ability to play in a musical ensemble. The contribution of hearing aids in the unimplanted ear to music perception has been noted previously (Looi, McDermott, McKay, & Hickson, 2007; Turner, Reiss & Gantz, 2008).

3.3 Distorted Tunes Test

The DTT comprised 26 questions with two response options on each trial, such that chance performance was a score of 13. As in the original study that used the DTT (Drayna et al., 2001) and an additional study by the same research team (Jones et al., 2009), the scores of NH listeners were near ceiling (M = 24.7, SD = 1.3; see Figure 5).
Because the DTT comprises traditional North American folk melodies, our CI users, who were on average much older than our control group (mean age of 62.7 vs. 29.0 years), would have been more familiar with these melodies before becoming deaf. Nonetheless, CI users had extreme difficulty with this task. Their mean score was 11.8 correct (SD = 2.8), and the scores of all CI users were more than two SDs below the NH mean and near chance levels. In fact, the highest score was 16 correct (CI-2). Because the mistunings on the DTT (except for one) are created by using pitches outside the key of each melody, the findings indicate that CI users are unable to use tonality-related cues when

Figure 5. Mean score and standard deviation on the Distorted Tunes Test for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

perceiving music. This interpretation is consistent with the previous performance of CI users on the Scale subtest of the MBEA, which was at chance levels (Cooper et al., 2008). It is notable that CI-2, despite his good pitch resolution and reasonable performance on other tasks, was unable to do this task, which involved comparing current melodies with long-term representations of those melodies or making judgments based on tonality. As noted, the pitch errors in this test are relatively small (one or two semitones). Considering the mean score of CI users on the CAMP pitch-ranking task (threshold of 4.6 semitones), it is not surprising that CI users were unable to perceive the mistunings in the DTT melodies. The authors of the DTT created these errors to be salient by virtue of their violations of tonality. Such violations, however, are not salient to CI users.
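The chance levels quoted throughout (e.g., 13 of 26 on the DTT) follow directly from the number of trials and response options, and an exact binomial test shows how far a score must exceed chance to be meaningful. The following minimal sketch is not part of the original analysis; only the task parameters (26 two-alternative trials, best CI score of 16) come from the text:

```python
from math import comb

def chance_score(n_trials: int, n_options: int) -> float:
    """Expected number correct under random guessing."""
    return n_trials / n_options

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Exact binomial upper tail: P(X >= k) with X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# DTT: 26 two-alternative trials, so chance = 13 correct.
print(chance_score(26, 2))            # 13.0
# Highest CI score was 16 of 26; is that reliably above chance?
print(round(p_at_least(16, 26), 3))   # 0.163
```

On this criterion, even the best CI score on the DTT (16 of 26, p ≈ .16) does not differ reliably from guessing, consistent with the near-chance characterization above.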

3.4 Familiar Music Task

Scores for this task were the number of correct answers out of 10, converted to percent correct. It was possible to generate individualized materials for only five participants from the NH group. Their mean scores were 74.0% (SD = 15.2%) in the No-Rhythm condition (Figure 6), 92.0% (SD = 4.5%) in the Melody (with timing cues) condition (Figure 7), and 94.0% (SD = 5.5%) in the Instrumental condition (Figure 8), which featured all or most cues from the original recordings, except for the lyrics of selections involving songs.

Figure 6. Mean score and standard deviation on the No-Rhythm Condition of the Familiar Music Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.

CI users' scores were exceedingly low. Moreover, they were lowest in the No-Rhythm condition (M = 26.7%, SD = 25.2%), slightly higher in the Melody condition (M = 35.0%, SD = 19.1%), and highest in the Instrumental condition (M = 70.0%, SD = 18.3%). Two CI users were excluded from consideration because they provided artists with whom they were familiar (e.g., Louis Armstrong, Frank Sinatra) but no specific musical selections. Of the four remaining CI users, one discontinued the No-Rhythm condition because of its difficulty. CI-1 and CI-2 scored more than two SDs below the NH mean in this condition. Although CI-4 managed to score within two SDs of

the NH mean, she did so only by responding "none of the above" on all of the trials. Obviously, she was unable to recognize any melodies without rhythmic cues. Although CI users fared better in the Melody condition than in the No-Rhythm condition, all four failed to score within two SDs of the NH mean. In the Instrumental condition, CI-2 obtained a score similar to the NH mean (90.0% vs. 94.0%). Although CI-4 and CI-1 obtained higher scores in the Instrumental condition than in the other two conditions, they were still more than two SDs below the NH mean.

Figure 7. Mean score and standard deviation on the Melody Condition of the Familiar Music Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.
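The screening criterion used throughout this section, whether a CI score falls within two SDs of the NH mean, amounts to a simple z-score cutoff. A hedged sketch, using the Instrumental-condition values reported above (NH M = 94.0, SD = 5.5); the helper name is ours, not the study's:

```python
def within_k_sds(score: float, nh_mean: float, nh_sd: float, k: float = 2.0) -> bool:
    """True if the score is no more than k SDs below the NH mean."""
    z = (score - nh_mean) / nh_sd
    return z >= -k

# Instrumental condition: NH mean 94.0, SD 5.5 (values from the text).
print(within_k_sds(90.0, 94.0, 5.5))  # True: CI-2's 90.0% is NH-like
print(within_k_sds(70.0, 94.0, 5.5))  # False: the CI group mean of 70.0% is not
```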

Figure 8. Mean score and standard deviation on the Instrumental Condition of the Familiar Music Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users.

The Familiar Music Task was created specifically for this study. The expectation was that the use of highly familiar music would generate better performance than one would predict from the available literature. Indeed, CI children have shown some success in recognizing specific recordings that they hear regularly (Vongpaisal, Trehub, & Schellenberg, 2006, 2009) even though such children are generally unsuccessful at recognizing generic versions of culturally familiar tunes (Olszewski, Gfeller, Froman, Stordahl, & Tomblin, 2005; Stordahl, 2002). That was not the case, however, for the current group of adult CI users. Of the six CI users in the present study, three (CI-1, CI-5, and CI-6) reported in their interviews that the lyrics were the most salient part of their music listening experiences. However, lyrics were excluded from the test materials, even when recordings with vocals were selected as familiar music, because they provided obvious cues to the identity of the music. CI-4, who listened to classical music and attended concerts frequently, was unable to recognize the original recordings (same music and performers) that she heard regularly. CI-6 was also unable to recognize the four instrumental pieces that were among her 10 selections, which suggests that her at-home listening experiences are guided and enriched by knowledge of what

she is playing. CI-2, the star performer in the present study, indicated that he listens especially closely to the bass line in music. This follows, perhaps, from programming his hearing aid to capitalize on his residual low-frequency hearing. CI-2 is also a bass player who performs with an amateur blues/rock group. It is nevertheless impressive that this participant was as proficient as NH listeners at identifying the familiar instrumental excerpts.

During her interview, participant CI-6 shed light on factors contributing to her musical preferences. She stated that, in order to enjoy music, it had to have meaning, such as a narrative. For example, she very much enjoyed the lyrics in a number of the selections she submitted. Although she also selected instrumental pieces, some of them were orchestral works with underlying narratives. For example, Symphony No. 11 by Dmitri Shostakovich, entitled The Year 1905, depicts the Russian revolution. Another of her selections, the orchestral work Finlandia by Jean Sibelius, depicts the Finnish struggle to break free from the Russian empire. Because CI users do not have access to the acoustic details available to NH listeners, they may find other ways of enjoying music. The enjoyment of CI-6 was enriched by a narrative linked to the overall structure of the musical work rather than its melodies or harmonies. CI-6 described hearing the Cossacks charging on their horses in the work by Shostakovich, and the struggles and the triumph of the Finnish people in the Sibelius piece.

Another factor that may have contributed to CI users' difficulty in identifying the material was the 10-second duration of the excerpts, which posed no problem for NH listeners. It is possible that CI users would be somewhat more successful with longer samples of the music.

3.5 Metric Task & Modified MBEA Rhythm Task

Because the Metric task comprised 20 questions and each trial had two response options, chance responding was a score of 10.
The mean of the NH group was 17.1 (SD = 3.4; see Figure 9), which is similar to the mean of an NH group tested on the same task in a previous study (Hopyan, Schellenberg, & Dennis, 2009). CI-1 received a perfect score on this task. CI-2 and CI-3 scored within one SD below the NH mean, and CI-4 and CI-5 scored within two SDs. CI-6 scored more than two SDs below the NH mean and below chance levels on this task. In short, the majority of CI users (5 of 6) were within two SDs of the mean for NH listeners, which is in line with CI users' previous success in discriminating simple rhythmic patterns (Gfeller et al., 1997; Kong et al., 2004).

The modified Rhythm subtest of the MBEA had 31 trials and two response options on each trial, such that chance performance was a score of 15.5. The NH group mean was 26.9 (SD = 3.6; see Figure 10), which is virtually identical to that of a sample of individuals with normal music perception skills reported by Peretz et al. (2003), who were tested on a similar task without accompaniment (M = 27.0, SD = 2.1). Similar performance across studies indicates that the additional instrumentation created for the purposes of this study did not impair the performance of NH listeners. By contrast, the average performance of CI users on our modified task was only 17.0 (SD = 2.1), which is substantially lower than the mean obtained by CI users tested by Cooper et al. (2008; approximately 24 correct) on the original MBEA rhythm task. Although CI-2 scored slightly less than two SDs below the NH mean, the other CI users were near or below chance levels, which confirms that the additional instrumentation impeded their ability to perceive the rhythm of the melodic line.

Figure 9. Mean score and standard deviation on the Metric Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

Figure 10. Mean score and standard deviation on the Modified MBEA Rhythm Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

Although CI users fared as well as NH listeners on the original version of this rhythm discrimination task, which involved monophonic piano melodies (Cooper et al., 2008), their rhythm discrimination was impaired when there were multiple streams of auditory information. In fact, almost all of the CI users performed near chance levels. This finding suggests that CI users would have difficulty discerning the rhythms encountered in their everyday experience of music.

3.6 Music Emotion & DANVA2

The Music Emotion task comprised 20 questions and four response options on each trial, such that a score of five correct responses corresponded to chance responding. Once again, NH listeners were near ceiling on this task (M = 19.2, SD = 1.4; see Figure 11), which is slightly higher than the results reported by Hunter et al. (submitted) for adult listeners, who averaged 16.7 correct. All CI users scored more than two SDs below the NH mean, with a mean of 12.7 (SD = 3.6).

Because CI users are better able to perceive timing cues than pitch cues, we examined the possibility that they could interpret arousal, which is based largely on tempo cues, better than valence, which is based on mode (major/minor) and consonance/dissonance cues. Thus, we combined the response options on the basis of arousal: happy or scary vs. sad or peaceful (see Table 3 for arousal scores). For three of the CI users (CI-2, CI-4, and CI-5), a majority of the errors (over 50%) on this task involved confusions between stimuli that contrasted in valence but were similar in arousal. These findings suggest that tempo cues play a substantially greater role than mode cues in CI users' perception of emotion in music. This interpretation is consistent with reports of adequate tempo perception in CI users (Kong et al., 2004). Tempo cues are also more important than mode cues for young children (Dalla Bella, Peretz, Rousseau, & Gosselin, 2001), not because of pitch resolution difficulties but because they have not yet learned Western musical conventions about mode.

The DANVA2 comprised 24 trials and four response options on each trial (happy, angry, sad, and fearful), such that a score of six correct corresponded to chance responding. The mean for the NH listeners was 19.3 (SD = 2.3; see Figure 12), which is similar to the mean reported by Nowicki (2006; M = 18.0, SD = 2.9). The average score for the CI users was only 10.8 (SD = 3.3). Only CI-2 and CI-6 performed within two SDs of the NH mean; the remaining CI users had lower scores, with three performing close to chance levels (CI-3, CI-4, CI-5). Performance on the DANVA2 by child CI users in the study by Hopyan-Misakyan et al. (2009) was similar to that of the adult CI users in the present study in that both groups were unsuccessful in differentiating the vocal emotions.
The DANVA2, which has been used widely (Nowicki, 2006), is intended to be a challenging test, with average NH scores ranging from 14 to 18.5 out of 24. Among its advantages is that it allows exceptionally gifted individuals to achieve higher scores than the population mean. However, this test may not be the most appropriate means of assessing CI users' access to emotion cues in speech. A test involving a greater range of emotional expressiveness would enable us to learn more about this skill in CI users.
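The arousal re-scoring of the Music Emotion task described above can be made concrete: collapse the four response options into high arousal (happy, scary) and low arousal (sad, peaceful), then count how many errors preserve arousal but confuse valence. A sketch in which the emotion labels come from the task but the four-trial target and response lists are invented purely for illustration:

```python
# Arousal grouping used in the re-scoring: happy/scary = high, sad/peaceful = low.
AROUSAL = {"happy": "high", "scary": "high", "sad": "low", "peaceful": "low"}

def score_arousal(targets, responses):
    """Return (raw correct, arousal-correct, valence-only errors)."""
    raw = sum(t == r for t, r in zip(targets, responses))
    arousal = sum(AROUSAL[t] == AROUSAL[r] for t, r in zip(targets, responses))
    # Errors that preserve arousal but miss valence (e.g., scary heard as happy).
    valence_errors = sum(t != r and AROUSAL[t] == AROUSAL[r]
                         for t, r in zip(targets, responses))
    return raw, arousal, valence_errors

# Hypothetical four trials: two hits, one valence-only error, one arousal error.
targets = ["happy", "sad", "scary", "peaceful"]
responses = ["happy", "sad", "happy", "scary"]
print(score_arousal(targets, responses))  # (2, 3, 1)
```

A listener who tracks tempo (arousal) but not mode (valence), as three of the CI users appeared to do, would show an arousal-correct count well above the raw score, with most errors falling in the valence-only category.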

Figure 11. Mean score and standard deviation on the Music Emotion Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

Table 3. Music Emotion Arousal Scores

Participant | Test score (/20) | Modified arousal score (/20) | Valence errors (%)
CI-1        |                  |                              |
CI-2        |                  |                              |
CI-3        |                  |                              |
CI-4        |                  |                              |
CI-5        |                  |                              |
CI-6        |                  |                              |

Figure 12. Mean score and standard deviation on the DANVA2 Adult Vocal Emotion Task for normally hearing (NH) listeners and individual scores for cochlear implant (CI) users. The dotted line represents chance performance.

3.7 Pitch- and Interval-Matching Task

Only CI users were asked to complete the pitch- and interval-matching tasks, which were described as strictly optional. With the exception of one CI user, who was short of time, all agreed to complete the matching tasks. The overwhelming majority of NH individuals can match pitches within one semitone (Moore, Estis, Gordon-Hickey, & Watts, 2008). For CI users, the mean error in pitch matching (Figure 13) was 3.9 semitones (SD = 3.1). Only CI-2 performed within the expected range of NH listeners, with a mean pitch error of 1.1 semitones. Errors on interval matching (Figure 14) were comparable to those on pitch matching (M = 3.1 semitones, SD = 2.0). Again, CI-2 performed surprisingly well, with a mean error of 1.0 semitone on interval matching, which is in line with his low pitch-ranking threshold on the CAMP test (1.6 semitones).
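Pitch-matching error in semitones is the standard log-frequency measure: 12 times the base-2 logarithm of the ratio between the produced and target frequencies. A minimal sketch of that conversion (the example frequencies are illustrative, not taken from the study):

```python
import math

def semitone_error(f_produced: float, f_target: float) -> float:
    """Absolute deviation in semitones between two frequencies (Hz)."""
    return abs(12 * math.log2(f_produced / f_target))

# A#4 (~466.16 Hz) produced against an A4 (440 Hz) target is off by ~1 semitone.
print(round(semitone_error(466.16, 440.0), 2))  # 1.0
```

On this scale, the CI group's mean pitch-matching error of 3.9 semitones corresponds to missing the target by roughly a minor third, whereas CI-2's 1.1-semitone error approaches typical NH accuracy.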

Figure 13. Individual average pitch deviations in semitones on the Pitch-Matching Task for cochlear implant (CI) users.

Figure 14. Individual average pitch deviations in semitones on the Interval-Matching Task for cochlear implant (CI) users.

Conclusion

In sum, postlingually deafened adult CI users performed well below the level of the NH control group on most tasks in the present study. Their performance was especially poor on tasks that relied strongly on pitch cues, such as the DTT, the isochronous melody tasks, the familiar melody task, pitch ranking, and pitch matching. They had more success on the simple rhythm discrimination task but not on the more complex rhythm discrimination task. They also performed poorly on the emotion discrimination tasks, which required the joint use of pitch and timing cues. As in most studies of CI users, there were large individual differences in performance. CI-2 performed considerably better than the other CI users, especially on the pitch-ranking and pitch-matching tasks. Although his musical background may have played some role, it is likely that amplified residual hearing in his unimplanted ear made the most important contribution to his success on the tasks involving pitch. Along with musical training and residual hearing, CI-2 had the further advantage of formal training in audiology and familiarity with hearing aid technology. As he put it, he programmed his own hearing aid to act like a subwoofer, which enables him to maximize his perception of music and speech. In his interview, CI-2 indicated that neither his implant nor his hearing aid alone provided a satisfactory representation of sound, but together they provided a credible and highly enjoyable rendition of music. In short, the whole was much better than the sum of its parts. Planned re-testing of CI-2 with his implant alone will provide a clearer picture of the independent contributions of implant and hearing aid. CI-4 had extensive musical training (piano) and even considered a career as a musician when she was a young woman with normal hearing.
Her progressive hearing loss over the years and a long period of very poor auditory reception with hearing aids seemed to erase any potential benefit from her training and knowledge of music. For CI-2, by contrast, gradual hearing loss began at about 5 years of age, and his hearing aids functioned effectively for music listening until approximately five years before he received his implant. Plans to enlarge the sample will make it possible to identify links between various background variables and performance on music processing tasks such as these. It would be of interest to determine whether limited training enhances music processing in CI users and their


Modern cochlear implants provide two strategies for coding speech A Comparison of the Speech Understanding Provided by Acoustic Models of Fixed-Channel and Channel-Picking Signal Processors for Cochlear Implants Michael F. Dorman Arizona State University Tempe and University

More information

Complete Cochlear Coverage WITH MED-EL S DEEP INSERTION ELECTRODE

Complete Cochlear Coverage WITH MED-EL S DEEP INSERTION ELECTRODE Complete Cochlear Coverage WITH MED-EL S DEEP INSERTION ELECTRODE hearlife CONTENTS A Factor To Consider... 3 The Cochlea In the Normal Hearing Process... 5 The Cochlea In the Cochlear Implant Hearing

More information

Long-Term Performance for Children with Cochlear Implants

Long-Term Performance for Children with Cochlear Implants Long-Term Performance for Children with Cochlear Implants The University of Iowa Elizabeth Walker, M.A., Camille Dunn, Ph.D., Bruce Gantz, M.D., Virginia Driscoll, M.A., Christine Etler, M.A., Maura Kenworthy,

More information

Effects of Setting Thresholds for the MED- EL Cochlear Implant System in Children

Effects of Setting Thresholds for the MED- EL Cochlear Implant System in Children Effects of Setting Thresholds for the MED- EL Cochlear Implant System in Children Stacy Payne, MA, CCC-A Drew Horlbeck, MD Cochlear Implant Program 1 Background Movement in CI programming is to shorten

More information

Source and Description Category of Practice Level of CI User How to Use Additional Information. Intermediate- Advanced. Beginner- Advanced

Source and Description Category of Practice Level of CI User How to Use Additional Information. Intermediate- Advanced. Beginner- Advanced Source and Description Category of Practice Level of CI User How to Use Additional Information Randall s ESL Lab: http://www.esllab.com/ Provide practice in listening and comprehending dialogue. Comprehension

More information

Differential-Rate Sound Processing for Cochlear Implants

Differential-Rate Sound Processing for Cochlear Implants PAGE Differential-Rate Sound Processing for Cochlear Implants David B Grayden,, Sylvia Tari,, Rodney D Hollow National ICT Australia, c/- Electrical & Electronic Engineering, The University of Melbourne

More information

Hearing in the Environment

Hearing in the Environment 10 Hearing in the Environment Click Chapter to edit 10 Master Hearing title in the style Environment Sound Localization Complex Sounds Auditory Scene Analysis Continuity and Restoration Effects Auditory

More information

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Hearing-aids Induce Plasticity in the Auditory System: Perspectives From Three Research Designs and Personal Speculations About the

More information

Speaker s Notes: AB is dedicated to helping people with hearing loss hear their best. Partnering with Phonak has allowed AB to offer unique

Speaker s Notes: AB is dedicated to helping people with hearing loss hear their best. Partnering with Phonak has allowed AB to offer unique 1 General Slide 2 Speaker s Notes: AB is dedicated to helping people with hearing loss hear their best. Partnering with Phonak has allowed AB to offer unique technological advances to help people with

More information

Static and Dynamic Spectral Acuity in Cochlear Implant Listeners for Simple and Speech-like Stimuli

Static and Dynamic Spectral Acuity in Cochlear Implant Listeners for Simple and Speech-like Stimuli University of South Florida Scholar Commons Graduate Theses and Dissertations Graduate School 6-30-2016 Static and Dynamic Spectral Acuity in Cochlear Implant Listeners for Simple and Speech-like Stimuli

More information

Exploring the parameter space of Cochlear Implant Processors for consonant and vowel recognition rates using normal hearing listeners

Exploring the parameter space of Cochlear Implant Processors for consonant and vowel recognition rates using normal hearing listeners PAGE 335 Exploring the parameter space of Cochlear Implant Processors for consonant and vowel recognition rates using normal hearing listeners D. Sen, W. Li, D. Chung & P. Lam School of Electrical Engineering

More information

UNDERSTANDING HEARING LOSS

UNDERSTANDING HEARING LOSS Helping Babies and Toddlers get a Strong Start UNDERSTANDING HEARING LOSS You have recently been told that your child has a hearing loss. You may feel emotional and overwhelmed as you begin to learn more

More information

UNDERSTANDING HEARING LOSS

UNDERSTANDING HEARING LOSS Helping Babies and Toddlers get a Strong Start UNDERSTANDING HEARING LOSS You have recently been told that your child has a hearing loss. You may feel emotional and overwhelmed as you begin to learn more

More information

Results. Dr.Manal El-Banna: Phoniatrics Prof.Dr.Osama Sobhi: Audiology. Alexandria University, Faculty of Medicine, ENT Department

Results. Dr.Manal El-Banna: Phoniatrics Prof.Dr.Osama Sobhi: Audiology. Alexandria University, Faculty of Medicine, ENT Department MdEL Med-EL- Cochlear Implanted Patients: Early Communicative Results Dr.Manal El-Banna: Phoniatrics Prof.Dr.Osama Sobhi: Audiology Alexandria University, Faculty of Medicine, ENT Department Introduction

More information

Hearing Research 242 (2008) Contents lists available at ScienceDirect. Hearing Research. journal homepage:

Hearing Research 242 (2008) Contents lists available at ScienceDirect. Hearing Research. journal homepage: Hearing Research 242 (2008) 164 171 Contents lists available at ScienceDirect Hearing Research journal homepage: www.elsevier.com/locate/heares Combined acoustic and electric hearing: Preserving residual

More information

Simulation of an electro-acoustic implant (EAS) with a hybrid vocoder

Simulation of an electro-acoustic implant (EAS) with a hybrid vocoder Simulation of an electro-acoustic implant (EAS) with a hybrid vocoder Fabien Seldran a, Eric Truy b, Stéphane Gallégo a, Christian Berger-Vachon a, Lionel Collet a and Hung Thai-Van a a Univ. Lyon 1 -

More information

An Auditory-Model-Based Electrical Stimulation Strategy Incorporating Tonal Information for Cochlear Implant

An Auditory-Model-Based Electrical Stimulation Strategy Incorporating Tonal Information for Cochlear Implant Annual Progress Report An Auditory-Model-Based Electrical Stimulation Strategy Incorporating Tonal Information for Cochlear Implant Joint Research Centre for Biomedical Engineering Mar.7, 26 Types of Hearing

More information

The functional importance of age-related differences in temporal processing

The functional importance of age-related differences in temporal processing Kathy Pichora-Fuller The functional importance of age-related differences in temporal processing Professor, Psychology, University of Toronto Adjunct Scientist, Toronto Rehabilitation Institute, University

More information

Speech, Language, and Hearing Sciences. Discovery with delivery as WE BUILD OUR FUTURE

Speech, Language, and Hearing Sciences. Discovery with delivery as WE BUILD OUR FUTURE Speech, Language, and Hearing Sciences Discovery with delivery as WE BUILD OUR FUTURE It began with Dr. Mack Steer.. SLHS celebrates 75 years at Purdue since its beginning in the basement of University

More information

Optical Illusions 4/5. Optical Illusions 2/5. Optical Illusions 5/5 Optical Illusions 1/5. Reading. Reading. Fang Chen Spring 2004

Optical Illusions 4/5. Optical Illusions 2/5. Optical Illusions 5/5 Optical Illusions 1/5. Reading. Reading. Fang Chen Spring 2004 Optical Illusions 2/5 Optical Illusions 4/5 the Ponzo illusion the Muller Lyer illusion Optical Illusions 5/5 Optical Illusions 1/5 Mauritz Cornelis Escher Dutch 1898 1972 Graphical designer World s first

More information

Simulations of high-frequency vocoder on Mandarin speech recognition for acoustic hearing preserved cochlear implant

Simulations of high-frequency vocoder on Mandarin speech recognition for acoustic hearing preserved cochlear implant INTERSPEECH 2017 August 20 24, 2017, Stockholm, Sweden Simulations of high-frequency vocoder on Mandarin speech recognition for acoustic hearing preserved cochlear implant Tsung-Chen Wu 1, Tai-Shih Chi

More information

Hearing Aids. Bernycia Askew

Hearing Aids. Bernycia Askew Hearing Aids Bernycia Askew Who they re for Hearing Aids are usually best for people who have a mildmoderate hearing loss. They are often benefit those who have contracted noise induced hearing loss with

More information

HCS 7367 Speech Perception

HCS 7367 Speech Perception Long-term spectrum of speech HCS 7367 Speech Perception Connected speech Absolute threshold Males Dr. Peter Assmann Fall 212 Females Long-term spectrum of speech Vowels Males Females 2) Absolute threshold

More information

A dissertation presented to. the faculty of. In partial fulfillment. of the requirements for the degree. Doctor of Philosophy. Ning Zhou.

A dissertation presented to. the faculty of. In partial fulfillment. of the requirements for the degree. Doctor of Philosophy. Ning Zhou. Lexical Tone Development, Music Perception and Speech Perception in Noise with Cochlear Implants: The Effects of Spectral Resolution and Spectral Mismatch A dissertation presented to the faculty of the

More information

Music. listening with hearing aids

Music. listening with hearing aids Music listening with hearing aids Music listening with hearing aids Hearing loss can range from mild to profound and can affect one or both ears. Understanding what you can hear with and without hearing

More information

WIDEXPRESS. no.30. Background

WIDEXPRESS. no.30. Background WIDEXPRESS no. january 12 By Marie Sonne Kristensen Petri Korhonen Using the WidexLink technology to improve speech perception Background For most hearing aid users, the primary motivation for using hearing

More information

The role of periodicity in the perception of masked speech with simulated and real cochlear implants

The role of periodicity in the perception of masked speech with simulated and real cochlear implants The role of periodicity in the perception of masked speech with simulated and real cochlear implants Kurt Steinmetzger and Stuart Rosen UCL Speech, Hearing and Phonetic Sciences Heidelberg, 09. November

More information

Outcomes in Implanted Teenagers Who Do Not Meet the UK Adult Candidacy Criteria

Outcomes in Implanted Teenagers Who Do Not Meet the UK Adult Candidacy Criteria Outcomes in Implanted Teenagers Who Do Not Meet the UK Adult Candidacy Criteria Fiona Vickers, Clinical Scientist (Audiology) The Royal National Throat Nose and Ear Hospital, London Current criteria guidelines

More information

Variability in Word Recognition by Adults with Cochlear Implants: The Role of Language Knowledge

Variability in Word Recognition by Adults with Cochlear Implants: The Role of Language Knowledge Variability in Word Recognition by Adults with Cochlear Implants: The Role of Language Knowledge Aaron C. Moberly, M.D. CI2015 Washington, D.C. Disclosures ASA and ASHFoundation Speech Science Research

More information

DO NOT DUPLICATE. Copyrighted Material

DO NOT DUPLICATE. Copyrighted Material Annals of Otology, Rhinology & Laryngology 115(6):425-432. 2006 Annals Publishing Company. All rights reserved. Effects of Converting Bilateral Cochlear Implant Subjects to a Strategy With Increased Rate

More information

[5]. Our research showed that two deafblind subjects using this system could control their voice pitch with as much accuracy as hearing children while

[5]. Our research showed that two deafblind subjects using this system could control their voice pitch with as much accuracy as hearing children while NTUT Education of Disabilities Vol.12 2014 Evaluation of voice pitch control in songs with different melodies using tactile voice pitch feedback display SAKAJIRI Masatsugu 1), MIYOSHI Shigeki 2), FUKUSHIMA

More information

Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners

Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners Justin M. Aronoff a) Communication and Neuroscience Division, House Research

More information

Hearing Lectures. Acoustics of Speech and Hearing. Auditory Lighthouse. Facts about Timbre. Analysis of Complex Sounds

Hearing Lectures. Acoustics of Speech and Hearing. Auditory Lighthouse. Facts about Timbre. Analysis of Complex Sounds Hearing Lectures Acoustics of Speech and Hearing Week 2-10 Hearing 3: Auditory Filtering 1. Loudness of sinusoids mainly (see Web tutorial for more) 2. Pitch of sinusoids mainly (see Web tutorial for more)

More information

Preliminary Results of Adult Patients with Digisonic SP Cohlear Implant System

Preliminary Results of Adult Patients with Digisonic SP Cohlear Implant System Int. Adv. Otol. 2009; 5:(1) 93-99 ORIGINAL ARTICLE Maria-Fotini Grekou, Stavros Mavroidakos, Maria Economides, Xrisa Lira, John Vathilakis Red Cross Hospital of Athens, Greece, Department of Audiology-Neurootology,

More information

EDITORIAL POLICY GUIDANCE HEARING IMPAIRED AUDIENCES

EDITORIAL POLICY GUIDANCE HEARING IMPAIRED AUDIENCES EDITORIAL POLICY GUIDANCE HEARING IMPAIRED AUDIENCES (Last updated: March 2011) EDITORIAL POLICY ISSUES This guidance note should be considered in conjunction with the following Editorial Guidelines: Accountability

More information

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 23 (1999) Indiana University

RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 23 (1999) Indiana University GAP DURATION IDENTIFICATION BY CI USERS RESEARCH ON SPOKEN LANGUAGE PROCESSING Progress Report No. 23 (1999) Indiana University Use of Gap Duration Identification in Consonant Perception by Cochlear Implant

More information

Auditory Scene Analysis

Auditory Scene Analysis 1 Auditory Scene Analysis Albert S. Bregman Department of Psychology McGill University 1205 Docteur Penfield Avenue Montreal, QC Canada H3A 1B1 E-mail: bregman@hebb.psych.mcgill.ca To appear in N.J. Smelzer

More information

Providing Effective Communication Access

Providing Effective Communication Access Providing Effective Communication Access 2 nd International Hearing Loop Conference June 19 th, 2011 Matthew H. Bakke, Ph.D., CCC A Gallaudet University Outline of the Presentation Factors Affecting Communication

More information

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Published on June 16, 2015 Tech Topic: Localization July 2015 Hearing Review By Eric Seper, AuD, and Francis KuK, PhD While the

More information

Production of Stop Consonants by Children with Cochlear Implants & Children with Normal Hearing. Danielle Revai University of Wisconsin - Madison

Production of Stop Consonants by Children with Cochlear Implants & Children with Normal Hearing. Danielle Revai University of Wisconsin - Madison Production of Stop Consonants by Children with Cochlear Implants & Children with Normal Hearing Danielle Revai University of Wisconsin - Madison Normal Hearing (NH) Who: Individuals with no HL What: Acoustic

More information

Ting Zhang, 1 Michael F. Dorman, 2 and Anthony J. Spahr 2

Ting Zhang, 1 Michael F. Dorman, 2 and Anthony J. Spahr 2 Information From the Voice Fundamental Frequency (F0) Region Accounts for the Majority of the Benefit When Acoustic Stimulation Is Added to Electric Stimulation Ting Zhang, 1 Michael F. Dorman, 2 and Anthony

More information

Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching

Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching Use of Auditory Techniques Checklists As Formative Tools: from Practicum to Student Teaching Marietta M. Paterson, Ed. D. Program Coordinator & Associate Professor University of Hartford ACE-DHH 2011 Preparation

More information

ADVANCES in NATURAL and APPLIED SCIENCES

ADVANCES in NATURAL and APPLIED SCIENCES ADVANCES in NATURAL and APPLIED SCIENCES ISSN: 1995-0772 Published BYAENSI Publication EISSN: 1998-1090 http://www.aensiweb.com/anas 2016 December10(17):pages 275-280 Open Access Journal Improvements in

More information

Rhythm Categorization in Context. Edward W. Large

Rhythm Categorization in Context. Edward W. Large Rhythm Categorization in Context Edward W. Large Center for Complex Systems Florida Atlantic University 777 Glades Road, P.O. Box 39 Boca Raton, FL 3343-99 USA large@walt.ccs.fau.edu Keywords: Rhythm,

More information

Congruency Effects with Dynamic Auditory Stimuli: Design Implications

Congruency Effects with Dynamic Auditory Stimuli: Design Implications Congruency Effects with Dynamic Auditory Stimuli: Design Implications Bruce N. Walker and Addie Ehrenstein Psychology Department Rice University 6100 Main Street Houston, TX 77005-1892 USA +1 (713) 527-8101

More information

University of California Press is collaborating with JSTOR to digitize, preserve and extend access to Music Perception: An Interdisciplinary Journal.

University of California Press is collaborating with JSTOR to digitize, preserve and extend access to Music Perception: An Interdisciplinary Journal. Decoding the Expressive Intentions in Children's Songs Author(s): Mayumi Adachi and Sandra E. Trehub Source: Music Perception: An Interdisciplinary Journal, Vol. 18, No. 2 (Winter, 2000), pp. 213-224 Published

More information

Alma Mater Studiorum University of Bologna, August

Alma Mater Studiorum University of Bologna, August Alma Mater Studiorum University of Bologna, August -6 006 Recognition of intended emotions in drum performances: differences and similarities between hearing-impaired people and people with normal hearing

More information

Auditory Perception: Sense of Sound /785 Spring 2017

Auditory Perception: Sense of Sound /785 Spring 2017 Auditory Perception: Sense of Sound 85-385/785 Spring 2017 Professor: Laurie Heller Classroom: Baker Hall 342F (sometimes Cluster 332P) Time: Tuesdays and Thursdays 1:30-2:50 Office hour: Thursday 3:00-4:00,

More information

A Basic Study on possibility to improve stage acoustics by active method

A Basic Study on possibility to improve stage acoustics by active method Toronto, Canada International Symposium on Room Acoustics June 9- ISRA A Basic Study on possibility to improve stage acoustics by active method Ayumi Ishikawa (ayumi@e.arch.mie-u.ac.jp) Takane Terashima

More information

The development of a modified spectral ripple test

The development of a modified spectral ripple test The development of a modified spectral ripple test Justin M. Aronoff a) and David M. Landsberger Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California

More information

Cochlear implants. Carol De Filippo Viet Nam Teacher Education Institute June 2010

Cochlear implants. Carol De Filippo Viet Nam Teacher Education Institute June 2010 Cochlear implants Carol De Filippo Viet Nam Teacher Education Institute June 2010 Controversy The CI is invasive and too risky. People get a CI because they deny their deafness. People get a CI because

More information

Topic 4. Pitch & Frequency

Topic 4. Pitch & Frequency Topic 4 Pitch & Frequency A musical interlude KOMBU This solo by Kaigal-ool of Huun-Huur-Tu (accompanying himself on doshpuluur) demonstrates perfectly the characteristic sound of the Xorekteer voice An

More information

Hearing Preservation Cochlear Implantation: Benefits of Bilateral Acoustic Hearing

Hearing Preservation Cochlear Implantation: Benefits of Bilateral Acoustic Hearing Hearing Preservation Cochlear Implantation: Benefits of Bilateral Acoustic Hearing Kelly Jahn, B.S. Vanderbilt University TAASLP Convention October 29, 2015 Background 80% of CI candidates now have bilateral

More information

HEARING. Structure and Function

HEARING. Structure and Function HEARING Structure and Function Rory Attwood MBChB,FRCS Division of Otorhinolaryngology Faculty of Health Sciences Tygerberg Campus, University of Stellenbosch Analyse Function of auditory system Discriminate

More information

Improving Music Percep1on With Cochlear Implants David M. Landsberger

Improving Music Percep1on With Cochlear Implants David M. Landsberger Improving Music Percep1on With Cochlear Implants David M. Landsberger Music Enjoyment With a Cochlear Implant is Low Tested music enjoyment with people with one Normal Hearing Ear and one ear with a cochlear

More information