
University of Iowa
Iowa Research Online
Theses and Dissertations

Fall 2013

Relationships among peripheral and central electrophysiological measures of spatial / spectral resolution and speech perception in cochlear implant users

Rachel Anna Scheperle
University of Iowa

Copyright 2013 Rachel Anna Scheperle

This dissertation is available at Iowa Research Online:

Recommended Citation
Scheperle, Rachel Anna. "Relationships among peripheral and central electrophysiological measures of spatial / spectral resolution and speech perception in cochlear implant users." PhD (Doctor of Philosophy) thesis, University of Iowa, 2013.

Part of the Speech Pathology and Audiology Commons

RELATIONSHIPS AMONG PERIPHERAL AND CENTRAL ELECTROPHYSIOLOGICAL MEASURES OF SPATIAL / SPECTRAL RESOLUTION AND SPEECH PERCEPTION IN COCHLEAR IMPLANT USERS

by Rachel Anna Scheperle

A thesis submitted in partial fulfillment of the requirements for the Doctor of Philosophy degree in Speech and Hearing Science in the Graduate College of The University of Iowa

December 2013

Thesis Supervisor: Professor Paul J. Abbas

Graduate College
The University of Iowa
Iowa City, Iowa

CERTIFICATE OF APPROVAL

PH.D. THESIS

This is to certify that the Ph.D. thesis of Rachel Anna Scheperle has been approved by the Examining Committee for the thesis requirement for the Doctor of Philosophy degree in Speech and Hearing Science at the December 2013 graduation.

Thesis Committee:
Paul J. Abbas, Thesis Supervisor
Carolyn J. Brown
Camille C. Dunn
Shawn S. Goodman
Christopher W. Turner

To Gerald and Paula Scheperle and Caleb Kollmeyer

When I look at your heavens, the work of your fingers, the moon and the stars, which you have set in place, what is man that you are mindful of him, and the son of man that you care for him? Yet you have made him a little lower than the heavenly beings and crowned him with glory and honor. You have given him dominion over the works of your hands. O Lord, our Lord, how majestic is your name in all the earth! Psalm 8:3-6a, 9

As each has received a gift, use it to serve one another, as good stewards of God's varied grace: whoever speaks, as one who speaks oracles of God; whoever serves, as one who serves by the strength that God supplies, in order that in everything God may be glorified through Jesus Christ. To him belong glory and dominion forever and ever. Amen. 1 Peter 4:10-11

Trust in the Lord with all your heart, and do not lean on your own understanding. In all your ways acknowledge him, and he will make straight your paths. Proverbs 3:5-6

ACKNOWLEDGMENTS

Financial support for this project was provided by the National Institutes of Health, National Institute on Deafness and Other Communication Disorders under awards F31DC013202, P50DC000242, and R01DC. The content is solely the responsibility of the author and does not necessarily represent the official views of the National Institutes of Health. Participant compensation was funded in part by the University of Iowa Department of Communication Sciences and Disorders. John VanBuren, Jake Oleson, and Rhonda DeCook provided statistical support. This project would not have been possible without the willingness and commitment of the participants and input from a number of individuals. Thank you to everyone who assisted me. Members of the dissertation committee, and specifically the thesis supervisor and my academic advisor, Paul Abbas, provided constructive feedback, invaluable input and guidance over the course of the project. I would like to extend a special thanks to friends, lab members and colleagues, particularly Amanda Silberer, Bruna Mussoi, and Likuei Chiou, for enlightening conversations, project assistance, and encouragement. Finally, I'd like to acknowledge my immediate, extended, and church families for their unconditional love, prayers, support, and gentle reminders of perspective.

ABSTRACT

The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e. spectral resolution). Cochlear implant users exhibit variable speech perception and spectral resolution abilities, which can be attributed at least in part to electrode interactions at the periphery (i.e. spatial resolution). However, electrophysiological measures of peripheral spatial resolution have not been found to correlate with speech perception. The purpose of this study was to systematically evaluate auditory processing from the periphery to the cortex using both simple and spectrally complex stimuli in order to better understand the underlying processes affecting spatial and spectral resolution and speech perception. Eleven adult cochlear implant users participated in this study. Peripheral spatial resolution was assessed using the electrically evoked compound action potential (ECAP) to measure channel interaction functions for thirteen probe electrodes. We evaluated central processing using the auditory change complex (ACC), a cortical response, elicited with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech perception measures included a vowel-discrimination task and the BKB-SIN test of keyword recognition in noise. We varied the likelihood of electrode interactions within each participant by creating three experimental programs, or MAPs, using a subset of seven electrodes and varying the spacing between activated electrodes. Linear mixed model analysis was used to account for repeated measures within an individual, allowing for a within-subject interpretation. We also performed regression analysis to evaluate the relationships across participants. Both peripheral and central processing abilities contributed to the variability in performance observed across CI users. The spectral ACC was the strongest predictor of speech perception abilities across participants.
When spatial resolution was varied within a person, all electrophysiological measures were significantly correlated with each other and with speech perception. However, the ECAP measures were the best single predictor

of speech perception for the within-subject analysis, followed by the spectral ACC. Our results indicate that electrophysiological measures of spatial and spectral resolution can provide valuable information about perception. All three of the electrophysiological measures used in this study, including the ECAP channel interaction functions, demonstrated potential for clinical utility.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS

CHAPTER I INTRODUCTION

CHAPTER II REVIEW OF THE LITERATURE
    Brief Background of Multichannel Cochlear Implants
    Electrophysiological Measures in CI Users
        Overview
        Electrically Evoked Compound Action Potentials
        Acoustic Change Complex / Electrically Evoked Auditory Change Complex
    Measures of Spatial Resolution in CI Users
        Forward Masking
        Electrode Discrimination
        Thresholds Using Tripolar Stimulation
    Measures of Spectral Resolution in CI Users
        Spectral Ripple Density
        Spectral Ripple Depth

CHAPTER III METHODOLOGY
    General Experiment Overview
    Participants
    Core Electrodes
    Speech Processor Settings
    Stimulation Level
        Issues Related to Setting Appropriate Stimulus Levels
        General Procedures for Setting Electrode Output
        Fine Tuning Stimulation Levels for ECAP
        Fine Tuning C Levels for Spectral ACC and Speech Testing
        Presentation Levels for Spectral ACC and Speech Testing
    ECAP Channel Interaction Functions
        Measurement
        Quantification
    Cortical Auditory Evoked Potentials: ACC
        Recording Procedures
        General Quantification
        Stimulation and Quantification: Spatial ACC
        Stimulation and Quantification: Spectral ACC
    Speech Perception
        Vowel Perception
        BKB-SIN Test
    Comparing Across Measures of Spatial / Spectral Resolution and Speech Perception

CHAPTER IV RESULTS
    Peripheral and Central Spatial Resolution
        ECAP Channel Interaction Functions
        Spatial ACC
        Relationship Between Peripheral and Central Spatial Resolution: Within Subjects
    Spatial and Spectral Resolution
        Spatial Resolution
        Spectral ACC
        Relationship Between Spatial and Spectral Resolution
    Speech Perception
        Vowel Perception
        BKB-SIN Test

CHAPTER V DISCUSSION
    Summary of Results
    General Caveat
    ECAP Channel Interaction Functions
        Potential Limitations
        Within-Subject Applications
        Clinical Feasibility
    Cortical Auditory Evoked Potentials
        Spatial ACC
        Spectral ACC
        Clinical Feasibility
    Temporal Processing
    Conclusions

APPENDIX A CALIBRATION FOR DIRECT AUDIO INPUT
APPENDIX B ALTERNATE ANALYSES
APPENDIX C CHANNEL INTERACTION FUNCTIONS

REFERENCES

LIST OF TABLES

Table 1. Activated Electrodes and Frequency Allocation
Table 2. Participant Demographic Data
Table 3. Peripheral and Central Spatial Resolution as Predictors of Spectral Resolution
Table 4. Spatial and Spectral Resolution as Predictors of Vowels (% Correct)
Table 5. Spatial and Spectral Resolution as Predictors of BKB-SIN Scores (dB)
Table B1. Comparing Methods of Quantifying Peripheral and Central Electrophysiological Data: Average Coefficient of Determination Across Participants

LIST OF FIGURES

Figure 1. Comparison of Rectified and Sinusoidal Spectral Ripple Envelopes
Figure 2. ECAP Waveform Series for a Channel Interaction Function: E55R, Probe
Figure 3. Calculating Channel Separation Indices
Figure 4. Waveform and Spectrogram of Spectral Ripple Stimulus
Figure 5. Electrodogram for Rippled Noise Stimuli
Figure 6. Average Electrode Output for Rippled Noise Stimuli
Figure 7. Channel Interaction Functions
Figure 8. Channel Separation Index as a Function of Electrode Separation
Figure 9. Example Cortical Waveforms Elicited with the Electrode-Discrimination Paradigm: E68L
Figure 10. Spatial ACC Amplitude as a Function of Electrode Separation
Figure 11. Relationship Between Peripheral and Central Spatial Resolution
Figure 12. Relationship Between Peripheral and Central Spatial Resolution: Differences for Basal and Apical Electrode Pairs
Figure 13. Quantifying Peripheral and Central Measures of Spatial Resolution for Comparisons with Spectral Resolution and Speech Perception: E55R
Figure 14. Peripheral and Central Measures of Spatial Resolution for Comparisons with Spectral Resolution and Speech Perception: All Participants
Figure 15. Example Cortical Waveforms Elicited with the Ripple-Depth Paradigm: E40R
Figure 16. Peripheral Spatial Resolution as a Predictor of Central Spectral Resolution: Quantification Options
Figure 17. Peripheral / Central Spatial Resolution as a Predictor of Central Spectral Resolution: Quantification Options
Figure 18. Spatial Resolution as a Predictor of Spectral Resolution
Figure 19. Vowel Confusion Matrices: Average Across All Participants
Figure 20. Electrophysiological Measures as Predictors of Vowel Perception
Figure 21. Electrophysiological Measures as Predictors of Word Recognition in Noise
Figure 22. Schematic of Across-Subject Regression Analysis
Figure B1. Data Conversion for dd Calculations: E55R
Figure B2. Relationship Between Peripheral and Central Spatial Resolution Using dd
Figure B3. Relationship Between Peripheral Spatial and Central Spectral Resolution Using dd
Figure B4. Relationship Between Peripheral Spatial Resolution and Speech Perception Using dd
Figure B5. Central Spatial Resolution as a Function of Electrode Separation: Normalizing the ACC to the Onset Response
Figure B6. Central Spatial Resolution as a Function of Electrode Separation: Quantifying the ACC as rms Amplitude
Figure B7. Correlation Between ACC rms Amplitude and N1-P2 Amplitude
Figure C1. Channel Interaction Functions: E40R
Figure C2. Channel Interaction Functions: E
Figure C3. Channel Interaction Functions: E55R
Figure C4. Channel Interaction Functions: E
Figure C5. Channel Interaction Functions: E68L
Figure C6. Channel Interaction Functions: F18R
Figure C7. Channel Interaction Functions: F19R
Figure C8. Channel Interaction Functions: F25R
Figure C9. Channel Interaction Functions: F26L
Figure C10. Channel Interaction Functions: F2L
Figure C11. Channel Interaction Functions: F8R

LIST OF ABBREVIATIONS

Acoustic / Auditory Change Complex (ACC)
Advanced Combination Encoder (ACE)
Channel Separation Index (CSI)
Cochlear Implant (CI)
Continuous Interleaved Sampling (CIS)
Comfort Level (C Level)
Cortical Auditory Evoked Potential (CAEP)
Current Level (CL)
Electrically Evoked Auditory Brainstem Response (EABR)
Electrically Evoked Compound Action Potential (ECAP)
Electrically Evoked Middle Latency Response (EMLR)
Electrode (E)
Electroencephalographic (EEG)
Initial Stimulation (IS)
Mismatch Negativity (MMN)
Monopolar (MP)
Neural Response Telemetry (NRT)
Program (P or MAP)
Pulses Per Second (pps)
Ripples Per Octave (rpo)
Root Mean Square (rms)
Signal-to-noise Ratio (SNR)
Threshold Level (T Level)
Tripolar (TP)

CHAPTER I
INTRODUCTION

Cochlear implants (CIs) are routinely recommended for children with severe to profound hearing loss. Post implantation, the clinician is responsible for optimizing the stimulation provided by the device. Because children are often too young to give detailed or reliable feedback, electrophysiological measures can be useful. The electrically evoked compound action potential (ECAP) is one measure currently used to set the output of each electrode. ECAP thresholds can be used to ensure that stimulation provided by the speech processor is detectable (Brown et al., 2000; Hughes et al., 2000a,b). However, an important goal of a CI is to ensure that pediatric recipients have access to sufficient information for speech and language development; detection is necessary but not sufficient. Spectral, temporal, and amplitude resolution are also necessary for listeners to process the fluctuating spectral content and varying amplitudes of complex speech signals (Shannon, 2002), but neither electrophysiological nor psychophysical measures of suprathreshold resolution abilities are included in standard clinical protocols for evaluating performance with CIs or for programming the devices. Additional tools to evaluate performance and to guide clinical decisions are needed, and may be useful even for adults who can participate in speech perception tasks.

In this study, we focused on exploring electrophysiological measures that relate to spectral resolution abilities in CI users. Spectral resolution, a property of the auditory system involving differentiation among frequency components, can be assessed by evaluating an individual's ability to discriminate among single frequencies or among complex signals that contain multiple frequency components.
The ability to resolve frequency is variable across CI users and is generally poorer than that of hearing-aid users and normal-hearing listeners, even when normal-hearing individuals are tested using simulations of CI processing (e.g. Henry and Turner, 2003; Henry, Turner and Behrens, 2005). Although a cochlear implant system

limits spectral resolution by the number of intracochlear electrodes and processor settings, CI users demonstrate limitations in performance beyond those attributable to the device (Fishman, Shannon and Slattery, 1997; Fu, Shannon and Wang, 1998; Friesen et al., 2001; Henry and Turner, 2003). The number, functionality and location of surviving neurons, the location of the electrodes relative to stimulable neurons, and the impedance pathway for current spread determine the ability of the implant system to transmit spectral components to unique physical locations, and more specifically, to distinct groups of auditory neurons (discussed in Shannon, 1983). The extent to which stimulation from each electrode results in distinct neural excitation patterns is otherwise known as spatial selectivity, or spatial resolution. Thus, peripheral spatial resolution, a property of CI stimulation specific to each individual, underlies spectral resolution abilities. Peripheral spatial resolution can be assessed objectively using a forward-masking paradigm, in which the physical distance between sequentially stimulated masker and probe electrodes is varied (e.g. Cohen et al., 2003; Abbas et al., 2004). If masker and probe pulses excite largely overlapping populations of neurons, the size of the ECAP to probe stimulation will be small because many neurons are in a refractory state. As the masker electrode is moved farther away from the probe electrode and excites different populations of neurons, the size of the ECAP to probe stimulation increases. A channel interaction function can be generated by considering ECAP amplitude to a specific probe electrode when different electrodes along the array are used as maskers. Different shapes and widths of channel interaction functions are observed both across CI users and within CI users across probe electrode locations (e.g.
Cohen et al., 2003; Abbas et al., 2004; Eisen and Franck, 2005), the differences presumably reflecting differences in neural excitation patterns. ECAP channel interaction functions can be obtained in children (Eisen and Franck, 2005), and the measurements can be performed with commercially available clinical software. Unfortunately, significant correlations with speech perception

have not been observed (Cohen et al., 2003; Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang, Benítez and Zeng, 2011; van der Beek, Briaire and Frijns, 2012), and we are not yet able to provide clinical recommendations about the usefulness of ECAP channel interaction functions. Because spectral resolution abilities are correlated with speech perception (Henry and Turner, 2003; Henry et al., 2005; Litvak et al., 2007; Won, Drennan and Rubinstein, 2007; Berenstein et al., 2008; Saoji et al., 2009; Anderson et al., 2011; Spahr et al., 2011; Won et al., 2011b), and because peripheral spatial resolution underlies spectral resolution (e.g. Anderson et al., 2011; Jones et al., 2013), a relationship between spatial resolution and speech perception is expected. Litvak and colleagues (2007) simulated varying degrees of spatial resolution in normal-hearing individuals by changing the slope of vocoder bandpass filters. The normal-hearing listeners' performance on spectral resolution measures and speech perception tended to mimic the performance observed across CI users, suggesting that spatial resolution abilities account for some of the variability in performance observed across CI users. However, even when psychophysical methods are used to evaluate peripheral spatial resolution within CI users, results are inconclusive. A number of studies have demonstrated significant correlations between spatial resolution and speech perception (e.g. Nelson et al., 1995; Collins, Zwolan and Wakefield, 1997; Throckmorton and Collins, 1999; Henry et al., 2000; Boex, Kos and Pelizzone, 2003; Jones et al., 2013), but others have not (Zwolan, Collins and Wakefield, 1997; Hughes and Abbas, 2006a; Stickney et al., 2006; Hughes and Stille, 2008; Anderson et al., 2011; Nelson et al., 2011; Azadpour and McKay, 2012). The dependency of speech perception on peripheral spatial resolution thus remains unclear.
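To make the forward-masking paradigm concrete, the sketch below builds a channel interaction function from forward-masked ECAP amplitudes. Everything here (the function name, the normalization, and the amplitude values) is illustrative, not taken from this study or from clinical software:

```python
import numpy as np

def interaction_function(masked_probe_amps_uV, unmasked_amp_uV):
    """Normalized channel interaction per masker electrode.

    A small forward-masked probe ECAP means the masker left many of the
    probe's neurons refractory (overlapping excitation), so interaction is
    expressed as the normalized reduction of the probe response:
    1 = complete overlap, 0 = no overlap.
    """
    amps = np.asarray(masked_probe_amps_uV, dtype=float)
    return np.clip(1.0 - amps / unmasked_amp_uV, 0.0, 1.0)

# Hypothetical probe-ECAP amplitudes (uV) for maskers on electrodes 1-13,
# probe on electrode 7: the probe response is smallest when masker == probe.
amps = np.array([95, 90, 80, 60, 35, 15, 5, 12, 30, 55, 75, 88, 94])
cif = interaction_function(amps, unmasked_amp_uV=100.0)
print(cif.argmax())  # 6 -> peak interaction at the probe electrode (index 6)
```

Plotting `cif` against masker electrode yields the channel interaction function, whose shape and width vary across probe sites as described above.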
There are a number of potential limitations when attempting to use measures of peripheral spatial resolution, specifically ECAP channel interaction functions, to predict speech perception in CI users. In this study our ultimate goal was to reexamine the relationship between ECAP channel interaction functions and speech perception by

addressing a number of these limitations. The outcome measures (mentioned below but detailed in Chapter III) were chosen to predominantly reflect spatial / spectral resolution; however, we acknowledge that they do not eliminate all contributions of temporal resolution. For example, forward-masking procedures used to assess spatial resolution are by nature affected by neural recovery from refractoriness (discussed in Throckmorton and Collins, 1999). One limitation of many studies investigating the relationship between ECAP channel interaction functions and speech perception is that spatial resolution was measured at few cochlear locations (apical, middle, basal: Cohen et al., 2003; Hughes and Stille, 2008; Tang et al., 2011; van der Beek et al., 2012). These same studies showed variations in spatial selectivity across measured electrodes within an individual, and sparse sampling will not reflect all of the changes in spatial resolution along the length of the electrode array, which presumably is relevant to the processing of spectrally complex speech signals. A number of the studies using psychophysical methods to evaluate spatial resolution performed extensive measures of electrode interactions and were able to demonstrate a significant correlation with speech perception (e.g. Nelson et al., 1995; Throckmorton and Collins, 1997; Henry et al., 2000; Jones et al., 2013). Likewise, we performed extensive measures of forward-masked ECAPs to generate channel interaction functions for all of the electrodes activated during speech perception testing in order to fully characterize the spatial resolution of the periphery. A benefit of the electrophysiological measures over the psychophysical measures is that they can be performed in a fraction of the time. A second limitation addressed here is that ECAP channel interaction functions typically are quantified individually, and in terms of width or breadth.
Broad interaction functions are consistent with poor spatial resolution; however, broad stimulation patterns can result from many factors which are not consistently associated with poorer speech performance. In CI users, for example, monopolar stimulation and electrode arrays

farther from the modiolus result in relatively broad stimulation patterns (psychophysical: Cohen et al., 2006; Nelson, Donaldson and Kreft, 2008; Zhu et al., 2012 but see Cohen et al. 2005; electrophysiological: Cohen et al., 2003; Hughes and Abbas 2006a,b; Zhu et al., 2012) but are not necessarily associated with poor speech perception (Pfingst et al., 2001; Hughes and Abbas 2006a; Berenstein et al., 2008). Additionally, by necessity ECAP channel interaction functions are elicited using stimuli presented at suprathreshold levels, and current levels are often above those used in the clinical program or MAP. There is a tendency for channel interaction functions to broaden with stimulus level at least in some individuals and for some probe electrodes (e.g. Abbas et al., 2004; Eisen and Franck, 2005; Hughes and Stille, 2010); however, speech perception scores do not decrease with increasing stimulation level (Firszt et al., 2004). Even in the normal auditory system, stimulation patterns broaden with level (e.g. Gorga et al., 2011), and yet speech perception abilities do not degrade. Although electrodes that stimulate completely overlapping neural populations offer no additional benefit over a single-channel CI, some spatial overlap might provide the central nervous system with the redundancy needed to further process the complex signal (Kiang and Moxon, 1972). In this study we quantified ECAP channel interaction functions as channel separation indices (Hughes, 2008). This index reflects the non-overlapping excitation areas of two channel interaction functions, thereby capturing differences in the locations and shapes of the functions. The channel separation index was found to correlate significantly with pitch ranking (Hughes, 2008) when channel interaction function width did not (Hughes and Abbas, 2006a).
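A channel separation index of the general kind described here can be sketched as the summed absolute difference between two normalized channel interaction functions; the exact quantification in Hughes (2008) may differ, and all values below are hypothetical:

```python
import numpy as np

def channel_separation_index(cif_a, cif_b):
    """Summed absolute difference between two normalized channel interaction
    functions: 0 when the excitation patterns overlap completely, larger when
    the functions differ in location or shape."""
    return float(np.abs(np.asarray(cif_a, float) - np.asarray(cif_b, float)).sum())

# Hypothetical normalized interaction functions for two neighboring probes:
cif_e7 = np.array([0.1, 0.3, 0.8, 1.0, 0.7, 0.3, 0.1])
cif_e8 = np.array([0.1, 0.2, 0.6, 0.9, 1.0, 0.6, 0.2])
print(round(channel_separation_index(cif_e7, cif_e8), 2))  # 1.1
```

Because the index compares two functions rather than summarizing one, identical widths at different cochlear locations still yield a nonzero separation, which is what lets it capture location as well as shape.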
This finding suggests that the width of a single channel interaction function might not be meaningful in isolation, but might be meaningful in comparison to the excitation pattern of another electrode. A third limitation addressed in this study is that peripheral electrophysiological measures do not reflect central processing, nor are they affected by decision making, learning, or categorization, all of which influence speech perception. The fact that CI

users can adapt to the degraded electrical signal delivered by the implant underscores the importance of the central auditory nervous system in speech perception (Moore and Shannon, 2009). For children, there is particular concern about providing optimal peripheral input for the development of the central auditory nervous system, but it may be that inherent differences in the central auditory nervous system among CI recipients explain differences in performance that cannot be explained by the periphery alone. We explored more central processing by including a central electrophysiological response, namely the obligatory cortical auditory evoked potential (CAEP), as an outcome measure. Changing a parameter in an ongoing stimulus can elicit a second series of positive and negative peaks (labeled P1-N1-P2). This series has been termed the acoustic / auditory change complex (ACC: Ostroff, Martin and Boothroyd, 1998; Brown et al., 2008), the presence of which is thought to reflect discrimination ability (Martin, Tremblay and Korczak, 2008). Although these cortical responses are preattentive and do not reflect cognition, they do offer insight into intermediate stages of processing between the auditory nerve and speech perception. Another benefit of the obligatory CAEP, and another reason for including it in this study, is that the ACC can be evoked with stimuli ranging in complexity. ECAP measures are obtained using relatively simple stimulation: pulse pairs presented at low stimulation rates. The simplest behavioral tests of spatial resolution (e.g. psychophysical tuning curves and electrode discrimination) use pulse trains at higher stimulation rates for stimulation, and spectrally complex stimuli can be used for central electrophysiological and behavioral assessments of spectral resolution.
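As a sketch of how a P1-N1-P2 response might be quantified from an averaged cortical waveform, the function below takes the N1-P2 peak-to-peak amplitude within typical adult latency windows. The windows, sampling rate, and the synthetic waveform are illustrative assumptions, not the recording parameters of this study:

```python
import numpy as np

def n1_p2_amplitude(waveform_uV, fs_Hz, n1_win=(0.08, 0.15), p2_win=(0.15, 0.25)):
    """Peak-to-peak N1-P2 amplitude (uV) of an averaged cortical response.
    Windows are in seconds relative to the eliciting stimulus change."""
    t = np.arange(len(waveform_uV)) / fs_Hz
    n1 = np.min(waveform_uV[(t >= n1_win[0]) & (t < n1_win[1])])  # negative peak
    p2 = np.max(waveform_uV[(t >= p2_win[0]) & (t < p2_win[1])])  # positive peak
    return p2 - n1

# Synthetic stand-in for an averaged epoch: 400 ms damped oscillation at 1 kHz.
fs = 1000
t = np.arange(0, 0.4, 1 / fs)
wave = -3.0 * np.sin(2 * np.pi * 5 * t) * np.exp(-5 * t)
amp = n1_p2_amplitude(wave, fs)
```

In practice, larger N1-P2 amplitudes to a stimulus change are taken as evidence of a more salient neural contrast, which is the logic behind using the ACC as a discrimination index.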
Examining the peripheral neural response to pulse pairs essentially ignores the interaction that likely occurs across the electrode array when more complex stimuli (such as speech) are used and multiple electrodes are stimulated.

We elicited the ACC using two stimulation paradigms. For the first paradigm, we used pulse trains to sequentially stimulate two electrodes (Brown et al., 2008). The size of the ACC is partially dependent upon the degree to which the two electrodes stimulate non-overlapping neural populations. Thus, this electrode-discrimination ACC paradigm permits a central measure of spatial resolution using only a slightly more complex stimulus than the pulse pairs used in the forward-masked ECAP paradigm. We compared ECAP channel separation indices with the size of the spatial ACCs in order to describe the relationship between peripheral and central processing for each individual (addressing the third limitation described previously). We also elicited the ACC by changing the frequency location of spectral peaks within a complex, vowel-like, or rippled noise, stimulus (similar to Won et al., 2011a), which results in more complex stimulation patterns at the periphery (addressing the fourth limitation just described). The size of the ACC is dependent upon the degree to which the spectra of the successive stimuli result in contrasting neural stimulation. Although the physiological contrast depends upon underlying spatial resolution, this ACC response is considered a central measure of spectral resolution because the complex stimulus is presented through the processor. In summary, three electrophysiological measures were included in this study: ECAP channel interaction functions, spatial ACCs and spectral ACCs. These three measures allowed us (1) to compare electrode interactions, or spatial resolution, measured at the periphery with electrode interactions measured at a cortical level on an individual basis, (2) to evaluate the relationship between spatial and spectral resolution, and ultimately (3) to determine how well electrophysiological measures of peripheral and central spatial and spectral resolution predict speech perception.
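One way to synthesize a rippled-noise stimulus of the general kind used for the spectral ACC is to impose a sinusoidal-in-log-frequency envelope on broadband noise; shifting the ripple phase by pi moves the spectral peaks to the former valleys, producing the stimulus change. All parameters below (bandwidth, ripple depth, duration, sampling rate) are illustrative assumptions, not the values used in this study:

```python
import numpy as np

def rippled_noise(ripples_per_octave, phase, f_lo=200.0, f_hi=7000.0,
                  fs=22050, dur=0.5, seed=0):
    """Broadband noise with sinusoidal spectral ripples on a log-frequency axis."""
    n = int(fs * dur)
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n))   # flat-ish random spectrum
    f = np.fft.rfftfreq(n, 1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    env = np.zeros_like(f)
    env[band] = 10.0 ** (0.5 * np.sin(               # ~20 dB peak-to-valley depth
        2 * np.pi * ripples_per_octave * np.log2(f[band] / f_lo) + phase))
    x = np.fft.irfft(spectrum * env, n)
    return x / np.max(np.abs(x))                     # normalize to +/-1

standard = rippled_noise(1.0, phase=0.0)
inverted = rippled_noise(1.0, phase=np.pi)  # peaks and valleys swapped
```

Concatenating the standard and phase-inverted segments mid-stimulus yields the kind of spectral change that can elicit an ACC, with ripple density (in ripples per octave) or depth controlling how difficult the contrast is.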
The hypotheses of this study were as follows: (1.) We hypothesized that ECAP channel interaction functions are predictive of spectral resolution and speech perception. Even though observing a

relationship between ECAPs and speech perception has not been successful in previous studies, we expected to see a relationship given the more extensive ECAP measures and the different methods used to quantify the ECAP data in this study. (2.) We hypothesized that if there are differences in central processing across CI users, then we should see evidence of those differences when relating the peripheral and central measures of spatial resolution within each individual. We expected that including these differences in central processing would improve our ability to predict spectral resolution and speech perception. (3.) We hypothesized that electrophysiological measures of spectral resolution are predictive of speech perception. A strong relationship between behavioral measures of spectral resolution and speech perception has been shown previously in numerous studies (e.g. Henry and Turner, 2003; Henry et al., 2005; Litvak et al., 2007; Won et al., 2007; Berenstein et al., 2008; Saoji et al., 2009; Anderson et al., 2011; Spahr et al., 2011; Won et al., 2011b), so we expected that an electrophysiological measure of spectral resolution, which is significantly correlated with behavioral measures of spectral resolution (Won et al., 2011a), would also be predictive of speech perception. The results of this study improve our understanding of auditory neural processing of simple and complex stimuli at peripheral and central levels, and of how this processing is related to speech perception in CI recipients.

CHAPTER II
REVIEW OF THE LITERATURE

Brief Background of Multichannel Cochlear Implants

Single channel CIs were able to provide recipients with sound awareness, the ability to discriminate among a known set of sounds, enhancement of lipreading, and on occasion improved speech production; however, the devices did not provide usable understanding of running speech when visual cues were removed (e.g. Bilger and Black, 1977; Gantz et al., 1988). The limitations of single channel CIs were not surprising given the differences in neural response patterns to acoustic and electric sinusoidal signals, specifically with respect to the encoding of frequency (Kiang and Moxon, 1972). Single channel devices encoded frequency in the temporal domain, but the human auditory system is limited in its sensitivity to time-based frequency cues below the important frequencies contained within the speech signal (discussed in Clark et al., 1978; Eddington et al., 1978). The use of multiple intracochlear electrodes was proposed as a way, at least in principle, to reintroduce spectral cues into the auditory system by taking advantage of the spatial arrangement of nerve fibers along the length of the cochlea (Kiang and Moxon, 1972; Bilger and Black, 1977; Kiang, Eddington and Delgutte, 1979). Recipients of even the earliest multichannel implants demonstrated pitch perceptions associated with stimulation of each electrode consistent, at least relatively, with the normal tonotopic map (Eddington et al., 1978). It wasn't long before the superiority of multichannel CIs over single channel implants was clearly demonstrated (e.g. Gantz et al., 1988). Not only did recipients of multichannel CIs demonstrate better discrimination among environmental sounds and words in quiet, but they also demonstrated better recognition of words presented in the midst of noise. The most impressive benefit of multichannel CIs was the ability of recipients to understand running speech without visual cues.
However, speech perception was still well below that of a normal-hearing individual (Gantz et al., 1988).

Ensuring electrode selectivity was a concern as multichannel devices were being developed (e.g. Clark et al., 1978; Eddington et al., 1978; Eddington, 1980; Shannon, 1983; White, Merzenich and Gardi, 1984). With simultaneous stimulation of multiple electrodes, the electrical fields can interact and either sum or cancel depending on their relative phases. These interactions have perceptual consequences (Shannon, 1983; White et al., 1984), which presumably would degrade speech perception. One method of reducing electrical-field interactions is to use pulsatile stimuli instead of analog, and to present the stimuli sequentially from each electrode instead of simultaneously (Wilson et al., 1991). Using a within-subject design, Wilson and colleagues demonstrated improved speech perception with a sequential pulsatile stimulation method, or continuous interleaved sampling (CIS), compared to a simultaneous analog stimulation method. Sequential pulsatile stimulation is the primary processing strategy used in current CIs.

Even when electrical-field interactions are eliminated, electrode interaction can occur at the neural level (Shannon, 1983; White et al., 1984); i.e., different electrodes can stimulate some of the same nerve fibers. These interactions are likely a result of spread of current within the cochlear duct, but can also be affected by other factors, including the number, density and location of surviving neurons and the proximity of electrodes to stimulable neurons (Shannon, 1983). Greater interaction among electrodes means poorer spatial resolution, and poor spatial resolution at the periphery can limit the spectral resolution of the entire auditory system.

The focus of this study was to further explore the relationships between spatial and spectral resolution abilities, and between these abilities and speech perception in multichannel CI users.
Although only adult participants were included in this study, one motivating factor influencing the study design was the question of how to evaluate a child's performance when speech, language, or the skills and attention necessary for detailed behavioral testing are lacking. Therefore, we focused our attention on electrophysiological measures of spatial and spectral resolution. Given this focus, the next section reviews electrophysiological

measures in CI users, primarily the ECAP and ACC. The final two sections review psychophysical and electrophysiological methods for evaluating spatial and spectral resolution abilities in CI users.

Electrophysiological Measures in CI Users

Overview

Electrophysiological measures can be used as diagnostic tools to identify the site of lesion within the auditory pathway and as objective indicators of auditory functioning in individuals who cannot participate in behavioral tasks. Hearing loss can involve any part of the auditory pathway, and implant candidacy is not restricted to adults capable of providing reliable behavioral information. There is a long-standing interest in using electrophysiological measures to uncover differences in neural stimulation and physiological differences that explain the variable performance observed among CI recipients. These goals have yet to be completely realized, and the need to understand the relationship between electrophysiological measures and auditory function, namely speech perception, persists.

Performing electrophysiological measures in CI users is complicated by the fact that the evoking auditory stimulation is electrical. Electrical stimuli are much larger in amplitude than the electrical neural responses and introduce artifact into the electrophysiological recording. Stimulus artifact is a particular problem when attempting to record short-latency, peripheral auditory responses due to temporal overlap between the response and stimulus ringing (e.g. van den Honert and Stypulkowski, 1986; Shallop et al., 1990). It is also an issue for recording more central auditory responses evoked with ongoing stimulation, again because of the overlap in time between stimulation and the physiological response (Martin, 2007; Friesen and Picton, 2010). Despite these technical difficulties, evoked potentials generated from the auditory periphery to the cortex have been recorded successfully in CI users (e.g.
ECAP: Brown, Abbas and Gantz, 1990; Electrically Evoked Auditory Brainstem Response, EABR: van den Honert and

Stypulkowski, 1986; Electrically Evoked Middle Latency Response, EMLR: Kileny and Kemink, 1987; CAEP (N1-P2): Kileny, 1991; ACC: Friesen and Tremblay, 2006; Mismatch Negativity, MMN: Kraus et al., 1993; P300: Kaga et al., 1991; Oviatt and Kileny, 1991). In general, the characteristics of electrically evoked responses are grossly similar to their acoustic correlates, with a few exceptions mainly for the most peripheral responses. For example, the EABR wave I component usually is unattainable due to temporal overlap with stimulus artifact. Additionally, absolute EABR wave latencies are shorter and amplitudes are generally larger than those observed with acoustic stimulation. Both of these observations can be explained by stimulation that bypasses cochlear processing (van den Honert and Stypulkowski, 1986; Shallop et al., 1990; Abbas and Brown, 1991; Firszt et al., 2002a).

Due to the availability of commercial software and ease of recording, ECAPs are often used clinically with the pediatric population to evaluate whether the implant is exciting the auditory system and to help with basic programming of threshold and maximum stimulation levels (e.g. Hughes et al., 2000a,b). Although measures of ECAP channel interaction, which reflect spatial resolution of the periphery (discussed in a separate section), can also be performed with commercial software, these measures are not used clinically due to uncertainty about how to interpret and use the results. The EABR is used less often in the clinic; similar to ECAPs, EABRs can be used to help with basic programming of stimulation levels (Brown et al., 2000) but not to predict performance. For example, measures of EABR wave V threshold, amplitude, amplitude growth, and latency have not been found to correlate significantly with speech perception (Abbas and Brown, 1991; Makhdoum et al., 1998; Firszt, Chambers and Kraus, 2002b).
One explanation for the lack of correlation between peripheral electrophysiological responses and speech perception is that, although peripheral processing is the basis for further processing, variability of more central processing must be considered, particularly in long-term hearing-impaired ears. A few studies have

investigated processing at various levels of the auditory system within the same individual to evaluate the influence of peripheral processing on more central measures, and to investigate this commonly proposed explanation of why relationships between more peripheral measures and speech perception have not been observed (Makhdoum et al., 1998; Firszt et al., 2002b; Kelly, Purdy and Thorne, 2005). The most central measure in Makhdoum et al. (1998) and Firszt et al. (2002b) was the P1-N1-P2 cortical potential, the presence of which indicates signal detection. Kelly et al. (2005) included even later potentials that have been associated with discrimination abilities: the MMN and P300. Significant correlations among these different electrophysiological measures have not been consistently observed, even though more central processing is dependent upon peripheral input. For example, Makhdoum et al. (1998) observed a significant positive correlation between EABR and EMLR component amplitudes (wave V with Na-Pa and Nb-Pb), but, oddly, a negative correlation was observed between the latencies of waves V and Pa. Neither the amplitudes nor the latencies of EABR and EMLR components correlated with amplitudes or latencies of the cortical N1 and P2 components. Firszt et al. (2002a) only found significant correlations between N1 and P2 latencies. No significant correlations of latencies or response amplitudes across the different responses (EABR, EMLR or N1-P2) were observed in that study.

The observed relationships between more central electrophysiological responses and speech perception abilities have been mixed as well. For example, Firszt et al. (2002b) demonstrated a significant correlation between speech perception and the EMLR using both normalized Na-Pa amplitude and threshold, but neither Makhdoum et al. (1998) nor Kelly et al. (2005) demonstrated significant correlations between speech perception and the amplitude or latency of EMLR components.
Conflicting results also have been observed with the N1-P2 response. Makhdoum et al. (1998) and Kelly et al. (2005) found significant correlations between P2 latency and speech perception, but Firszt et al.

(2002b) did not find significant correlations between the amplitude, threshold or latency of N1-P2 waves and speech.

Because speech is a complex signal, requiring discrimination abilities beyond simple detection, the ACC, MMN and P300 may hold more promise for relating to speech perception (e.g. Martin et al., 2008). Kelly et al. (2005) demonstrated a significant correlation between the MMN and speech perception but not between the P300 and speech perception (note: the P300 listening paradigm was passive in this study). Wable et al. (2000) did not observe a significant correlation between the MMN and speech perception.

Several qualitative observations indicate that relationships between electrophysiological responses and speech may be present, even though statistical support is lacking. For example, poorly formed or absent electrophysiological responses have been noted in individuals with poor performance with the implant (Firszt et al., 2002b; Kelly et al., 2005). Perhaps the inconsistent statistical results are due to an inability to capture the important qualities of the response quantitatively, but this is speculation.

In this study, we further explored the relationships among peripheral and central electrophysiological measures, and the relationships between these measures and speech perception. We focused on the most peripheral response: the ECAP. The ECAP will be reviewed in more detail both generally (below) and specifically as a measure of spatial resolution (in a later section). Additionally, we chose to use the electrical correlate of the ACC as a central measure of auditory discrimination. Like the MMN, the ACC does not require active listening. Two benefits of the ACC over the MMN are that (1) the ACC is a more robust response and (2) an oddball paradigm is not required, which allows faster acquisition of the ACC (discussed in Martin and Boothroyd, 1999). The ACC will be reviewed further in a subsequent section.
Electrically Evoked Compound Action Potentials

In humans, the ECAP was first recorded in recipients of the Ineraid cochlear implant (Brown et al., 1990). The percutaneous plug allowed access to the implanted

electrodes, which were used not only for stimulation, but also as recording sites. Neural responses obtained with near-field recordings are larger in amplitude and less noisy than responses obtained with far-field recordings using electrodes placed on the scalp. Stimulus artifact was managed with a masking-subtraction paradigm, which takes advantage of neural refractoriness to derive the response. In the paradigm used by Brown et al. (1990), recordings were obtained for a series of three measurement conditions: probe alone (A), masker-plus-probe (B), and masker alone (C). The probe and masker were stimulation of the same electrode. When responses are elicited in the first condition (A), the recording contains both stimulus artifact and the neural response to the probe. Recordings obtained in the second condition (B) contain stimulus artifact and the neural response to the masker, plus stimulus artifact to the probe (assuming that the stimulated neurons are in a refractory state). Recordings obtained in the third condition (C) contain stimulus artifact and the neural response to the masker. The derived response (A-(B-C)) theoretically eliminates (in practice, reduces) the masker and probe stimulus artifact and the masker neural response, which allows the neural response to the probe to be observed.

Transcutaneous communication with the implanted CI components soon replaced the percutaneous plug, and CIs from all manufacturers are now designed with telemetry capabilities so that intracochlear recordings can be made without direct physical access to the electrodes (see Mens, 2007 for a general review of telemetry). For this study, we used the commercially available neural response telemetry (NRT) system from Cochlear Corporation. With this system, the subtraction method described above includes a fourth condition (D), which records the amplifier switching artifact so that it can be subtracted from the derived response (Lai and Dillier, 2000).
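The four-condition subtraction can be illustrated with a short numeric sketch. All waveforms and amplitudes below are invented placeholders (not recorded data), and the sketch assumes ideal conditions: the masker fully masks the probe response, components add linearly, and artifacts are identical across conditions.

```python
import numpy as np

# Synthetic stand-ins for the signal components named in the text (arbitrary units)
t = np.linspace(0, 1e-3, 200)                       # 1-ms recording window
artifact_probe  = 50.0 * np.exp(-t / 1e-4)          # probe stimulus artifact
artifact_masker = 60.0 * np.exp(-t / 1e-4)          # masker stimulus artifact
neural_probe    = 5.0 * np.sin(2e3 * 2 * np.pi * t) * np.exp(-t / 3e-4)
neural_masker   = 4.0 * np.sin(2e3 * 2 * np.pi * t) * np.exp(-t / 3e-4)
switching       = np.full_like(t, 2.0)              # amplifier switching artifact

# The four recording conditions:
A = artifact_probe + neural_probe + switching                     # probe alone
B = artifact_masker + neural_masker + artifact_probe + switching  # masker + probe
C = artifact_masker + neural_masker + switching                   # masker alone
D = switching                                                     # switching artifact only

derived = A - (B - C) - D      # leaves only the neural response to the probe
print(np.allclose(derived, neural_probe))   # → True
```

Under these idealized assumptions the artifact terms cancel algebraically, which is why, in practice, the derived recording only approximates the probe's neural response to the extent that the assumptions hold.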
Although there are other techniques used to manage artifact (e.g. scaled template subtraction, alternating polarity), and although modifications to the masking-subtraction technique have been proposed (Miller, Abbas and Brown, 2000), ECAPs were obtained using the masking-subtraction method

described by Brown et al. (1990), with one variation: the masker stimulus was presented to different electrodes along the array to produce channel interaction functions (e.g. Cohen et al., 2003; Abbas et al., 2004). This specific paradigm will be discussed further in the section that reviews ECAP channel interaction functions as a measure of spatial resolution. Systematic investigations of stimulation (rate, masker level, inter-stimulus interval) and recording (delay, amplifier gain, recording electrode, number of sweeps in the average) parameters have resulted in a set of guidelines for performing NRT with Cochlear Corporation's commercial software (Abbas et al., 1999; Dillier et al., 2002). These guidelines were followed during data collection to optimize recordings for each participant.

Acoustic Change Complex / Electrically Evoked Auditory Change Complex

A P1-N1-P2 potential can be evoked when an ongoing stimulus changes in time (e.g. Jerger and Jerger, 1970). The presence of an obligatory response synchronized with the stimulus change (otherwise known as the ACC) indicates that different stimulus features are represented differently at the neural level, and suggests the capacity for perceptual discrimination (e.g. Ostroff et al., 1998). As such, the ACC has the potential to be used to evaluate suprathreshold processing abilities of the auditory system and can be applied in a number of capacities. For example, the ACC can be used to evaluate the system's sensitivity to intensity or frequency increments (Jerger and Jerger, 1970; Martin and Boothroyd, 2000). In addition to evaluating discrimination between single frequencies, changes within the fine structure of complex stimuli can be used to assess spectral discrimination (Martin and Boothroyd, 1999; Martin and Boothroyd, 2000; Won et al., 2011a).
Gap detection paradigms have been used to elicit the ACC as a measure of temporal resolution ability (Lister et al., 2007; 2011), and dichotic stimuli have been used to assess binaural processing (Ross et al., 2007a,b). In addition to evaluating processing abilities relevant to basic perception, phonemic contrasts, which are different in multiple

domains (i.e. spectral and intensity), have also been used to evoke the ACC (Kaukoranta, Hari and Lounasmaa, 1987; Ostroff et al., 1998; Martin and Boothroyd, 2000; Tremblay et al., 2003), with the thought that the results may be more straightforward in their application to speech perception abilities. The amplitude and latency of the ACC vary systematically with the salience of the stimulus features that evoke the response (e.g. Martin and Boothroyd, 2000; Ross et al., 2007b; Won et al., 2011a); consequently, the ACC has been evaluated as a potential tool for comparing across and within clinical populations.

Across-Group Comparisons: The ACC has been used to evaluate the effects of sensorineural hearing loss on sensitivity to intensity and frequency increments (Jerger and Jerger, 1970) and to evaluate the effects of aging on sensitivity to interaural phase differences (Ross et al., 2007a) and gap detection (Lister et al., 2011). Both behavioral and ACC responses suggested that the individual with sensorineural hearing loss had better intensity discrimination and poorer frequency discrimination than the person with normal hearing (Jerger and Jerger, 1970). Both behavioral and ACC responses demonstrated that carrier-frequency thresholds for interaural phase differences were poorer in older adults than in younger adults (Ross et al., 2007a). Larger ACC P1 amplitudes and longer ACC P2 latencies were observed in older adults compared with younger adults for responses evoked with a gap detection paradigm (Lister et al., 2007; 2011).

Within-Group Comparisons: Another application being explored is using the ACC to evaluate the effectiveness of assistive listening devices, such as hearing aids and cochlear implants. Tremblay et al. (2006a,b) demonstrated the feasibility of obtaining reliable responses through a hearing aid when stimuli were presented in the soundfield.
Soundfield presentation has been used to elicit the ACC in CI users as well (Friesen and Tremblay, 2006; Martin, 2007), although individual electrode output can also be controlled to elicit the response (Brown et al., 2008; Kim et al., 2009). The ACC is

technically more difficult to obtain in CI users than other late cortical potentials because the nature of the ACC requires ongoing stimulation, which means that stimulus artifact is ongoing and overlaps the neural response in time. Several complex methods of managing artifact have been discussed (e.g. Martin, 2007; Friesen and Picton, 2010), but the relatively simple filtering methods used in Brown et al. (2008) and Kim et al. (2009) were sufficient to manage the artifact successfully, and these were the methods used in this study.

In addition to demonstrating the feasibility of recording the ACC in CI users, the studies mentioned above have also demonstrated that the ACC is graded in CI users, similar to observations in normal-hearing individuals. Systematic changes in the response amplitude reflect various stimulus features; i.e. smaller ACC N1-P2 amplitudes are observed for smaller contrasts, and larger ACC N1-P2 amplitudes are observed for larger contrasts (Martin, 2007; Brown et al., 2008; Kim et al., 2009).

Only two studies have demonstrated that the ACC reflects discrimination ability in CI users. Hoppe et al. (2010) stimulated pairs of adjacent electrodes to evoke the ACC and to evaluate discrimination abilities. Correlations between ACC amplitude / latency and psychophysical discrimination abilities were statistically significant. Won et al. (2011a) used spectral ripple stimuli to evoke the ACC in normal-hearing listeners using CI simulations. The electrophysiological responses were significantly related to perceptual spectral discrimination under the same listening conditions.

The ACC has yet to be compared with speech perception measures. This extension is supported given that the ACC can be elicited with speech-like stimuli (e.g.
Friesen and Tremblay, 2006; Martin, 2007), that the response reflects the salience of stimulus features (Martin, 2007; Brown et al., 2008; Kim et al., 2009; Won et al., 2011a), and that the response is significantly correlated with behavioral responses (Hoppe et al., 2010; Won et al., 2011a). One purpose of including the ACC in this study was to evaluate the relationship with speech perception.

Measures of Spatial Resolution in CI Users

Both forward-masking and electrode-discrimination paradigms can be used to evaluate the spatial resolution resulting from electrical stimulation of a multichannel electrode array. Forward-masking paradigms are used most often to define the amount of masking, or amount of channel interaction, across electrodes. Electrode-discrimination paradigms are used to define the degree of channel independence across electrodes. The former paradigm focuses on overlapping areas of stimulation; the latter focuses on the separation of stimulation. While the two are obviously related, the choice of one method or the other likely depends on the question of interest. This section focuses on reviewing forward-masking and electrode-discrimination measures, but ends with a brief overview of the use of narrow stimulation modes to elicit behavioral thresholds. Relatively good or poor spatial resolution is inferred for each electrode site by examining the variability in thresholds obtained across the electrode array.

Forward Masking

Forward-masking paradigms take advantage of neural refractoriness to evaluate the influence of a preceding stimulus (masker) on the detection of a following stimulus (probe). These paradigms can be used to evaluate both temporal and spatial properties of peripheral processing, although psychophysical methods are also influenced by more central processing (Relkin and Turner, 1988). To evaluate temporal processing, the masker and probe stimuli are presented to the same cochlear place via electrode stimulation, and the response to probe stimulation is evaluated as a function of inter-stimulus interval. When masker and probe stimuli are presented to different cochlear places, the response to probe stimulation reflects both temporal and spatial processing (Throckmorton and Collins, 1999).
For a primarily spatial application, a relatively short interval is chosen between the masker and probe to take advantage of the reduced responsiveness of neurons responding to the masker. This allows spatial resolution to

dominate the differences observed in the probe response as the distance between masker and probe electrodes increases. Here we will focus on the latter (spatial) application of forward masking.

Psychophysical: A detailed review of psychophysical forward-masking methods for evaluating place specificity in CI users, including quantification and interpretation of results, can be found in McKay (2012). Here only the most elemental information regarding psychophysical forward-masking measures is discussed. In psychophysical studies, the masker is typically a longer-duration pulse train than the subsequent probe stimulus, and masker-probe intervals range from 1-20 ms across various studies (e.g. Chatterjee and Shannon, 1998; Throckmorton and Collins, 1999; Boex et al., 2003; Cohen et al., 2003, 2006; Kwon and van den Honert, 2006; Hughes and Stille, 2008; Nelson et al., 2008; 2011; Anderson et al., 2011).

Two basic methods have been used to evaluate spatial resolution in CI recipients. In one method, masker-electrode stimulation is held constant at a specified level. Either the location of the masker electrode is fixed and the location of the probe electrode is varied, or vice versa. For each test condition, the probe stimulation level is varied until the listener detects the probe (Chatterjee and Shannon, 1998; Throckmorton and Collins, 1999; Boex et al., 2003; Cohen et al., 2003, 2006; Kwon and van den Honert, 2006; Hughes and Stille, 2008). In the alternate method, stimulation on the probe electrode is held constant at a specified level. The location of the probe electrode is fixed, and the location of the masker electrode is varied. For each masker electrode, the stimulation level is varied until the probe stimulus is undetectable (Nelson et al., 2008; 2011; Bierer and Faulkner, 2010; Anderson et al., 2011). The second method is more similar to procedures used to obtain frequency-tuning curves in non-CI users.
Masking functions obtained with this method have been referred to as spatial tuning curves (Nelson et al., 2008).

Electrophysiological: For electrophysiological forward-masking studies, the peripherally generated ECAP is typically the response of choice. Single, biphasic

electrical pulses are used for the masker and probe stimuli, and inter-stimulus intervals are around ms (Abbas et al., 1999; Dillier et al., 2002). Two forward-masking methods have been used to evaluate spatial resolution in CI recipients. In the first method, the locations of the masker and probe electrodes are fixed, and the location of the recording electrode is varied, resulting in functions that describe the spatial spread of neural excitation along the length of the cochlea (e.g. Cohen, Saunders and Richardson, 2004; Hughes and Stille, 2010; van der Beek et al., 2012). As a result of the recording electrode's sensitivity to electrical-field propagation along the cochlear duct, these functions are broader than the extent of the population of neurons excited by the masker stimulus.

The second method reduces the influence of electrical-field propagation by keeping the location of the recording electrode fixed. The location of the probe is fixed as well, and the location of the masker electrode is varied. Recall from the masker-subtraction paradigm used to remove artifact from the ECAP that the second (B) condition includes the artifact and neural response from the masker, but only artifact to the probe stimulus if the neurons are in a refractory state. When the masker stimulus excites a different group of auditory neurons than the probe, a neural response to the probe will be observed in the B recording, because masking of the probe response is incomplete. In this situation, subtracting the B response from the A (probe-alone) response will reduce the derived ECAP amplitude to the probe. Thus, the size of the derived ECAP amplitude reflects the amount of neural overlap between stimulation with the masker and probe electrodes: larger derived ECAP amplitude indicates greater overlap, and smaller derived ECAP amplitude indicates less overlap. Plotting derived ECAP amplitude as a function of masker electrode is known as a channel interaction function.
The function describes the extent of neural overlap across electrode sites (e.g. Cohen et al., 2003; Abbas et al., 2004; Eisen and Franck, 2005; Hughes and Abbas, 2006a,b; Hughes and Stille, 2008; 2010; Tang et al., 2011; van der Beek et al., 2012).
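The logic of a channel interaction function can be illustrated with a toy simulation. The Gaussian excitation model below is an assumption for illustration only, not a claim about actual neural excitation patterns; it simply shows why the function peaks at the probe electrode and falls off as the masker electrode moves away.

```python
import numpy as np

# Toy model (assumption): each electrode excites a bell-shaped region of
# neurons along the cochlea, and the derived ECAP amplitude for a given masker
# electrode is treated as proportional to the overlap between the masker and
# probe excitation patterns.
def excitation(center, places, spread=1.5):
    return np.exp(-0.5 * ((places - center) / spread) ** 2)

places = np.linspace(1.0, 22.0, 500)   # neural positions, in electrode units
probe_electrode = 11
probe_pattern = excitation(probe_electrode, places)

# Channel interaction function: overlap for each masker electrode 1..22
cif = {}
for masker_electrode in range(1, 23):
    masker_pattern = excitation(masker_electrode, places)
    cif[masker_electrode] = np.sum(np.minimum(probe_pattern, masker_pattern))

# The function peaks where masker and probe coincide and falls off with distance.
print(max(cif, key=cif.get))   # → 11
```

A broader excitation `spread` in this toy model yields a broader channel interaction function, mirroring the interpretation of broad masking functions as poor spatial resolution.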

The channel-interaction paradigm has been used more often than the spread-of-excitation paradigm and is the paradigm that was chosen for this study. Electrophysiological comparisons with psychophysical studies (below) have been limited to studies using channel-interaction paradigms.

General Findings: Even though psychophysical and electrophysiological masking paradigms differ in many regards (influence of central processing, stimuli used to elicit the responses, effects of temporal integration, etc.), results obtained with the two paradigms are significantly related (Cohen et al., 2003; Hughes and Stille, 2008; Zhu et al., 2012). A number of general characteristics of channel interaction / spatial resolution in CI users have been consistently observed with both methods. Significant variability is present in the shapes of the masking functions observed among CI users; however, a typical finding across individuals and electrode sites is that the greatest amount of masking occurs when the probe and masker locations overlap or at least are close in space. The amount of masking generally decreases as distance increases, but nonmonotonic masking patterns have been observed in both psychophysical and electrophysiological results. Aberrant current spread, such as cross-turn stimulation, has been suggested as an underlying cause. On occasion, detection of / response to the probe stimulus is observed to improve in the presence of the masker (Cohen et al., 2003; Abbas et al., 2004; Eisen and Franck, 2005; Hughes and Abbas, 2006a,b; Hughes and Stille, 2008; Hughes and Stille, 2010; Tang et al., 2011; van der Beek et al., 2012; Zhu et al., 2012).

Forward-masking patterns have been used to compare the spatial selectivity resulting from different stimulation modes, electrode arrays, and stimulus levels. Although masking functions can be characterized in a number of ways (e.g.
width at different levels from the peak, amount of masking), the observations are generally similar. Consistent with expectations based on current spread throughout the cochlea, forward-masking paradigms reveal narrower spatial resolution with bipolar or tripolar

stimulation than with monopolar stimulation (Boex et al., 2003; Nelson et al., 2008; 2011; Zhu et al., 2012; but see Kwon and van den Honert, 2006) and with perimodiolar electrode arrays compared with straight electrode arrays (Cohen et al., 2003; Eisen and Franck, 2005; Hughes and Abbas, 2006a,b). With respect to stimulus level, results are less clear. In electrophysiological studies, masking patterns tend to broaden at higher stimulation levels for some individuals, but consistent level effects are not observed in either electrophysiological or psychophysical studies (Cohen et al., 2003; Abbas et al., 2004; Eisen and Franck, 2005; Nelson et al., 2008; Hughes and Stille, 2010; Nelson et al., 2011; van der Beek et al., 2012; Zhu et al., 2012). Collectively, the above findings are relevant to consider because monopolar stimulation modes, straight electrode arrays, and high stimulus levels do not have detrimental effects on speech perception (Pfingst et al., 2001; Berenstein et al., 2008; Hughes and Abbas, 2006a; Firszt et al., 2004).

One of the reasons why evaluating spatial resolution is of interest comes from the assumption that greater channel interaction would negatively impact an individual's ability to discriminate speech. It is thought that the variable amounts of channel interaction observed across CI recipients might help explain the variable speech perception scores observed across this population. Given the data summarized above regarding the effects of stimulus mode, array type, and stimulus level, it is not surprising that correlations between electrophysiological or psychophysical forward-masking measures of channel interaction and speech perception are not observed consistently.
A few psychophysical studies have shown significant correlations (Throckmorton and Collins, 1999; Boex et al., 2003), but many have not, and no electrophysiological studies have shown significant correlations (Cohen et al., 2003; Hughes and Abbas, 2006a; Hughes and Stille, 2008; Anderson et al., 2011; Nelson et al., 2011; Tang et al., 2011; van der Beek et al., 2012).

There are many variables to consider when making a comparison between a peripheral measure of spatial resolution and speech perception in CI recipients. Specific

to forward-masking measures of spatial resolution, investigators must decide which forward-masking method to use (variable masker or probe level); the number, location and combination of probe and masker electrodes (and recording electrodes for electrophysiological measures); the level of stimulation; and the quantification method (amount of masking or threshold shift in linear or logarithmic units; absolute or normalized responses; width, area, or slope of the masking function; etc.) (e.g. McKay, 2012; van der Beek et al., 2012). A number of decisions regarding the speech perception measures are also necessary, such as the speech materials (vowels, consonants, phonemes, words, sentences), whether the test is performed in noise or in quiet, how the CI speech processor is set, the level of presentation, etc. In addition to possible procedural / analysis issues that might interfere with our ability to observe a relationship between spatial resolution and speech perception, the role of more central processing has yet to be completely understood.

Electrode Discrimination

Psychophysical: The most straightforward assessment of electrode discrimination involves presenting pulse trains to different electrodes and asking listeners to make a same-different decision (Busby and Clark, 1996; Zwolan et al., 1997; Throckmorton and Collins, 1999; Henry et al., 2000). A limitation of a same-different distinction is that it doesn't isolate a person's sensitivity to spatial cues and can easily be influenced by other cues, such as loudness or timbre differences across electrodes (e.g. Busby and Clark, 1996; Henry et al., 2000). Other measurements similar to electrode discrimination include electrode trajectory discrimination (Busby and Clark, 1996), pitch scaling (Collins et al., 1997), and pitch ranking / labeling (Nelson et al., 1995; Collins et al., 1997; Hughes and Abbas, 2006a,b).
In addition to reflecting a person's ability to discriminate between two places of stimulation, these methods are also influenced by place-pitch organization along the electrode array. For example, electrode trajectory discrimination involves stimulating electrodes in a basal-to-apical or apical-to-basal direction. Listeners are given a reference trajectory and must decide whether a test trajectory is in the same or opposite direction. Similarly, pitch ranking requires listeners to determine whether a test stimulus is higher or lower in pitch than the reference stimulus, and pitch scaling requires listeners to order electrode stimulation based on pitch percepts. Electrode discrimination can be derived from measures of pitch scaling and pitch ranking, but the derivations are not equivalent to the direct measures (Collins et al., 1997).

Electrophysiological: There are no electrophysiological correlates of pitch ranking and pitch scaling; however, electrophysiological correlates of electrode discrimination are available. One measure is calculated from pairs of ECAP channel interaction functions (Hughes, 2008). Additionally, both the ACC and MMN have been evoked using electrode-discrimination paradigms. Hughes (2008) introduced a metric for analyzing pairs of ECAP channel interaction functions: the channel separation index. As the name implies, the channel separation index quantifies differences between the channel interaction functions associated with two probe electrodes. Areas of overlap between the two functions and absolute characteristics of the individual functions are ignored. When channel interaction functions were analyzed with this method, significant correlations with psychophysical measures of electrode discrimination (specifically, pitch ranking) were observed (Hughes, 2008). The channel separation index is not in frequent use, but we used it in this study because one goal was to compare ECAP channel interaction functions (a peripheral measure of spatial resolution) with the ACC evoked with an electrode-discrimination paradigm (a central measure of spatial resolution). Wable and colleagues (2000) elicited the MMN by presenting stimulation on electrode pairs within an oddball paradigm. Electrode spacing was varied between 1, 3, and 5 electrodes.
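The channel separation index described above can be sketched computationally. The formalization below is one plausible reading of the idea (point-by-point differences between peak-normalized functions); the exact computation is specified in Hughes (2008), and the interaction functions here are synthetic Gaussians, not measured data.

```python
import numpy as np

def channel_separation_index(func_a, func_b):
    """Quantify the separation between two ECAP channel interaction
    functions measured across the same set of masker electrodes.
    Each function is first normalized to its own peak, so absolute
    amplitude differences between probes are ignored; the index then
    sums the point-by-point differences, so fully overlapping functions
    score 0 and well-separated functions score high."""
    a = np.asarray(func_a, dtype=float)
    b = np.asarray(func_b, dtype=float)
    a = a / a.max()
    b = b / b.max()
    return float(np.sum(np.abs(a - b)))

# Hypothetical interaction functions across 22 masker electrodes,
# each peaked at its probe electrode:
maskers = np.arange(1, 23)
probe9 = np.exp(-0.5 * ((maskers - 9) / 2.0) ** 2)
probe10 = np.exp(-0.5 * ((maskers - 10) / 2.0) ** 2)
probe15 = np.exp(-0.5 * ((maskers - 15) / 2.0) ** 2)

# Widely spaced probes separate more than adjacent ones:
assert channel_separation_index(probe9, probe15) > \
       channel_separation_index(probe9, probe10)
```

Because each function is normalized to its own maximum before the comparison, the index is insensitive to overall response amplitude, consistent with the description that absolute characteristics of the individual functions are ignored.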
The size of the response to the deviant stimulus reflected electrode spacing; larger responses were observed when the two stimulated electrodes were farther apart, suggesting that they were more easily discriminable. Speech perception abilities were evaluated in the participants, but no significant correlations were observed with either amplitude or latency of the MMN. When using an electrode-discrimination ACC paradigm, pulse trains are presented to one electrode for a specified amount of time and then directed to a different electrode (Brown et al., 2008). Like the response to the deviant within an oddball paradigm, the amplitude of the ACC N1-P2 potential increases as the distance between electrode pairs increases (Brown et al., 2008). The amplitude and latency of the ACC N1-P2 potential evoked with adjacent electrodes are significantly correlated with psychophysical electrode discrimination abilities (Hoppe et al., 2010).

General Findings: From the psychophysical data, we see that there tends to be an orderly progression of pitch percepts as electrical stimulation is moved along the length of the array. However, stimulation regions across which pitch doesn't change, as well as pitch reversals, are sometimes noted (Nelson et al., 1995; Busby and Clark, 1996; Collins et al., 1997). Both psychophysical and electrophysiological data are consistent in showing that electrode discrimination tends to improve as the spatial distance between electrodes increases (e.g. Nelson et al., 1995; Busby and Clark, 1996; Collins et al., 1997; Wable et al., 2000; Hughes, 2008; Brown et al., 2008). Cochlear implant speech processors can be modified with respect to frequency-to-electrode mapping and assignment of electrodes to processing channels. A few investigators have evaluated the clinical utility of pitch perception and psychophysical measures of electrode discrimination for optimizing stimulation parameters for individual CI users. Collins et al. (1997) investigated the effects of both removing indiscriminable electrodes and reordering the frequency-to-electrode map based on pitch perception. Unfortunately, declines in speech perception were observed more often than improvements. In contrast to these findings, Zwolan et al.
(1997) found that removing indiscriminable electrodes from CI programs improved speech perception more often than not. Besides the conflicting results, the behavioral methods used to evaluate discriminability among electrodes are time-consuming and not optimally suited for the clinic. Significant relationships between pitch perception or psychophysical electrode discrimination and speech perception have been observed, but inconsistently. In contrast with the forward-masking data, however, more studies have found significant correlations (Nelson et al., 1995; Collins et al., 1997; Throckmorton and Collins, 1999; Henry et al., 2000) than have not (Zwolan et al., 1997; Hughes and Abbas, 2006a). The limitations previously discussed when attempting to correlate forward masking and speech perception apply here as well. Neither ECAP channel separation indices nor the ACC elicited with an electrode-discrimination paradigm has been compared with speech perception to date.

Thresholds Using Tripolar Stimulation

Psychophysical: A more recently developed method to assess spatial selectivity within an individual is to measure behavioral thresholds across the electrode array using a highly focused, tripolar (TP) stimulation mode. Bierer (2007) demonstrated that individuals with more variable thresholds tended to have poorer speech perception abilities. A follow-up study confirmed the suspicion that electrodes associated with high TP thresholds also exhibited broad psychophysical tuning curves (Bierer and Faulkner, 2010). The first author is currently comparing speech perception abilities when electrodes with relatively high or low TP thresholds are deactivated, and preliminary results are promising (Bierer, 2013).

Electrophysiological: Bierer, Faulkner and Tremblay (2011) explored using TP stimulation to evoke the ABR in CI users. EABR thresholds were significantly correlated with behavioral thresholds, indicating that this objective measure could be used in place of the psychophysical methods to evaluate spatial selectivity across the electrode array.
However, although the EABR is objective, the authors note that clinical applications may be limited by the fact that EABR measures are more time-consuming than the behavioral measures.

Measures of Spectral Resolution in CI Users

In CI users, measures of spatial resolution can be distinguished from measures of spectral resolution in that stimulation is achieved by directly controlling electrode output in the former and by presenting stimuli through the speech processor in the latter. One benefit of evaluating spectral resolution, especially with spectrally complex signals, is that the stimuli are more similar to speech signals, so the results might be more directly related to speech perception than measures of spatial resolution are. One complex signal that has been used to evaluate spectral resolution in CI users is spectrally rippled or comb-filtered noise. Rippled noise contains multiple frequency components with systematically varied amplitudes. The spectrum appears rippled or comb-like due to the regular spacing of amplitude peaks and troughs (Supin et al., 1994). The exact shape of the spectral envelope (sinusoidal versus rectified), the frequency spacing of the peaks and troughs (linear versus logarithmic) and the level spacing (linear versus logarithmic amplitude or power) vary across studies in CI users (Henry and Turner, 2003; Henry et al., 2005; Won et al., 2007; Litvak et al., 2007; Berenstein et al., 2008; Saoji et al., 2009; Drennan et al., 2010; Anderson et al., 2011; Spahr et al., 2011; Won et al., 2011a,b,c). Figure 1 highlights the difference between sinusoidal (solid line) and rectified (dotted line) spectral envelopes. In this example, the frequency spacing is logarithmic, with one peak and trough every two octaves (i.e. 0.5 ripples per octave). The level spacing is logarithmic, with a 30 dB difference between peak and trough amplitudes. Regardless of which specific options are chosen when generating ripple stimuli, in the time domain the waveform is irregular and noise-like, with minimal temporal structure.
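A rippled noise of the kind just described can be sketched by summing many random-phase tones whose dB amplitudes follow a sinusoid on a log-frequency axis. The parameter values and implementation details below (sampling rate, bandwidth, number of components) are illustrative choices, not those used in the studies cited.

```python
import numpy as np

def rippled_noise(fs=16000, dur=0.5, f_lo=250.0, f_hi=8000.0,
                  rpo=0.5, depth_db=30.0, phase=0.0, n_comp=400, seed=1):
    """Sketch of a spectrally rippled noise: random-phase tones with
    amplitudes that vary sinusoidally (in dB) along a log-frequency axis.
    rpo      = ripples per octave
    depth_db = peak-to-trough level difference
    phase    = ripple phase (pi swaps the peak and trough locations)
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    freqs = np.logspace(np.log2(f_lo), np.log2(f_hi), n_comp, base=2.0)
    octaves = np.log2(freqs / f_lo)
    # Sinusoidal spectral envelope in dB; a rectified variant would use
    # np.abs(np.sin(...)) instead.
    env_db = (depth_db / 2.0) * np.sin(2 * np.pi * rpo * octaves + phase)
    amps = 10.0 ** (env_db / 20.0)
    phases = rng.uniform(0, 2 * np.pi, n_comp)
    sig = (amps[:, None] *
           np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None])).sum(axis=0)
    return sig / np.max(np.abs(sig))
```

Because the component phases are random, the time waveform is irregular and noise-like even though the spectral envelope is highly regular, consistent with the description above.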
Spectral resolution can be evaluated by varying the frequency spacing (ripple density) and / or the amplitude difference (ripple depth) between peaks and troughs (Supin et al., 1994). Here we will review both paradigms.

Spectral Ripple Density

Psychophysical: Spectral ripple density discrimination can be assessed psychophysically using a phase-reversal test (Supin et al., 1994). For a given spectral density, two stimuli are created. The stimuli are identical, except that the locations of the amplitude peaks and troughs are reversed in the two spectra. Within an alternative forced-choice procedure, listeners must determine which stimulus is different. When responses are correct, the ripple density is increased; when incorrect, the ripple density is decreased. Ripple density thresholds are generally poorer in CI users than in either hearing-aid users or normal-hearing listeners (e.g. Henry et al., 2005), and poorer peripheral spatial resolution has been proposed as an underlying source of this difference. Henry and Turner (2003) evaluated spectral resolution abilities in CI users and in normal-hearing listeners using simulations of CI processing. For normal-hearing listeners, ripple density discrimination improved as the number of simulated electrodes was increased. For the CI users, ripple density discrimination improved as the number of electrodes was increased up to about six, after which average performance plateaued even though spectral resolution (the number of electrodes) nominally continued to increase. Although some CI users demonstrated the ability to use more than six channels of information for the spectral resolution task, most did not. The relatively poorer ripple density discrimination in CI users compared with normal-hearing listeners using simulations was interpreted as possibly reflecting poorer underlying spatial resolution. The presumed relationship between spatial resolution and ripple density discrimination is supported in the literature. Anderson and colleagues (2011) demonstrated significant correlations between spatial tuning curves and ripple density discrimination in CI users when limiting ripple bandwidth to an octave centered at the tuning curve probe frequency.
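The correct-goes-up / incorrect-goes-down rule described above can be sketched as a simple adaptive staircase. The geometric step size, reversal count, and the simulated deterministic listener below are illustrative assumptions, not the procedure used in any of the cited studies.

```python
def ripple_density_staircase(respond, start=0.25, step=2 ** 0.5,
                             n_reversals=8):
    """Minimal up/down staircase: ripple density rises after a correct
    phase-reversal discrimination and falls after an error; the threshold
    estimate is the mean of the last few reversal densities.
    `respond(density)` -> True/False plays the role of the listener."""
    density = start
    last_correct = None
    reversals = []
    while len(reversals) < n_reversals:
        correct = respond(density)
        if last_correct is not None and correct != last_correct:
            reversals.append(density)
        last_correct = correct
        density = density * step if correct else density / step
    return sum(reversals[-4:]) / 4.0

# Deterministic stand-in listener who resolves densities below ~1.8 rpo;
# the track converges between the step values bracketing that limit.
threshold = ripple_density_staircase(lambda d: d < 1.8)
```

A 1-up/1-down rule like this converges on the density where performance flips between correct and incorrect; real procedures typically use n-alternative forced choice and more sophisticated tracking rules.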
Jones and colleagues (2013) made extensive behavioral measures of interactions between adjacent electrodes and those spaced three and five electrodes apart. They found highly significant correlations between the average interactions across electrodes and spectral ripple discrimination. Studies investigating the relationship between ripple density discrimination and speech perception in CI users are overwhelmingly positive. Significant correlations have been observed for vowels, consonants, phonemes, and words in quiet and in noise (Henry and Turner, 2003; Henry et al., 2005; Won et al., 2007; Berenstein et al., 2008; Anderson et al., 2011; Won et al., 2011b). A few studies have not found significant correlations for all of the speech measures they examined. For example, although Berenstein et al. (2008) demonstrated significant correlations between ripple density and speech in quiet and in steady-state noise, they did not observe significant correlations with fluctuating noise. Anderson et al. (2011) demonstrated significant correlations between ripple density and word recognition in quiet, but not with vowel perception in quiet or vowels and words in steady-state noise. Even so, the results indicate that the ripple stimulus and density paradigm provide a relatively robust measure of performance. Although the measure is useful for comparing performance across CI users, there is also interest in using it to evaluate how successful certain CI processing techniques are at improving spectral resolution within an individual (tripolar stimulation and virtual channels: Berenstein et al., 2008; virtual channels: Drennan et al., 2010). Berenstein et al. (2008) observed significantly better ripple density discrimination for tripolar stimulation compared with monopolar stimulation, but virtual channel stimulation did not improve ripple density discrimination. In contrast, Drennan et al. (2010) observed significant improvements in ripple density discrimination with virtual channel stimulation. Interestingly, the effects of processing mode on speech perception were not consistent. Drennan et al. (2010) did not observe any improvements in speech perception in quiet or steady-state noise using virtual channel processing and concluded that ripple density appears to be a more sensitive measure than speech perception.
Electrophysiological: Electrophysiological measures using ripple density discrimination paradigms have been reported in only one study to date. Won et al. (2011a) demonstrated the feasibility of eliciting the ACC by changing the location of spectral peaks and troughs. Rippled noise was presented for 2.5 s, after which the spectral phase was inverted for the remaining 2.5 s. Change responses were successfully elicited in normal-hearing individuals for several ripple densities under CI simulations in which the number of electrodes was varied. Similar to the behavioral data of Henry and Turner (2003), ACC N1-P2 amplitudes were affected by the number of spectral channels; larger amplitudes occurred with greater numbers of activated channels. The ACC amplitudes were significantly correlated with behavioral discrimination results (Won et al., 2011a).

Spectral Ripple Depth

Psychophysical: As an alternative to varying the frequency spacing between the spectral peaks and troughs of ripple stimuli, the peak-to-trough level differences (depth) can be manipulated. As ripple depth decreases, the spectrum flattens. With ripple depth paradigms, listeners are often required to discriminate between stimuli with rippled versus flat spectra. Ripple depth thresholds have been evaluated in CI users (Saoji et al., 2009; Spahr et al., 2011) and in normal-hearing listeners under CI simulations (Litvak et al., 2007) using psychophysical alternative forced-choice procedures. Similar to observations with ripple density discrimination, significant correlations between ripple depth thresholds and speech perception have been documented (Litvak et al., 2007; Saoji et al., 2009; Spahr et al., 2011). One decision investigators face with the ripple depth paradigm is the choice of ripple density. Saoji and colleagues (2009) evaluated ripple depth thresholds in CI users for ripple densities of 0.25, 0.5, 1 and 2 spectral peaks per octave. In general, a greater ripple depth was necessary for detection as ripple density increased. The strongest predictors of vowel and consonant recognition were ripple depth thresholds at 0.25 and 0.5 ripples per octave, respectively, but ripple depth thresholds at 0.5 ripples per octave resulted in the highest overall correlation with these two speech measures.
Spahr and colleagues (2011) evaluated both ripple density and bandwidth, and concluded that the best stimulus parameters may be dependent upon the specific speech measure. The correlation between low density (0.25 and 0.5 rpo) ripple depth discrimination and

46 32 sentence recognition in quiet and noise was significant, consistent with the results of Saoji et al. (2009). Similar to ripple density, ripple depth discrimination appears to reflect spatial resolution. Litvak et al. (2007) simulated various degrees of spatial resolution in normalhearing listeners by changing filter slopes of a 15-channel vocoder. Ripple depth thresholds were obtained under four vocoder conditions with different degrees of channel overlap due to the filter slope. Based on the findings of Saoji et al (2009), ripple densities of 0.25 and 0.5 were chosen for ripple depth manipulations. As spatial resolution was degraded by the vocoder, ripple depth thresholds increased (worsened) and speech perception decreased. Consistent with the findings of Saoji et al. (2009) in CI users, ripple depth thresholds averaged across 0.25 and 0.5 ripples per octave were strongly related to both vowel and consonant perception abilities. Additionally, the spatial resolution simulations resulted in vowel perception scores from the normal-hearing listeners that were qualitatively and quantitatively similar to the CI data in Saoji et al. (2009). The consonant data from the normal-hearing listeners was similar to that of the CI users, but did not fully match, suggesting that other deficits besides spatial resolution were responsible for degraded consonant identification abilities. Electrophysiological: There are no published studies to this author s knowledge showing the use of a ripple depth paradigm to elicit an electrophysiological response. We evoked the ACC with a ripple depth paradigm using a ripple density 0.5 rpo. Instead of a flat-to-modulated discrimination paradigm, as has been used for psychophysical tests (Litvak et al., 2007; Saoji et al., 2009; Spahr et al., 2011), a phase-inversion paradigm was used to elicit the ACC. Additional details are provided in Chapter III.
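A phase-inversion change stimulus of the kind used to evoke the ACC can be sketched by concatenating a rippled-noise segment with a copy whose spectral-envelope phase is shifted by pi, so that the peak and trough locations swap at the midpoint while the tone components themselves are unchanged. All parameter values here are illustrative, and this sketch is not the actual stimulus-generation code used in the study.

```python
import numpy as np

def acc_phase_inversion_stimulus(fs=16000, seg_dur=2.5, rpo=0.5,
                                 depth_db=30.0, f_lo=250.0, f_hi=8000.0,
                                 n_comp=200, seed=3):
    """Sketch of an ACC change stimulus: rippled noise for seg_dur
    seconds, then the same noise with the spectral envelope phase
    inverted (peaks and troughs swapped).  Component phases are shared
    across segments so only the envelope changes at the midpoint."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * seg_dur)) / fs
    freqs = np.logspace(np.log2(f_lo), np.log2(f_hi), n_comp, base=2.0)
    octs = np.log2(freqs / f_lo)
    phases = rng.uniform(0, 2 * np.pi, n_comp)

    def segment(env_phase):
        env_db = (depth_db / 2.0) * np.sin(2 * np.pi * rpo * octs + env_phase)
        amps = 10.0 ** (env_db / 20.0)
        return (amps[:, None] *
                np.sin(2 * np.pi * freqs[:, None] * t
                       + phases[:, None])).sum(axis=0)

    sig = np.concatenate([segment(0.0), segment(np.pi)])
    return sig / np.max(np.abs(sig))
```

Keeping the component phases fixed across the two segments means the only acoustic change at the midpoint is the spectral envelope, which is the change the ACC is intended to index.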

Figure 1. Comparison of Rectified and Sinusoidal Spectral Envelopes. The spectral envelope of a rippled noise stimulus can take the shape of a sinusoid (solid line) or rectified sinusoid (dotted line). In these two examples, the amplitude variations result in spectral peaks at 1000 and 4000 Hz and spectral troughs at 500 and 2000 Hz (0.5 rpo).

CHAPTER III
METHODOLOGY

General Experiment Overview

This study was designed to systematically investigate auditory processing of simple and complex stimuli from the periphery to the cortex within CI users and to evaluate the ability of electrophysiological measures to predict speech perception within and across CI users. Two main premises of this study were (1) that peripheral spatial resolution underlies spectral resolution abilities and ultimately speech perception, and (2) that there may also be differences in central processing among individuals which are independent of peripheral processing. Peripheral spatial resolution was assessed using ECAP channel interaction functions, and the relationship between peripheral and central processing within an individual was assessed by comparing ECAP channel interaction functions with the ACC evoked by sequentially stimulating two electrodes. Spectral resolution was assessed by evoking the ACC with a spectral ripple depth detection paradigm. Speech perception was assessed using measures of vowel discrimination (/h/-vowel-/d/; Hillenbrand et al., 1995) and word recognition in noise (BKB-SIN; Etymōtic Research, 2005). The general design of the within-subject portion of this study was modeled after Won et al. (2011c) and involved manipulating the speech processor settings to effectively change spatial / spectral resolution within individual CI users. Specifically, those investigators increased the space between activated electrodes, presumably decreasing the likelihood of interaction among them. As expected, spectral resolution abilities were better when listeners used the programs with more space between activated electrodes than when the activated electrodes were adjacent. We created three experimental programs, or MAPs, using seven of the available twenty-two intracochlear electrodes of the Nucleus Contour Advance arrays. Like Won et al. (2011c), each MAP was created with a different spacing (0, 1, or 2 non-activated electrodes) between the activated electrodes, as shown in Table 1A. Activated electrodes are indicated by a gray background. The potential for interaction among stimulated electrodes was greatest with MAP 1 (adjacent active electrodes) and least with MAP 3 (activation of every third electrode). Thirteen different electrodes were activated in at least one experimental program, and these thirteen electrodes are referred to as the core set of electrodes for this study. In order to characterize the spatial resolution of the periphery as fully as possible for each participant, ECAP channel interaction functions were generated using each core electrode as a probe electrode. These measures allowed us to directly evaluate whether the experimental design had the desired effect (i.e. whether increasing the space between activated electrodes also increased peripheral spatial resolution), which was inferred by Won et al. (2011c). We were also able to quantify the size of the effect within each person. The spatial ACC was elicited using thirteen pairs drawn from the thirteen core electrodes. The spacing between each pair ranged from 0 to 9 electrodes. Measures of spectral resolution and speech perception were repeated three times for each participant: once for every experimental MAP. The outcome measures of this study were used to evaluate the relationships between (1) peripheral (ECAP channel interaction) and central (spatial ACC) processing of simple stimuli; (2) the processing of simple stimulation (ECAP channel interaction and spatial ACC) and complex stimulation (spectral ACC); and (3) electrophysiological measures of spatial / spectral resolution (ECAP channel interaction, spatial ACC, spectral ACC) and speech perception (vowels and words within a noise background).

Participants

Eleven adult recipients of either a Nucleus CI24RE or CI512 device (Cochlear Ltd., Lane Cove, Australia) participated in this study. These two internal devices contain the same amplifier, resulting in similar noise floors for peripheral electrophysiological measures. All participants were implanted with the Contour Advance intracochlear electrode array. This array is pre-curved with twenty-two half-banded electrodes numbered from base (1) to apex (22). Two extracochlear electrodes (MP1: positioned under the temporalis muscle; MP2: located on the case of the receiver-stimulator) are available for monopolar stimulation modes (Hughes, 2012). All participants were native English speakers and had more than one year of experience with their device. Two individuals (E51 and F2L) had progressive hearing loss which was identified during childhood and initially managed with hearing-aid use. All other participants had histories consistent with post-lingual deafness. Additional demographic information and details about the participants' clinical CI processor settings are included in Table 2. This study was conducted in accordance with guidelines set forth by the Institutional Review Board of the University of Iowa.

Core Electrodes

For ten of the eleven participants the core electrodes were 3, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18 and 21 (Table 1A). For one individual (F19R), ECAPs were small or absent when stimulating electrodes 3-6 within loudness tolerance. Therefore, the set of core electrodes was modified: electrode 2 was used instead of electrode 3, and all other electrodes included in the core set were shifted one electrode apically relative to the other participants. The thirteen core electrodes for F19R were 2, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19 and 22 (Table 1B). These core electrodes resulted in the same relative spacing between activated electrodes across the three experimental MAPs, with one exception: the spacing between the most basal pair of activated electrodes for MAP 3 (electrodes 2 and 7) was 5 electrodes.
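The spacing rule can be checked arithmetically. The specific electrode assignments below are a hypothetical reconstruction consistent with the stated rule (seven activated electrodes with 0, 1, or 2 non-activated electrodes between them) and with the stated thirteen-electrode core set; the actual assignments are given in Table 1A, which is not reproduced here.

```python
# Hypothetical reconstruction of the three experimental MAPs (the
# actual electrode assignments are in Table 1A of the dissertation):
map1 = list(range(9, 16))      # MAP 1: seven adjacent electrodes
map2 = list(range(6, 19, 2))   # MAP 2: every other electrode
map3 = list(range(3, 22, 3))   # MAP 3: every third electrode

# Each MAP activates seven electrodes.
assert all(len(m) == 7 for m in (map1, map2, map3))

# The union of the three MAPs recovers the stated core set.
core = sorted(set(map1) | set(map2) | set(map3))
assert core == [3, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 21]
```

That the union of these three spacings yields exactly thirteen electrodes matching the core set listed above is what makes this reconstruction plausible.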
Speech Processor Settings

A laboratory Freedom™ speech processor (SN: ) and headpiece (SN: ) were used for all electrophysiological and speech perception measurements.

The three seven-electrode experimental programs used for the spectral resolution and speech perception measurements were set with identical processing parameters. No preprocessing strategies were used (i.e. the Environment was set to "none"). The processing strategy was simulated CIS (Advanced Combination Encoder (ACE) with seven maxima), using a 900 pulses per second (pps) stimulation rate and a 25 µsec per phase pulse width with an 8 µsec interphase gap. The stimulation mode was monopolar (MP), referenced to MP1. This reference was chosen to match the stimulation mode used for ECAP measurements. The overall bandwidth was Hz (compared to the default bandwidth: Hz). The bandwidths of each nth channel were identical across all three MAPs. For example, the highest-frequency channel, with filter cutoffs of Hz, was assigned to the most basal electrode in all three MAPs (Table 1C, color online). The reduced overall bandwidth was chosen to match the bandwidth of the noise used for the spectral ACC measurements.

Stimulation Level

Issues Related to Setting Appropriate Stimulus Levels: Ideally, all outcome measures obtained for this study would be elicited with the same stimulus level, because spatial spread of excitation / channel interaction (e.g. Cohen et al., 2003, 2004; Abbas et al., 2004; Eisen and Franck, 2005; Hughes and Stille, 2010; van der Beek et al., 2012) and speech perception abilities (Firszt et al., 2004) can be affected by stimulation level. An inherent issue with trying to equate stimulation levels across all outcome measures is that the spatial electrophysiological measures are obtained at fixed, suprathreshold levels, which we controlled by directly specifying electrode output. The spectrally complex stimuli used in this study fluctuate in level. These stimuli were delivered through the processor, and the settings of the processor influenced electrode output. Another complicating factor is due to the different stimulation rates used for the different measures.
A relatively low stimulation rate (80 pps) was used to elicit ECAPs, but a higher stimulation rate (900 pps) was used for the ACC and speech perception measures. Stimulation rate has an effect on perceptual loudness, and threshold and uncomfortable loudness perceptions tend to be associated with lower current levels at fast rates (Potts et al., 2007). In other words, a higher current level would be needed for a low-rate stimulus to sound equally loud to a high-rate stimulus. A third issue is that the threshold (T) and comfort (C) levels used for setting the minimum and maximum current-level output of the implant are often adjusted when multiple electrodes are stimulated together. Specifically, C level is often decreased due to loudness summation (Potts et al., 2007). Based on these two factors combined, it is not surprising that current levels associated with low-rate ECAP thresholds often fall between the current levels associated with MAP T and C levels and sometimes exceed C level (e.g. Brown et al., 2000; Hughes et al., 2000a,b). ECAP channel interaction measures are necessarily obtained using higher current levels than those associated with ECAP thresholds, and for all but one participant in this study, ECAP channel interaction measures were obtained using higher current levels than those used during the spectral ACC and speech perception measures. Although we could not equate stimulation levels across outcome measures, we did attempt to retain the relative differences in stimulation levels across electrodes required to produce equal loudness in each person. The procedures that follow describe how these stimulation levels were determined.

General Procedures for Setting Electrode Output

An ascending method was used to find current levels associated with both threshold (T) and uncomfortable (C) loudness for all 22 electrodes. Custom Sound (version 3.2) software was used to present 500 ms bursts of biphasic pulses. Pulse rate was 900 pps.
The pulses were 25 µsec per phase with an 8 µsec interphase gap, and the stimulation mode was monopolar (MP1). Stimulation level was raised in 5 current-level (CL) steps, and participants indicated perceived loudness on a scale from 0 (no sound) to 10 (too loud) (Advanced Bionics, 2004). Threshold level was considered the highest CL resulting in a rating of 2, due to limited loudness growth at low levels for a number of subjects. For example, one person rated current levels 95 through 100 on electrode 12 as 1 (just noticeable), and current levels of 105 through 135 CL were rated as 2 (very soft). For this individual, T level for electrode 12 was set at 135 CL. Single-electrode C level was considered the CL rated as a 10 or the highest possible output before reaching the compliance (voltage) limits of the device. Loudness balancing was performed across the electrode array. Groups of four electrodes were sequentially stimulated in an apical-to-basal direction 5 or 10 CL below C level, and adjustments were made to individual electrode C levels if necessary. Groups of four electrodes were also stimulated sequentially in an apical-to-basal direction at 25% of the dynamic range, and T levels were adjusted as needed to achieve equal loudness. Cortical auditory evoked potentials evoked with the electrode-discrimination paradigm for the spatial resolution measures were elicited by stimulating each electrode at 80% of its dynamic range between the single-electrode, loudness-balanced T and C levels using a 900 pps rate.

Fine Tuning Stimulation Levels for ECAP

A screening was performed to evaluate whether ECAPs could be observed across all 22 electrodes. Stimulation level for each electrode was the single-electrode, loudness-balanced C level obtained with 900 pps stimuli, even though a rate of 80 pps was used to elicit ECAPs. The Custom Sound EP (version 3.2) Neural Response Telemetry (NRT) system was used to elicit and record ECAPs when masker and probe stimulation was presented to the same electrode. If ECAPs were not present or if amplitudes were < 30 µV, masker and probe levels were increased until either an ECAP of sufficient size was observed or until stimulation levels were perceived as uncomfortable or compliance limits were reached.
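The 80%-of-dynamic-range rule described above for the spatial-ACC stimulation amounts to a simple interpolation between the T and C current levels. The T and C values in the example below are illustrative, not measured.

```python
def stim_level_cl(t_level, c_level, fraction=0.8):
    """Current level at a given fraction of the dynamic range between
    loudness-balanced T and C levels (clinical current-level units),
    rounded to the nearest programmable step."""
    return round(t_level + fraction * (c_level - t_level))

# Illustrative values: T = 135 CL, C = 200 CL.
assert stim_level_cl(135, 200) == 187
```

Expressing the level as a fraction of each electrode's own dynamic range, rather than as a fixed CL, is what preserves the relative across-electrode loudness differences described above.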
It was necessary to decrease stimulation level across the electrode array for participant F2L because of stimulus artifact contaminating the ECAP recordings. ECAP amplitudes remained > 30 µV at the reduced level. For participant F19R, ECAPs were small or absent for many of the basal electrodes, even at maximal stimulation levels. As noted previously, modifications to the specific electrodes considered part of the core set were made to address this issue (Table 1B). Other parameters (e.g. gain, delay, and recording electrode) were adjusted on an individual basis to obtain clean ECAP recordings (Abbas et al., 1999; Dillier et al., 2002). These individually optimized stimulation / recording parameters were used when performing the ECAP channel interaction measures. The adjusted single-electrode C levels were used for both masker and probe stimulation.

Fine Tuning C Levels for Spectral ACC and Speech Testing

After T and C levels were set for each electrode individually, the thirteen core electrodes were activated together using the "Go live" tool within Custom Sound (version 3.2). An ACE processing strategy with 7 maxima was used for stimulation. The examiner read portions of The Rainbow Passage (Fairbanks, 1940), and C levels were globally decreased until the overall level was considered most comfortable by the listener. This procedure was repeated for the three experimental seven-electrode MAPs.

Presentation Levels for Spectral ACC and Speech Testing

We presented complex stimuli (rippled noise and vowels) via the direct audio input port on the CI speech processor. Presenting stimuli in this manner eliminated the need to test within a sound booth. An additional benefit of direct audio input is that it eliminates the need to control for head movements or exact position relative to a loudspeaker during lengthy testing and testing across multiple visits. For all three experimental programs, microphone sensitivity was set to 0 and the accessory mixing ratio to 10:1 to ensure that stimulation was solely from the direct audio input port. Participants did not adjust the volume for any of the measures. Stimulation level through the direct audio input port is specified here in terms of its acoustic equivalent.
Equivalence was determined as the level through the direct audio input port that resulted in the same electrode output as observed when the signal was

presented in the soundfield and processed through the implant microphone. The procedure for determining equivalence is detailed in Appendix A. We presented the rippled noise and vowel stimuli at an approximate 55 or 60 dBA equivalent level. The relatively low level for the rippled noise and vowel stimuli was chosen to avoid potential ceiling effects with the vowel discrimination task. This level is similar to the 60 dB SPL presentation level Litvak et al. (2007) and Saoji et al. (2009) used for rippled noise, vowel, and consonant stimuli, although in those studies, CI users were allowed to adjust the volume to a comfortable level. The standard BKB-SIN test (Etymōtic Research, 2005) was administered at an approximate 65 dBA equivalent level. ECAP Channel Interaction Functions Measurement ECAP channel interaction functions were obtained using Custom Sound EP (version 3.2) NRT software and the forward-masking subtraction paradigm described in Chapter II (e.g. Cohen et al., 2003; Abbas et al., 2004). All ECAP channel interaction functions were obtained with the probe stimulus referenced to MP1 and the masker stimulus referenced to MP2. The masker-probe interval was 400 µsec. Typically the recording electrode was two electrodes apical to the probe for basal maskers and two electrodes basal to the probe for apical maskers. The optimal recording delay (in µsec) and amplifier gain (in dB) varied across each person / probe electrode. One hundred sweeps were averaged for each masker-probe pair. Individually optimized stimulation and recording parameters determined at the time of the ECAP screening were entered into Microsoft Excel (2010) templates for the channel interaction measures. Each template was arranged to obtain a series of ECAP measurements. For each series, the location of the probe electrode was fixed and the location of the masker electrode was varied along the length of the array.
In other words, the series of waveforms used to generate the channel interaction function for a single probe electrode was collected together before another core electrode was used as the probe.

The order of the masker and probe electrodes was randomized. The parameter templates were imported into Custom Sound EP (version 3.2) as comma-separated value (.csv) text files. This partial automation of data collection allowed us to perform ECAP channel interaction functions on all thirteen probe electrodes in less than one hour. An example of one series of ECAP waveforms is provided in Figure 2. The waveforms are offset vertically by masker electrode and arranged in order from 1 (top) to 22 (bottom). Probe stimulation was presented on electrode 12 for this series. The negative and positive peaks of neural responses are marked with crosses. When the masker electrode is distant from the probe, no response is observed. Neural responses tend to increase as the masker electrode nears the probe. This response pattern is typical for channel interaction function measurements. Quantification For each waveform, ECAP amplitude was calculated as the difference between negative and positive peaks, which were manually set by the examiner using a custom MATLAB (MathWorks, 2012a) program. Across all participants and probe electrodes used in this study, ECAP amplitudes at the peak of the channel interaction function ranged from ~ 30 µV to over 600 µV. Measured ECAP amplitude can be affected by a number of factors, such as electrode impedance and the distance of the recording electrode from the excited neurons. These factors are considered extraneous for the purposes of this study and potentially could interfere with the comparisons of interest (peripheral neural excitation patterns) across CI users. In order to reduce the influence of these extraneous factors and to aid comparison across individuals, we normalized ECAP amplitude to the largest amplitude observed across all thirteen channel interaction functions for a given person (Hughes, 2008).
This type of normalization was preferred over normalizing to the largest amplitude within each function separately because it maintained relative differences in excitation patterns across the electrode array within each person.
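The normalization step can be sketched in a few lines. This is a minimal Python illustration (the original analysis used MATLAB); the array shape and function name are assumptions for the example.

```python
import numpy as np

def normalize_ecap(amplitudes):
    """Normalize ECAP amplitudes to the largest amplitude observed
    across ALL channel interaction functions for one participant.

    amplitudes : array of shape (n_probes, n_maskers), raw ECAP
        amplitudes in microvolts (13 probes x 22 maskers here).
    Returns the array scaled so its global maximum is 1.0, which
    preserves relative differences across the electrode array.
    """
    amplitudes = np.asarray(amplitudes, dtype=float)
    return amplitudes / amplitudes.max()

# Example: two toy channel interaction functions. The global maximum
# (50 uV) maps to 1.0; every other value keeps its proportion to it.
raw = np.array([[10.0, 40.0, 20.0],
                [5.0, 25.0, 50.0]])
norm = normalize_ecap(raw)
```

Normalizing within each function separately would instead force every function's peak to 1.0, discarding the across-array amplitude differences this study wanted to retain.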

ECAP channel separation indices were calculated with the following equation:

CSI(x,y) = (1/n) · Σ(i = 1 to 22) |a(x,i) − a(y,i)|   (1)

For a pair of ECAP channel interaction functions associated with probes x and y, the absolute differences in normalized ECAP amplitudes (a) for each masker electrode (i = 1 to 22) were summed (Hughes, 2008). We added a division by the total number of masker electrodes (n = 22) to the original equation. Four example pairs of channel interaction functions from participant E55R are shown in Figure 3. Probe electrodes are indicated by the dotted gray lines. The maximum normalized ECAP amplitude is < 1.0 for the channel interaction functions shown here, which reflects the normalization procedure. For this individual, the highest ECAP amplitude occurred at the peak of the channel interaction function for probe 16 (not shown). Although any two of the channel interaction functions can be paired to calculate a channel separation index, this figure shows the index when probe 12 was always included. The second probe electrode is varied from 11 to 8 (see legends). Amplitude differences between the two functions are indicated by solid black lines. The average of those differences, that is, the channel separation index (CSI), is displayed in each panel. The index increases as the two probe electrodes are separated. A notable quality of the channel separation index is that two channel interaction functions that are entirely separate (no overlap) will rarely have an index of 1.0. Although an index of 1.0 is theoretically possible, it is not likely to be observed empirically. Additionally, there were instances within this data set when the channel separation index associated with two non-overlapping channel interaction functions was smaller than the index associated with two overlapping channel interaction functions. This is because the channel separation index is sensitive to the overall extent of neural excitation, as indicated by the breadth of the channel interaction functions.
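The index calculation of Equation 1, the mean absolute difference between two normalized channel interaction functions, can be sketched as follows; a Python illustration (the original analysis used MATLAB) with illustrative variable names.

```python
import numpy as np

def channel_separation_index(a_x, a_y):
    """Channel separation index for probes x and y (Equation 1).

    a_x, a_y : normalized ECAP amplitudes for each masker electrode
        (length n = 22 in this study). The index is the mean absolute
        difference between the two channel interaction functions.
    """
    a_x = np.asarray(a_x, dtype=float)
    a_y = np.asarray(a_y, dtype=float)
    return np.abs(a_x - a_y).sum() / a_x.size

# Two identical functions give an index of 0; fully non-overlapping
# functions approach, but rarely reach, 1.0.
f1 = [0.0, 0.5, 1.0, 0.5, 0.0]
f2 = [0.0, 0.0, 0.5, 1.0, 0.5]
csi = channel_separation_index(f1, f2)
```

Because the index averages over every masker electrode, two broad overlapping functions can yield a larger index than two narrow non-overlapping ones, which is the breadth sensitivity discussed in the text.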
We did not view this as a limitation of the index. Instead, we viewed this index as a way to retain

information about the overall area of excitation resulting from electrode stimulation. Broader excitation areas might not always be detrimental to performance if excitation areas across electrodes are different. We explored quantifying the differences between two channel interaction functions as d′ values because this is a method often used in psychophysical studies to characterize discrimination abilities. Because preliminary analyses indicated that this method was less predictive than the channel separation index when data were pooled across participants, details are not provided in the body of this work. Interested readers are referred to Appendix B for more information. Cortical Auditory Evoked Potentials: ACC Recording Procedures Two differential recordings were obtained simultaneously using surface electrodes placed at the following locations: vertex (+) to contralateral mastoid (−), and vertex (+) to inion (−). An electrode placed off-center on the forehead served as the ground, and vertically placed electrodes above and below the eye were used to monitor eye blinks. At the beginning of each recording session, electrode impedance was less than 5 kΩ for each electrode, and impedances were within 2 kΩ across electrodes. Electroencephalographic (EEG) signals were routed to an OptiAmp 1.10 amplifier (Intelligent Hearing Systems). The amplifier bandpass filtered the EEG signals from 1 to 30 Hz and applied a gain of 10,000. The analog EEG signals were digitized with a National Instruments DAQ card (6062E) using a sampling rate of 25,000 samples per second. LabVIEW™ (National Instruments, 2009) was used for online display of the EEG activity and artifact rejection during recording, and for averaging and storing sets of 100 non-rejected sweeps for further offline analysis. For the first two participants in this study (E60 and F19R), the recording time window was 1200 ms.
For all remaining participants, the time window was extended to 1700 ms to allow an estimation of the noise floor during a 500 ms pre-stimulus baseline time window.

General Quantification We explored two primary methods of quantifying the onset and change responses: peak-picking (subjective) and calculating the root mean square (rms) amplitude over a specified time window (objective). Onset and change N1-P2 amplitudes were analyzed with a custom MATLAB (MathWorks, 2012a) peak-picking algorithm. Initially the program identified minima and maxima within a specified time window, but accuracy was visually confirmed and peak locations were modified if necessary by the examiner. A second examiner independently picked peaks on sets of printed waveforms. Peak locations were compared across examiners for qualitative agreement. The two examiners were in agreement > 80% of the time. When there was an initial disagreement, the examiners discussed the waveforms and came to a consensus. For participant F2L, the P1 component of the change response appeared more sensitive to the specific stimulus condition than P2. Thus, for this person, the reported peak amplitudes are P1-N1 instead of N1-P2. A benefit of calculating rms amplitude for a pre-determined time window is that it takes into account the breadth of the peaks in addition to their amplitude, and it does not require the experimenter to make decisions about the exact location of a peak, which can be difficult when responses are broad, small, and/or noisy. Martin and Boothroyd (2000) determined their time window for the rms amplitude calculation from the locations of peaks on group mean waveforms with obvious responses: 50 ms before N1 and 50 ms after P2. Their specific response time window was 61 to 252 ms following the stimulus change. Our response time window, chosen based on the response latency and morphology of individual waveforms observed across participants / stimulus conditions, was slightly longer (225 ms). Although we were primarily interested in the change response, we also were interested in the onset response.
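The rms calculation over a fixed window can be sketched as below. The sampling rate and window values follow those reported in this chapter (25,000 samples/s; 225-ms windows beginning 50 ms after stimulus onset and 75 ms after the stimulus change), but the function itself is an illustrative Python sketch rather than the original MATLAB code.

```python
import numpy as np

FS = 25000  # EEG sampling rate, samples per second

def window_rms(waveform, start_ms, dur_ms=225, fs=FS):
    """Root-mean-square amplitude of an averaged evoked-potential
    waveform over a window of dur_ms beginning at start_ms
    (times relative to the first sample of the epoch)."""
    i0 = int(start_ms / 1000 * fs)
    i1 = int((start_ms + dur_ms) / 1000 * fs)
    seg = np.asarray(waveform[i0:i1], dtype=float)
    return np.sqrt(np.mean(seg ** 2))

# For an 800-ms stimulus with the change at 400 ms, the onset window
# spans 50-275 ms and the change window 475-700 ms post onset.
t = np.arange(0, 0.8, 1 / FS)
wave = np.sin(2 * np.pi * 5 * t)  # toy waveform standing in for an average
onset_rms = window_rms(wave, start_ms=50)
change_rms = window_rms(wave, start_ms=400 + 75)
```

In the actual analyses the epoch also contained a pre-stimulus baseline, so window start times would be offset by the baseline duration.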
The onset time window began 50 ms post stimulus onset, and the change time window began 75 ms post stimulus change. We started the change-response time window later than the onset time window to avoid a

large negativity following the onset P2 that failed to recover by the time of the change response in a few individuals. Starting the change time window later did not appear to compromise our ability to capture the prominent N1 and P2 peaks. In a number of participants, we observed change responses with a longer latency than onset responses, and delaying the time window improved our ability to capture the change response. We used both the absolute peak and rms amplitudes, and we also computed change amplitudes normalized to the onset response. Theoretically, normalization might aid in comparing these far-field recorded responses across participants. Normalizing the change to the onset response for a given condition also might be advantageous for comparing response amplitudes within an individual, whose attentional state and alertness may fluctuate over the course of lengthy test sessions and sessions extending across several days. Assuming that the effects of attention or fatigue are the same for both the onset and change responses, normalizing the change to the onset for a specific condition would theoretically reduce the effects of attention or fatigue on the relative size of the ACC. The theoretical advantages of normalization were not obvious for this data set. Therefore, we report only the non-normalized amplitudes in the body of this report. Example results using the normalized amplitudes are shown in Appendix B. Stimulation and Quantification: Spatial ACC Onset and change CAEPs were elicited using sequential stimulation of pairs of electrodes (Brown et al., 2008). Electrode stimulation was controlled with Nucleus Implant Communicator (NIC) routines. Custom software developed to use these routines also generated a pulse, which was routed to LabVIEW™ (National Instruments, 2009) to trigger recording. Cochlear implant stimulation involved a 400-ms train of biphasic pulses output from one electrode, followed by a 400-ms train of biphasic pulses output from a second electrode.
The pulse rate was 900 pps and stimulation mode was monopolar (MP1) to match ECAP and speech processor stimulation modes. The electrode pairs were stimulated approximately once every three seconds. The core

electrodes were used for stimulation, and the center electrode (13 for F19R and 12 for all other participants) was always included as one electrode in the pair. For each pair of electrodes, we collected 100 sweeps when stimulating the center electrode first and 100 sweeps when stimulating the center electrode second. The two waveforms for the same electrode pair presented in reverse order were averaged offline prior to analysis. The control condition consisted of presenting stimulation on electrode 12 for 800 ms. We calculated the rms amplitude of the change time window and used this amplitude as the criterion for determining whether a response was present or absent for the test conditions. Any change rms amplitude less than or equal to the rms amplitude of the control condition was set to 0. Stimulation and Quantification: Spectral ACC Onset and change CAEPs were elicited using a spectral ripple depth detection paradigm (Litvak et al., 2007; Saoji et al., 2009). Instead of eliciting the ACC with a flat spectrum changing to a rippled spectrum, a phase-inversion paradigm was used. The frequency locations of the peaks and troughs were reversed at 400 ms post stimulus onset to elicit the ACC. Psychophysical ripple depth detection thresholds using low ripple densities have been shown to correlate more strongly with consonant and vowel recognition than ripple depth thresholds at higher densities (Saoji et al., 2009); therefore, we used a ripple density of 0.5 ripples per octave (rpo). Similar to Litvak, Saoji, and colleagues, we used a four-octave band of noise, resulting in stimuli that contained two spectral peaks and two troughs (Figure 4). Eleven stimuli with ripple depths ranging from 0 to 50 dB (5 dB steps) were created in MATLAB (MathWorks, 2012a) using a sampling rate of 44.1 kHz. The noise was generated by summing 800 sinusoids (200 per octave) with random starting phases within the four-octave frequency range.
For the first 400-ms segment, the amplitude of each sinusoid was determined by

A(f) = 10^[(c / 40) · sin(2π · rpo · log2(f / fL))]   (2)

where c is the ripple depth in dB, f is the frequency of each individual sinusoid in Hz, fL is the low-frequency edge of the noise band, and rpo = 0.5 (Litvak et al., 2007). This equation results in a sinusoidal amplitude variation across log frequency. For the second 400 ms, the amplitude of each sinusoid was determined by the same equation, but −c was used so that the spectral peaks and troughs would be inverted. The second 400 ms was scaled so that its rms amplitude would equal that of the first 400 ms. The full 800-ms combined stimuli were gated on and off with a 20-ms Hanning window, and a bandpass filter was applied to remove spectral splatter at the transition between the two stimulus halves. For each ripple depth, ten different stimuli were generated with different fine structures. The ten different stimuli were presented in random order so that any possible cues from a specific fine structure would be minimized in the averaged cortical response. Figures 5 and 6 display the voltage output from the electrodes of a cochlear implant simulator for a number of ripple stimuli with different depths. Figure 5 is in the form of a spectrogram, although electrode number is indicated on the ordinate instead of frequency. Figure 6 displays the same data, but amplitude is averaged across time, and the amplitude of the first 400 ms is plotted separately from the amplitude of the second 400 ms. For both figures, ripple depth is displayed in the top right corner of each panel. For these measurements of electrode output, the CI processor was programmed with the first experimental MAP: activation of electrodes 9 through 15, MP1 stimulation mode, and the corresponding frequency allocation table. Threshold and comfort levels were set to 130 and 190 CL, respectively, resulting in a relatively large dynamic range of 60 dB.
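A sketch of the stimulus construction: each 400-ms segment sums 800 random-phase sinusoids whose amplitudes follow 10^[(c/40)·sin(2π·rpo·log2(f/fL))]. The 350-Hz lower band edge (fL) is an assumption consistent with the spectral peaks at 350 and 1400 Hz shown in Figure 4, and Python stands in for the original MATLAB implementation.

```python
import numpy as np

def ripple_segment(c_db, rpo=0.5, f_lo=350.0, n_oct=4, fs=44100, dur=0.4, rng=None):
    """One 400-ms segment of rippled noise: 200 sinusoids per octave
    with random starting phases; sinusoid amplitudes vary sinusoidally
    across log frequency with peak-to-trough depth c_db (in dB).
    Pass -c_db to invert the spectral peaks and troughs."""
    rng = np.random.default_rng() if rng is None else rng
    freqs = f_lo * 2.0 ** np.linspace(0, n_oct, 200 * n_oct, endpoint=False)
    amps = 10.0 ** ((c_db / 40.0) * np.sin(2 * np.pi * rpo * np.log2(freqs / f_lo)))
    phases = rng.uniform(0, 2 * np.pi, freqs.size)
    t = np.arange(int(dur * fs)) / fs
    sig = np.zeros(t.size)
    for a, f, p in zip(amps, freqs, phases):
        sig += a * np.sin(2 * np.pi * f * t + p)
    return sig

# First half with depth +c, second half with -c (phase inversion),
# scaled so both halves have equal rms amplitude.
first = ripple_segment(20.0)
second = ripple_segment(-20.0)
second *= np.sqrt(np.mean(first ** 2)) / np.sqrt(np.mean(second ** 2))
stim = np.concatenate([first, second])
```

The 20-ms Hanning gating and the bandpass smoothing of the transition between halves are omitted here for brevity.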
For the 0-dB ripple depth, the electrode output is the same across the entire 800-ms time window; however, the graded output across electrodes is consistent with a filter within the direct audio input cord, which is meant to mimic the microphone frequency response (Zachary Smith, personal communication, September 27, 2012). The microphone output increases by 6 dB/octave to emphasize high-frequency, low-intensity speech cues (Clark, 2003). For the remaining panels, there are differences both across the electrodes and between the first 400 ms and the second 400 ms. These differences reflect changes in the stimulus spectrum, indicating that the device is able to transmit the salient features of the stimulus. Therefore, any variations in performance among participants should be largely independent of the device and instead reflective of individual limitations. MATLAB (MathWorks, 2012a) was used to generate a trigger pulse and to present each ripple stimulus at a rate of approximately one every three seconds. The trigger was routed to LabVIEW™ (National Instruments, 2009) for synchronized recording, and stimuli were routed to the direct audio input port of the processor at an equivalent 55 or 60 dBA level. For each participant, the order of the experimental MAPs was randomized. A minimum of two sets of 100 sweeps was recorded for each stimulus condition and averaged offline prior to analysis. We varied ripple depths in 10 dB steps in order to identify a ripple depth threshold for each MAP. Threshold was defined as the ripple depth that resulted in a change N1-P2 peak amplitude of 0.5 µV or an rms amplitude for the change time window greater than one standard deviation above the mean prestimulus baseline rms amplitude across all conditions. Thus, for the rms method, the threshold criterion was different across participants. Individuals with lower noise floors had lower threshold criteria than individuals with higher noise floors. Ripple depth threshold was calculated using linear interpolation across conditions resulting in a response above and below the threshold criterion. When the smallest ripple depth tested resulted in an amplitude larger than the criterion, the no-change control condition (ripple depth of 0 dB) was used for interpolation.
When no responses were observed across all ripple depths tested, threshold was considered to be 10 dB above the highest ripple depth tested. When using the peak amplitude criterion, threshold was always interpolated. When using the rms criterion, there were 5 instances (out of 33) in which no responses were observed at the largest ripple depths tested.
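The interpolation rule can be sketched as follows, with hypothetical amplitude values and the 0.5 µV peak-amplitude criterion; this is a Python illustration rather than the original analysis code.

```python
import numpy as np

def ripple_threshold(depths_db, amplitudes_uv, criterion_uv=0.5):
    """Linearly interpolate the ripple depth at which response
    amplitude crosses the criterion. depths_db must be sorted in
    ascending order and include the 0-dB (no-change) control.
    Returns None when no crossing occurs (handled separately in
    the text by assigning 10 dB above the highest depth tested)."""
    depths = np.asarray(depths_db, dtype=float)
    amps = np.asarray(amplitudes_uv, dtype=float)
    for lo in range(len(depths) - 1):
        if amps[lo] < criterion_uv <= amps[lo + 1]:
            frac = (criterion_uv - amps[lo]) / (amps[lo + 1] - amps[lo])
            return depths[lo] + frac * (depths[lo + 1] - depths[lo])
    return None

# Hypothetical amplitudes at 0, 10, 20, 30, 40 dB ripple depth:
# the 0.5-uV criterion falls between 20 dB (0.4 uV) and 30 dB (0.8 uV).
thr = ripple_threshold([0, 10, 20, 30, 40], [0.0, 0.2, 0.4, 0.8, 1.2])
```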

Threshold calculations are more standard for electrophysiological tests; however, because they are also much more time consuming, we were interested in whether the amplitude for a single stimulus condition would also be sensitive to differences across MAPs and participants. To evaluate this, all participants and all MAPs were tested using a ripple depth of 40 dB. Speech Perception Vowel Perception Vowel discrimination was performed with MATLAB (MathWorks, 2012a) using a set of modified scripts for psychophysical testing (PSYLAB version 2.4; Hansen, 2012). Vowel discrimination was assessed within a /h/-vowel-/d/ context (Hillenbrand et al., 1995) using a 10-alternative forced-choice procedure. The words included had, hayed, head, heard, heed, hid, hoed, hood, hud, and who'd. We selected these words from ten of the female speakers within the Hillenbrand database, resulting in ten tokens per word. Each token was presented once without replacement in a random order, resulting in a total of 100 presentations. This was repeated for each experimental program, also used in random order. Participants indicated which word was presented out of the ten possible choices displayed on the screen using a touch screen or mouse click. The percent correct across all words was used for analysis. Prior to each test, participants first listened to the entire set of 100 stimuli using one of the experimental MAPs. The printed word corresponding to the audio signal was displayed on the computer screen. They also listened to one sentence of The Rainbow Passage (Fairbanks, 1940) repeated eight times, spoken each time by a different female talker. Again, the audio signals were simultaneously displayed on the screen in printed form. This familiarization procedure took about 5 minutes and was repeated each time the participant switched to a new listening program.

BKB-SIN Test The standard BKB-SIN test involves presenting sentences within a babble background from the same channel. The SNR for the first sentence is +21 dB, and the SNR decreases in 3 dB steps for each subsequent sentence. The level of the target is held constant, and the level of the background babble is varied. Listeners are asked to repeat the sentence and are scored in terms of the number of key words (3-4) correct. List pairs 9-18 are suggested for CI users, as the hardest SNR is limited to 0 dB. Immediately following the vowel perception test and before switching to a new listening condition, the BKB-SIN test was administered. One list (from list pairs 1-8) was used as a practice test to familiarize participants with the test procedures and with the sound of sentences within a noise background. After the practice list, two lists of eight sentences (a "list pair") were presented. The specific list for each listening condition was chosen randomly from those recommended for CI users (9-18). The total number of key words correct across the list pair was used to calculate the signal-to-noise ratio for 50% correct (SNR-50). Comparing Across Measures of Spatial / Spectral Resolution and Speech Perception Channel separation indices were calculated using the channel interaction functions corresponding to each electrode pair used in the electrode-discrimination spatial ACC paradigm. Martin and Boothroyd (2000) used the saturating exponential form y = a·(1 − e^(−x/b)) to model group mean ACC amplitudes as a function of stimulus increments. We used this same equation to model the relationship between channel separation indices and spatial ACC amplitudes for each participant, using up to thirteen data points per person. For comparison with the spectral ACC and speech perception measures, channel separation indices were calculated for the six pairs of adjacent electrodes activated within each experimental program.
The six channel separation indices calculated for MAP 1 were for probe electrode pairs 9-10, 10-11, 11-12, 12-13, 13-14, and 14-15. The electrode

pairs for MAP 2 were 6-8, 8-10, 10-12, 12-14, 14-16, and 16-18. The electrode pairs for MAP 3 were 3-6, 6-9, 9-12, 12-15, 15-18, and 18-21. (Note: these pairs were different for participant F19R.) We used the individually optimized a and b coefficients relating channel separation index to spatial ACC amplitude to predict what the ACC amplitude would be for the six pairs of adjacent activated electrodes in each program. This prediction assumes that the relationship we observed for the thirteen electrode pairs tested can be generalized to any pair of electrodes. Because the predicted spatial ACC amplitude was calculated from the channel separation index, we considered it a combined peripheral / central measure of spatial resolution. Linear mixed model analysis was used to evaluate whether spatial resolution (ECAP channel separation index or predicted spatial ACC) was predictive of spectral resolution, and whether the three electrophysiological measures (ECAP channel separation index, predicted spatial ACC, and spectral ACC threshold or amplitude) were predictive of vowel discrimination and speech perception in noise. We used Akaike's Information Criterion (AIC) to compare the different models (Kutner, Nachtsheim, and Neter, 2004). Lower scores indicate better fits, and a difference of 2 is generally considered relevant (John VanBuren, personal communication, August 26, 2013). Because the mixed model design requires a within-subject interpretation of the observed relationships, we also performed regression analysis on the data obtained with one program (MAP 3) so that we could evaluate the predictive ability of the electrophysiological measures across participants. This analysis was limited by the small sample size but was of interest with respect to clinical applications of the results.
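The prediction step can be sketched as: apply a participant's fitted a and b coefficients to the channel separation indices of the six adjacent activated electrode pairs. This is a minimal Python illustration with hypothetical coefficient and index values.

```python
import numpy as np

def predicted_acc(csi, a, b):
    """Predicted spatial ACC amplitude (uV) from a channel separation
    index, using the saturating exponential y = a * (1 - exp(-x / b))
    fitted per participant."""
    return a * (1.0 - np.exp(-np.asarray(csi, dtype=float) / b))

# Hypothetical fit (a = 4 uV asymptote, b = 0.1 slope constant)
# applied to the CSIs of six adjacent electrode pairs in one MAP.
csis = np.array([0.05, 0.08, 0.12, 0.10, 0.15, 0.20])
acc_pred = predicted_acc(csis, a=4.0, b=0.1)
# Predictions grow monotonically with CSI toward the asymptote a.
```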

Table 1. Activated Electrodes and Frequency Allocation. A. Experimental Programs. B. Experimental Programs for Participant F19R. C. Frequency-to-Electrode Allocation Comparison Between Experimental and Default Programs (Hz). Note: Color online.

Table 2. Participant Demographic Data. Columns: ID, Age (yrs), Sex, Ear, Reported Etiology, Device, Age at IS (yrs), Months Post IS, Clinic Strategy, Clinic Rate, Clinic Stim Mode, Clinic Pulse Width, Other.
E40R: 50, M, R, Otosclerosis, 24RE, ACE (8), 900, MP
E51: 27, F, R, Pendred, 24RE, ACE (8), 900, MP (notes for E40R / E51: Lower Freq: 200 Hz; Upper Freq: 6938 Hz; Neg gain for apical electrodes)
E55R: 63, F, R, Genetic?, 24RE, ACE (10), 900, MP (Upper Freq: 6938 Hz)
E60: 86, F, R, Unknown, 24RE, ACE (8), 900, MP (T-SPL is pw for E1-E2)
E68L: 57, F, L, Unknown, 24RE, ACE (8), 900, MP (E1 deactivated, non-auditory stim; Upper Freq: 6938 Hz)
F18R: 66, F, R, Meniere's?, CI, ACE (8), 900, MP
F19R: 78, M, R, Unknown, CI, ACE (8), 900, MP (Neg gain for apical electrodes)
F25R: 60, F, R, Genetic, CI, ACE (8), 900, MP
F26L: 53, F, L, Unknown, CI, ACE (8), 900, MP (Upper Freq: 5938 Hz)
F2L: 58, M, L, Congenital / Progressive, CI, ACE (9), 900, MP
F8R: 70, F, R, Unknown, CI, ACE (8), 900, MP

Figure 2. ECAP Waveform Series for a Channel Interaction Function: E55R, Probe 12. (Waveforms plotted as amplitude, 100 µV per division, versus time in µsec.) This figure displays a series of waveforms used to generate a channel interaction function. Each waveform was obtained with stimulation of probe electrode 12. The vertical offset separates waveforms obtained with different masker electrodes, in order from 1 (top) to 22 (bottom). The waveforms displayed are the average of 100 sweeps. Artifact removal was performed using the subtraction paradigm described within the text. Negative and positive peaks of neural responses are marked with crosses.

Figure 3. Calculating Channel Separation Indices. (The four panels plot normalized ECAP amplitude as a function of masker electrode; the channel separation indices are 0.07, 0.11, 0.16, and 0.20 for probe 12 paired with probes 11, 10, 9, and 8, respectively.) Four pairs of channel interaction functions were taken from the data set of participant E55R for this example. The probe electrode for each function is indicated in the legend and by the vertical dotted lines in each panel. Amplitude differences between the two channel interaction functions are indicated by solid black vertical lines. The channel separation index, displayed in the top left corner of each panel, increases as the two probe electrodes are separated.

Figure 4. Waveform and Spectrogram of Spectral Ripple Stimulus. Overall amplitude varies randomly across time (top panel), but the amplitude varies sinusoidally across frequency (bottom panel). The spectral peaks (darker areas) at 350 and 1400 Hz during the first 400 ms are troughs during the second 400 ms. This manner of frequency content change is often referred to as a phase inversion.

Figure 5. Electrodogram for Rippled Noise Stimuli. (Panels show electrode number versus time in ms for several ripple depths.) Only electrodes 9-15 were activated for these measurements, but electrode output for all electrodes is displayed as a function of time. Measured voltage from each electrode is displayed as intensity; darker colors indicate higher voltage. The voltage measured across the deactivated electrodes suggests some cross-talk with the activated channels. The evoking rippled noise stimulus was 800 ms in duration following a 100 ms period of silence. The stimulus consisted of 0.5 ripples per octave, and the location of spectral peaks and troughs was reversed 400 ms after stimulus onset. The amplitude difference between spectral peaks and troughs is indicated in the upper right-hand corner of each panel; amplitude differences across frequency and the phase inversion at 500 ms are reflected in the electrode output.

Figure 6. Average Electrode Output for Rippled Noise Stimuli. This figure is complementary to Figure 5. Root mean square amplitude, calculated for the 400 ms starting after stimulus onset (gray dots) and for the 400 ms starting after the phase inversion (white dots), is plotted as a function of electrode number. Ripple depth is indicated in the upper right-hand corner of each panel. The two spectral peaks and troughs are reflected in the electrode output, even at a 5 dB ripple depth. The phase inversion is also clearly reflected in the electrode output.

CHAPTER IV RESULTS Peripheral and Central Spatial Resolution ECAP Channel Interaction Functions Figure 7 displays all thirteen channel interaction functions obtained in one participant. Normalized ECAP amplitude is plotted as a function of masker electrode. The dotted vertical lines indicate probe electrodes. For most participants and probe electrodes, the peak of a channel interaction function occurred when masker and probe stimulation were presented to the same electrode. The pattern of neural excitation reflected by the channel interaction functions is variable across the probe electrodes used in this study and supports the need to perform extensive measures of peripheral spatial resolution for each CI user. The pattern of channel interaction function shapes across the probe electrodes was also unique to each individual. Interested readers are referred to Appendix C for figures showing the complete sets of 13 channel interaction functions obtained for each participant. We calculated channel separation indices for all channel interaction functions paired with that of the center electrode (12 or 13) for comparison with the spatial ACC data. These channel separation indices for all participants are displayed in Figure 8 as a function of electrode separation relative to the center electrode. Each panel displays the data from a different person. Negative numbers are used when the center electrode was paired with a more basal probe, and positive numbers are used when the center electrode was paired with a more apical probe. ECAP channel separation indices generally increase with electrode separation in either the basal or apical direction, although some nonmonotonic changes are evident (e.g., the basal and apical electrodes for E51), and the index appears to saturate with increased distance in some individuals (e.g., the most apical electrodes for E40R).

Spatial ACC A series of cortical waveforms obtained for one participant using the electrode-discrimination paradigm is displayed in Figure 9. Basal electrode pairs are shown in the left panel and apical electrode pairs in the right panel. The waveforms elicited with different electrode pairs are shifted vertically, starting with the largest electrode separation at the top of each panel (12-3 or 12-21) and ending with the control condition at the bottom of each panel (12-12). The gray shaded regions mark the prestimulus, onset, and change time windows used for the rms amplitude calculations. Change responses are observed for all electrode pairs except the control condition. The size of the ACC was expected to reflect electrode separation, and this can be observed to some extent in Figure 9. Figure 10 more clearly shows the relationship between ACC amplitude and electrode separation observed across all participants. This figure is identical to Figure 8, except that it displays spatial ACC N1-P2 amplitude rather than ECAP channel separation index as a function of electrode separation relative to electrode 12 or 13. Similar to Figure 8, negative numbers are used when the center electrode was paired with a more basal probe, and positive numbers when the center electrode was paired with a more apical probe. Similar to what we observed with the peripheral ECAP data, ACC amplitude tends to increase with increased separation between electrode pairs, and both saturation and nonmonotonicities can be observed with electrode separations in both apical and basal directions. The nonmonotonicities may be the result of noise in the cortical recordings, or may be a reflection of peripheral input.
Relationship Between Peripheral and Central Spatial Resolution: Within Subjects

The question of interest here is not how ECAP channel separation indices or spatial ACC amplitudes vary as a function of electrode separation, but how these two peripheral

and central measures of spatial resolution are related within an individual. We compared the ECAP channel separation index for two probe electrodes with the size of the ACC when the same two electrodes were stimulated sequentially (Figure 11). Although we chose to focus our analysis on the ACC N1-P2 amplitudes, we used the rms amplitudes to identify points to exclude when quantifying the relationship with the ECAP channel separation index. If the rms amplitude across the change time window for an electrode pair was less than that of the control condition, we assumed that the response was noisy, even if we could pick peaks. Those data are indicated with asterisks in Figure 11. For seven participants, all thirteen data points were used in the model fit. For the other four participants, one to five data points were excluded because their rms amplitudes were smaller than that of the control condition. These points tend to have lower N1-P2 amplitudes, consistent with our rationale for exclusion. Each person's data were fit with the saturating exponential growth function y = a*(1 - e^(-x/b)) using the MATLAB (MathWorks, 2012a) fit command with a least-squares approach to determine the a and b coefficients. The starting point for both coefficients was set to 1.0. No lower constraint was used. The a coefficient (related to the asymptote) was constrained at or below 10, and the b coefficient (related to the slope) was constrained at or below 50. These constraints were specifically in place for participants F19R and F18R, respectively, and did not affect the r² values for these two participants or the calculations for the other participants. The a and b coefficients and r² values are shown in each panel of Figure 11. The differences in a and b coefficients across individuals suggest that central processing differs across CI users, even when very simple stimuli are used to elicit a response.
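The saturating-exponential fit can be reproduced without MATLAB's fit command. A minimal NumPy sketch is shown below; it exploits the fact that for a fixed b the model y = a*(1 - e^(-x/b)) is linear in a, so a has a closed-form least-squares solution, and a grid search over b suffices. This is a coarse stand-in for MATLAB's trust-region solver, with the thesis's upper bounds (a <= 10, b <= 50) applied:

```python
import numpy as np

def fit_saturating_exponential(x, y, b_grid=None):
    """Least-squares fit of y = a * (1 - exp(-x / b)).
    For fixed b the model is linear in a, so a is solved in closed form;
    b is found by grid search. Returns (a, b, r_squared).
    A coarse stand-in for MATLAB's fit(); the bounds a <= 10 and
    b <= 50 mirror the constraints described in the text."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    if b_grid is None:
        b_grid = np.linspace(0.1, 50.0, 2000)   # b constrained <= 50
    best = (np.inf, None, None)
    for b in b_grid:
        f = 1.0 - np.exp(-x / b)
        a = min(np.dot(y, f) / np.dot(f, f), 10.0)  # a constrained <= 10
        sse = np.sum((y - a * f) ** 2)
        if sse < best[0]:
            best = (sse, a, b)
    sse, a, b = best
    r2 = 1.0 - sse / np.sum((y - y.mean()) ** 2)
    return a, b, r2
```

Applied to synthetic data generated from known coefficients, the routine recovers a (asymptote) and b (rate of growth toward the asymptote) to within the grid resolution.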
We used the a and b coefficients to calculate a predicted spatial ACC response for use in predicting spectral resolution and speech perception. This procedure is described in more detail in the next section, but it is worth noting here that the usefulness of the calculated spatial ACC as a predictor for more complex signals depends in part on how well we modeled the data. The model fit was better for some individuals than for others, as

indicated by the coefficient of determination. The person with the poorest fit was E40R (r² = 0.48). The peripheral and central spatial resolution data for this participant are replotted in Figure 12, with basal and apical electrode pairs shown separately. For most individuals there were no apparent differences in the relationship between peripheral and central spatial resolution for basal versus apical electrode pairs, but for this person there were; disregarding cochlear location resulted in a relatively poor average fit.

Spatial and Spectral Resolution

Spatial Resolution

The channel interaction functions associated with the seven activated electrodes in each MAP are shown for one participant (E55R) in the left panels of Figure 13. The functions tend to be more overlapping for MAP 1 (top panel), and the differences, or separation, across functions are relatively small. As the electrodes are spaced farther apart (MAPs 2 and 3), the amount of overlap decreases and the separation increases. For each MAP, channel separation indices were calculated for the six pairs of adjacent activated electrodes. These indices are shown in the top right panel of Figure 13. As expected, the indices tend to be smallest for MAP 1 and largest for MAP 3. For each of the six pairs of adjacent activated electrodes in each MAP, the channel separation index was used to calculate the expected spatial ACC amplitude, using the individually optimized coefficients relating peripheral and central spatial resolution (Figure 11). These expected, or predicted, spatial ACC amplitudes are plotted in the bottom left panel. Like the ECAP channel separation index, predicted spatial ACC amplitude was largest for the adjacent activated electrodes in MAP 3 and smallest for MAP 1. Figure 14 shows box plots of the ECAP channel separation indices (top) and predicted spatial ACC amplitudes (bottom) for the adjacent activated electrodes in each MAP across all participants.
Similar to the individual example shown in Figure 13, we see that across participants, spatial resolution tends to be poorest with MAP 1 and best with MAP 3, indicating that the experimental design had the intended effect. There is no

consistent trend across electrode pairs. For each individual and each MAP, we calculated the unweighted average and the maximum across the six indices and six predicted spatial ACC responses. We expected that the electrophysiological spectral resolution measures might reflect the best spatial resolution across electrodes within a person, but that speech perception might depend on spatial resolution across all activated electrodes and would therefore be more strongly predicted by the average.

Spectral ACC

Example cortical waveforms obtained using the ripple-depth paradigm are displayed in Figure 15. Each panel displays the responses obtained when the participant (E40R) was listening with a different experimental program, indicated in the top right corner. The waveforms elicited with different ripple depths are shifted vertically, with the largest ripple depth (40 dB) at the top and the smallest ripple depth (10 dB) at the bottom. For all three MAPs, the size of the ACC tends to decrease as ripple depth decreases. For this individual, ripple depth threshold was less than 10 dB for all three MAPs. Because we used a linear interpolation between an assumed no-response (0 µV) at 0 dB and the size of the response at 10 dB, threshold still differed across the three MAPs. We also considered ACC amplitude for the same ripple depth (40 dB) across all three MAPs. For this individual, the most robust response was obtained with MAP 3 and the smallest response with MAP 1.

Relationship Between Spatial and Spectral Resolution

Two primary questions were of interest: (1) is spatial resolution, measured with simple stimulation, predictive of the resolution of spectrally complex stimuli, and (2) does including information about central processing improve predictions relative to using information about the periphery alone?
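The linear interpolation used when threshold fell below the lowest tested ripple depth can be sketched in a few lines. Here, threshold is taken as the depth at which the interpolated amplitude crosses a criterion amplitude; how that criterion is chosen (e.g., from the control condition) is an assumption in this sketch, not the thesis's stated rule:

```python
def extrapolated_threshold(amp_at_10db, criterion_amp, depth=10.0):
    """Linear interpolation between an assumed no-response (0 uV) at
    0 dB ripple depth and the measured ACC amplitude at `depth` dB.
    Threshold is the depth where the line crosses `criterion_amp`.
    NOTE: the criterion-amplitude convention here is illustrative."""
    if amp_at_10db <= criterion_amp:
        raise ValueError("response at %g dB does not exceed criterion" % depth)
    # amplitude(d) = (amp_at_10db / depth) * d  =>  solve for d at the criterion
    return depth * criterion_amp / amp_at_10db
```

Because the amplitude at 10 dB differed across the three MAPs, the interpolated thresholds differ even though all were below 10 dB, consistent with the description above.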
Because we considered the predicted spatial ACC amplitude to reflect both peripheral and central spatial resolution, it was not necessary to combine it with the ECAP channel separation index in a multivariate regression analysis to predict spectral resolution.

Preliminary Analysis: Before addressing the primary questions of interest, we first needed to decide how to quantify the predictor and dependent variables. We had a number of options, and the first step in the analysis was to determine whether any method appeared superior. Specifically, we were interested in whether (1) subjectively picking peaks on waveforms differs from using an objective measure to quantify responses, (2) the response to a suprathreshold stimulus is as sensitive to differences across the three programs and across individuals as threshold, and (3) quantifying spatial resolution as an average differs from using the maximum across the activated electrodes. Figure 16 shows the spectral resolution data as a function of peripheral spatial resolution (specifically, ECAP channel separation), and Figure 17 shows the spectral resolution data as a function of the combined peripheral / central measure of spatial resolution (predicted spatial ACC amplitude). The six panels in each figure contain the same data but show different quantification methods. The spectral ACC data (ordinate) are quantified as thresholds (top two rows) using the N1-P2 or rms amplitude criteria, and as an amplitude at a 40 dB ripple depth (bottom row). The spatial resolution data (abscissa) are displayed as an average (left) and a maximum (right) across the adjacent activated electrodes. There are 33 data points in each panel; the data obtained with MAP 1 are colored white, the data obtained with MAP 2 gray, and the data obtained with MAP 3 black. This color scheme is used for all subsequent figures. The regression lines, coefficients, and fits displayed in each panel were obtained with a simple regression that treated all 33 points as independent. In other words, we disregarded the repeated measures for each person at this first analysis stage.
Nonlinear, exponential fits were used when the spectral ACC data were quantified as thresholds, and linear fits were used when they were quantified as the response amplitude at a 40 dB ripple depth.

There were no clear advantages of any of the quantification methods. We interpreted this to mean that (1) although peak-picking is subjective and potentially biased, the results are the same as when an objective measure is used, (2) the more time-efficient suprathreshold measure of spectral resolution is as good as the time-consuming threshold search, and (3) even though we anticipated a possible advantage of quantifying spatial resolution as the maximum across the activated electrodes, the average was just as predictive. Given these results, we chose the N1-P2 amplitude at the 40 dB ripple depth as our measure of spectral resolution, because it was the most time-efficient method and because the data could be modeled with a linear function. For the comparison with spectral resolution, we chose to quantify spatial resolution as the maximum across activated electrodes, consistent with our original expectation.

Mixed Model Analysis: Results of the mixed model analysis are displayed in Table 3A and Figure 18 (top panels). Both measures of spatial resolution (ECAP channel separation index and predicted spatial ACC amplitude) are significant predictors of spectral resolution (p < 0.0001). The AIC values associated with the two models indicate that the ECAP channel separation index is the better predictor. In other words, there is no additional benefit of including information about central processing when attempting to predict how improving spatial resolution within an individual will affect spectral resolution. In fact, the model using the combined peripheral and central measure of spatial resolution is worse, which may reflect the assumptions underlying our calculation of the predicted spatial ACC amplitude.

Regression Analysis for MAP 3 Data: Results of the regression analysis for the data points obtained with MAP 3 are displayed in Table 3B and the bottom panels of Figure 18.
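The AIC-based model comparisons used above can be illustrated for ordinary least squares. The thesis's AIC values come from the full mixed-model likelihoods, so this OLS analogue is only a sketch of the principle (lower AIC is preferred; a difference greater than about 2 is the conventional margin):

```python
import numpy as np

def ols_aic(x, y):
    """Fit y = m*x + b by least squares and return (m, b, AIC), using
    the Gaussian log-likelihood form AIC = n*ln(RSS/n) + 2k with k = 3
    (slope, intercept, residual variance). Illustrative OLS analogue of
    the mixed-model AIC comparison described in the text."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    m, b = np.polyfit(x, y, 1)
    rss = np.sum((y - (m * x + b)) ** 2)
    k = 3
    aic = n * np.log(rss / n) + 2 * k
    return m, b, aic
```

With two candidate predictors of the same outcome, the predictor yielding the smaller residual sum of squares (and hence the lower AIC) would be preferred, mirroring the comparison between the ECAP channel separation index and the predicted spatial ACC amplitude.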
One participant (E55R) was excluded from the analysis after being identified as an outlier (studentized residual > 2.0). This participant s data are indicated by asterisks. Even with the small sample size, the results of this analysis indicate again that

both measures of spatial resolution are significant predictors of spectral resolution. This analysis differs from the mixed model analysis in that the results can be interpreted as meaningful for comparing across participants with different spatial resolution abilities. Also unlike the mixed model analysis, predicted spatial ACC amplitude is the better predictor here (r² = 0.87, compared to r² = 0.51 for the ECAP channel separation index). When the goal is to make predictions across participants rather than within an individual, our results indicate that a combined measure of peripheral and central processing is more predictive than information about peripheral spatial resolution alone.

Speech Perception

The ultimate goal of this study was to examine the usefulness of electrophysiological measures for predicting speech perception. The primary questions of interest were (1) are any of the electrophysiological measures predictive of speech perception, (2) is any one of them a better predictor than the others, and (3) is it beneficial to combine measures of spatial and spectral resolution to predict speech? For this analysis, we used the average channel separation index across the activated electrodes as the measure of spatial resolution and the ACC amplitude at a 40 dB ripple depth as the measure of spectral resolution.

Vowel Perception

Average vowel scores varied across participants and MAPs, falling between chance (10%) and ceiling (100%) performance. Vowel confusion matrices were created for the three experimental MAPs by averaging responses across all participants. These are shown in Figure 19. Darker shades of gray indicate a greater number of responses; correct responses lie on the diagonal starting at the upper left corner. In general, responses were most scattered when participants were listening with MAP 1 (adjacent active electrodes). Performance improved when participants used MAP

2 (activation of every other electrode), and the most correct responses were observed when participants listened with MAP 3 (activation of every third electrode). Figure 20 displays the relationships between vowel perception and each of the electrophysiological measures (average ECAP channel separation index, predicted spatial ACC amplitude, and spectral ACC amplitude at a 40 dB ripple depth) in separate panels. All 33 data points used for the mixed model analysis are displayed in the top panels, and the data for MAP 3 are shown separately in the bottom panels. The regression lines obtained from the statistical analysis are also shown in each panel.

Mixed Model Analysis: Results from the mixed model analysis can be found in Table 4A. All three electrophysiological measures are significantly predictive of vowel perception (p < 0.01). That is, our results indicate that improving spatial or spectral resolution for a specific individual would be associated with an improvement in that person's vowel perception. The strong correlation between spectral resolution and vowel perception is not surprising given the number of studies showing strong correlations between behavioral ripple discrimination measures and speech perception; this is the first study to show that the electrophysiological correlate is also predictive of speech perception. Based on the AIC values, the predicted spatial ACC amplitude was the worst single predictor of vowel perception, although it was still significant. The average ECAP channel separation index was not only significant but was the best single predictor. In addition to evaluating whether the electrophysiological measures were predictive of speech by themselves, we were also interested in whether combining information about spatial and spectral resolution would improve predictions.
When the ECAP channel separation index was combined with spectral resolution in one model, both predictors remained significant, and the decrease in AIC was greater than 2.0, suggesting that this more complicated model was in fact better than using either measure as a single predictor. Combining the predicted spatial ACC amplitude with spectral resolution was

worse than using the spectral ACC by itself; in fact, predicted spatial ACC amplitude was no longer significant in the combined model.

Regression Analysis for MAP 3 Data: Because we also wanted to examine the predictive ability of the electrophysiological measures across CI users, we isolated the data from MAP 3 and performed a linear regression analysis (Table 4B). One participant (F26L) was identified as an outlier (studentized residual > 2.0) and was not included in the analysis; this participant's data points are indicated in Figure 20 with asterisks. Both predicted spatial ACC amplitude and spectral resolution were predictive of vowel scores across these ten participants (p < 0.05). Spectral resolution was the better predictor (r² = 0.65 compared to 0.45). The average ECAP channel separation index was not significant; one limitation of the ECAP measures was that we did not have a large range of channel separation indices across participants. Because the predicted spatial ACC amplitude and spectral resolution measures were significant individually, we combined the two predictors into one model. When combined, neither was significantly predictive of vowel perception across participants.

BKB-SIN Test

Figure 21 is identical to Figure 20, except that it displays the relationships between BKB-SIN scores, rather than vowel perception scores, and the electrophysiological measures. Because the BKB-SIN test is scored as the signal-to-noise ratio for 50% correct performance, low scores indicate better performance and high scores indicate worse performance. The regression line slopes are therefore negative, opposite of the vowel perception data; however, the interpretation is the same for both speech perception measures: better spatial / spectral resolution is associated with better speech perception.

Mixed Model Analysis: Results from the mixed model analysis can be found in Table 5A.
Similar to the vowel perception results, all three electrophysiological measures are significant predictors of word recognition in noise (p<0.001). Improving the spatial or

spectral resolution within an individual would be expected to result in improvements in his/her ability to understand words in background noise. As with the vowel results, the AIC suggests that the average ECAP channel separation index is the best predictor, followed by the spectral ACC amplitude. We combined these two predictors into one model; both remained significant, and the model improved. Combining predicted spatial ACC amplitude with spectral ACC amplitude did not improve predictions.

Regression Analysis for MAP 3 Data: The linear regression results for MAP 3 are shown in Table 5B and the bottom panels of Figure 21. No outliers were identified, so the data for all 11 participants were included. When we examine the predictive ability of the electrophysiological measures across participants, only spectral ACC amplitude is significant (p < 0.05), accounting for just over 50% of the variance. Neither of the spatial resolution measures was significant.
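The outlier screen applied to the MAP 3 regression analyses (studentized residual > 2.0) can be sketched as follows. This version computes internally studentized residuals for a simple regression; whether the thesis used internal or external studentization is not stated, so treat the exact variant as an assumption:

```python
import numpy as np

def studentized_residuals(x, y):
    """Internally studentized residuals for the fit y = m*x + b:
    r_i = e_i / (s * sqrt(1 - h_ii)), where h_ii is the leverage of
    point i and s^2 = RSS / (n - 2). Points with |r_i| > 2.0 would be
    flagged as outliers, as in the text (variant is an assumption)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(y)
    m, b = np.polyfit(x, y, 1)
    e = y - (m * x + b)                                   # raw residuals
    s2 = np.sum(e ** 2) / (n - 2)                         # residual variance
    h = 1.0 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
    return e / np.sqrt(s2 * (1.0 - h))
```

A point lying far off an otherwise clean linear trend is flagged, while the remaining points stay below the 2.0 criterion.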

Table 3. Peripheral and Central Spatial Resolution as Predictors of Spectral Resolution

A. Mixed Model Analysis
Predictor Variables               Intercept (p-value)   Slope (p-value)   AIC
Maximum Adjacent ECAP CSI         (0.3284)              (<0.0001)
Predicted Spatial ACC Amplitude   0.21 (0.7058)         (<0.0001)

B. Linear Regression Analysis: MAP 3 Data (excluding participant E55R)
Predictor Variables               Intercept (p-value)   Slope (p-value)   r²
Maximum Adjacent ECAP CSI         (0.8240)              (0.0197)          0.51
Predicted Spatial ACC Amplitude   0.30 (0.6161)         1.05 (<0.0001)    0.87

Table 4. Spatial and Spectral Resolution as Predictors of Vowels (% Correct)

A. Mixed Model Analysis
Predictor Variables                                                Intercept (p-value)   Slope (p-value)                 AIC
Average Adjacent ECAP CSI                                          3.95 (0.4933)         (<0.0001)
Predicted Spatial ACC Amplitude                                    (0.0232)              7.69 (0.0033)
Spectral ACC N1-P2 Amplitude (40 dB Ripple Depth)                  (0.0008)              7.48 (<0.0001)
Average Adjacent ECAP CSI and Spectral ACC N1-P2 Amplitude         6.88 (0.2175)         (0.0016); 3.50 (0.0259)
Predicted Spatial ACC Amplitude and Spectral ACC N1-P2 Amplitude   (0.0140)              0.59 (0.8222); 7.28 (<0.0001)

B. Linear Regression Analysis: MAP 3 Data (excluding participant F26L)
Predictor Variables                                                Intercept (p-value)   Slope (p-value)                 r² or adjusted r²
Average Adjacent ECAP CSI                                          (0.2190)              (0.1890)                        0.20
Predicted Spatial ACC Amplitude                                    (0.0023)              5.93 (0.033)                    0.45
Spectral ACC N1-P2 Amplitude (40 dB Ripple Depth)                  (0.0007)              5.03 (0.005)                    0.65
Predicted Spatial ACC Amplitude and Spectral ACC N1-P2 Amplitude   (0.0012)              (0.4092); 7.86 (0.0591)         0.59

Table 5. Spatial and Spectral Resolution as Predictors of BKB-SIN Scores (dB)

A. Mixed Model Analysis
Predictor Variables                                                Intercept (p-value)   Slope (p-value)        AIC
Average Adjacent ECAP CSI                                          (<0.0001)             (<0.0001)
Predicted Spatial ACC Amplitude                                    (<0.0001)             (0.0002)
Spectral ACC N1-P2 Amplitude (40 dB Ripple Depth)                  (<0.0001)             (<0.0001)
Average Adjacent ECAP CSI and Spectral ACC N1-P2 Amplitude         (<0.0001)             (0.0028); (0.0416)
Predicted Spatial ACC Amplitude and Spectral ACC N1-P2 Amplitude   (<0.0001)             (0.1605); (0.0004)

B. Linear Regression Analysis: MAP 3 Data
Predictor Variables                                                Intercept (p-value)   Slope (p-value)        r²
Average Adjacent ECAP CSI                                          (0.0402)              (0.4654)               0.06
Predicted Spatial ACC Amplitude                                    (0.0003)              (0.0775)               0.31
Spectral ACC N1-P2 Amplitude (40 dB Ripple Depth)                  (<0.0001)             (0.0123)               0.52

Figure 7. Channel Interaction Functions. Normalized ECAP amplitude is displayed as a function of masker electrode for all thirteen probe electrodes, resulting in a display of all thirteen channel interaction functions obtained for participant E55R. Probe electrode is indicated by the dotted vertical lines. The peak of each channel interaction function often occurs when the masker and probe electrodes are equal.

Figure 8. Channel Separation Index as a Function of Electrode Separation. For each participant, a channel separation index was calculated for each probe electrode paired with the center electrode (probe 13 for F19R and probe 12 for all others). Negative numbers are used for basal electrode pairs and positive numbers for apical electrode pairs.

Figure 9. Example Cortical Waveforms Elicited with the Electrode-Discrimination Paradigm: E68L. Each waveform was elicited by sequential stimulation of two electrodes (indicated to the left of each trace): one set of 100 sweeps was collected when electrode 12 was presented first and one set when electrode 12 was presented second. Basal electrode pairs are displayed in the left panel and apical pairs in the right. In both panels, the top waveform was elicited with the largest electrode separation, and the bottom waveform is the control condition (i.e., electrode 12 was stimulated for 800 ms). The gray areas indicate the time windows used to calculate the rms amplitude for the prestimulus baseline, onset, and change responses. N1-P2 peaks are indicated by the plus signs.

Figure 10. Spatial ACC Amplitude as a Function of Electrode Separation. For each participant, ACC N1-P2 amplitude is plotted as a function of electrode separation between the pairs of sequentially stimulated electrodes. Electrode 12 (or 13 for F19R) was always one of the two electrodes stimulated. The 0 point on the abscissa is the no-response control condition (i.e., stimulation only on the center electrode). Negative numbers are used for basal electrode pairs and positive numbers for apical electrode pairs.

Figure 11. Relationship Between Peripheral and Central Spatial Resolution. For each participant, ACC N1-P2 amplitude is plotted as a function of channel separation index for thirteen electrode pairs. The asterisks indicate data points that were not included in the analysis (see text). A saturating exponential function was fit to each person's data. Individually optimized coefficients are displayed in each panel, along with the coefficient of determination, which indicates how well the model fit the data.

Figure 12. Relationship Between Peripheral and Central Spatial Resolution: Differences for Basal and Apical Electrode Pairs. The fit describing the relationship between peripheral and central spatial resolution was poorest for participant E40R (r² = 0.48). A different relationship is observed for basal electrode pairs (circles) than for apical electrode pairs (triangles). This difference was not considered when modeling the relationship and likely explains the poorer fit shown in Figure 11.

Figure 13. Quantifying Peripheral and Central Measures of Spatial Resolution for Comparisons with Spectral Resolution and Speech Perception: E55R. The channel interaction functions associated with the seven activated electrodes in each experimental MAP are shown in the left panels. Channel separation indices calculated for the six pairs of adjacent activated electrodes in each program are shown in the top right panel. The predicted spatial ACC amplitude associated with each channel separation index was calculated from the individually optimized coefficients relating peripheral and central spatial resolution (bottom right panel).

Figure 14. Peripheral and Central Measures of Spatial Resolution for Comparisons with Spectral Resolution and Speech Perception: All Participants. Channel separation indices calculated for the six pairs of adjacent activated electrodes in each program are shown in the top panel. The predicted spatial ACC amplitudes are shown in the bottom panel. MAP 1 data are plotted in white, MAP 2 data in gray, and MAP 3 data in black. Boxes encompass the interquartile range observed across the 11 participants, and whiskers extend to twice this range. The median is indicated with a horizontal line, and outlier points are plotted as asterisks.

Figure 15. Example Cortical Waveforms Elicited with the Ripple-Depth Paradigm: E40R. Waveforms obtained with the different experimental MAPs are displayed in separate panels. Each waveform was elicited with the ripple depth indicated to the left of each trace. The waveforms are offset vertically, with the highest ripple depth (40 dB) at the top and the lowest ripple depth (10 dB) at the bottom. At least 200 sweeps were averaged. The gray areas indicate the time windows used to calculate the rms amplitude for the prestimulus baseline, onset, and change responses. N1-P2 peaks are indicated by the plus signs.

Figure 16. Peripheral Spatial Resolution as a Predictor of Central Spectral Resolution: Quantification Options. Spectral ACC (ordinate) is plotted as a function of ECAP channel separation index (abscissa). Spectral ACC is quantified as threshold in the top two rows and as the N1-P2 amplitude at 40 dB in the bottom row. Channel separation index is quantified as the average across adjacent activated electrodes (left column) and the maximum (right column). Each panel contains 33 data points: MAP 1 (white), MAP 2 (gray), MAP 3 (black). Coefficients for exponential fits (top two rows) and linear fits (bottom row) are shown in each panel.

Figure 17. Peripheral / Central Spatial Resolution as a Predictor of Central Spectral Resolution: Quantification Options. The layout of this figure is identical to Figure 16, except that spectral ACC responses are plotted as a function of predicted spatial ACC N1-P2 amplitude.

Figure 18. Spatial Resolution as a Predictor of Spectral Resolution. Spectral ACC N1-P2 amplitude is plotted as a function of ECAP channel separation (left column) or as a function of the predicted spatial ACC N1-P2 amplitude (right column). The top row contains all 33 data points used for the mixed model analysis, and the bottom row contains the data for MAP 3 only. The outlier is marked with an asterisk. Regression coefficients and p-values are indicated in each panel.

Figure 19. Vowel Confusion Matrices: Average Across All Participants. The stimulus is shown on the ordinate, and the response is shown on the abscissa. Confusions for each experimental MAP are shown in a separate panel. Darker colors reflect higher percentages of responses.

Figure 20. Electrophysiological Measures as Predictors of Vowel Perception. Vowel perception (percent correct) is plotted as a function of the three electrophysiological measures: ECAP channel separation (left), predicted spatial ACC amplitude (middle), and spectral ACC amplitude at a 40 dB ripple depth (right). The top row contains all 33 data points used for the mixed model analysis, and the bottom row contains the data for MAP 3 only. The outlier is marked with an asterisk. Regression coefficients and p-values are indicated in each panel. Chance performance (10%) is indicated by a dotted horizontal line in each panel.

Figure 21. Electrophysiological Measures as Predictors of Word Recognition in Noise. The signal-to-noise ratio required for 50% performance on the BKB-SIN test (dB) is plotted as a function of the three electrophysiological measures: ECAP channel separation (left), predicted spatial ACC amplitude (middle), and spectral ACC amplitude at a 40 dB ripple depth (right). The top row contains all 33 data points used for the mixed model analysis, and the bottom row contains the data for MAP 3 only. The outlier is marked with an asterisk. Regression coefficients and p-values are indicated in each panel.

CHAPTER V
DISCUSSION

Summary of Results

In this study, we evaluated CI users' spatial and spectral resolution abilities using non-invasive electrophysiological measures of peripheral and central processing of simple and complex stimuli. We explored the relationships among these measures, and we evaluated whether they could be used to predict speech perception. We were interested in the ability both to predict changes in performance within an individual and to predict differences in performance across individuals. Our results are summarized below:

(1.) Peripheral spatial resolution, quantified using ECAP channel interaction functions, was variable across the thirteen electrodes tested, and the across-site pattern was unique for each person who participated in this study.

(2.) Central processing of spatial excitation patterns, quantified by relating ECAP channel separation indices to ACC amplitudes for electrode pairs spaced 0 to 9 electrodes apart, was also variable across CI users.

(3.) The ECAP channel separation index was significantly predictive of our electrophysiological measure of spectral resolution (ACC amplitude at a 40 dB ripple depth).

(4.) Adding information about central processing to the ECAP channel separation index did not improve our ability to predict changes in spectral resolution ability within an individual. (Recall that within-subject changes of spatial resolution were imposed by varying the activated electrodes used in each experimental MAP.) However, information about central processing did improve our ability to predict differences in spectral resolution abilities across participants. With the ECAP channel separation index alone, we accounted for approximately 50% of the variability in performance observed

across participants, but when information about central processing was added to the ECAP measure, we were able to account for 87% of the variability.

(6.) All three electrophysiological measures (ECAP channel separation index, predicted spatial ACC amplitude, spectral ACC) were significantly correlated with our two speech measures (vowels, BKB-SIN).

(7.) The ECAP channel separation index was the best single predictor of changes in speech perception abilities within an individual as we varied the activated electrodes in the experimental MAPs. Predictions improved when the spectral ACC amplitude was added to the model.

(8.) The spectral ACC was the best predictor of differences in speech perception abilities across the CI users who participated in this study and accounted for % of the variability.

(9.) Although the ECAP channel separation index was not predictive of speech perception abilities across participants on its own, when combined with information about central processing (i.e. the predicted spatial ACC), it was predictive of vowel perception abilities.

When considered together, these results appear to reflect an inherent hierarchy that exists across the outcome measures with regard to the complexity of the evoking stimulus and the stages of auditory processing reflected in the response. This is illustrated in Figure 22 using the statistically significant results for the across-subject simple linear regression analyses. The schematic displays the outcome measures in order of stimulus complexity (left-to-right) and stages of processing from the periphery to the cortex (bottom-to-top). Lines begin at the independent variables and terminate with the arrows pointing to the dependent variables. The r² values associated with each analysis are displayed on the lines for convenience.
The double lines connecting the ECAP channel separation index with the spatial ACC are used to represent the predicted spatial ACC, which was calculated by combining the peripheral ECAP and central spatial ACC

measures together. Thus, the double lines extending from the spatial ACC indicate that the predicted spatial ACC was the independent variable in the regression analysis.

The most peripheral response evoked with the simplest stimulus (ECAP CSI) was significantly correlated with a more central response evoked with a more complex stimulus (spectral ACC), but not with either measure of speech perception, which required more complex processing of more complex stimuli. When information about central processing was included (in the form of the predicted spatial ACC), more of the variability observed in the central measure of spectral resolution was accounted for, and a significant correlation was observed with vowels, but not with the most complex speech test used in this study (the BKB-SIN test). The spectral ACC, evoked with a complex, speech-like stimulus, explained more of the variability in vowel performance than the measures of spatial resolution did and was correlated with speech perception in noise. These results support the rationale behind our study design, which included the two types of electrophysiological measures reflecting different stages of auditory processing, and the use of both simple and complex stimuli to evoke responses.

General Caveat

In this study we activated different electrodes in three experimental programs in order to change spatial / spectral resolution within each person and to evaluate the effects on speech perception abilities. The experimental programs were vastly different from each participant's clinical programs (e.g. only 7 electrodes were activated using a CIS strategy, the frequency range represented in electrode output was Hz, and the monopolar stimulus reference was MP1). Because all three experimental MAPs were novel and participants were given minimal listening practice (~15 min per MAP), we expected speech perception measures to be poorer than if participants had been allowed to use their clinical MAPs.
This was acceptable because our primary interest was not absolute performance, but relative performance (i.e. differences across the three experimental MAPs for a given individual and differences across participants). Although we expected

differences in performance for a given individual to primarily reflect different degrees of spatial resolution, one concern about the study design is the similarity / dissimilarity between the frequency-to-electrode allocation of the experimental MAPs and that of the CI users' everyday MAPs. Most participants used the default frequency allocation tables (shown in Table 1C) or a slightly adjusted frequency allocation (Table 2). Shifting the place of stimulation from what a listener is accustomed to can be detrimental to acute performance (Fu and Shannon, 1999a,b). This is one advantage of using novel MAPs for all conditions. Nevertheless, because we shifted the place of stimulation by different amounts for the three experimental programs, the frequency-to-place alignment is most different from the default settings for MAP 1, which is also the program with the poorest spatial resolution and the poorest speech perception scores. MAP 3 had the frequency-to-place alignment most similar to the default settings and also resulted in the best spatial resolution and best speech perception scores. Thus, the poorest speech perception abilities observed with MAP 1 may be influenced both by poor spatial resolution and by a lack of time to acclimate to different places of stimulation. We are not able to separate frequency-to-place shifts from spatial or spectral resolution for the mixed-model analysis with speech perception, and our results should be interpreted with this in mind. However, Fu and Shannon (1999b) did demonstrate that the effects of frequency shifts were independent of the effects of spectral resolution (number of channels in their study). In this study, we included an objective measure of spectral resolution and saw improvements from MAP 1 to MAP 3. We found significant within-subject correlations between spatial and spectral resolution, and these results should not be confounded by frequency-to-place misalignment.
These results also suggest that frequency shifts were not solely responsible for the observed relationships between spatial / spectral resolution and speech perception. Additionally, the regression analysis using only MAP 3 data was not confounded by frequency shifts. The fact that spatial and

spectral resolution were significant predictors of speech perception across participants, even with the small sample size, supports our interpretation of the within-subject effects.

A second caveat is that, beyond the possible confound of frequency shifts on our interpretation, we used a number of controls to explore the relationships of interest. The seven-channel MAPs were one control, but we also made an effort to reduce a few of the differences in stimulation parameters across the outcome measures. For example, we used a monopolar stimulation mode referenced to MP1 for all measures because this is what we used for the ECAP measures. We also set the processor bandwidth to Hz for the speech perception measures to match what was used for the spectral resolution measures. We are not sure to what extent the relationships we observed between the electrophysiological measures and speech perception were due to these controls, or whether the results can be generalized to relationships with speech perception when listeners are using their everyday programs.

ECAP Channel Interaction Functions

This is one of several studies to demonstrate significant correlations between measures of spatial resolution and speech perception (e.g. Nelson et al., 1995; Collins et al., 1997; Throckmorton and Collins, 1999; Henry et al., 2000; Boex et al., 2003; Jones et al., 2013); however, it is the first study to show a direct relationship between ECAP channel interaction functions and speech perception. We attribute our significant results to the more extensive measures of peripheral processing (similar to Jones et al., 2013) and to the use of the channel separation index (Hughes, 2008) to quantify the ECAP channel interaction functions.
Additionally, in this study we found that ECAP measures were most strongly correlated with performance changes within an individual, whereas previous studies have focused on exploring effects across CI users (Cohen et al., 2003; Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang et al., 2011; but van der Beek et al., 2012 also used a mixed-model analysis). Here we will discuss some of the limitations of this study specific to the ECAP measures that could be addressed in future research,

specifically research interested in across-subject applications. We will also suggest some potential clinical applications based on the strong correlations observed within individuals.

Potential Limitations

The ECAP channel separation index was not significantly correlated with the differences in speech perception observed across participants when used by itself as a predictor, but our analysis was limited by a small sample size (N = 10 or 11). The ECAP measures were significantly correlated with the different spectral resolution abilities observed across participants, and when the ECAP measures were combined with information about central processing, we also observed a significant correlation with vowel perception. These significant findings suggest that using ECAP channel interaction functions to predict differences in performance across CI users is worth exploring further. The fact that significant results were obtained in this study and not in previous ECAP studies suggests that the channel separation index may be a better way to quantify channel interaction functions than the width or amount of masking (Cohen et al., 2003; Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang et al., 2011; van der Beek et al., 2012). However, there may be ways to improve upon the metric, specifically with respect to predicting perception of complex signals.

Normalization: In this study, we normalized the ECAP amplitude to the largest amplitude observed across probe electrodes within each individual (Hughes, 2008). This normalization was preferred over normalizing each function to its own peak in order to retain relative amplitude differences. However, in one participant (F26L), the normalization was not ideal. For this individual, ECAP amplitudes were greater than 100 µV across the entire set of probe electrodes, which suggests good neural survival across the length of the electrode array, but one apical electrode resulted in a peak amplitude greater than 600 µV.
Normalizing to this large amplitude response resulted in small normalized ECAP amplitudes for the majority of the basal electrodes, even though the

non-normalized amplitudes were large compared to those of most other participants. The channel separation index is influenced by the overall spread of excitation, but in this individual the normalized amplitudes suggested limited neural excitation for most of the electrodes, and many of the indices were small. It is not clear whether the normalization procedure should be adjusted (e.g. perhaps based on a mean value instead of the maximum and allowed to extend beyond 1.0), but for this one individual, our normalization procedure did not seem to reflect the good neural survival that was indirectly suggested by the non-normalized amplitudes and by the good speech perception scores.

Unweighted versus Weighted Average: A possible limitation of this study was our use of an unweighted average channel separation index across adjacent electrodes for comparison with the speech measures. Some frequency regions are more important than others for speech intelligibility, and the relative importance depends upon the speech materials (ANSI, 1997). We did not measure frequency-importance functions for the stimuli used in this study (/h/-vowel-/d/ and BKB-SIN), but previous studies have shown that normal-hearing listeners do not weight all frequency bands equally when identifying vowels (e.g. Kasturi et al., 2002) or keywords within sentences presented in a noise background (SPIN; ANSI, 1997). We did not consider a weighting function a priori and did not apply weights from previous studies post hoc, as a number of approximations would have been required. The vowel weights reported by Kasturi and colleagues (2002) were obtained from normal-hearing individuals listening with a CI simulation, and only minor approximations would have been required to account for the different frequency ranges ( Hz in this study compared to ) and numbers of electrodes (seven-electrode MAPs in this study compared to six-electrode MAPs). A complication arises from the use of the channel separation index in this study.
Channel separation indices are calculated for pairs of electrodes, meaning that each of the six indices calculated for this study was associated with two frequency bands which overlapped. It was not clear how to apply the reported weights to the overlapping frequency bands used for the channel

separation index calculation. Applying the weights reported in ANSI (1997) for the SPIN test would require even more approximations, as they were not obtained under CI simulation, are reported for octave or third-octave bands, and were obtained with a different set of stimuli than used in this study (SPIN compared to BKB-SIN).

Variability: Regardless of whether we had tried to apply weights or not, using the average channel separation index may not be ideal for predicting speech perception. Bierer (2007) measured behavioral thresholds using highly focused, tripolar (TP) stimulation in order to explore the electrode-neuron interface across the electrode array. She demonstrated that threshold differences across electrodes were significantly correlated with speech perception; specifically, individuals with greater threshold variability across the electrode array also tended to have poorer performance. One option we haven't explored with the channel separation index is some metric of variability (range, average difference, standard deviation) across adjacent activated electrodes for each MAP. For studies aiming to compare more than seven ECAP channel interaction measures to speech perception, it may be even more useful to consider a metric that reflects variability across the electrode array.

Internal Excitation Pattern: Any measure based on the channel separation index (average, maximum, standard deviation) may be fundamentally limited in its application to the perception of complex stimuli because the index is calculated for pairs of electrodes. One method used to evaluate the relationship between frequency resolution and speech perception in non-CI users involves calculating an internal excitation pattern, or internal spectrum, for a complex stimulus. For this procedure, auditory filter shapes are determined across a range of frequencies (i.e. across cochlear sites) for an individual using psychophysical forward-masking procedures.
Complex signals are passed through these person-specific filters to calculate an internal spectrum, which is presumed to reflect the pattern of neural stimulation. The underlying assumption is that preservation of spectral differences for different stimuli at the periphery, which are

reflected in the internal spectra, is important for discrimination (Moore and Glasberg, 1987). The technique has been used to relate frequency resolution to the discrimination of vowels (e.g. Turner and Henn, 1989) and to the discrimination of spectrally rippled versus flat-spectrum noise (e.g. Summers and Leek, 1994), providing evidence that differences in internal spectra are related to a person's perceptual discrimination abilities. In theory, the calculation of internal spectra for complex signals could be used to evaluate the relationship between peripheral spatial resolution and the discrimination of complex signals in CI users. The forward-masked ECAP channel interaction functions, considered measures of auditory filters in CI users, are straightforward to elicit. The complication is determining the input signal. For non-CI users, the input is the acoustic signal. For CI users, it is necessary to transform the acoustic (or auxiliary) input into electrode output. Electrode output is person-specific and dependent upon a number of factors, including electrode impedance, T- and C-levels, frequency allocation, and processing strategy. Electrode output cannot be measured directly; estimating it would require a model that includes the person-specific information in addition to processing algorithms specific to the cochlear implant companies. This modeling was beyond the scope of this study, and perhaps unnecessary, since the channel separation index was sufficient to reveal some relationships between peripheral spatial resolution and the perception of complex signals. However, a method that can incorporate information across all channel interaction functions simultaneously would likely be more predictive of the perception of complex signals.
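The internal-spectrum idea described above can be sketched in a few lines. In this illustration (all arrays are invented, and a simple weighted sum stands in for the actual filter-bank computation), one electrode-output pattern is passed through two hypothetical sets of per-electrode spatial filters:

```python
import numpy as np

def internal_spectrum(filters, electrode_output):
    """Pass a per-channel electrode-output pattern through person-specific
    spatial filters to estimate the internal (neural) excitation pattern.

    filters: (n_probes, n_sites) array; row i is a normalized channel
             interaction function for probe electrode i (illustrative).
    electrode_output: (n_probes,) stimulation level per active electrode.
    """
    filters = np.asarray(filters, float)
    out = np.asarray(electrode_output, float)
    # Each probe's excitation spreads along the array according to its
    # interaction function; summing gives the composite internal spectrum.
    return out @ filters

# Illustrative example: two hypothetical listeners, same electrode output.
output = np.array([1.0, 0.2, 1.0])    # a "rippled" 3-channel pattern
sharp = np.eye(3)                     # perfectly selective filters
broad = np.array([[0.6, 0.3, 0.1],
                  [0.3, 0.4, 0.3],
                  [0.1, 0.3, 0.6]])   # overlapping excitation

print(internal_spectrum(sharp, output))   # ripple preserved
print(internal_spectrum(broad, output))   # ripple smeared
```

The broad filters flatten the ripple in the internal spectrum, consistent with the assumption that overlapping excitation predicts poorer discrimination of spectrally distinct stimuli.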
Within-Subject Applications

Although we are limited in the extent to which spatial resolution can be improved within an individual, there are ways in which information about peripheral spatial resolution could be used to adjust device stimulation patterns. One method is to deactivate electrodes that result in non-specific stimulation. Although perception can be

negatively affected by decreasing the number of electrodes (e.g. Friesen et al., 2001), a number of investigators have found improved performance on speech tests when selectively deactivating electrodes (Zwolan et al., 1997; Garadat et al., 2012; Noble et al., 2013; preliminary results presented by Bierer, 2013). In three of these studies, measures suggesting poor peripheral spatial resolution were used as criteria for choosing which electrodes to deactivate. Zwolan and colleagues (1997) deactivated electrodes that were perceptually indiscriminable. Noble and colleagues (2013) deactivated electrodes assumed to result in overlapping neural stimulation based on the location of the electrodes within the cochlear duct, determined by computerized tomography. Bierer (2013) deactivated electrodes with relatively high thresholds for focused TP stimulation, as high thresholds suggest local regions of poor spatial selectivity (Bierer and Faulkner, 2010). There is likely a limit to how much improvement can be obtained with this procedure, perhaps more so for electrode arrays with fewer electrodes. However, based on the success of these studies, it would be worth examining whether deactivating specific electrodes associated with small channel separation indices would also result in improved performance.

Another potential application of ECAP channel interaction functions is to use the information about peripheral spatial resolution to determine whether a specific processing strategy or stimulation mode would be optimal for an individual CI user. Although some of the clinically available strategies are associated with better performance than others on average (ACE compared to CIS, and HiRes with Fidelity 120 compared to HiRes), the best or most preferred strategy is person-specific (e.g. Skinner et al., 2002; Firszt et al., 2009).
Clinicians currently rely on their programming experiences and reports by patients to select the processing strategy / number of maxima, and often the default settings within the programming software are left unchanged. Sometimes speech perception tests are performed to compare across different processing strategies, but CI users often need time to acclimate to new listening conditions (e.g. Tyler et al., 1997), and a trial-and-error

method is not efficient. As new techniques are introduced to improve device transmission of spectral content (see Bonham and Litvak, 2008 for a review of current focusing / steering), determining whether peripheral neural survival is sufficient to transmit the detailed spectral content is needed in order to predict whether or not a strategy will benefit the individual. Spectral resolution measures have been used to validate processing strategies (Berenstein et al., 2008; Drennan et al., 2010); however, our results suggest that clinical decisions about how to change processor settings within a person might be better guided by peripheral measures of spatial resolution. The specific manner in which information about ECAP channel interaction functions can be used to help with these more complex decisions is less straightforward than deactivating electrodes with poor spatial selectivity. But considering that ECAP measures are non-invasive reflections of peripheral neural excitation, they are worth further exploration.

Clinical Feasibility

Using ECAP measures to evaluate peripheral spatial resolution is faster and / or more cost-effective than many of the psychophysical (e.g. electrode discrimination: Zwolan et al., 1997; forward-masked spatial tuning curves: Nelson et al., 2008; channel interaction: Jones et al., 2013) and objective (ABR thresholds using focused stimulation: Bierer et al., 2011; CT imaging: Noble et al., 2013) alternatives. Determining behavioral thresholds for focused TP stimulation across the array (Bierer, 2007) may be faster and cheaper than ECAP methods; however, behavioral methods require cooperation from participants and are not ideal for pediatric and difficult-to-test populations. In this study, we collected thirteen ECAP channel interaction functions in just under 1 hour.
The number of intracochlear electrodes across standard-length implant arrays ranges from 12 to 22 (Hughes, 2012), and it seems reasonable to estimate that channel interaction functions could be obtained for an entire set of intracochlear electrodes within 2 hours. This information is relevant to consider relative to the extensive time requirements

necessary for psychophysical measures (e.g. Jones et al., 2013 reported 20 hours to obtain 46 psychophysical channel interaction measures).

A time-consuming portion of this study was the initial determination of stimulation levels. In this study, we measured T and C levels for 900 pps stimulation on all of the electrodes, and we based the ECAP masker and probe stimulation levels on these measures. Although not tested directly, we assumed that it was important for ECAP stimulation levels to reflect MAP level variations across the electrode array, since we were interested in comparing the electrophysiological responses with perception of stimuli presented through the processor. There are a number of options that could be explored for setting probe / masker stimulus levels without time-consuming measurements on all of the electrodes. One option is to use the same current level across electrodes (e.g. Cohen et al., 2003), although it would still be necessary to confirm that stimulation is not uncomfortable. For adults with clinical MAPs, it may be worth setting levels relative to clinical MAP levels, similar to the procedures used in this study. For children, perhaps a certain level above ECAP thresholds would be sufficient (e.g. Eisen and Franck, 2005). The exact stimulation level will have some effect on overall ECAP amplitude and may also affect the shapes of the channel interaction functions; however, normalization reduces those differences (Hughes and Stille, 2010). Additionally, it is not certain how influential small differences in channel interaction function shapes will be in the calculation of the channel separation index or when compared to other measures.

There may be additional ways to reduce the time needed to obtain extensive channel interaction functions. First, it may be that collecting channel interaction functions for all of the electrodes is unnecessary. Jones et al.
(2013) found that the average of psychophysical channel interaction measures at an electrode separation of three was significantly correlated with speech perception. Perhaps measuring channel interaction functions on half of the electrodes in the array would be sufficient. In addition to reducing the number of probe electrodes, it may be worth exploring a reduced set of masker electrodes. Both

of these methods would make the ECAP measures even faster and, thus, more feasible to recommend for clinical use. Finally, clinical software is already available for performing these ECAP channel interaction measures, and recording / analyzing ECAPs is within the scope of practice of audiologists and does not require much, if any, additional training. These practical issues are not insignificant, especially if ECAP channel interaction functions prove to be as useful as the more time-consuming psychophysical or objective alternatives.

Cortical Auditory Evoked Potentials

Initial evaluations of the ACC in CI users demonstrated the feasibility of recording the response and described the sensitivity of the response to the size / extent of the stimulus change (e.g. Friesen and Tremblay, 2006; Martin, 2007; Brown et al., 2008; Kim et al., 2009). More recent studies have demonstrated that the electrophysiological response is correlated with behavioral measures of discrimination (Hoppe et al., 2010; Won et al., 2011a). Complex phonemic contrasts and speech-like signals have been used to elicit the response (Friesen and Tremblay, 2006; Martin, 2007; Won et al., 2011a), but this is the first study to directly relate the ACC to speech perception abilities.

Spatial ACC

Sensitivity to Stimulus Changes: We used an electrode-discrimination paradigm to elicit the ACC (similar to Brown et al., 2008 and Hoppe et al., 2010). Consistent with the observations of Brown and colleagues, we generally saw increases in ACC amplitude as a function of electrode separation. Additionally, participants in both studies showed variability with regard to amplitude growth rates and nonmonotonic amplitude variations. However, an objective of the present study was not simply to describe differences across individuals, but to evaluate whether the differences observed across CI users were relevant to spectral resolution and speech perception abilities.
Central Processing: Because we had ECAP channel separation indices as our measure of peripheral spatial resolution for each person, we used them in combination

with the spatial ACC responses to quantify differences in central processing across individuals. Based on the ECAP measures for each electrode pair, we modeled the changes in spatial ACC amplitude as a function of the channel separation index. The initial step was meant to separate central effects from peripheral spatial resolution; the following step added the information about peripheral spatial processing (the ECAP channel separation index) and the information about central processing (the model fit using the spatial ACC data) back together. This combined measure, the predicted spatial ACC, was used to evaluate whether information about central processing was beneficial to consider when evaluating performance differences across individuals, or whether information about peripheral spatial resolution would be sufficient. Our results support the former; when comparing across individuals who use cochlear implants, information about central processing appears relevant.

Relationship Between Spatial Resolution and Speech Perception: The importance of central processing may be one reason why studies to date have not been able to demonstrate a relationship between ECAP measures of spatial resolution and speech perception (Cohen et al., 2003; Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang et al., 2011; van der Beek et al., 2012). Even with our extensive measures of ECAP channel interaction functions and our quantification method (the channel separation index), which differed from previous studies, we did not find that the ECAP measures by themselves were significantly related to speech perception abilities when we looked across participants. Adding information about central processing made the relationship between spatial resolution and vowel perception significant. In the previous section we suggested how the ECAP measures could be improved further, but it may be that considering differences in central processing (i.e.
more central than the auditory nerve) is necessary if the goal is to predict perception, especially across individuals with different ages of onset, durations, and etiologies of hearing loss.
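The two-step construction of the predicted spatial ACC described above can be sketched as follows. The straight-line form of the central mapping and all of the numbers are illustrative assumptions, not the fitting procedure actually used in the study:

```python
import numpy as np

def fit_central_mapping(csi, acc_amp):
    """Step 1: per-subject model of spatial ACC amplitude (central response)
    as a function of ECAP channel separation index (peripheral measure).
    A straight line is assumed here purely for illustration."""
    return np.polyfit(csi, acc_amp, 1)  # (slope, intercept)

def predicted_spatial_acc(model, map_csi):
    """Step 2: recombine the central fit with the peripheral CSIs of the
    adjacent electrode pairs in an experimental MAP, then average."""
    slope, intercept = model
    return float(np.mean(slope * np.asarray(map_csi, float) + intercept))

# Hypothetical subject: ACC amplitude grows with channel separation.
csi     = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
acc_amp = np.array([0.1, 0.9, 2.2, 3.0, 4.1])   # microvolts (made up)

model = fit_central_mapping(csi, acc_amp)
# Hypothetical CSIs for the six adjacent pairs of a 7-electrode MAP:
pred = predicted_spatial_acc(model, [0.3, 0.5, 0.4, 0.6, 0.2, 0.5])
```

Two subjects with identical peripheral CSIs but different central fits would yield different predicted spatial ACC values, which is precisely why the combined measure can explain across-subject variance that the ECAP measure alone cannot.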

The MMN is another cortical potential that has been used to examine the relationship between electrode discrimination and speech perception (Wable et al., 2000). Like the ACC, the MMN reflects discrimination abilities and does not require active listening by the participant; however, it is a small, derived response, susceptible to noise, and is most often used to make group comparisons (e.g. Martin et al., 2008). The MMN was not significantly related to any of the four speech perception measures used by Wable et al. (2000); however, their sample size (N = 6) was even smaller than ours. In addition to the issues surrounding the use of the MMN as a measure of an individual's perceptual abilities, these investigators measured cortical responses using only three electrode pairs, spaced 1, 3, and 5 electrodes apart. We performed more extensive measures of the spatial ACC (thirteen electrode pairs spaced 0–9 electrodes apart, including both basal and apical electrodes) to describe differences in central processing across individuals. We also focused our analysis on responses associated with adjacent electrode pairs activated in the experimental MAPs for the comparison with speech perception.

Our finding of a significant relationship between the predicted spatial ACC and vowel perception across participants is consistent with a number of behavioral studies (e.g. Nelson et al., 1995; Collins et al., 1997; Throckmorton and Collins, 1999; Henry et al., 2000; Boex et al., 2003; Jones et al., 2013). Even though these investigators used behavioral techniques and we used electrophysiological techniques for evaluating spatial resolution, there are a number of methodological similarities. First, like the ACC, behavioral measures of spatial resolution are affected by both peripheral and central processing. Additionally, the majority of the studies listed above performed extensive measures of channel interactions.
Our results, along with those listed here, support two of our hypotheses: (1) extensive measures across the electrode array are necessary for relating spatial resolution to the perception of more complex stimuli, and (2) differences in central processing are important to consider. These two factors may

explain why peripheral electrophysiological (Cohen et al., 2003; Hughes and Abbas, 2006a; Hughes and Stille, 2008; Tang et al., 2011; van der Beek et al., 2012) and many behavioral (Hughes and Stille, 2008; Nelson et al., 2011; Anderson et al., 2011) measures of spatial resolution have not been found to be significantly related to speech perception.

Spectral ACC

Significant relationships between behavioral measures of spectral resolution assessed using rippled-noise stimuli and speech perception have been observed across studies using various stimulus paradigms (e.g. ripple density versus depth) and speech perception measures (Henry and Turner, 2003; Henry et al., 2005; Litvak et al., 2007; Won et al., 2007; Berenstein et al., 2008; Saoji et al., 2009; Anderson et al., 2011; Spahr et al., 2011; Won et al., 2011b). Also, Won and colleagues (2011a) demonstrated that the ACC could be evoked within a spectral ripple discrimination paradigm and that the electrophysiological responses were correlated with behavioral measures of ripple discrimination. Although we used a ripple depth detection paradigm to elicit the ACC, based on these studies, we were not surprised to find significant correlations between the spectral ACC and speech perception.

Possible Confounds with Spectral Ripple Stimuli: Although spectrally rippled noise is a popular stimulus for evaluating spectral resolution abilities, there are concerns that listeners may rely on other perceptual abilities to discriminate among the ripple stimuli, namely single-channel loudness cues or pitch percepts associated with either the level presented on the lowest or highest electrode (edge effects) or the spectral centroid (Azadpour and McKay, 2012; further described in Arnoff and Landsberger, 2013).
Even though spectral ripple discrimination ability is correlated with speech perception, there are questions as to whether discrimination is actually a reflection of spatial / spectral resolution or some other perceptual ability. The concerns about possible confounds have been addressed in a number of studies. Anderson and colleagues (2011) created ripple stimuli with and without the

application of a Hanning window to the edge frequencies. Thresholds obtained with the two types of stimuli did not differ, suggesting that edge effects were not a confounding factor, at least for the ripple density paradigm used by these investigators. Won et al. (2011c) examined ripple discrimination ability with and without a level rove. Performance did not differ, suggesting that an intensity cue did not dominate ripple thresholds. This conclusion was further supported by their use of a model to predict performance based on single-channel intensity cues: model performance did not match that of the CI listeners. Won and colleagues (2011c) also demonstrated improved ripple discrimination when the spacing between activated electrodes was increased, which indirectly provides evidence that ripple thresholds reflect spatial resolution. A number of studies have observed significant correlations between measures of spatial resolution and spectral ripple discrimination abilities (Anderson et al., 2011; Jones et al., 2013; this study). Despite the procedural differences between our study and that of Jones and colleagues (e.g., behavioral versus electrophysiological measures, spectral ripple density versus depth paradigm), both show strong correlations between spatial and spectral resolution abilities. The correlation between spatial resolution of adjacent electrodes and spectral ripple discrimination was 0.97 (Jones et al., 2013), compared with our correlation of 0.93 when we used the predicted spatial ACC amplitude. Not only is the relationship between spatial and spectral resolution significant in both studies, but the majority of the variability in spectral resolution observed across individuals is explained by the measures of spatial resolution (r² = 0.94 and 0.87, respectively).
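As a quick numerical check, the coefficients of determination quoted above follow directly from squaring the reported correlations. This minimal sketch uses only the published coefficients, not raw data:

```python
# Proportion of variance explained (r^2) implied by the reported Pearson
# correlations between spatial and spectral resolution measures.
r_jones = 0.97  # adjacent-electrode spatial resolution, Jones et al. (2013)
r_this = 0.93   # predicted spatial ACC amplitude, this study

for label, r in [("Jones et al. (2013)", r_jones), ("this study", r_this)]:
    # 0.97^2 = 0.94; 0.93^2 = 0.86, i.e., ~0.87 once rounding of r is considered
    print(f"{label}: r = {r:.2f}, r^2 = {r * r:.2f}")
```

The small discrepancy for the second value (0.86 versus the reported 0.87) presumably reflects rounding of r itself before publication.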
Although factors other than spatial / spectral resolution may impact performance on spectral ripple discrimination tasks, these results suggest that confounding factors are likely minimal. Thus, the studies using spectral ripple stimuli with either density or depth paradigms are likely appropriately interpreted as demonstrating a relationship between spectral resolution and speech perception abilities in CI users.
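For concreteness, a rippled-noise stimulus of the kind used in these paradigms can be sketched as a bank of random-phase tones whose levels follow a sinusoidal envelope on a log-frequency axis. This is an illustrative sketch, not the stimulus-generation code used in any of the cited studies; all parameter values are assumptions, and setting a nonzero drift rate yields a time-varying (spectral-temporally modulated) ripple:

```python
import numpy as np

def rippled_noise(fs=16000, dur=0.5, f_lo=200.0, n_octaves=5, n_tones=100,
                  density=2.0, depth_db=20.0, phase=0.0, drift_hz=0.0, seed=0):
    """Sum of random-phase tones with a sinusoidal spectral ripple.

    density  -- ripple density in cycles per octave
    depth_db -- peak-to-valley ripple depth in dB
    phase    -- ripple starting phase (phase-inverted pairs differ by pi)
    drift_hz -- 0 for a static ripple; >0 makes the ripple drift over time
    """
    t = np.arange(int(fs * dur)) / fs
    rng = np.random.default_rng(seed)
    freqs = f_lo * 2.0 ** np.linspace(0.0, n_octaves, n_tones)
    sig = np.zeros_like(t)
    for f in freqs:
        octave = np.log2(f / f_lo)
        # Spectral envelope in dB: constant density and depth, optional drift.
        env_db = (depth_db / 2.0) * np.sin(
            2.0 * np.pi * (density * octave + drift_hz * t) + phase)
        sig += 10.0 ** (env_db / 20.0) * np.sin(
            2.0 * np.pi * f * t + rng.uniform(0.0, 2.0 * np.pi))
    return sig / np.max(np.abs(sig))  # normalize to +/-1

standard = rippled_noise(phase=0.0)    # static ripple
inverted = rippled_noise(phase=np.pi)  # phase-inverted counterpart
```

Because the tone phases are seeded identically, the standard and inverted stimuli differ only in their spectral envelopes, which is the cue a discrimination paradigm is intended to probe.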

Recently, Aronoff and Landsberger (2013) presented a modified rippled noise stimulus for spectral resolution measures that avoids the potential confounds altogether. Their spectral-temporally modulated ripple keeps ripple density and depth constant while varying the amplitude of specific frequency components across time. Used within a phase-inversion paradigm, single-channel loudness cues, edge effects, and centroid effects are effectively eliminated. The authors demonstrated that the dynamic stimulus was sensitive to changes in spectral resolution: performance improved as the number of channels used in a CI simulation was increased. This stimulus has yet to be compared with speech perception.

Because the spectral-temporally modulated ripple stimulus is time-varying, it would not work within an ACC stimulus paradigm. However, time-varying signals can be used within the oddball paradigm. McLaughlin and colleagues (2013b) used both oddball and ACC paradigms to explore spectral resolution in CI users. A relatively fast stimulus repetition rate of one per second was used for the oddball paradigm. The amplitude of the standard response was small, possibly due to adaptation, but the response to the deviant was even larger than the ACC response evoked with identical stimuli. Because the stimulus change is presented every time with the ACC paradigm, a slow rate is necessary to avoid adaptation effects. However, a deviant stimulus is by definition presented only a small fraction of the time, and it may be possible to increase the rate even further without affecting the size of the response to the deviant. A fast oddball paradigm deserves further exploration.

Clinical Feasibility

Although this study provides evidence that ACC responses evoked with both simple and complex stimuli may be useful for predicting perceptual abilities across individuals with CIs, there are some practical considerations.
First, cortical measures are more time-consuming to perform than ECAP measures. There may be ways to shorten test time relative to what was required for this study, and these would be worth exploring. For

example, for the spatial ACC paradigm, it is unclear whether it was necessary to record responses for thirteen electrode pairs (the protocol for this study) or whether a subset would have been sufficient. Shortening the test time by recording responses from fewer electrode pairs may have been more beneficial than the extensive cortical measures, as noise levels tended to increase across the lengthy sessions and participants had difficulty staying awake. Alternatively, perhaps a completely different measure of central processing could be added to the ECAP measures of spatial resolution to improve predictions of performance across individuals. For the spectral ACC paradigm, although behavioral studies rely on threshold measures, we explored the use of the response amplitude at a single ripple depth in addition to finding an electrophysiological ripple depth threshold. Our results demonstrated that the single response was predictive across individuals in this study, suggesting that the more time-efficient method was adequate.

Another disadvantage of recording cortical potentials is that they are typically recorded using far-field electrodes placed on the scalp. This takes time and adds material expenses (electrodes, conductive paste, cleaner, etc.). McLaughlin and colleagues (2013a) used the CI extracochlear electrodes to obtain cortical recordings. Although it is not currently possible to obtain long-latency responses with clinical software in a time-efficient manner, the preliminary results are promising. Eliminating the need for scalp electrodes would be especially useful for obtaining measures in children; however, cortical measures, even those that can be obtained within a passive listening paradigm, require some cooperation. Listeners must sit relatively still while remaining alert.
This combination is difficult to prompt in young children; however, the ACC has been successfully recorded in 4-month-old infants (Small and Werker, 2012). Lastly, although we focused on the benefits of using cortical measures to compare performance across individuals, there is some indication that these measures are beneficial for evaluating changes in auditory processing within an individual. Cortical measures reflect maturational changes and development of the auditory system (e.g.,

Ponton et al., 1996; 2000; Wunderlich and Cone-Wesson, 2006) and can reflect perceptual changes within a person due to listening experience (e.g., Sharma et al., 2002) or training (e.g., Menning et al., 2000). Although these additional factors may make the response more difficult to interpret, they also indicate the potential for numerous applications.

Temporal Processing

This study focused on evaluating electrophysiological measures of spatial / spectral resolution; however, temporal resolution is also important for speech perception. Won et al. (2011b) demonstrated that behavioral measures of spectral (ripple discrimination thresholds) and temporal (modulation detection thresholds) processing were independent, and that when combined, the two predictors accounted for 13-25% more of the variability than either measure alone. Thus, it appears useful to further explore time-efficient methods of evaluating both spectral and temporal processing abilities in CI users, especially if the goal is to predict speech perception. Like spatial resolution, temporal resolution can be evaluated objectively at peripheral (e.g., Wilson et al., 1997) and central (e.g., Lister et al., 2007; 2011) levels. In addition to predicting performance, it would be useful to compare across-site patterns of temporal (Garadat et al., 2012) and spatial (Zwolan, 1997; Bierer, 2007) resolution. It would be interesting to determine whether an electrode associated with poor temporal processing also exhibits poor spatial resolution, or whether the two measures are independent even at the level of a specific electrode. As mentioned previously, there is evidence that deactivating electrodes associated with poor spatial selectivity (Zwolan, 1997; Noble et al., 2013; Bierer, 2013) or poor temporal processing (Garadat et al., 2012) improves performance. A combination of temporal / spatial measures across the electrode array may improve the selection of electrodes to deactivate.
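The additive benefit of combining independent predictors, as Won et al. (2011b) reported, can be illustrated with a small simulation. The data below are synthetic and the variable names and coefficients are illustrative assumptions, not Won et al.'s measurements; the point is only that two uncorrelated predictors each contribute separate variance, so the combined R² exceeds either alone:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
spectral = rng.normal(size=n)  # e.g., ripple discrimination score (synthetic)
temporal = rng.normal(size=n)  # e.g., modulation detection score (synthetic)
speech = 0.7 * spectral + 0.5 * temporal + rng.normal(scale=0.6, size=n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_spectral = r_squared([spectral], speech)
r2_temporal = r_squared([temporal], speech)
r2_combined = r_squared([spectral, temporal], speech)
print(f"spectral alone: {r2_spectral:.2f}")
print(f"temporal alone: {r2_temporal:.2f}")
print(f"combined:       {r2_combined:.2f}")
```

Because the two predictors are generated independently, the combined model recovers roughly the sum of the individual contributions, which is the pattern Won and colleagues observed behaviorally.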

Conclusions

We were able to provide evidence that has been elusive in so many other studies: ECAP channel interaction functions do provide relevant information about perceptual abilities, including the perception of speech. Thus, ECAP channel interaction functions should not be disregarded. Instead, we recommend that effort be placed on determining what the measures can tell us about electrode location within the cochlea and neural survival / functioning across the electrode array, what quantity can best capture the neural excitation patterns reflected in the functions, and how the information can be used to make clinical programming decisions. The general results from this study demonstrate that variability in performance observed across individuals with cochlear implants reflects differences in both peripheral and central processing. The central response evoked with the most complex, speech-like stimulus (the spectral ACC) was the most predictive of speech perception abilities across participants. However, although the spectral ACC was most strongly correlated with speech perception, and although the single-point measure is more time-efficient than the numerous electrode pairs used for the spatial ACC paradigm, and perhaps even faster than performing ECAP channel interaction functions on all available electrodes, we do not conclude that the spectral ACC is optimal for all situations. For example, although the ultimate goal may be to improve performance on complex listening tasks, the more specific measures of spatial resolution may be more useful for guiding programming decisions. Specifically, the ECAP channel separation indices were most predictive of performance changes within an individual. The best outcome measure, or combination of measures, depends upon the specific application.
In summary, our results indicate that electrophysiological measures of spatial and spectral resolution provide valuable information about underlying neural processing and resulting perception, and the specific clinical application of these measures deserves further attention.

Figure 22. Schematic of Across-Subject Regression Analysis. The outcome measures of this study are displayed in order of stimulus complexity (left-to-right) and in order of dependence upon processing at different stages along the auditory pathway, from the periphery (bottom) to more central structures (top). Lines with arrows connect single predictors with the dependent variable; the coefficient of determination is displayed for each comparison. Double lines represent our combined peripheral / central measure of spatial resolution: the predicted spatial ACC. Only statistically significant results are displayed.

1- Cochlear Impedance Telemetry

1- Cochlear Impedance Telemetry INTRA-OPERATIVE COCHLEAR IMPLANT MEASURMENTS SAMIR ASAL M.D 1- Cochlear Impedance Telemetry 1 Cochlear implants used presently permit bi--directional communication between the inner and outer parts of

More information

The development of a modified spectral ripple test

The development of a modified spectral ripple test The development of a modified spectral ripple test Justin M. Aronoff a) and David M. Landsberger Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California

More information

This dissertation is available at Iowa Research Online:

This dissertation is available at Iowa Research Online: University of Iowa Iowa Research Online Theses and Dissertations Spring 2016 The effect that design of the Nucleus Intracochlear Electrode Array and age of onset of hearing loss have on electrically evoked

More information

Implant Subjects. Jill M. Desmond. Department of Electrical and Computer Engineering Duke University. Approved: Leslie M. Collins, Supervisor

Implant Subjects. Jill M. Desmond. Department of Electrical and Computer Engineering Duke University. Approved: Leslie M. Collins, Supervisor Using Forward Masking Patterns to Predict Imperceptible Information in Speech for Cochlear Implant Subjects by Jill M. Desmond Department of Electrical and Computer Engineering Duke University Date: Approved:

More information

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for

What you re in for. Who are cochlear implants for? The bottom line. Speech processing schemes for What you re in for Speech processing schemes for cochlear implants Stuart Rosen Professor of Speech and Hearing Science Speech, Hearing and Phonetic Sciences Division of Psychology & Language Sciences

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information

Who are cochlear implants for?

Who are cochlear implants for? Who are cochlear implants for? People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work best in adults who

More information

Hearing the Universal Language: Music and Cochlear Implants

Hearing the Universal Language: Music and Cochlear Implants Hearing the Universal Language: Music and Cochlear Implants Professor Hugh McDermott Deputy Director (Research) The Bionics Institute of Australia, Professorial Fellow The University of Melbourne Overview?

More information

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair

Essential feature. Who are cochlear implants for? People with little or no hearing. substitute for faulty or missing inner hair Who are cochlear implants for? Essential feature People with little or no hearing and little conductive component to the loss who receive little or no benefit from a hearing aid. Implants seem to work

More information

Prelude Envelope and temporal fine. What's all the fuss? Modulating a wave. Decomposing waveforms. The psychophysics of cochlear

Prelude Envelope and temporal fine. What's all the fuss? Modulating a wave. Decomposing waveforms. The psychophysics of cochlear The psychophysics of cochlear implants Stuart Rosen Professor of Speech and Hearing Science Speech, Hearing and Phonetic Sciences Division of Psychology & Language Sciences Prelude Envelope and temporal

More information

The role of periodicity in the perception of masked speech with simulated and real cochlear implants

The role of periodicity in the perception of masked speech with simulated and real cochlear implants The role of periodicity in the perception of masked speech with simulated and real cochlear implants Kurt Steinmetzger and Stuart Rosen UCL Speech, Hearing and Phonetic Sciences Heidelberg, 09. November

More information

Copyright Kathleen Ferrigan Faulkner

Copyright Kathleen Ferrigan Faulkner Copyright 212 Kathleen Ferrigan Faulkner Understanding Frequency Encoding and Perception in Adult Users of Cochlear Implants Kathleen Ferrigan Faulkner A dissertation submitted in partial fulfillment of

More information

An Auditory-Model-Based Electrical Stimulation Strategy Incorporating Tonal Information for Cochlear Implant

An Auditory-Model-Based Electrical Stimulation Strategy Incorporating Tonal Information for Cochlear Implant Annual Progress Report An Auditory-Model-Based Electrical Stimulation Strategy Incorporating Tonal Information for Cochlear Implant Joint Research Centre for Biomedical Engineering Mar.7, 26 Types of Hearing

More information

Effects of Remaining Hair Cells on Cochlear Implant Function

Effects of Remaining Hair Cells on Cochlear Implant Function Effects of Remaining Hair Cells on Cochlear Implant Function 16th Quarterly Progress Report Neural Prosthesis Program Contract N01-DC-2-1005 (Quarter spanning January-March, 2006) P.J. Abbas, C.A. Miller,

More information

Static and Dynamic Spectral Acuity in Cochlear Implant Listeners for Simple and Speech-like Stimuli

Static and Dynamic Spectral Acuity in Cochlear Implant Listeners for Simple and Speech-like Stimuli University of South Florida Scholar Commons Graduate Theses and Dissertations Graduate School 6-30-2016 Static and Dynamic Spectral Acuity in Cochlear Implant Listeners for Simple and Speech-like Stimuli

More information

Acoustics, signals & systems for audiology. Psychoacoustics of hearing impairment

Acoustics, signals & systems for audiology. Psychoacoustics of hearing impairment Acoustics, signals & systems for audiology Psychoacoustics of hearing impairment Three main types of hearing impairment Conductive Sound is not properly transmitted from the outer to the inner ear Sensorineural

More information

AUDL GS08/GAV1 Signals, systems, acoustics and the ear. Pitch & Binaural listening

AUDL GS08/GAV1 Signals, systems, acoustics and the ear. Pitch & Binaural listening AUDL GS08/GAV1 Signals, systems, acoustics and the ear Pitch & Binaural listening Review 25 20 15 10 5 0-5 100 1000 10000 25 20 15 10 5 0-5 100 1000 10000 Part I: Auditory frequency selectivity Tuning

More information

Exploring the Source of Neural Responses of Different Latencies Obtained from Different Recording Electrodes in Cochlear Implant Users

Exploring the Source of Neural Responses of Different Latencies Obtained from Different Recording Electrodes in Cochlear Implant Users Audiology Neurotology Original Paper Received: November 15, 2015 Accepted after revision: February 17, 2016 Published online: April 16, 2016 Exploring the Source of Neural Responses of Different Latencies

More information

Neurophysiological effects of simulated auditory prosthesis stimulation

Neurophysiological effects of simulated auditory prosthesis stimulation Neurophysiological effects of simulated auditory prosthesis stimulation 2 th Quarterly Progress Report Neural Prosthesis Program Contract N0-DC-9-207 (no-cost extension period) April 2003 C.A. Miller,

More information

Study Sample: Twelve postlingually deafened adults participated in this study. All were experienced users of the Advanced Bionics CI system.

Study Sample: Twelve postlingually deafened adults participated in this study. All were experienced users of the Advanced Bionics CI system. J Am Acad Audiol 21:16 27 (2010) Comparison of Electrically Evoked Compound Action Potential Thresholds and Loudness Estimates for the Stimuli Used to Program the Advanced Bionics Cochlear Implant DOI:

More information

Implementation of Spectral Maxima Sound processing for cochlear. implants by using Bark scale Frequency band partition

Implementation of Spectral Maxima Sound processing for cochlear. implants by using Bark scale Frequency band partition Implementation of Spectral Maxima Sound processing for cochlear implants by using Bark scale Frequency band partition Han xianhua 1 Nie Kaibao 1 1 Department of Information Science and Engineering, Shandong

More information

HCS 7367 Speech Perception

HCS 7367 Speech Perception Long-term spectrum of speech HCS 7367 Speech Perception Connected speech Absolute threshold Males Dr. Peter Assmann Fall 212 Females Long-term spectrum of speech Vowels Males Females 2) Absolute threshold

More information

Editorial Hearing Aids and the Brain

Editorial Hearing Aids and the Brain International Otolaryngology, Article ID 518967, 5 pages http://dx.doi.org/10.1155/2014/518967 Editorial Hearing Aids and the Brain K. L. Tremblay, 1 S. Scollie, 2 H. B. Abrams, 3 J. R. Sullivan, 1 and

More information

Binaural unmasking with multiple adjacent masking electrodes in bilateral cochlear implant users

Binaural unmasking with multiple adjacent masking electrodes in bilateral cochlear implant users Binaural unmasking with multiple adjacent masking electrodes in bilateral cochlear implant users Thomas Lu a) Department of Otolaryngology Head and Neck Surgery, University of California, Irvine, California

More information

Role of F0 differences in source segregation

Role of F0 differences in source segregation Role of F0 differences in source segregation Andrew J. Oxenham Research Laboratory of Electronics, MIT and Harvard-MIT Speech and Hearing Bioscience and Technology Program Rationale Many aspects of segregation

More information

Psychophysically based site selection coupled with dichotic stimulation improves speech recognition in noise with bilateral cochlear implants

Psychophysically based site selection coupled with dichotic stimulation improves speech recognition in noise with bilateral cochlear implants Psychophysically based site selection coupled with dichotic stimulation improves speech recognition in noise with bilateral cochlear implants Ning Zhou a) and Bryan E. Pfingst Kresge Hearing Research Institute,

More information

Age-related changes in temporal resolution revisited: findings from cochlear implant users

Age-related changes in temporal resolution revisited: findings from cochlear implant users University of Iowa Iowa Research Online Theses and Dissertations Spring 2016 Age-related changes in temporal resolution revisited: findings from cochlear implant users Bruna Silveira Sobiesiak Mussoi University

More information

A Psychophysics experimental software to evaluate electrical pitch discrimination in Nucleus cochlear implanted patients

A Psychophysics experimental software to evaluate electrical pitch discrimination in Nucleus cochlear implanted patients A Psychophysics experimental software to evaluate electrical pitch discrimination in Nucleus cochlear implanted patients M T Pérez Zaballos 1, A Ramos de Miguel 2, M Killian 3 and A Ramos Macías 1 1 Departamento

More information

PLEASE SCROLL DOWN FOR ARTICLE

PLEASE SCROLL DOWN FOR ARTICLE This article was downloaded by:[michigan State University Libraries] On: 9 October 2007 Access Details: [subscription number 768501380] Publisher: Informa Healthcare Informa Ltd Registered in England and

More information

Effects of Remaining Hair Cells on Cochlear Implant Function

Effects of Remaining Hair Cells on Cochlear Implant Function Effects of Remaining Hair Cells on Cochlear Implant Function 2 nd Quarterly Progress Report Neural Prosthesis Program Contract N1-DC-2-15 (Quarter spanning Oct-Dec, 22) C.A. Miller, P.J. Abbas, N. Hu,

More information

2/25/2013. Context Effect on Suprasegmental Cues. Supresegmental Cues. Pitch Contour Identification (PCI) Context Effect with Cochlear Implants

2/25/2013. Context Effect on Suprasegmental Cues. Supresegmental Cues. Pitch Contour Identification (PCI) Context Effect with Cochlear Implants Context Effect on Segmental and Supresegmental Cues Preceding context has been found to affect phoneme recognition Stop consonant recognition (Mann, 1980) A continuum from /da/ to /ga/ was preceded by

More information

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Hearing-aids Induce Plasticity in the Auditory System: Perspectives From Three Research Designs and Personal Speculations About the

More information

Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners

Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners Belinda A. Henry a Department of Communicative Disorders, University of Wisconsin

More information

C HAPTER F OUR. Auditory Development Promoted by Unilateral and Bilateral Cochlear Implant Use. Karen Gordon. Introduction

C HAPTER F OUR. Auditory Development Promoted by Unilateral and Bilateral Cochlear Implant Use. Karen Gordon. Introduction C HAPTER F OUR Auditory Development Promoted by Unilateral and Bilateral Cochlear Implant Use Karen Gordon Introduction Auditory development after cochlear implantation in children with early onset deafness

More information

Eighth Quarterly Progress Report N01-DC The Neurophysiological Effects of Simulated Auditory Prosthesis Stimulation

Eighth Quarterly Progress Report N01-DC The Neurophysiological Effects of Simulated Auditory Prosthesis Stimulation Eighth Quarterly Progress Report N01-DC-9-2107 The Neurophysiological Effects of Simulated Auditory Prosthesis Stimulation P.J. Abbas, C.A. Miller, J.T. Rubinstein, B.K. Robinson, Ning Hu Department of

More information

Place specificity of monopolar and tripolar stimuli in cochlear implants: The influence of residual masking a)

Place specificity of monopolar and tripolar stimuli in cochlear implants: The influence of residual masking a) Place specificity of monopolar and tripolar stimuli in cochlear implants: The influence of residual masking a) Claire A. Fielden, b) Karolina Kluk, and Colette M. McKay School of Psychological Sciences,

More information

The effect of development on cortical auditory evoked potentials in normal hearing listeners and cochlear implant users

The effect of development on cortical auditory evoked potentials in normal hearing listeners and cochlear implant users University of Iowa Iowa Research Online Theses and Dissertations Spring 2016 The effect of development on cortical auditory evoked potentials in normal hearing listeners and cochlear implant users Eun

More information

Binaural Hearing. Why two ears? Definitions

Binaural Hearing. Why two ears? Definitions Binaural Hearing Why two ears? Locating sounds in space: acuity is poorer than in vision by up to two orders of magnitude, but extends in all directions. Role in alerting and orienting? Separating sound

More information

JARO. Research Article. Temporal Processing in the Auditory System. Insights from Cochlear and Auditory Midbrain Implantees

JARO. Research Article. Temporal Processing in the Auditory System. Insights from Cochlear and Auditory Midbrain Implantees JARO 14: 103 124 (2013) DOI: 10.1007/s10162-012-0354-z D 2012 The Author(s). This article is published with open access at Springerlink.com Research Article JARO Journal of the Association for Research

More information

Hearing Lectures. Acoustics of Speech and Hearing. Auditory Lighthouse. Facts about Timbre. Analysis of Complex Sounds

Hearing Lectures. Acoustics of Speech and Hearing. Auditory Lighthouse. Facts about Timbre. Analysis of Complex Sounds Hearing Lectures Acoustics of Speech and Hearing Week 2-10 Hearing 3: Auditory Filtering 1. Loudness of sinusoids mainly (see Web tutorial for more) 2. Pitch of sinusoids mainly (see Web tutorial for more)

More information

Rachel A. Scheperle University of Iowa Wendell Johnson Speech & Hearing Center 250 Hawkins Drive Iowa City, IA

Rachel A. Scheperle University of Iowa Wendell Johnson Speech & Hearing Center 250 Hawkins Drive Iowa City, IA Rachel A. Scheperle University of Iowa Wendell Johnson Speech & Hearing Center 250 Hawkins Drive Iowa City, IA 52242 rachel-scheperle@uiowa.edu Educational Background 2015- Post-doctoral Fellow University

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Auditory nerve. Amanda M. Lauer, Ph.D. Dept. of Otolaryngology-HNS

Auditory nerve. Amanda M. Lauer, Ph.D. Dept. of Otolaryngology-HNS Auditory nerve Amanda M. Lauer, Ph.D. Dept. of Otolaryngology-HNS May 30, 2016 Overview Pathways (structural organization) Responses Damage Basic structure of the auditory nerve Auditory nerve in the cochlea

More information

9/29/14. Amanda M. Lauer, Dept. of Otolaryngology- HNS. From Signal Detection Theory and Psychophysics, Green & Swets (1966)

9/29/14. Amanda M. Lauer, Dept. of Otolaryngology- HNS. From Signal Detection Theory and Psychophysics, Green & Swets (1966) Amanda M. Lauer, Dept. of Otolaryngology- HNS From Signal Detection Theory and Psychophysics, Green & Swets (1966) SIGNAL D sensitivity index d =Z hit - Z fa Present Absent RESPONSE Yes HIT FALSE ALARM

More information

Hearing Research 241 (2008) Contents lists available at ScienceDirect. Hearing Research. journal homepage:

Hearing Research 241 (2008) Contents lists available at ScienceDirect. Hearing Research. journal homepage: Hearing Research 241 (2008) 73 79 Contents lists available at ScienceDirect Hearing Research journal homepage: www.elsevier.com/locate/heares Simulating the effect of spread of excitation in cochlear implants

More information

EXECUTIVE SUMMARY Academic in Confidence data removed

EXECUTIVE SUMMARY Academic in Confidence data removed EXECUTIVE SUMMARY Academic in Confidence data removed Cochlear Europe Limited supports this appraisal into the provision of cochlear implants (CIs) in England and Wales. Inequity of access to CIs is a

More information

J Jeffress model, 3, 66ff

J Jeffress model, 3, 66ff Index A Absolute pitch, 102 Afferent projections, inferior colliculus, 131 132 Amplitude modulation, coincidence detector, 152ff inferior colliculus, 152ff inhibition models, 156ff models, 152ff Anatomy,

More information

DO NOT DUPLICATE. Copyrighted Material

DO NOT DUPLICATE. Copyrighted Material Annals of Otology, Rhinology & Laryngology 115(6):425-432. 2006 Annals Publishing Company. All rights reserved. Effects of Converting Bilateral Cochlear Implant Subjects to a Strategy With Increased Rate

More information

A TEMPORAL MODEL OF FREQUENCY DISCRIMINATION IN ELECTRIC HEARING

A TEMPORAL MODEL OF FREQUENCY DISCRIMINATION IN ELECTRIC HEARING Chapter 7 A TEMPORAL MODEL OF FREQUENCY DISCRIMINATION IN ELECTRIC HEARING The results in this chapter have previously been published: Hanekom, 1.1. 2000, "What do cochlear implants teach us about the

More information

Sound localization psychophysics

Sound localization psychophysics Sound localization psychophysics Eric Young A good reference: B.C.J. Moore An Introduction to the Psychology of Hearing Chapter 7, Space Perception. Elsevier, Amsterdam, pp. 233-267 (2004). Sound localization:

More information

Comment by Delgutte and Anna A. Dreyer (Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA): Is phase locking to transposed stimuli as good as phase locking to low-frequency

SOLUTIONS Homework #3: Introduction to Engineering in Medicine and Biology, ECEN 1001, due Tues. 9/30/03. Problem 1: a) Where in the cochlea would you say the process of "Fourier decomposition" of the incoming

A neural network model for optimizing vowel recognition by cochlear implant listeners. Chung-Hwa Chang, Gary T. Anderson, Member IEEE, and Philipos C. Loizou, Member IEEE. Abstract: Due to the variability

The impact of frequency compression on cortical evoked potentials and perception. University of Iowa, Iowa Research Online, Theses and Dissertations, Spring 2014. Benjamin James Kirby, University of Iowa. Copyright

Sonic Spotlight: SmartCompress. Advancing compression technology into the future. Speech Variable Processing (SVP) is the unique digital signal processing strategy that gives Sonic hearing aids their signature

Rethinking Cochlear Implant Mapping for Bilateral Users. Matthew Goupell, University of Maryland College Park; Karen Gordon, The Hospital for Sick Children, University of Toronto. April 25, 2013. Matt Goupell

Binaural Hearing. Steve Colburn, Boston University. Outline: Why do we (and many other animals) have two ears? What are the major advantages? What is the observed behavior? How do we accomplish this physiologically?

BORDERLINE PATIENTS AND THE BRIDGE BETWEEN HEARING AIDS AND COCHLEAR IMPLANTS. Richard C. Dowell, Graeme Clark Chair in Audiology and Speech Science, The University of Melbourne, Australia. Hearing Aid Developers

Long-Term Performance for Children with Cochlear Implants. The University of Iowa. Elizabeth Walker, M.A., Camille Dunn, Ph.D., Bruce Gantz, M.D., Virginia Driscoll, M.A., Christine Etler, M.A., Maura Kenworthy,

Processing Interaural Cues in Sound Segregation by Young and Middle-Aged Brains. J Am Acad Audiol 20:453-458 (2009). DOI: 10.3766/jaaa.20.7.6. Ilse J.A. Wambacq, Janet Koehnke, Joan Besing, Laurie L. Romei

Quick Guide: eABR with Eclipse. What is eABR? An electrical Auditory Brainstem Response (eABR) is a measurement of the ABR using an electrical stimulus. Instead of a traditional acoustic stimulus the cochlear

Speech, Language, and Hearing Sciences: Discovery with delivery as WE BUILD OUR FUTURE. It began with Dr. Mack Steer. SLHS celebrates 75 years at Purdue since its beginning in the basement of University

Cochlear Implants. Audiological Rehabilitation, SPA 4321. What is a Cochlear Implant (CI)? A device that turns signals into signals, which directly stimulate the auditory. Basic Workings of the Cochlear

Neurophysiology of Cochlear Implant Users I: Effects of Stimulus Current Level and Electrode Site on the Electrical ABR, MLR, and N1-P2 Response. Jill B. Firszt, Ron D. Chambers, Nina Kraus, and Ruth M.

Effects of Remaining Hair Cells on Cochlear Implant Function. N1-DC-2-15QPR1, Neural Prosthesis Program. N. Hu, P.J. Abbas, C.A. Miller, B.K. Robinson, K.V. Nourski, F. Jeng, B.A. Abkes, J.M. Nichols. Department

Systems Neuroscience, Oct. 16, 2018: Auditory system. http://www.ini.unizh.ch/~kiper/system_neurosci.html. The physics of sound; measuring sound intensity. We are sensitive to an enormous range of intensities,
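The enormous range of intensities noted in that excerpt is the reason sound level is usually expressed on a logarithmic decibel scale rather than in raw pressure units. A minimal sketch of the conversion, assuming the standard 20 µPa reference pressure for dB SPL (the function name is illustrative, not from the cited lecture):

```python
import math

P_REF = 20e-6  # standard reference pressure for dB SPL: 20 micropascals

def spl_db(pressure_pa: float) -> float:
    """Convert a sound pressure in pascals to sound pressure level in dB SPL."""
    return 20.0 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # the reference pressure itself -> 0.0 dB SPL
print(spl_db(1.0))    # 1 Pa -> about 94 dB SPL
```

Each factor of 10 in pressure adds 20 dB, which is how a millionfold pressure range fits into a 0 to 120 dB scale.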

The REAL Story on Spectral Resolution: How Does Spectral Resolution Impact Everyday Hearing? Harmony HiResolution Bionic Ear System by Advanced Bionics: what it means and why it matters. Choosing a cochlear

Simulation of an electro-acoustic implant (EAS) with a hybrid vocoder. Fabien Seldran, Eric Truy, Stéphane Gallégo, Christian Berger-Vachon, Lionel Collet and Hung Thai-Van. Univ. Lyon 1 -

Effects of electrode design and configuration on channel interactions. Hearing Research 211 (2006) 33-45. Ginger S. Stickney, Philipos C. Loizou, Lakshmi N. Mishra, Peter F. Assmann

Spectrograms (revisited). We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a

Representation of sound in the auditory nerve. Eric D. Young, Department of Biomedical Engineering, Johns Hopkins University. Young, ED. Neural representation of spectral and temporal information in speech.

Abnormal Binaural Spectral Integration in Cochlear Implant Users. Research Article, JARO 15: 235-248 (2014). DOI: 10.1007/s10162-013-0434-8. Association for Research in Otolaryngology.

24.963 Linguistic Phonetics: Basic Audition. (Diagram of the inner ear removed due to copyright restrictions.) Reading: Keating 1985; also read Flemming 2001. Assignment 1, basic acoustics, due 9/22.

The Relationship Between Intensity Coding and Binaural Sensitivity in Adults With Cochlear Implants. Ann E. Todd, Matthew J. Goupell, and Ruth Y. Litovsky. Waisman Center, University of Wisconsin-Madison. (HHS Public Access author manuscript; Ear Hear., available in PMC 2018 March 01.)

Spectral-Ripple Resolution Correlates with Speech Reception in Noise in Cochlear Implant Users. JARO 8: 384-392 (2007). DOI: 10.1007/s10162-007-0085-8. Journal of the Association for Research in Otolaryngology.

Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening. Matthew J. Goupell, Corey Stoelb, Alan Kan, and Ruth Y. Litovsky

Auditory System & Hearing. Chapters 9 and 10, Lecture 17. Jonathan Pillow, Sensation & Perception (PSY 345 / NEU 325), Spring 2015. Cochlea: physical device tuned to frequency; place code: tuning of different

Over-representation of speech in older adults originates from early response in higher order auditory cortex. Christian Brodbeck, Alessandro Presacco, Samira Anderson & Jonathan Z. Simon. Overview: Puzzle

REVISED: The effect of reduced dynamic range on speech understanding: Implications for patients with cochlear implants. Philipos C. Loizou, Department of Electrical Engineering, University of Texas at Dallas

Lecture 2: Properties of Waves. Frequency and period are distinctly different, yet related, quantities. Frequency refers to how often something happens. Period refers to the time it takes something to happen.
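The reciprocal relationship between those two quantities (period T = 1/f) can be sketched in a couple of lines; the function name here is illustrative, not taken from the cited lecture:

```python
def period_s(frequency_hz: float) -> float:
    """Return the period in seconds of a periodic event with the given frequency in hertz."""
    return 1.0 / frequency_hz

print(period_s(440.0))  # a 440 Hz tone repeats roughly every 0.00227 s (2.27 ms)
print(period_s(2.0))    # something happening twice per second takes 0.5 s per cycle
```

Doubling the frequency halves the period, and vice versa, which is why the two quantities are "distinctly different, yet related."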

The problem of temporal coding in cochlear implants. Ian C. Bruce, McMaster University, Hamilton, Ontario. Outline: Why temporal coding for CIs is problematic; analysis of data from Wise et al. (CIAP 2009) and

Electric and Acoustic Stimulation in the Same Ear. Waldo Nogueira, Benjamin Krüger, Marina Imsiecke, Andreas Büchner. Medizinische Hochschule Hannover, Cluster of Excellence Hearing4all,

SP H 588C Electrophysiology of Perception and Cognition, Spring 2018. Barbara Cone, Ph.D., Professor, 518 SLHS. Tel: 626-3710; e-mail: conewess@email.arizona.edu. Class meeting for lecture and labs: Monday

24.963 Linguistic Phonetics, Fall 2005. MIT OpenCourseWare, http://ocw.mit.edu. For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

Critical Review: Do we see auditory system acclimatization with hearing instrument use, using electrophysiological measures? Alasdair Cumming, M.Cl.Sc (AUD) Candidate, University of Western Ontario, School

Multistage nonlinear optimization to recover neural activation patterns from evoked compound action potentials of cochlear implant users

Characterization of Temporal Interactions in the Auditory Nerve of Adult and Pediatric Cochlear Implant Users. University of Iowa, Iowa Research Online, Theses and Dissertations, Summer 2013. Aayesha Narayan

Best Practice Protocols: SoundRecover for children. What is SoundRecover? SoundRecover (non-linear frequency compression) seeks to give greater audibility of high-frequency everyday sounds by compressing

Third Quarterly Progress Report NO1-DC-6-2111: The Neurophysiological Effects of Simulated Auditory Prosthesis Stimulation. C.A. Miller, P.J. Abbas, J.T. Rubinstein, and A.J. Matsuoka. Department of Otolaryngology

Chapter 9: The consequences of neural degeneration regarding optimal cochlear implant position in scala tympani: a model approach. Jeroen J. Briaire and Johan H.M. Frijns. Hearing Research (2006), 214(1-2),

An Update on Auditory Neuropathy Spectrum Disorder in Children. Gary Rance, PhD, The University of Melbourne. Sound Foundations Through Early Amplification Meeting, Chicago, Dec 2013. Overview: Auditory neuropathy

Vocal Emotion Recognition by Normal-Hearing Listeners and Cochlear Implant Users. Cochlear Implants Special Issue Article, Trends in Amplification, Volume 11, Number 4, December 2007, 301-315. 2007 Sage Publications. Speech conveys not only linguistic content but

Chapter 40: Effects of Peripheral Tuning on the Auditory Nerve's Representation of Speech Envelope and Temporal Fine Structure Cues. Rasha A. Ibrahim and Ian C. Bruce. Abstract: A number of studies have explored

Lauer et al. 2012: Olivocochlear efferents. Amanda M. Lauer, Ph.D., Dept. of Otolaryngology-HNS. May 30, 2016. Overview: structural organization; responses; hypothesized roles in hearing. Olivocochlear efferent

A Brain Computer Interface System For Auto Piloting Wheelchair. Reshmi G, N. Kumaravel & M. Sasikala. Centre for Medical Electronics, Dept. of Electronics and Communication Engineering, College of Engineering,

Exploring the parameter space of Cochlear Implant Processors for consonant and vowel recognition rates using normal hearing listeners. D. Sen, W. Li, D. Chung & P. Lam, School of Electrical Engineering

A Model for Electrical Communication Between Cochlear Implants and the Brain. Douglas A. Miller. University of Denver, Digital Commons @ DU, Electronic Theses and Dissertations, Graduate Studies, 1-1-2009.

Modern cochlear implants provide two strategies for coding speech. A Comparison of the Speech Understanding Provided by Acoustic Models of Fixed-Channel and Channel-Picking Signal Processors for Cochlear Implants. Michael F. Dorman, Arizona State University, Tempe, and University

Across-Site Variation in Detection Thresholds and Maximum Comfortable Loudness Levels for Cochlear Implants. JARO 5: 11-24 (2004). DOI: 10.1007/s10162-003-3051-0. Journal of the Association for Research in Otolaryngology.