Revisiting the right-ear advantage for speech: Implications for speech displays


INTERSPEECH 2014

Nandini Iyer 1, Eric Thompson 2, Brian Simpson 1, Griffin Romigh 1
1 Air Force Research Laboratory; 2 Ball Aerospace; Wright-Patterson Air Force Base, OH
[nandini.iyer.2, eric.thompson.22.ctr, brian.simpson.4, griffin.romigh]@us.af.mil

Abstract

Cherry (1953) reported that when listeners were presented with dichotic signals over headphones, they could reliably report words presented to the attended ear, while only being aware of the gross properties of the talker in the unattended ear. More recently, Gallun et al. (2007) showed that there were large differences in performance on dichotic tasks depending on the ear of presentation, with significantly larger errors occurring when the target was presented to the left, rather than the right, ear (i.e., a right-ear advantage). In the current experiment, we explored two factors, the type of signal in the non-target ear and uncertainty about the target ear, and their effects on the right-ear advantage. The results indicated that the right-ear advantage was modulated by two factors: 1) the nature of the speech stimuli presented in the unattended ear, and 2) target-ear uncertainty. Substantial differences were observed between listeners, leading to varying amounts of right-ear advantage across listeners for the listening conditions tested. These results and their implications for the design of multichannel speech communication displays are discussed, and the use of these methods is recommended as a screening tool for the selection of personnel who listen to and use multichannel speech displays.

Index Terms: dichotic speech perception, right-ear advantage

1. Introduction

Early neurophysiological and behavioral work has demonstrated the presence of asymmetries in auditory perception. Much of the experimental work has employed a dichotic listening task, in which two auditory stimuli are presented simultaneously, one to each ear, and the listener is asked to report the stimuli presented to one or both ears. Kimura [1, 2] was one of the first to demonstrate a right-ear advantage for linguistic material, and argued that the propensity for selecting the stimulus presented to the right ear was due to anatomical properties of the auditory system, particularly the fact that the right ear is connected to the language-dominant left hemisphere of the brain by preponderantly contralateral connections. While there has been continuing evidence for the structural theory [3], results from [4] suggest that other mechanisms, such as attention, might also be at play. For example, [5] has suggested that the right-ear advantage can be reduced, or even eliminated, when a listener is directed to attend to the left ear.

Despite the fairly substantial body of literature on auditory, and more specifically speech, asymmetries, few researchers have addressed the potential implications that these results might have for the design of spatial auditory displays. Spatial interfaces endeavor to enhance communications by taking advantage of listeners' ability to segregate speech streams effectively when they are spatially separated, thereby improving speech intelligibility and decreasing workload [6]. More recently, a speech display capable of accommodating up to seven simultaneous talkers was proposed [7]; this display utilizes differences in auditory spatial acuity, so that talkers are more closely spaced in the front and more widely spaced in the periphery.
Laboratory studies based on such a spatial speech display resulted in listeners performing about 30-40% better on intelligibility tasks compared to standard diotic communications. Note that the proposed optimal display makes no provisions for the aforementioned asymmetries, nor does it provide any guidelines for adapting such displays to specific users based on measured or known asymmetries. Recently, large differences between the two ears were reported in dichotic tasks [8]; specifically, errors increased substantially when the target was presented to the left, rather than the right, ear, indicating a right-ear advantage. In that experiment, a right-ear advantage was observed even in conditions where the target ear was clearly indicated in advance. This finding is especially surprising because it contradicts the results from studies reporting that the right-ear advantage can be reduced or switched by directing attention to the left ear. It is also not consistent with early research [9], which suggested that listeners can only process stimuli arriving at one of the ears in a selective attention task. The findings suggest that the right-ear advantages observed in tasks of selective attention might be related to failures in retrieval and processing of the speech stimuli involving more central processes, or simply a propensity for listeners to adopt less optimal strategies in the task. It is important to distinguish between these two possibilities, because they have different implications for the design of speech-based displays. If listeners show a right-ear advantage due to the adoption of non-optimal strategies, then it would warrant the development of training modules to encourage them to adopt optimal task-related strategies. On the other hand, if right-ear advantages persist due to failures of retrieval or storage at more central stages, then it would warrant the development of selection or screening tests for listeners who are better able to use spatial speech displays. At the very least, a spatial speech-based display has to incorporate design guidance for listeners who show persistent right-ear advantages even when they are directed to attend to a target ear.

The aim of the current study was to explore the parameter space for dichotic speech tasks in order to gain an understanding of the conditions under which right-ear advantages might be expected. Speech intelligibility for target signals presented to the left and right ear was measured under three listening conditions with increasing uncertainty about the target ear. In the first condition, with the least uncertainty, the target ear was pre-cued at the beginning of every trial, and the target ear remained fixed throughout a block of trials. In the second condition, the uncertainty was increased by randomly varying the target ear within a block of trials, but the target ear was always pre-cued at the onset of every trial. In the last condition, where uncertainty was largest, the target ear could vary from trial to trial and this information was not provided to the listener until after the stimulus presentation (i.e., a post-cued trial).

We predicted that listeners adopting an optimal strategy would exhibit little to no right-ear advantage in conditions with low uncertainty, and only a minimal advantage in high-uncertainty conditions, mostly due to failures of selection of the target ear, which would manifest itself in the pattern of errors. On the other hand, listeners with non-optimal task strategies would consistently show a right-ear advantage in all listening conditions. Moreover, we also measured the right-ear advantage with three different types of maskers (noise, irrelevant speech, and relevant speech), because these three types of maskers are known to mediate performance on dichotic speech intelligibility tasks differently [10]. Adding a contralateral noise masker has no effect on speech intelligibility, but speech maskers can interfere with target identification significantly, depending on the nature of the speech. For instance, a contralateral relevant speech masker can interfere with target identification to a larger extent than an irrelevant speech masker, mostly because of its similarity to the target signal [11]. [12] proposed that a speech masker results in competition for central processing resources, so that inputs from the two ears are processed simultaneously to some extent, but only one input can be analyzed at a time. Presumably, any cues that listeners can use to distinguish which input should be processed will result in better performance. We argue that disambiguating a target phrase from similar interfering stimuli requires more resources than disambiguating a target from irrelevant stimuli, and should result in large right-ear advantages, either due to poor selection strategies or due to greater interference with central storage and retrieval processes. For the purposes of this study, two types of speech maskers were included to study the extent to which the right-ear advantage may be influenced by the nature of a speech masker.

2. Method

Listeners

Ten listeners (19-32 years; five male, five female) participated in the experiment. All had normal audiometric thresholds (<20 dB HL at octave frequencies from 250 Hz to 8 kHz) in both ears. All listeners had experience with the stimuli and the tasks used in the experiment, and were paid for their participation. Of the ten listeners, two were left-handed and the remaining eight were right-handed.

Stimuli

Target sentences were drawn from the Coordinate Response Measure (CRM) corpus [13] and were of the form "Ready [call sign], go to [color] [number] now." Eight possible call signs (Arrow, Baron, Charlie, Eagle, Hopper, Laker, Ringo, Tiger), four possible colors (red, blue, green, white) and eight possible numbers (1 to 8), spoken by eight talkers (four males, four females), yielded a total of 2048 phrases in the corpus. In this experiment, only phrases containing the two call signs "Baron" and "Charlie" were used (a total of 512 phrases). The target phrase was always presented at 65 dB SPL along with an ipsilateral (i.e., same-ear) noise masker, whose level was varied according to the signal-to-noise ratio (SNR). To maximize predictability, when the target ear was the right ear, a CRM phrase starting with the call sign "Baron" was selected, whereas when the target ear was the left ear, a CRM phrase with the call sign "Charlie" was selected.
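To make the phrase bookkeeping concrete, the following minimal Python sketch enumerates the corpus, restricts it to the two call signs used here, and picks a target phrase for a given target ear. The counts follow the description above, but the talker labels and the function names (build_crm_catalog, pick_target) are hypothetical and not part of the original materials.

```python
import itertools
import random

# Corpus structure as described above: 8 call signs x 4 colors x 8 numbers x 8 talkers = 2048 phrases.
CALL_SIGNS = ["Arrow", "Baron", "Charlie", "Eagle", "Hopper", "Laker", "Ringo", "Tiger"]
COLORS = ["red", "blue", "green", "white"]
NUMBERS = list(range(1, 9))
TALKERS = ["m1", "m2", "m3", "m4", "f1", "f2", "f3", "f4"]  # hypothetical talker labels

def build_crm_catalog():
    """Enumerate all (call sign, color, number, talker) combinations in the corpus."""
    return list(itertools.product(CALL_SIGNS, COLORS, NUMBERS, TALKERS))

def pick_target(target_ear, catalog, rng=random):
    """Pick a target phrase for the given ear: 'Baron' phrases for the right ear,
    'Charlie' phrases for the left ear, as in the experiment."""
    call_sign = "Baron" if target_ear == "right" else "Charlie"
    candidates = [p for p in catalog if p[0] == call_sign]
    return rng.choice(candidates)

catalog = build_crm_catalog()
assert len(catalog) == 2048                                        # full corpus
assert sum(p[0] in ("Baron", "Charlie") for p in catalog) == 512   # phrases actually used
print(pick_target("right", catalog))
```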
One of three types of maskers was also presented to the contralateral ear (i.e., the ear opposite the target ear): a relevant speech masker, an irrelevant speech masker, or noise. In the relevant speech masker condition, a CRM phrase spoken by a talker of the same sex as the target talker, containing the call sign "Baron" (when the target ear was the left ear) or "Charlie" (when the target ear was the right ear), with mutually exclusive color-number keywords, was presented to the contralateral ear. The irrelevant speech maskers were excerpts from recordings (two males, two females) of The Wealth of Nations by Adam Smith and were randomly selected on every trial so that the target and masker phrases were spoken by same-sex talkers. The last type of masker was a noise masker, a token generated independently of the noise token in the target ear. When a contralateral speech masker was present, it was accompanied by a noise masker at the same SNR as that in the target ear, so that the stimuli in both ears were degraded to the same extent. Additionally, when a relevant speech masker was present, a control condition was run in which no noise maskers were present in either ear (represented as "+"). All stimuli were sampled at 44.1 kHz.

Procedures

The stimuli were generated on a PC running Matlab and were presented via a sound card (RME Hammerfall DSP Multiface II) to the listeners over headphones (Sennheiser HD280 Pro) while they were seated in sound-treated booths. At the onset of every 25-trial block, the listeners were provided with detailed instructions about the listening conditions in the block. Their task was to respond with the color-number combination of a target phrase presented either to the LEFT ear (always a phrase containing the call sign "Charlie") or the RIGHT ear (always a phrase containing the call sign "Baron"). The target phrase was presented with a noise masker at varying SNRs (-6 to -18 dB in 3-dB steps); the SNR was randomly selected on each trial. In some blocks, a contralateral masker (noise, irrelevant speech, or relevant speech) was presented; when noise was present contralaterally, its level was the same as that of the noise in the target ear. When speech maskers were presented, they were accompanied by a noise masker at the same SNR as that of the noise in the target ear. Control conditions included blocks with no contralateral masker. There were a total of 150 blocks; each data point represents 60 trials per condition. Within a block of trials, the target ear either remained constant (FIXED) or varied from trial to trial (RANDOM). When the target ear was randomly selected on every trial, listeners were informed about which ear contained the target phrase via an arrow displayed on a computer monitor either before the onset of a trial (PRECUE) or after the trial (POSTCUE). After stimulus presentation, the listeners responded with the color-number pair that they heard on a custom keyboard with 32 response keys (numbers 1-8 on colored backgrounds).
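To make the level manipulation concrete, the sketch below scales a noise masker so that a target signal sits at a requested SNR, using simple RMS-based scaling over the -18 to -6 dB range used in the experiment. This is a minimal illustration under stated assumptions (signals as NumPy arrays at 44.1 kHz, calibration to dB SPL ignored, hypothetical function names such as scale_noise_to_snr); it is not the authors' Matlab code.

```python
import numpy as np

FS = 44100                         # sampling rate (Hz), as in the experiment
SNRS_DB = [-18, -15, -12, -9, -6]  # SNRs used in the experiment (3-dB steps)

def rms(x):
    """Root-mean-square level of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def scale_noise_to_snr(target, noise, snr_db):
    """Scale `noise` so that rms(target) / rms(scaled noise) corresponds to `snr_db`."""
    gain = rms(target) / (rms(noise) * 10.0 ** (snr_db / 20.0))
    return noise * gain

def make_target_ear_signal(target, rng=np.random.default_rng()):
    """Mix a target phrase with an ipsilateral noise masker at a randomly chosen SNR."""
    snr_db = rng.choice(SNRS_DB)
    noise = rng.standard_normal(len(target))
    return target + scale_noise_to_snr(target, noise, snr_db), snr_db

# Example with a placeholder waveform (1 s of noise standing in for a CRM phrase).
speech = np.random.default_rng(0).standard_normal(FS) * 0.1
mix, snr = make_target_ear_signal(speech)
print(f"chosen SNR: {snr} dB, mix RMS: {rms(mix):.3f}")
```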

3. Results

Figure 1 depicts the average proportion of correct color-number responses as a function of SNR when the target ear was the right (red diamonds) or left (blue circles) ear. The dark and light colored symbols represent performance in FIXED and RANDOM blocks, respectively. Panel a shows performance in the control condition with no contralateral masker, and panel d shows performance with a noise-only contralateral masker. Panels b and e show performance with a pre-cued and a post-cued contralateral irrelevant speech masker plus noise, respectively. Similarly, panels c and f depict performance with a pre- and a post-cued contralateral relevant speech masker plus noise. In each panel, the green line represents a logistic fit to the data obtained in the no-masker and contralateral noise masker conditions (panels a and d).

Figure 1: Proportion of correct color-number responses for left-ear (blue circles) and right-ear (red diamonds) targets as a function of SNR. The dark colored symbols are from FIXED blocks and the light-colored symbols are from RANDOM blocks. Each panel shows the data from a different contralateral masker and cuing condition: a) no contralateral masker, b) irrelevant speech + noise, pre-cue, c) CRM speech + noise, pre-cue, d) noise, e) irrelevant speech + noise, post-cue, f) CRM speech + noise, post-cue. The green dashed line in each panel is a best-fit logistic function to the data from the None and Noise conditions, shown to ease comparisons across conditions.

For each listening condition, the right-ear advantage was operationally defined with an offset measure: the offset was taken as the midpoint of the psychometric function fit to each average performance curve and was compared with the offset of the logistic fit to the combined no-masker and contralateral-noise data (the green dashed line in Figure 1). This allowed each listening condition to be compared against a control condition (no contralateral masker) and a condition in which little right-ear advantage was expected (contralateral noise).

Comparing the proportion of correct color-number responses when the target phrase was presented with no contralateral masker (panel a) or with a contralateral noise masker (panel d), a right-ear advantage was not apparent. Further, there was no difference in performance between the RANDOM and FIXED trial blocks; i.e., uncertainty about the target ear did not lead to any difference in performance when the contralateral masker was absent or was noise.

When an irrelevant speech masker was presented along with the noise (panel b), and the listener was informed about the target ear prior to the trial, the average offset shifted from that of the logistic fit by approximately 1.1 dB. However, there was no significant difference in performance between left-ear and right-ear presentations of the target, nor was there any difference between the FIXED and RANDOM conditions. When the target ear was cued after the trial (panel e), there was a small (~0.5 dB) difference in performance between right-ear and left-ear presentations of the target.

The minimal right-ear advantage observed with an irrelevant speech masker (only in the post-cue condition) increased when the contralateral masker was relevant speech. From panel c, the right-ear advantage is about 1 dB even in listening conditions where the target ear was fixed throughout the block. When the target ear was randomly chosen within the block, the right-ear benefit was larger (~2.2 dB). The greatest right-ear advantage was obtained with a contralateral relevant speech masker in the post-cue condition (~5.2 dB). The trend for improved performance when a target is presented to the right, rather than the left, ear is also apparent in the control condition, in which listeners heard just two CRM sentences, one in each ear (the "+" condition in panels c and f).

Figure 2: Difference in offset of the logistic function fit to the individual listener data for the left and right ears with a contralateral relevant speech masker in a) the Pre-cue Fixed condition (yellow stars), b) the Pre-cue Random condition (purple triangles), and c) the Post-cue Random condition (green squares). The means across subjects are also shown with 95% confidence interval error bars.
To compute the right-ear advantage for each of the ten listeners in the study, a threshold difference (left ear minus right ear) was computed for each subject and is plotted in Figure 2. For each listener, the threshold difference was calculated by computing the offset in dB (the midpoint of the psychometric function for that listener in that condition) for right-ear and left-ear presentations of the target and then subtracting the two offsets. These differences are taken to reflect the magnitude of the right-ear advantage for each individual listener. Only conditions with a contralateral relevant speech masker are plotted (since the greatest advantages were observed in these conditions), and listeners are ordered by increasing difference in the post-cue, random target-ear condition (green squares in Figure 2). As is apparent in the figure, there are substantial differences between listeners: some show very little difference in performance between the ears in all conditions tested, whereas others show large differences even when they were asked to attend to the same ear throughout a whole block of trials (pre-cued, fixed blocks, depicted by yellow stars in the figure).
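The offset and right-ear-advantage computations described above can be summarized concretely: fit a logistic psychometric function to proportion correct as a function of SNR, take its midpoint as the offset, and subtract the right-ear offset from the left-ear offset. The following minimal Python sketch illustrates this under simplifying assumptions (synthetic noiseless data, a two-parameter logistic that ignores chance performance and lapses, SciPy's curve_fit, and hypothetical function names); it is not the analysis code used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr_db, midpoint_db, slope):
    """Two-parameter logistic psychometric function for proportion correct vs. SNR.
    (Chance performance and lapses are ignored for simplicity.)"""
    return 1.0 / (1.0 + np.exp(-slope * (snr_db - midpoint_db)))

def fit_offset(snr_db, prop_correct):
    """Fit the logistic function and return its midpoint (the 'offset' in dB)."""
    params, _ = curve_fit(logistic, snr_db, prop_correct, p0=[-12.0, 0.5])
    return params[0]

def right_ear_advantage(snr_db, prop_correct_left, prop_correct_right):
    """Right-ear advantage = left-ear offset minus right-ear offset (positive = right-ear benefit)."""
    return fit_offset(snr_db, prop_correct_left) - fit_offset(snr_db, prop_correct_right)

# Synthetic example: the left-ear curve sits ~2 dB higher in SNR (worse) than the right-ear curve.
snrs = np.array([-18.0, -15.0, -12.0, -9.0, -6.0])
p_right = logistic(snrs, -13.0, 0.6)
p_left = logistic(snrs, -11.0, 0.6)
print(f"right-ear advantage: {right_ear_advantage(snrs, p_left, p_right):.2f} dB")  # ~2 dB
```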

4. Discussion and Conclusions

In this study, there was no right-ear advantage when the interference in the contralateral ear was non-speech. When an irrelevant speech stimulus was presented to the contralateral ear, there was a small advantage when the target signal was presented to the right, rather than the left, ear, and it was seen only in conditions in which the target ear was post-cued. When relevant speech was presented contralaterally, the right-ear advantage was substantial, especially in conditions in which the target ear varied randomly from trial to trial and in which the target ear was post-cued; but the right-ear advantage was also evident in the conditions with the least amount of uncertainty.

There were substantial differences among listeners across conditions. Only one of the listeners showed a left-ear advantage (listener 1), and one showed near-optimal performance in all conditions (i.e., no ear advantage), demonstrating the ability to switch attention as directed in the listening conditions. Perhaps surprisingly, some listeners had difficulty separating the stimulus in the target ear from that in the non-target ear, even when the target ear was clearly indicated in advance (listeners 6 and 8 in the pre-cued, fixed conditions; see Figure 2). These results are consistent with the results of [9] and [14], and suggest that listeners can only process the semantic content of a stimulus arriving at one ear. Many listeners (4, 5, 9 and 10) could switch their attention as directed before the trial, but showed a strong tendency to report only the right-ear stimulus if there was any uncertainty regarding the ear of presentation of the target (listeners 5 and 9 in Figure 2).

The findings from this study are consistent with other studies in which there was little or no interference from contralateral noise or irrelevant speech maskers [11]. When a right-ear advantage was observed with an irrelevant speech masker, it was seen only in conditions where there was some uncertainty about the target ear (the random, post-cue condition). Note that there was no ambiguity about the target phrase in this condition, because listeners heard only one CRM phrase; so listeners could select the target message without much competition. The right-ear advantage in this condition appears to be related to an inability to suppress an irrelevant speech interferer when it is presented to the dominant (right) ear. We speculate that listeners, by default, generally adopt the strategy of listening to the right ear, so that when an irrelevant speech interferer is presented to the right ear, they have to switch and attend to the left ear, which could result in some loss of information (~0.5 dB as measured in the experiment). Based on the results from this experiment, it appears that for right-ear advantages to occur there has to be speech in both ears.

Figure 3: Cumulative proportion of responses as a function of SNR for the contralateral CRM conditions. White areas represent correct color and number responses. Cyan areas show the proportion of masker color and number intrusions. The yellow area shows hybrid target-masker responses with either the target color and masker number or vice versa. The maroon area shows the proportion of responses in which either the color or the number was from neither the target nor the masker.

The right-ear advantage is primarily seen when the interferer is relevant speech. Even in conditions with the least amount of uncertainty, the presence of a relevant speech masker resulted in a right-ear advantage. This is consistent with the notion that there is preferential processing of right-ear stimuli due to hemispheric asymmetries. The size of this effect is small (~1 dB), but it increases to about 2 dB when uncertainty about the target ear is present (RANDOM vs. FIXED trials). The increase in the right-ear advantage when the target ear is randomly selected on every trial suggests that some listeners adopt a non-optimal listening strategy of disregarding the cued target ear and attending to the right ear on every trial before switching and attending to the opposite ear. The fact that the cost of switching is slightly greater with a relevant masker (~0.5 dB) than with an irrelevant masker suggests that some of the cost could also be due to interference from the relevant speech masker in the storage/retrieval process, or to effort expended to suppress the contralateral relevant masker.

One way to distinguish between the mechanisms that listeners might have used in the task is to study the pattern of errors that were generated. If the cost was related to interference in storage or retrieval, we would predict more semantically similar errors (intrusions) when the target ear was the left ear, due to preferential processing of the right-ear stimulus. On the other hand, if the cost was related to increased effort to suppress the contralateral masker because of its similarity, we would expect more random errors, i.e., errors that belong neither to the target nor to the masker. Figure 3 plots the distribution of listeners' responses among four categories: correct target keywords, in white (the color and number of the target phrase; denoted C_T&N_T, where C stands for color, N for number, and the subscript T for target); intrusions from the non-target ear, in cyan (both the color and the number belonged to the CRM phrase in the non-target ear; C_M&N_M); hybrid responses in which one keyword belonged to the target and the other to the masker, in yellow (C_T&N_M or C_M&N_T); and responses in which the color or number keyword was a randomly selected choice belonging to neither the target nor the masker, in maroon (C_R or N_R).
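To make these four categories explicit, here is a minimal Python sketch that classifies a single color-number response against the target and masker phrases. It is an illustration under stated assumptions (responses and phrases as simple (color, number) pairs; the function name classify_response is hypothetical), not the scoring code used in the study.

```python
def classify_response(response, target, masker):
    """Classify a (color, number) response against the target and masker phrases.

    Categories follow Figure 3: correct (C_T&N_T), masker intrusion (C_M&N_M),
    hybrid (C_T&N_M or C_M&N_T), and random (a color or number from neither phrase).
    Target and masker keywords are mutually exclusive, as in the experiment.
    """
    r_color, r_number = response
    t_color, t_number = target
    m_color, m_number = masker

    color_from = "T" if r_color == t_color else ("M" if r_color == m_color else "R")
    number_from = "T" if r_number == t_number else ("M" if r_number == m_number else "R")

    if (color_from, number_from) == ("T", "T"):
        return "correct"            # C_T & N_T
    if (color_from, number_from) == ("M", "M"):
        return "masker intrusion"   # C_M & N_M
    if "R" in (color_from, number_from):
        return "random"             # C_R or N_R: at least one keyword from neither phrase
    return "hybrid"                 # C_T & N_M or C_M & N_T

# Example: target "blue 4", masker "red 7", response "blue 7" -> hybrid
print(classify_response(("blue", 7), ("blue", 4), ("red", 7)))
```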
The middle panel in the figure depicts performance in the condition in which the target ear varied on every trial but was pre-cued. It is apparent from the data that a large proportion of errors for left-ear targets were due to a tendency for listeners to report the color-number keywords from the right ear, suggesting that most of the right-ear advantage in this condition occurred due to interference from the right-ear stimulus. The intrusion errors from the right ear increase in the condition with maximum uncertainty, i.e., the post-cue condition (right-most panel), which also produces the maximum right-ear advantage observed in the study (~6 dB). While the results can be partially explained by an interference mechanism (large right-ear intrusions when the target ear was the left), response competition might not account for all the errors. The alternative is that listeners selected the wrong phrase to process. If this were true, and they were occasionally aware of this error, then we would predict significantly more random errors. Examining Figure 3, this does not appear to be the case. Listeners appear to make a non-optimal choice of listening exclusively to the right ear, to the extent that they report mostly keywords heard in the right ear, even when they are asked about the keywords in the left ear. This finding is in agreement with the finding of [9] that listeners can only process information from one channel at a time.

Whether the right-ear advantage is related to preferential treatment of right-ear signals or to listeners adopting a non-optimal right-ear bias is not clear. Further studies are needed to determine whether training procedures can alleviate some of the right-ear advantages seen in these results. With regard to spatial speech-based displays, the right-ear advantages appear to be sufficiently large to be of interest to those who design such displays. One design principle could be based on the dynamic adaptation of such displays according to the priority of the incoming communication messages, so that the most important messages are presented in the right hemifield and lower-priority messages are presented in the left hemifield. What is clear from the current study is that such adaptive displays should only be used when multiple simultaneous messages are likely to be contextually similar. Further studies are warranted to determine whether stimulus factors, such as the levels of the signals, can be used to offset the asymmetries noted in the study, or whether other factors such as training and experience are sufficient to overcome these asymmetries.
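As a rough illustration of the priority-based adaptation suggested above, the sketch below assigns incoming messages to right- or left-hemifield channels in order of priority. This is a speculative sketch of the design principle only; the azimuth layout, the sign convention (positive azimuths to the right), and the function name assign_hemifield_channels are hypothetical and do not come from the paper.

```python
from typing import List, Tuple

def assign_hemifield_channels(messages: List[Tuple[str, int]],
                              right_azimuths=(30, 60, 90),
                              left_azimuths=(-30, -60, -90)):
    """Assign (message, priority) pairs to spatial channels, highest priority first.

    Higher-priority messages go to right-hemifield azimuths (positive degrees),
    lower-priority messages to the left hemifield, reflecting the design idea above.
    """
    ranked = sorted(messages, key=lambda m: m[1], reverse=True)
    slots = list(right_azimuths) + list(left_azimuths)
    return [(msg, az) for (msg, _), az in zip(ranked, slots)]

# Example: the two highest-priority messages land at +30 and +60 degrees (right hemifield).
talkers = [("fuel status", 2), ("threat warning", 5), ("weather update", 1), ("wingman call", 4)]
for msg, az in assign_hemifield_channels(talkers):
    print(f"{msg!r} -> {az:+d} deg")
```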

5. References

[1] D. Kimura, "Some effects of temporal-lobe damage on auditory perception," Canadian Journal of Psychology, 15.
[2] D. Kimura, "Functional asymmetry of the brain in dichotic listening," Cortex, 3.
[3] R. J. Davidson and K. Hugdahl, "Baseline asymmetries in brain electrical activity predict dichotic listening performance," Neuropsychology, 10.
[4] M. Hiscock, R. Inch, and M. Kinsbourne, "Allocation of attention in dichotic listening: Differential effects on the detection and localization of signals," Neuropsychology, 13(3).
[5] A. E. Asbjørnsen and K. Hugdahl, "Attentional effects in dichotic listening," Brain and Language, 49.
[6] R. S. Bolia, "Effects of spatial intercoms and active noise reduction headsets on speech intelligibility in an AWACS environment," Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting.
[7] D. S. Brungart and B. D. Simpson, "Optimizing the spatial configuration of a seven-talker speech display," Proceedings of the 9th Meeting of the International Community for Auditory Display, 2003.
[8] F. J. Gallun, C. R. Mason, and G. Kidd, Jr., "Task-dependent costs in processing two simultaneous auditory stimuli," Perception and Psychophysics, 69, 2007.
[9] E. C. Cherry, "Some experiments on the recognition of speech, with one and with two ears," Journal of the Acoustical Society of America, 25, 1953.
[10] T. L. Arbogast, C. R. Mason, and G. Kidd, Jr., "The effect of spatial separation on informational and energetic masking of speech," Journal of the Acoustical Society of America, 112.
[11] D. S. Brungart and B. D. Simpson, "Within-ear and across-ear interference in a cocktail-party listening task," Journal of the Acoustical Society of America, 112.
[12] A. M. Treisman, "Strategies and models of selective attention," Psychological Review, 76, 1969.
[13] R. S. Bolia et al., "A speech corpus for multitalker communications research," Journal of the Acoustical Society of America, 107.
[14] N. Wood and N. Cowan, "The cocktail party phenomenon revisited: Attention and memory in the classic selective listening procedure of Cherry (1953)," Journal of Experimental Psychology: General, 124.


Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Published on June 16, 2015 Tech Topic: Localization July 2015 Hearing Review By Eric Seper, AuD, and Francis KuK, PhD While the

More information

Simulations of high-frequency vocoder on Mandarin speech recognition for acoustic hearing preserved cochlear implant

Simulations of high-frequency vocoder on Mandarin speech recognition for acoustic hearing preserved cochlear implant INTERSPEECH 2017 August 20 24, 2017, Stockholm, Sweden Simulations of high-frequency vocoder on Mandarin speech recognition for acoustic hearing preserved cochlear implant Tsung-Chen Wu 1, Tai-Shih Chi

More information

THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE.

THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE. THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE. Michael J. Hautus, Daniel Shepherd, Mei Peng, Rebecca Philips and Veema Lodhia Department

More information

S ince the introduction of dichotic digits

S ince the introduction of dichotic digits J Am Acad Audiol 7 : 358-364 (1996) Interactions of Age, Ear, and Stimulus Complexity on Dichotic Digit Recognition Richard H. Wilson*t Melissa S. Jaffet Abstract The effect that the aging process has

More information

J. Acoust. Soc. Am. 116 (2), August /2004/116(2)/1057/9/$ Acoustical Society of America

J. Acoust. Soc. Am. 116 (2), August /2004/116(2)/1057/9/$ Acoustical Society of America The role of head-induced interaural time and level differences in the speech reception threshold for multiple interfering sound sources John F. Culling a) School of Psychology, Cardiff University, P.O.

More information

Predicting the benefit of binaural cue preservation in bilateral directional processing schemes for listeners with impaired hearing

Predicting the benefit of binaural cue preservation in bilateral directional processing schemes for listeners with impaired hearing Syddansk Universitet Predicting the benefit of binaural cue preservation in bilateral directional processing schemes for listeners with impaired hearing Brand, Thomas; Hauth, Christopher; Wagener, Kirsten

More information

Speech Intelligibility Measurements in Auditorium

Speech Intelligibility Measurements in Auditorium Vol. 118 (2010) ACTA PHYSICA POLONICA A No. 1 Acoustic and Biomedical Engineering Speech Intelligibility Measurements in Auditorium K. Leo Faculty of Physics and Applied Mathematics, Technical University

More information

Hemispheric Specialization (lateralization) Each lobe of the brain has specialized functions (Have to be careful with this one.)

Hemispheric Specialization (lateralization) Each lobe of the brain has specialized functions (Have to be careful with this one.) Cerebral Cortex Principles contralaterality the right half of your brain controls the left half of your body and vice versa. (contralateral control.) Localization of function Specific mental processes

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

The effect of wearing conventional and level-dependent hearing protectors on speech production in noise and quiet

The effect of wearing conventional and level-dependent hearing protectors on speech production in noise and quiet The effect of wearing conventional and level-dependent hearing protectors on speech production in noise and quiet Ghazaleh Vaziri Christian Giguère Hilmi R. Dajani Nicolas Ellaham Annual National Hearing

More information

Sonic Spotlight. SmartCompress. Advancing compression technology into the future

Sonic Spotlight. SmartCompress. Advancing compression technology into the future Sonic Spotlight SmartCompress Advancing compression technology into the future Speech Variable Processing (SVP) is the unique digital signal processing strategy that gives Sonic hearing aids their signature

More information

The development of a modified spectral ripple test

The development of a modified spectral ripple test The development of a modified spectral ripple test Justin M. Aronoff a) and David M. Landsberger Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California

More information

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS Proceedings of the 14 International Conference on Auditory Display, Paris, France June 24-27, 28 EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE

More information

Computational Perception /785. Auditory Scene Analysis

Computational Perception /785. Auditory Scene Analysis Computational Perception 15-485/785 Auditory Scene Analysis A framework for auditory scene analysis Auditory scene analysis involves low and high level cues Low level acoustic cues are often result in

More information

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED International Conference on Systemics, Cybernetics and Informatics, February 12 15, 2004 BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED Alice N. Cheeran Biomedical

More information

Even though a large body of work exists on the detrimental effects. The Effect of Hearing Loss on Identification of Asynchronous Double Vowels

Even though a large body of work exists on the detrimental effects. The Effect of Hearing Loss on Identification of Asynchronous Double Vowels The Effect of Hearing Loss on Identification of Asynchronous Double Vowels Jennifer J. Lentz Indiana University, Bloomington Shavon L. Marsh St. John s University, Jamaica, NY This study determined whether

More information

Selective Attention (dichotic listening)

Selective Attention (dichotic listening) Selective Attention (dichotic listening) People attend to one ear by shadowing failed to notice in the other ear when the unattended speech changed to German speech in Czech spoken with English pronunciation

More information

Speech perception in individuals with dementia of the Alzheimer s type (DAT) Mitchell S. Sommers Department of Psychology Washington University

Speech perception in individuals with dementia of the Alzheimer s type (DAT) Mitchell S. Sommers Department of Psychology Washington University Speech perception in individuals with dementia of the Alzheimer s type (DAT) Mitchell S. Sommers Department of Psychology Washington University Overview Goals of studying speech perception in individuals

More information

The cocktail party phenomenon revisited: The importance of working memory capacity

The cocktail party phenomenon revisited: The importance of working memory capacity Psychonomic Bulletin & Review 2001, 8 (2), 331-335 The cocktail party phenomenon revisited: The importance of working memory capacity ANDREW R. A. CONWAY University of Illinois, Chicago, Illinois NELSON

More information