
Adaptation to a Varying Auditory Environment

by

Gregory Galen Lin

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Bachelor of Science in Electrical Science and Engineering and Master of Engineering in Electrical Engineering and Computer Science at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY

May 1996

© Gregory Galen Lin, MCMXCVI. All rights reserved.

The author hereby grants to MIT permission to reproduce and distribute publicly paper and electronic copies of this thesis document in whole or in part, and to grant others the right to do so.

Author: Department of Electrical Engineering and Computer Science, May 28, 1996

Certified by: Nathaniel I. Durlach, Research Scientist, Thesis Supervisor

Accepted by: Frederic R. Morgenthaler, Chairman, Department Committee on Graduate Students

Adaptation to a Varying Auditory Environment

by Gregory Galen Lin

Submitted to the Department of Electrical Engineering and Computer Science on May 28, 1996, in partial fulfillment of the requirements for the degree of Bachelor of Science in Electrical Science and Engineering and Master of Engineering in Electrical Engineering and Computer Science

Abstract

This project investigated sensorimotor adaptation to rearranged auditory cues. Data was collected by presenting subjects with an acoustic cue (a gated pulse train generating a clicking sound) simulated to come from one of 13 locations (confined to a horizontal azimuthal plane) and recording the subject's estimate of the stimulus location. After each response, the subject was informed of the correct response, providing constant training. Subjects were presented, in order, with unaltered cues, strongly altered cues, weakly altered cues, and unaltered cues. Results show that, in addition to partial adaptation to the changing environment, subjects can partially adapt from strongly altered cues to weakly altered cues.

Thesis Supervisor: Nathaniel I. Durlach
Title: Senior Research Scientist

Contents

1 Project
2 Background
2.1 Localization Cues
2.2 Previous Work
3 Data Collection
3.1 Task
3.2 Setup
4 Experimental Problems
5 Data Analysis
5.1 Mean Response
5.2 Error
5.3 Resolution
5.4 Bias
5.5 Estimating Adaptation
5.6 Imperfection in auditory cues
5.7 Impact of edges
6 Summary
A Warp and Line Fit Results

List of Figures

2-1 Transformation performed by f_n(θ)
3-1 Altered Locations: (a) normal cues (n = 1); (b) second set of altered cues (n = 2); (c) first set of altered cues (n = 4)
5-1 Runs 2 and 3: Changing from n = 1 to n = 4
5-2 Runs 3 and 17: Start and finish of n = 4
5-3 Runs 17 and 18: Changing from n = 4 to n = 2
5-4 Runs 18 and 32: Start and finish of n = 2
5-5 Runs 32 and 33: Changing from n = 2 to n = 1
5-6 Runs 33 and 40: Start and finish of n = 1
5-7 Observation of linearity
5-8 Individual Adaptation Results
5-9 Adaptation over runs

List of Tables

3.1 Table of Warp Transformations
5.1 Subject Exponential Fit Results
A.1 Line-Fit values
A.2 Warp-Fit Values

Chapter 1 Project

This project investigated subject adaptation to supernormal auditory localization cues. Supernormal auditory localization aims to improve a subject's ability to discriminate the locations of nearby sounds. The experiments described here contribute to the understanding of adaptation to supernormal auditory localization cues.

Chapter 2 Background

2.1 Localization Cues

Sound localization involves processing of three main indicators: interaural intensity difference (IID), interaural time difference (ITD), and spectral cues. IIDs are differences in sound intensity between the subject's ears, where, for example, a more intense sound at the left ear is more likely to correspond to a source on a person's left. ITDs are any differences in sound arrival times between the ears; the closer an ear is to a sound source, the earlier the ear will receive the sound. As with IIDs, ITDs between the two ears help indicate the location of the sound source. The final main indicator used in auditory localization is monaural spectral cue shaping. The outer ear alters a sound according to the sound's frequency and the angle with which it impacts the ear. Unlike IIDs and ITDs, monaural frequency cues depend on the prior knowledge and experience of the subject with these frequency-to-location translations [2]. Localization cues are generated when a sound interacts with a person's head, and the total interaction can be summarized by a head-related transfer function (HRTF). By measuring the intensity, time, and frequency changes of a known source as it enters the ear canal from different locations, a set of coefficients can be determined such that convolution of these coefficients with an audio stream will produce correct spatial signals for the left and right ear.

Figure 2-1: Transformation performed by f_n(θ). Curves for warp n = 2, 3, and 4; x-axis: correct location (degrees); y-axis: normal location (degrees).

2.2 Previous Work

In this project, subjects were exposed to an auditory spatial distortion, constrained along a constant azimuthal plane, described by the expression:

θ' = f_n(θ) = (1/2) arctan[ 2n sin(2θ) / ((1 − n²) + (1 + n²) cos(2θ)) ]

where the angle θ represents the correct location, θ' is the angle that normally corresponds to the localization cues presented to the subject, and n represents the extent of the audio warping. The term correct will always refer to the location from which the subject is told the source is coming, and the term normal will refer to the location that normally corresponds to the physical cues presented. Thus, subjects are told that the source is at θ, even though the normally-heard position of the source is θ'. The degree of distortion produced by n (or warp) is reflected in figure 2-1, where the x-axis reflects the correct location and the y-axis denotes the normal location. As shown in figure 2-1, a value of n = 1 represents no altering, so that the correct cue locations and normal cue locations are the same. Larger values of n represent more drastic deviations from normal. When the transformed cues are first introduced, subjects will make systematic

errors in localization. For instance, with n > 1, subjects will tend to hear sounds farther off-center than normal. A subject's adaptation to the transformed audio cues is observed through analysis of their localization performance, summarized by resolution and bias measures. Adaptation is evidenced if subjects overcome the systematic error (bias) in localization judgements over time. Previous work [1] has shown that subjects can partially adapt within a two-hour period (e.g., over time, bias is reduced) when they are exposed to a single cue transformation of the form shown in figure 2-1. Subjects also adapted to a relatively weak transformation (n = 2) followed by a stronger transformation (n = 4) in a single two-hour session. A single model was able to explain both of these results. However, a pilot study with only two subjects indicated that subjects given a relatively strong transformation (n = 4) followed by a relatively weak transformation (n = 2) did not adapt in the way predicted by the model. The work described here investigates this surprising result in more detail.
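The transformation f_n can be sketched directly in code. This is a minimal sketch under the assumption that the reconstructed form of the expression in section 2.2 is correct; the function name and degree conventions are mine:

```python
import math

def warp(theta_deg, n):
    """Map a correct azimuth theta (degrees) to the 'normal' azimuth
    whose cues are actually presented, for warp strength n."""
    t = math.radians(theta_deg)
    num = 2 * n * math.sin(2 * t)
    den = (1 - n ** 2) + (1 + n ** 2) * math.cos(2 * t)
    # atan2 keeps the angle in the correct quadrant once the
    # denominator goes negative (|theta| beyond roughly 45 degrees).
    return math.degrees(0.5 * math.atan2(num, den))

print(round(warp(10, 1), 1))   # 10.0 -- n = 1 is the identity
print(round(warp(10, 4), 1))   # ~35.2 -- pushed far off-center
print(round(warp(-50, 4), 1))  # ~-78.2 -- outside the +/-60 degree range
```

Checking a few values against the text: under n = 4, the cue at +10 degrees lands near +35 degrees (roughly halfway between the +30 and +40 degree lights), and the cue at -50 degrees lands near -78 degrees, matching the examples quoted in chapter 5.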

Chapter 3 Data Collection

3.1 Task

Data was collected through a series of trials with each subject. Each trial consisted of a burst of clicks, after which the subject responded with the apparent location of the sound source. The response was immediately followed by visual feedback from spatially-positioned light bulbs (fig. 3-1) giving the correct sound source position. Testing and training were thus simultaneous, with each trial adding to the subject's experience with the new auditory space. Twenty-six trials were grouped to form a run, with a stretch of 40 runs making up a session (typically spanning two hours). In each session, subjects were exposed to, in order, 2 runs of normal cues (warp parameter n = 1), 15 runs of strongly warped cues (n = 4), 15 runs of mildly warped cues (n = 2), and 8 final runs of normal cues (n = 1), with a 5-minute break after the 10th and 32nd runs. Subjects were notified each time the degree of warping was changed.

3.2 Setup

Subjects were seated facing 13 numbered lights, labeled 1 to 13 from left to right. The lights were arranged on a semi-circular path at 10-degree intervals, 5 feet from the subject. Light 7 was visually straight ahead and referenced as 0 degrees, light 1 was

located at -60 degrees, and light 13 was located at +60 degrees. With the normal set of cues (fig. 3-1a), each light corresponded to its physical location. Under strongly warped cues (fig. 3-1c), the "normal" sound location corresponding to each lamp was shifted farther off center than the actual lamp location. For example, the sound cues for location number 8 were closer to the normal cues for a source at +30 degrees than to the normal cues for a source at +10 degrees (under no warping). The lightly warped cues (fig. 3-1b) gave the same type of distortion as the strongly warped cues (fig. 3-1c), but to a lesser extent (table 3.1).

light    f_n(θ), n = 1    f_n(θ), n = 4    f_n(θ), n = 2
1        -60              -81.8            -73.9
2        -50              -78.2            -67.2
3        -40              -73.4            -59.2
4        -30              -66.6            -49.1
5        -20              -55.5            -36.1
6        -10              -35.2            -19.4
7          0                0                0
8        +10              +35.2            +19.4
9        +20              +55.5            +36.1
10       +30              +66.6            +49.1
11       +40              +73.4            +59.2
12       +50              +78.2            +67.2
13       +60              +81.8            +73.9

Table 3.1: Table of Warp Transformations (values in degrees, computed from f_n)
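The session structure described in section 3.1 is easy to encode; a small sketch (the variable and function names are mine):

```python
# Warp strength for each of the 40 runs in one session, in order:
# 2 normal, 15 strongly warped, 15 mildly warped, 8 normal.
schedule = [1] * 2 + [4] * 15 + [2] * 15 + [1] * 8
assert len(schedule) == 40

TRIALS_PER_RUN = 26
breaks_after_run = (10, 32)  # 5-minute breaks (1-indexed run numbers)

def warp_for_run(run_number):
    """Warp strength n in effect during a 1-indexed run number."""
    return schedule[run_number - 1]

print(warp_for_run(3))   # 4: first strongly warped run
print(warp_for_run(18))  # 2: first mildly warped run
print(warp_for_run(33))  # 1: return to normal cues
```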

The head position of the subject was monitored using a Bird head tracker (a commercial device using electromagnetic pulses to allow the position of the head to be tracked) mounted on a set of Sennheiser HD-545 headphones. The acoustic stimulus was five 1-millisecond pulses spaced at 100-millisecond intervals, sent through a low-pass filter (to prevent aliasing of high-frequency components) and into a Convolvotron. The Convolvotron was special-purpose signal-processing hardware installed in an Intel x86-based PC, responsible for mapping an input source to the appropriate location in auditory space. The input signal was first sampled and digitized; then the mapping was accomplished by convolving the input with a pair of transfer functions, one for the right ear and one for the left ear, which contain the direction-dependent effects on sound caused by a head and a pair of ears. This pair of transfer functions was simply the empirically-determined HRTF for a source from the specified direction. Thus, any auditory signal was transformed into a pair of signals (left and right) that contain spatial information. From the Convolvotron, the newly spatialized signal was sent to the headphones. After each presentation, the subject entered a response (between 1 and 13, corresponding to the numbered sources) on a keyboard which sat on their lap. From the keyboard, the PC collected the response and, after each response, activated the lamp corresponding to the correct sound source position. Through this feedback, the subject was trained to adapt to changes in the mapping between audio cues and the corresponding correct location. Data files with subject responses (recorded by the PC) were updated after every run.
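The Convolvotron's spatialization step, convolving the digitized input with a left/right transfer-function pair, can be sketched with NumPy. The impulse responses below are toy stand-ins, not the measured HRTFs:

```python
import numpy as np

def spatialize(signal, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right head-related impulse
    response pair, producing a 2-channel (binaural) signal."""
    left = np.convolve(signal, hrir_left)
    right = np.convolve(signal, hrir_right)
    return np.stack([left, right])

# Toy stimulus: 1 ms pulses at 100 ms intervals, as in the experiment.
fs = 44100
stimulus = np.zeros(int(0.5 * fs))
for k in range(5):
    start = int(k * 0.100 * fs)
    stimulus[start:start + int(0.001 * fs)] = 1.0

# Fake HRIRs (equal length so the two channels line up): the right-ear
# response is delayed and attenuated, cueing a source on the left.
hrir_l = np.array([1.0, 0.3, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.6, 0.2])
binaural = spatialize(stimulus, hrir_l, hrir_r)
print(binaural.shape)  # (2, len(stimulus) + 3)
```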

Figure 3-1: Altered Locations: (a) normal cues (n = 1); (b) second set of altered cues (n = 2); (c) first set of altered cues (n = 4). Each panel marks the source azimuths on a semicircle from -90 to +90 degrees.

Chapter 4 Experimental Problems

The setup had a few shortcomings that may affect the experimental results. Experiments prior to January 8th, 1996 were conducted in an office room that was not soundproof. While the headphones provided some isolation, they could not completely eliminate the noises caused by the environment. In addition to the computer's continual mechanical hum, the disk-writing operation that occurred between runs was audible to the subject. Experimentation after January 8th was conducted in a soundproof room with the PC located outside of the booth. With this setup, the primary disturbance was a noticeable hum produced by the Bird head-tracking system. Additionally, the HRTFs used in the described experiments were empirically determined from a single "petite female" subject [3]. The localization cues produced by the Convolvotron may therefore be slightly different from the cues that the subject would typically expect (see section 5.6, Imperfection in auditory cues).

Chapter 5 Data Analysis

Data was averaged across all 8 sessions for each subject to find the statistics below. The resulting values were then averaged across all 5 test subjects to yield the data plotted in figures 5-1 through 5-9. Graphs were made for run-pairs corresponding to changes in warp strength (figs. 5-1, 5-3, 5-5) and to the beginning and end of a warp (figs. 5-2, 5-4, 5-6).

5.1 Mean Response

The mean response graphs (figs. 5-1 through 5-6, panel a) plot correct cue versus subject response, where correct cue refers to the location to which the experiment trains the subject, and subject response is the (average) response given by subjects when presented with the associated correct cue. If all of a subject's responses are correct, the mean response line will fall exactly on the "correct answer" base line. On run 3 (n = 1 to n = 4; fig. 5-1a), subject overestimation produces a sigmoidal response curve as a function of cue location. Over time (run 3 to run 17; fig. 5-2a), subjects are able to partially adapt, indicated by a response curve closer to the base line. Comparing runs 17 and 18 (n = 4 to n = 2; fig. 5-3a), we see that subjects adjust quickly to the weaker transformation. The mean curve for run 18 is very close to the "correct answer" base line.

Continued training on the n = 2 cues (runs 18 to 32; fig. 5-4a) produces slight improvement across all cues. On the final change of cues (between runs 32 and 33, n = 2 to n = 1; fig. 5-5a), subject responses show underestimation similar to the change introduced between runs 17 and 18. Consistent with previous runs, continued exposure improves subject performance (runs 33 to 40; fig. 5-6a).

5.2 Error

The error graphs (figs. 5-1 to 5-6, panel b) show the difference between subject response and the correct response (noted as subject error). Error is closely related to bias: bias is equal to the error multiplied by -1 and divided by the standard deviation in subject responses. Thus, patterns in error can be understood by reading the discussion of the bias results.

5.3 Resolution

The resolution (d') between locations i and i + 1 is defined as

d'_{i,i+1} = (m_{i+1} - m_i) / sqrt((σ_i² + σ_{i+1}²) / 2)

where m_i is the mean subject response for cue location i and σ_i is the standard deviation of the subject response to location i. Resolution measures a subject's perceived distance between adjacent cue locations, normalized by the standard deviation in subject responses, and thus measures the ability to discriminate between different sound sources. The perceptually closer the sources are to each other, the more difficult it becomes to discern them as separate locations, leading to lower values of resolution. The first change in cues takes place on run 3, where the warp strength increases

from n = 1 (run 2) to n = 4 (run 3). Under n = 4, the average distance between the normal cues just ahead of the subject (cue locations 5 through 9) increases, producing the expected improvement in resolution. With greater separation between the forward-located cues (depicted in fig. 3-1a: n = 1, and fig. 3-1c: n = 4), they become easier to resolve. Conversely, because the cues at the edges of the test range become more closely located, resolution begins to suffer there. Resolution decreases somewhat as exposure to the warped cues continues between runs 3 and 17 (fig. 5-2c). On the change from n = 4 (run 17) to n = 2 (run 18), center resolution degrades. Center cue locations for n = 2 are spaced more closely than the cue locations for n = 4 (compare figure 3-1c with 3-1b), producing the expected degradation in resolution. Larger spacing for locations at the edges of the range generates small improvements in resolution beyond source locations 5 through 9. Continued exposure to n = 2 cues (runs 18 through 32; fig. 5-4) degrades resolution performance, if anything. Upon returning to normal cues (runs 32 to 33; fig. 5-5), little change is seen in resolution. With continued exposure to the normal cues (runs 33 through 40; fig. 5-6), resolution remains relatively constant.

5.4 Bias

The bias β_i associated with cue i is

β_i = (i - m_i) / σ_i

where the correct response to cue i is the cue number i itself. Bias is a noise-adjusted measure of the error in subject response for a given source position, thus reflecting a subject's error in location as measured in units of response standard deviation. For example, when subjects are initially exposed to more-strongly-warped cues (run 2, n = 1 to run 3, n = 4), the bias should be positive for errors left of center

(except at the edges; see section 5.7, Impact of edges). A simple estimate of bias for sudden changes in warping (i.e., from run 2 [n = 1] to run 3 [n = 4], or run 17 [n = 4] to run 18 [n = 2]) can be found by subtracting the corresponding normal positions from the correct positions (i.e., subtract fig. 3-1a from fig. 3-1c to generate crude bias values for the n = 1 to n = 4 change). For cues with a weak-to-strong change (increasing warp n), an after-effect is caused by the subject's overestimation of cue locations. On run 3, the subject first experiences warp n = 4. Assuming that he has adapted to n = 1 (which are normal cues and do not require adaptation; see section 5.6, Imperfection in auditory cues), his first exposure to n = 4 will produce responses in which he interprets the physical stimuli as if there were no transformation (n = 1). Looking at table 3.1, cue 8|n=4 maps approximately halfway between cue 10|n=1 and cue 11|n=1 (say 10.5|n=1), and cue 9|n=4 maps to cue 12.5|n=1. The new mapping (n = 4) produces an overestimation which is consistent with the data. Additionally, larger shifts in cue remapping lead to greater overestimation, which is also consistent with the data in the panel. Figure 5-2d depicts the results for the 3rd and the 17th runs, corresponding to the 1st and 15th runs with n = 4. Over time there is a decrease in average bias as subjects adapt to the cue transformation. Conversely, for cues which change from strong to weak (decreasing warp n), subjects generally underestimate the cue locations. On run 18, subjects are exposed to a warp n = 2 that is weaker than the most recent warp (n = 4). In this case, cue 9|n=2 maps to cue 8|n=4 and cue 13|n=2 maps to cue 11|n=4. Figure 5-3d shows the expected underestimation caused by decreasing warp strength. Figure 5-4d shows the 1st and 15th exposure to warp n = 2; again, bias decreases over time.
On run 33, underestimation results when the subject is reintroduced to normal cues n = 1 (down from n = 2), where, from table 3.1, cue 13|n=1 maps to cue 11|n=2 and cue 9|n=1 maps to cue 8|n=2 (fig. 5-5d). Because the magnitude of the location shifts is not as drastic as in the initial change of n = 1 to n = 4, the magnitude of the error is not as great.

Figure 5-6 shows the 1st and 8th runs following the return to normal cues. In each case where the cues change (e.g., figures 5-1, 5-3, and 5-5), the corresponding change in bias is not as large as the differences reflected in table 3.1. Subject training is a continuous process throughout each run, and thus errors made early in the run may be larger than the errors later in the run (which may be reduced by adjustments made as the run progresses). Additionally, subjects are notified each time the cues are changed, and across the multiple sessions a subject participates in, he may be able to anticipate the new cues as soon as they are presented. Finally, subjects may not be completely adapted to the previous transformation when the cues are changed, resulting in a smaller-than-predicted change in bias. Even with these circumstances, the data still strongly reflect the systematic over- and under-estimation consistent with adaptation (though imperfect) to each new cue transformation.
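Both statistics in sections 5.3 and 5.4 reduce to a few lines given per-location response means and standard deviations. A sketch (the function names are mine; the rms normalizer in d' follows the definition in section 5.3):

```python
import math

def resolution(means, stds):
    """d' between adjacent cue locations:
    (m[i+1] - m[i]) / sqrt((s[i]**2 + s[i+1]**2) / 2)."""
    return [
        (means[i + 1] - means[i])
        / math.sqrt((stds[i] ** 2 + stds[i + 1] ** 2) / 2)
        for i in range(len(means) - 1)
    ]

def bias(means, stds):
    """beta_i = (i - m_i) / s_i, where the correct response to cue i
    is the cue number itself (cues numbered from 1)."""
    return [(i + 1 - m) / s for i, (m, s) in enumerate(zip(means, stds))]

# Three cues whose responses overshoot to the right by half a step:
m = [1.5, 2.5, 3.5]
s = [0.5, 0.5, 0.5]
print(resolution(m, s))  # [2.0, 2.0] -- well separated
print(bias(m, s))        # [-1.0, -1.0, -1.0] -- rightward overshoot
```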

Figure 5-1: Runs 2 and 3: Changing from n = 1 to n = 4. Panels: (a) mean response; (b) difference plot; (c) resolution; (d) bias. Each panel plots Run 2, Run 3, and the base line against cue location.

Figure 5-2: Runs 3 and 17: Start and finish of n = 4. Panels: (a) mean response; (b) difference plot; (c) resolution; (d) bias.

Figure 5-3: Runs 17 and 18: Changing from n = 4 to n = 2. Panels: (a) mean response; (b) difference plot; (c) resolution; (d) bias.

Figure 5-4: Runs 18 and 32: Start and finish of n = 2. Panels: (a) mean response; (b) difference plot; (c) resolution; (d) bias.

Figure 5-5: Runs 32 and 33: Changing from n = 2 to n = 1. Panels: (a) mean response; (b) difference plot; (c) resolution; (d) bias.

Figure 5-6: Runs 33 and 40: Start and finish of n = 1. Panels: (a) mean response; (b) difference plot; (c) resolution; (d) bias.

5.5 Estimating Adaptation

The degree of adaptation can be measured by the slope of the line that best fits mean response as a function of θ', the normal position of the stimuli. Observation of subject response versus normal cue location (figure 5-7) shows that response has a roughly linear shape as a function of θ'. From start to finish of n = 4 exposure (runs 3 and 17, respectively; figs. 5-7a and 5-7b) and from start to finish of n = 2 (runs 18 and 32, respectively; figs. 5-7c and 5-7d), the subject response as a function of normal cue appears linear. However, the slope of the line relating mean response to θ' changes over time. The best fit was generated by finding the line that minimizes the mean-square error between predicted and measured subject response. Because the correct cue for straight ahead (light 7) remains the same as the normal cue location for straight ahead, each line fit was forced to contain the point where the normal cue straight ahead is the same as subject response straight ahead (i.e., only the slope of the line changed; the intercept was assumed fixed). Because some warp levels generate cues that fall outside of the normal response range, only normal cues that fall between +60 and -60 degrees are considered. For example, when the warp level changes from n = 1 to n = 4, cue 2|n=4 is presented from -78 degrees, and due to his familiarity with the n = 1 space, the best the subject can respond with is location 1. Rather than make assumptions about the adaptation patterns, cues whose normal locations are outside of the normal response range (n = 1; +60 to -60 degrees) are left out of the adaptation calculations (see section 5.7, Impact of edges). These line-fit results were compared to a transform-fit approach. Rather than finding the best-fit slope of a line, the subject responses were fitted by varying the warp strength, n, in the transform formula (given in section 2.2).
Tabulation of the mean-square error on a run-by-run basis (tables A.1 and A.2) showed that the line fit is generally better than the warp fit. In the runs where the warp fit produced better error results (e.g., runs 33 to 40), the difference was very small.
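Because the intercept is pinned at the straight-ahead point, the line fit described above has a closed form: minimizing the squared error of y ≈ a·x over the slope a alone gives a = Σxy / Σx². A sketch (angles are taken relative to straight ahead; the function name is mine):

```python
def fit_slope(normal_deg, response_deg):
    """Least-squares slope of mean response vs. normal cue location,
    with the fit forced through the origin (straight ahead maps to
    straight ahead, so only the slope is free)."""
    num = sum(x * y for x, y in zip(normal_deg, response_deg))
    den = sum(x * x for x in normal_deg)
    return num / den

# An unadapted subject reports the normal location itself (slope 1);
# a subject fully compensating a warp that doubled eccentricities
# would report half the normal angle (slope 0.5).
print(fit_slope([-30, -10, 10, 30], [-30, -10, 10, 30]))  # 1.0
print(fit_slope([-30, -10, 10, 30], [-15, -5, 5, 15]))    # 0.5
```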

Figure 5-7: Observation of linearity. Panels: (a) Run 3; (b) Run 17; (c) Run 18; (d) Run 32. Each panel plots subject response against normal location.

Individual results are presented in figure 5-8. Rates and asymptote values vary across subjects and are summarized in table 5.1. Rate is the time constant associated with the exponential, valued in terms of runs. Subject responses that could not be successfully fit by an exponential are listed as N/A. Comparing subjects, we see that all five subjects appear to adapt to the n = 4 transformation at roughly the same rate. However, it is clear that the rate of adaptation can vary greatly between subjects when changing from strong (n = 4) to weak (n = 2) transformations. For instance, subject LCW adapts slowly to the n = 2 transformation when compared to subject JJP. In contrast, two subjects (MSS and SC) appear to show no change in slope during exposure to n = 2 cues (note the flat line fit to their data in runs 17 through 32); instead, their performance is stable throughout this exposure period.

Table 5.1: Subject Exponential Fit Results. Columns: subject (JJP, JIR, LCW, MSS, SC); asymptote and rate for runs 3-17, runs 18-32, and runs 33-40. Responses that could not be fit by an exponential are listed as N/A.

Figure 5-8: Individual Adaptation Results. Panels show best-fit slope over runs for subjects MSS, JIR, SC, LCW, and JJP.

Figure 5-9 plots the best-fit line slope, averaged across the five subjects, as a function of run. It appears that the best-fit slope changes gradually when the cue transformation changes. Consistent with [1], the average slope appears to exponentially approach an asymptotic value as the subjects adapt to each transformation. Given the inter-subject differences in adaptation rate, little can be said about the relative rate of adaptation from n = 1 to n = 4 compared to adapting from n = 4 to n = 2. However, the rate of adaptation is roughly consistent with the average rate of adaptation in previous experiments [1]. The average asymptote of adaptation across subjects is 0.61 when n = 4 (with a standard deviation of 0.04) and roughly 0.68 when n = 2 (with a standard deviation of 0.03). These values are comparable to the average asymptote values of previous experiments, where n = 4 gave an asymptote of 0.59 with a standard deviation of 0.07 and n = 2 gave an asymptote of 0.73 with a standard deviation of 0.04 [1], especially when inter-subject variability is considered.
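The exponential approach to an asymptote can be fit per subject with a standard nonlinear least-squares routine. A sketch using synthetic data (the parameterization, run indices, and values here are illustrative, not the thesis fits):

```python
import numpy as np
from scipy.optimize import curve_fit

def adaptation(run, asymptote, rate, start):
    """Best-fit slope vs. run: exponential decay from `start` toward
    `asymptote` with time constant `rate`, measured in runs."""
    return asymptote + (start - asymptote) * np.exp(-run / rate)

rng = np.random.default_rng(0)
runs = np.arange(15.0)                     # e.g. the 15 runs at n = 4
slopes = adaptation(runs, 0.61, 4.0, 1.0)  # "true" parameters
slopes = slopes + rng.normal(0.0, 0.005, runs.size)

params, _ = curve_fit(adaptation, runs, slopes, p0=(0.7, 3.0, 0.9))
print(np.round(params, 2))  # roughly [0.61, 4.0, 1.0] recovered
```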

Figure 5-9: Adaptation over runs. The average best-fit slope is plotted against run number.

5.6 Imperfection in auditory cues

The unwarped HRTFs used in the experiment are based on measurements taken by Wightman [3] from the subject SDO, a petite female. Because of the original subject's smaller head, subject interpretation of the audio cues is slightly skewed. The error introduced is predictable and can be accounted for by considering the effects of only the ITD associated with the HRTF. For some angle θ there is an associated ITD(θ) for each subject. Assuming that Wightman's subject SDO has a head smaller than any subject I use, interaural delays presented to my subjects will be smaller than normal for a source at a particular position. That is, an angle θ_x normally gives rise to ITD_SDO(θ_x) and ITD_test-subject(θ_x), where, generally,

|ITD_SDO(θ_x)| < |ITD_test-subject(θ_x)|

because of SDO's smaller head. When a source from θ_x is presented, even for normal cues (n = 1), the subject will perceive the source to be at some position |α| < |θ_x|. While this analysis explains systematic errors in localization (whereby the magnitude of the source angle is underestimated) for normal cues, these errors are very small compared to the errors introduced when the auditory cues are transformed (fig. 2-1).

5.7 Impact of edges

Data at the extremes of the testing range must be handled differently. For example, between the second and third runs, where the cues change from n = 1 to n = 4, the auditory range changes from +60 to -60 degrees (n = 1) to +82 to -82 degrees (n = 4). Because of this change, the range of auditory cues exceeds the range of possible response positions whenever n > 1. Because subjects are not instantly familiar with the transformed auditory space, they are forced to interpret the cues in the context of the old auditory space. When n = 4 is first introduced, subjects are accustomed to normal cues (n = 1). For

instance, with n = 4 the normal cues for auditory sources 1 through 4 and 10 through 13 fall outside the range of responses (+60 to -60 degrees). Under the expanded range, it is likely that when the subject initially hears any cue less than 5 or greater than 9, he will answer 1 or 13, respectively. The difference plot in figure 5-1b, for example, reflects this effect in the sudden decrease in error occurring before cue 4 and after cue 10. The small error at the extremes results from the fact that the response range available to the subjects limits the errors possible at the edges of the range. To minimize the error introduced by these edges, the edge data is treated differently in the calculation of adaptation.
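The head-size argument of section 5.6 can be made concrete with the spherical-head (Woodworth) approximation, ITD(θ) ≈ (r/c)(θ + sin θ). The model and the two head radii below are illustrative assumptions, not measured values from the experiment:

```python
import math

def itd_us(theta_deg, head_radius_m, c=343.0):
    """Approximate interaural time difference, in microseconds, for a
    source at azimuth theta (spherical-head / Woodworth model)."""
    t = math.radians(theta_deg)
    return 1e6 * (head_radius_m / c) * (t + math.sin(t))

# A smaller head yields smaller ITDs at every azimuth, so cues built
# from a petite donor's HRTFs sound slightly closer to center for a
# larger-headed listener:
small = itd_us(30, 0.075)  # hypothetical donor head radius (m)
large = itd_us(30, 0.090)  # hypothetical listener head radius (m)
print(round(small), round(large))  # 224 269 (microseconds)
```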

Chapter 6 Summary

Over the two-hour test period, subjects are able to adapt to the various changes introduced into their auditory environment. Error and bias plots show systematic error and adaptation. Error and bias values consistently decrease as exposure to a particular warp strength continues. The mean response graphs also demonstrate adaptation, as subject response consistently shifts towards the base line. Other indications of adaptation are the systematic over- and underestimation at instances where the warp strength changes. A weak-to-strong cue change (run 2 to run 3) produces an overestimation of cue distance from the center, while strong-to-weak cue changes (run 17 to run 18 and run 32 to run 33) lead to underestimation of cue locations with respect to the center. Adaptation can be summarized by the slope of the line generated by normal cue versus subject response. In this experiment, adaptation happens at a rate comparable to adaptation seen in previous experiments when changing from a weak to a strong warp (n = 1 to n = 4), but is inconsistent across subjects when changing from strong to weak transforms (n = 4 to n = 2 and n = 2 to n = 1). This difference may be the result of the magnitude of the change or the direction of the change. A previous model of adaptation [1] predicts that the exponential rate of adaptation is independent of the order of runs. Current results are consistent with this prediction for the initial change in transformation, but show that subject differences can occur with subsequent cue changes. The same model predicts that the asymptote to which

subjects adapt depends only on the transform strength. The asymptote values in current experiments are quantitatively consistent with this model.

Appendix A Warp and Line Fit Results

Table A.1: Line-Fit values. Columns: run, fit value, MSE.

Table A.2: Warp-Fit Values. Columns: run, fit value, MSE.

Bibliography

[1] Barbara G. Shinn-Cunningham. Supernormal Auditory Localization Cues in an Auditory Virtual Environment. PhD thesis, Massachusetts Institute of Technology.

[2] Elizabeth M. Wenzel. Localization in virtual acoustic displays. Presence, 1(1):80-107.

[3] F.L. Wightman and D.J. Kistler. Headphone simulation of free-field listening. Journal of the Acoustical Society of America, 85, 1989.


More information

3-D SOUND IMAGE LOCALIZATION BY INTERAURAL DIFFERENCES AND THE MEDIAN PLANE HRTF. Masayuki Morimoto Motokuni Itoh Kazuhiro Iida

3-D SOUND IMAGE LOCALIZATION BY INTERAURAL DIFFERENCES AND THE MEDIAN PLANE HRTF. Masayuki Morimoto Motokuni Itoh Kazuhiro Iida 3-D SOUND IMAGE LOCALIZATION BY INTERAURAL DIFFERENCES AND THE MEDIAN PLANE HRTF Masayuki Morimoto Motokuni Itoh Kazuhiro Iida Kobe University Environmental Acoustics Laboratory Rokko, Nada, Kobe, 657-8501,

More information

The role of low frequency components in median plane localization

The role of low frequency components in median plane localization Acoust. Sci. & Tech. 24, 2 (23) PAPER The role of low components in median plane localization Masayuki Morimoto 1;, Motoki Yairi 1, Kazuhiro Iida 2 and Motokuni Itoh 1 1 Environmental Acoustics Laboratory,

More information

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS

19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 2007 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS 19 th INTERNATIONAL CONGRESS ON ACOUSTICS MADRID, 2-7 SEPTEMBER 27 THE DUPLEX-THEORY OF LOCALIZATION INVESTIGATED UNDER NATURAL CONDITIONS PACS: 43.66.Pn Seeber, Bernhard U. Auditory Perception Lab, Dept.

More information

Binaural Hearing. Steve Colburn Boston University

Binaural Hearing. Steve Colburn Boston University Binaural Hearing Steve Colburn Boston University Outline Why do we (and many other animals) have two ears? What are the major advantages? What is the observed behavior? How do we accomplish this physiologically?

More information

Adaptation to Auditory Localization Cues from an Enlarged Head

Adaptation to Auditory Localization Cues from an Enlarged Head Adaptation to Auditory Localization Cues from an Enlarged Head by Salim Kassem B.S., Electrical Engineering (1996) Pontificia Universidad Javeriana Submitted to the Department of Electrical Engineering

More information

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431

Brian D. Simpson Veridian, 5200 Springfield Pike, Suite 200, Dayton, Ohio 45431 The effects of spatial separation in distance on the informational and energetic masking of a nearby speech signal Douglas S. Brungart a) Air Force Research Laboratory, 2610 Seventh Street, Wright-Patterson

More information

Hearing in the Environment

Hearing in the Environment 10 Hearing in the Environment Click Chapter to edit 10 Master Hearing title in the style Environment Sound Localization Complex Sounds Auditory Scene Analysis Continuity and Restoration Effects Auditory

More information

B. G. Shinn-Cunningham Hearing Research Center, Departments of Biomedical Engineering and Cognitive and Neural Systems, Boston, Massachusetts 02215

B. G. Shinn-Cunningham Hearing Research Center, Departments of Biomedical Engineering and Cognitive and Neural Systems, Boston, Massachusetts 02215 Investigation of the relationship among three common measures of precedence: Fusion, localization dominance, and discrimination suppression R. Y. Litovsky a) Boston University Hearing Research Center,

More information

IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer

IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION Griffin D. Romigh, Brian D. Simpson, Nandini Iyer 711th Human Performance Wing Air Force Research Laboratory

More information

Digital. hearing instruments have burst on the

Digital. hearing instruments have burst on the Testing Digital and Analog Hearing Instruments: Processing Time Delays and Phase Measurements A look at potential side effects and ways of measuring them by George J. Frye Digital. hearing instruments

More information

Systems Neuroscience Oct. 16, Auditory system. http:

Systems Neuroscience Oct. 16, Auditory system. http: Systems Neuroscience Oct. 16, 2018 Auditory system http: www.ini.unizh.ch/~kiper/system_neurosci.html The physics of sound Measuring sound intensity We are sensitive to an enormous range of intensities,

More information

A NOVEL HEAD-RELATED TRANSFER FUNCTION MODEL BASED ON SPECTRAL AND INTERAURAL DIFFERENCE CUES

A NOVEL HEAD-RELATED TRANSFER FUNCTION MODEL BASED ON SPECTRAL AND INTERAURAL DIFFERENCE CUES A NOVEL HEAD-RELATED TRANSFER FUNCTION MODEL BASED ON SPECTRAL AND INTERAURAL DIFFERENCE CUES Kazuhiro IIDA, Motokuni ITOH AV Core Technology Development Center, Matsushita Electric Industrial Co., Ltd.

More information

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data 942 955 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data Jonas Braasch, Klaus Hartung Institut für Kommunikationsakustik, Ruhr-Universität

More information

Hearing II Perceptual Aspects

Hearing II Perceptual Aspects Hearing II Perceptual Aspects Overview of Topics Chapter 6 in Chaudhuri Intensity & Loudness Frequency & Pitch Auditory Space Perception 1 2 Intensity & Loudness Loudness is the subjective perceptual quality

More information

Angular Resolution of Human Sound Localization

Angular Resolution of Human Sound Localization Angular Resolution of Human Sound Localization By Simon Skluzacek A senior thesis submitted to the Carthage College Physics & Astronomy Department in partial fulfillment of the requirements for the Bachelor

More information

MedRx HLS Plus. An Instructional Guide to operating the Hearing Loss Simulator and Master Hearing Aid. Hearing Loss Simulator

MedRx HLS Plus. An Instructional Guide to operating the Hearing Loss Simulator and Master Hearing Aid. Hearing Loss Simulator MedRx HLS Plus An Instructional Guide to operating the Hearing Loss Simulator and Master Hearing Aid Hearing Loss Simulator The Hearing Loss Simulator dynamically demonstrates the effect of the client

More information

Congruency Effects with Dynamic Auditory Stimuli: Design Implications

Congruency Effects with Dynamic Auditory Stimuli: Design Implications Congruency Effects with Dynamic Auditory Stimuli: Design Implications Bruce N. Walker and Addie Ehrenstein Psychology Department Rice University 6100 Main Street Houston, TX 77005-1892 USA +1 (713) 527-8101

More information

Speech segregation in rooms: Effects of reverberation on both target and interferer

Speech segregation in rooms: Effects of reverberation on both target and interferer Speech segregation in rooms: Effects of reverberation on both target and interferer Mathieu Lavandier a and John F. Culling School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff,

More information

An Auditory System Modeling in Sound Source Localization

An Auditory System Modeling in Sound Source Localization An Auditory System Modeling in Sound Source Localization Yul Young Park The University of Texas at Austin EE381K Multidimensional Signal Processing May 18, 2005 Abstract Sound localization of the auditory

More information

Effect of source spectrum on sound localization in an everyday reverberant room

Effect of source spectrum on sound localization in an everyday reverberant room Effect of source spectrum on sound localization in an everyday reverberant room Antje Ihlefeld and Barbara G. Shinn-Cunningham a) Hearing Research Center, Boston University, Boston, Massachusetts 02215

More information

Neural correlates of the perception of sound source separation

Neural correlates of the perception of sound source separation Neural correlates of the perception of sound source separation Mitchell L. Day 1,2 * and Bertrand Delgutte 1,2,3 1 Department of Otology and Laryngology, Harvard Medical School, Boston, MA 02115, USA.

More information

REACTION TIME AS A MEASURE OF INTERSENSORY FACILITATION l

REACTION TIME AS A MEASURE OF INTERSENSORY FACILITATION l Journal oj Experimental Psychology 12, Vol. 63, No. 3, 289-293 REACTION TIME AS A MEASURE OF INTERSENSORY FACILITATION l MAURICE HERSHENSON 2 Brooklyn College In measuring reaction time (RT) to simultaneously

More information

Perceptual Effects of Nasal Cue Modification

Perceptual Effects of Nasal Cue Modification Send Orders for Reprints to reprints@benthamscience.ae The Open Electrical & Electronic Engineering Journal, 2015, 9, 399-407 399 Perceptual Effects of Nasal Cue Modification Open Access Fan Bai 1,2,*

More information

A Microphone-Array-Based System for Restoring Sound Localization with Occluded Ears

A Microphone-Array-Based System for Restoring Sound Localization with Occluded Ears Restoring Sound Localization with Occluded Ears Adelbert W. Bronkhorst TNO Human Factors P.O. Box 23, 3769 ZG Soesterberg The Netherlands adelbert.bronkhorst@tno.nl Jan A. Verhave TNO Human Factors P.O.

More information

A Memory Model for Decision Processes in Pigeons

A Memory Model for Decision Processes in Pigeons From M. L. Commons, R.J. Herrnstein, & A.R. Wagner (Eds.). 1983. Quantitative Analyses of Behavior: Discrimination Processes. Cambridge, MA: Ballinger (Vol. IV, Chapter 1, pages 3-19). A Memory Model for

More information

Auditory Scene Analysis

Auditory Scene Analysis 1 Auditory Scene Analysis Albert S. Bregman Department of Psychology McGill University 1205 Docteur Penfield Avenue Montreal, QC Canada H3A 1B1 E-mail: bregman@hebb.psych.mcgill.ca To appear in N.J. Smelzer

More information

HCS 7367 Speech Perception

HCS 7367 Speech Perception Long-term spectrum of speech HCS 7367 Speech Perception Connected speech Absolute threshold Males Dr. Peter Assmann Fall 212 Females Long-term spectrum of speech Vowels Males Females 2) Absolute threshold

More information

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Filip M. Rønne, Søren Laugesen, Niels S. Jensen and Julie H. Pedersen

More information

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS

EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE DESIGN OF SPATIAL AUDIO DISPLAYS Proceedings of the 14 International Conference on Auditory Display, Paris, France June 24-27, 28 EFFECTS OF TEMPORAL FINE STRUCTURE ON THE LOCALIZATION OF BROADBAND SOUNDS: POTENTIAL IMPLICATIONS FOR THE

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

The basic mechanisms of directional sound localization are well documented. In the horizontal plane, interaural difference INTRODUCTION I.

The basic mechanisms of directional sound localization are well documented. In the horizontal plane, interaural difference INTRODUCTION I. Auditory localization of nearby sources. II. Localization of a broadband source Douglas S. Brungart, a) Nathaniel I. Durlach, and William M. Rabinowitz b) Research Laboratory of Electronics, Massachusetts

More information

How high-frequency do children hear?

How high-frequency do children hear? How high-frequency do children hear? Mari UEDA 1 ; Kaoru ASHIHARA 2 ; Hironobu TAKAHASHI 2 1 Kyushu University, Japan 2 National Institute of Advanced Industrial Science and Technology, Japan ABSTRACT

More information

Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities

Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities Supplemental Information: Task-specific transfer of perceptual learning across sensory modalities David P. McGovern, Andrew T. Astle, Sarah L. Clavin and Fiona N. Newell Figure S1: Group-averaged learning

More information

3 CONCEPTUAL FOUNDATIONS OF STATISTICS

3 CONCEPTUAL FOUNDATIONS OF STATISTICS 3 CONCEPTUAL FOUNDATIONS OF STATISTICS In this chapter, we examine the conceptual foundations of statistics. The goal is to give you an appreciation and conceptual understanding of some basic statistical

More information

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED International Conference on Systemics, Cybernetics and Informatics, February 12 15, 2004 BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED Alice N. Cheeran Biomedical

More information

Hearing. Juan P Bello

Hearing. Juan P Bello Hearing Juan P Bello The human ear The human ear Outer Ear The human ear Middle Ear The human ear Inner Ear The cochlea (1) It separates sound into its various components If uncoiled it becomes a tapering

More information

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3 THE INFLUENCE OF ROOM REFLECTIONS ON SUBWOOFER REPRODUCTION IN A SMALL ROOM: BINAURAL INTERACTIONS PREDICT PERCEIVED LATERAL ANGLE OF PERCUSSIVE LOW- FREQUENCY MUSICAL TONES Abstract David Spargo 1, William

More information

A Novel Software Solution to Diagnose the Hearing Disabilities In Human Beings

A Novel Software Solution to Diagnose the Hearing Disabilities In Human Beings A Novel Software Solution to Diagnose the Hearing Disabilities In Human Beings Prithvi B S Dept of Computer Science & Engineering SVIT, Bangalore bsprithvi1992@gmail.com Sanjay H S Research Scholar, Jain

More information

Two Modified IEC Ear Simulators for Extended Dynamic Range

Two Modified IEC Ear Simulators for Extended Dynamic Range Two Modified IEC 60318-4 Ear Simulators for Extended Dynamic Range Peter Wulf-Andersen & Morten Wille The international standard IEC 60318-4 specifies an occluded ear simulator, often referred to as a

More information

Impact of the ambient sound level on the system's measurements CAPA

Impact of the ambient sound level on the system's measurements CAPA Impact of the ambient sound level on the system's measurements CAPA Jean Sébastien Niel December 212 CAPA is software used for the monitoring of the Attenuation of hearing protectors. This study will investigate

More information

Procedure Number 310 TVA Safety Procedure Page 1 of 6 Hearing Conservation Revision 0 January 6, 2003

Procedure Number 310 TVA Safety Procedure Page 1 of 6 Hearing Conservation Revision 0 January 6, 2003 Procedure Number 310 TVA Safety Procedure Page 1 of 6 Hearing Conservation Revision 0 January 6, 2003 1. Purpose 1.1. The purpose of this procedure is to establish a TVA Hearing Conservation Program (HCP)

More information

The use of interaural time and level difference cues by bilateral cochlear implant users

The use of interaural time and level difference cues by bilateral cochlear implant users The use of interaural time and level difference cues by bilateral cochlear implant users Justin M. Aronoff, a) Yang-soo Yoon, and Daniel J. Freed b) Communication and Neuroscience Division, House Ear Institute,

More information

Discrimination and identification of azimuth using spectral shape a)

Discrimination and identification of azimuth using spectral shape a) Discrimination and identification of azimuth using spectral shape a) Daniel E. Shub b Speech and Hearing Bioscience and Technology Program, Division of Health Sciences and Technology, Massachusetts Institute

More information

Abstract.

Abstract. Combining Phase Cancellation, Frequency Shifting and Acoustic Fingerprint for Improved Feedback Suppression Josef Chalupper, Thomas A. Powers, Andre Steinbuss www.siemens.com Abstract Acoustic feedback

More information

CHAPTER ONE CORRELATION

CHAPTER ONE CORRELATION CHAPTER ONE CORRELATION 1.0 Introduction The first chapter focuses on the nature of statistical data of correlation. The aim of the series of exercises is to ensure the students are able to use SPSS to

More information

HEARING AND PSYCHOACOUSTICS

HEARING AND PSYCHOACOUSTICS CHAPTER 2 HEARING AND PSYCHOACOUSTICS WITH LIDIA LEE I would like to lead off the specific audio discussions with a description of the audio receptor the ear. I believe it is always a good idea to understand

More information

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag JAIST Reposi https://dspace.j Title Effects of speaker's and listener's environments on speech intelligibili annoyance Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag Citation Inter-noise 2016: 171-176 Issue

More information

INTRODUCTION J. Acoust. Soc. Am. 100 (4), Pt. 1, October /96/100(4)/2352/13/$ Acoustical Society of America 2352

INTRODUCTION J. Acoust. Soc. Am. 100 (4), Pt. 1, October /96/100(4)/2352/13/$ Acoustical Society of America 2352 Lateralization of a perturbed harmonic: Effects of onset asynchrony and mistuning a) Nicholas I. Hill and C. J. Darwin Laboratory of Experimental Psychology, University of Sussex, Brighton BN1 9QG, United

More information

Spectral processing of two concurrent harmonic complexes

Spectral processing of two concurrent harmonic complexes Spectral processing of two concurrent harmonic complexes Yi Shen a) and Virginia M. Richards Department of Cognitive Sciences, University of California, Irvine, California 92697-5100 (Received 7 April

More information

The basic hearing abilities of absolute pitch possessors

The basic hearing abilities of absolute pitch possessors PAPER The basic hearing abilities of absolute pitch possessors Waka Fujisaki 1;2;* and Makio Kashino 2; { 1 Graduate School of Humanities and Sciences, Ochanomizu University, 2 1 1 Ootsuka, Bunkyo-ku,

More information

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979)

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979) Hearing The nervous system s cognitive response to sound stimuli is known as psychoacoustics: it is partly acoustics and partly psychology. Hearing is a feature resulting from our physiology that we tend

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 4aPPb: Binaural Hearing

More information

Group Delay or Processing Delay

Group Delay or Processing Delay Bill Cole BASc, PEng Group Delay or Processing Delay The terms Group Delay (GD) and Processing Delay (PD) have often been used interchangeably when referring to digital hearing aids. Group delay is the

More information

Frequency refers to how often something happens. Period refers to the time it takes something to happen.

Frequency refers to how often something happens. Period refers to the time it takes something to happen. Lecture 2 Properties of Waves Frequency and period are distinctly different, yet related, quantities. Frequency refers to how often something happens. Period refers to the time it takes something to happen.

More information

Appendix B Statistical Methods

Appendix B Statistical Methods Appendix B Statistical Methods Figure B. Graphing data. (a) The raw data are tallied into a frequency distribution. (b) The same data are portrayed in a bar graph called a histogram. (c) A frequency polygon

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

ICaD 2013 ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES

ICaD 2013 ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES ICaD 213 6 1 july, 213, Łódź, Poland international Conference on auditory Display ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES Robert Albrecht

More information

Modeling Physiological and Psychophysical Responses to Precedence Effect Stimuli

Modeling Physiological and Psychophysical Responses to Precedence Effect Stimuli Modeling Physiological and Psychophysical Responses to Precedence Effect Stimuli Jing Xia 1, Andrew Brughera 2, H. Steven Colburn 2, and Barbara Shinn-Cunningham 1, 2 1 Department of Cognitive and Neural

More information

On the improvement of localization accuracy with nonindividualized

On the improvement of localization accuracy with nonindividualized On the improvement of localization accuracy with nonindividualized HRTF-based sounds Catarina Mendonça 1, AES Member, Guilherme Campos 2, AES Member, Paulo Dias 2, José Vieira 2, AES Fellow, João P. Ferreira

More information

Binaural processing of complex stimuli

Binaural processing of complex stimuli Binaural processing of complex stimuli Outline for today Binaural detection experiments and models Speech as an important waveform Experiments on understanding speech in complex environments (Cocktail

More information

Introductory Motor Learning and Development Lab

Introductory Motor Learning and Development Lab Introductory Motor Learning and Development Lab Laboratory Equipment & Test Procedures. Motor learning and control historically has built its discipline through laboratory research. This has led to the

More information

Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners

Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners Cochlear implant patients localization using interaural level differences exceeds that of untrained normal hearing listeners Justin M. Aronoff a) Communication and Neuroscience Division, House Research

More information

SoundRecover2 the first adaptive frequency compression algorithm More audibility of high frequency sounds

SoundRecover2 the first adaptive frequency compression algorithm More audibility of high frequency sounds Phonak Insight April 2016 SoundRecover2 the first adaptive frequency compression algorithm More audibility of high frequency sounds Phonak led the way in modern frequency lowering technology with the introduction

More information

Categorical Perception

Categorical Perception Categorical Perception Discrimination for some speech contrasts is poor within phonetic categories and good between categories. Unusual, not found for most perceptual contrasts. Influenced by task, expectations,

More information

Examining the Constant Difference Effect in a Concurrent Chains Procedure

Examining the Constant Difference Effect in a Concurrent Chains Procedure University of Wisconsin Milwaukee UWM Digital Commons Theses and Dissertations May 2015 Examining the Constant Difference Effect in a Concurrent Chains Procedure Carrie Suzanne Prentice University of Wisconsin-Milwaukee

More information

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception

Jitter, Shimmer, and Noise in Pathological Voice Quality Perception ISCA Archive VOQUAL'03, Geneva, August 27-29, 2003 Jitter, Shimmer, and Noise in Pathological Voice Quality Perception Jody Kreiman and Bruce R. Gerratt Division of Head and Neck Surgery, School of Medicine

More information

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03

SOLUTIONS Homework #3. Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 SOLUTIONS Homework #3 Introduction to Engineering in Medicine and Biology ECEN 1001 Due Tues. 9/30/03 Problem 1: a) Where in the cochlea would you say the process of "fourier decomposition" of the incoming

More information

ON THE ROLE OF IMPUTED VELOCITY IN THE AUDITORY KAPPA EFFECT. Molly J. Henry. A Thesis

ON THE ROLE OF IMPUTED VELOCITY IN THE AUDITORY KAPPA EFFECT. Molly J. Henry. A Thesis ON THE ROLE OF IMPUTED VELOCITY IN THE AUDITORY KAPPA EFFECT Molly J. Henry A Thesis Submitted to the Graduate College of Bowling Green State University in partial fulfillment of the requirements for the

More information

What Is the Difference between db HL and db SPL?

What Is the Difference between db HL and db SPL? 1 Psychoacoustics What Is the Difference between db HL and db SPL? The decibel (db ) is a logarithmic unit of measurement used to express the magnitude of a sound relative to some reference level. Decibels

More information

Speech Cue Weighting in Fricative Consonant Perception in Hearing Impaired Children

Speech Cue Weighting in Fricative Consonant Perception in Hearing Impaired Children University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange University of Tennessee Honors Thesis Projects University of Tennessee Honors Program 5-2014 Speech Cue Weighting in Fricative

More information

UvA-DARE (Digital Academic Repository) Perceptual evaluation of noise reduction in hearing aids Brons, I. Link to publication

UvA-DARE (Digital Academic Repository) Perceptual evaluation of noise reduction in hearing aids Brons, I. Link to publication UvA-DARE (Digital Academic Repository) Perceptual evaluation of noise reduction in hearing aids Brons, I. Link to publication Citation for published version (APA): Brons, I. (2013). Perceptual evaluation

More information

Binaural Hearing for Robots Introduction to Robot Hearing

Binaural Hearing for Robots Introduction to Robot Hearing Binaural Hearing for Robots Introduction to Robot Hearing 1Radu Horaud Binaural Hearing for Robots 1. Introduction to Robot Hearing 2. Methodological Foundations 3. Sound-Source Localization 4. Machine

More information

Stimulus any aspect of or change in the environment to which an organism responds. Sensation what occurs when a stimulus activates a receptor

Stimulus any aspect of or change in the environment to which an organism responds. Sensation what occurs when a stimulus activates a receptor Chapter 8 Sensation and Perception Sec 1: Sensation Stimulus any aspect of or change in the environment to which an organism responds Sensation what occurs when a stimulus activates a receptor Perception

More information

Hearing Conservation Program

Hearing Conservation Program Last Reviewed Date: 3/07/2018 Last Revised Date: 7/27/2017 Effective Date: 6/27/1994 Applies To: Employees, Faculty, Students, Others For More Information contact: EHS, Coordinator at 860-486-3613 or valerie.brangan@uconn.edu

More information

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT

CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT CONTRIBUTION OF DIRECTIONAL ENERGY COMPONENTS OF LATE SOUND TO LISTENER ENVELOPMENT PACS:..Hy Furuya, Hiroshi ; Wakuda, Akiko ; Anai, Ken ; Fujimoto, Kazutoshi Faculty of Engineering, Kyushu Kyoritsu University

More information

Chapter 3 CORRELATION AND REGRESSION

Chapter 3 CORRELATION AND REGRESSION CORRELATION AND REGRESSION TOPIC SLIDE Linear Regression Defined 2 Regression Equation 3 The Slope or b 4 The Y-Intercept or a 5 What Value of the Y-Variable Should be Predicted When r = 0? 7 The Regression

More information

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source Binaural Phenomena Aim To understand binaural hearing Objectives Understand the cues used to determine the location of a sound source Understand sensitivity to binaural spatial cues, including interaural

More information

Spectral and Spatial Parameter Resolution Requirements for Parametric, Filter-Bank-Based HRTF Processing*

Spectral and Spatial Parameter Resolution Requirements for Parametric, Filter-Bank-Based HRTF Processing* Spectral and Spatial Parameter Resolution Requirements for Parametric, Filter-Bank-Based HRTF Processing* JEROEN BREEBAART, 1 AES Member, FABIAN NATER, 2 (jeroen.breebaart@philips.com) (fnater@vision.ee.ethz.ch)

More information

Using 3d sound to track one of two non-vocal alarms. IMASSA BP Brétigny sur Orge Cedex France.

Using 3d sound to track one of two non-vocal alarms. IMASSA BP Brétigny sur Orge Cedex France. Using 3d sound to track one of two non-vocal alarms Marie Rivenez 1, Guillaume Andéol 1, Lionel Pellieux 1, Christelle Delor 1, Anne Guillaume 1 1 Département Sciences Cognitives IMASSA BP 73 91220 Brétigny

More information

Indoor Noise Annoyance Due to Transportation Noise

Hyeon Ku Park (Professor, Department of Architectural Engineering, Songwon University, Korea). Abstract: This study examined the relationship between

ReSound NoiseTracker II

Abstract: The main complaint of hearing instrument wearers continues to be hearing in noise. While directional microphone technology exploits spatial separation of signal from noise to improve listening

FREQUENCY COMPRESSION AND FREQUENCY SHIFTING FOR THE HEARING IMPAIRED

Francisco J. Fraga, Alan M. Marotta (National Institute of Telecommunications, Santa Rita do Sapucaí - MG, Brazil). Abstract: A considerable

HOW TO USE THE SHURE MXA910 CEILING ARRAY MICROPHONE FOR VOICE LIFT

Created: Sept 2016. Updated: June 2017. By: Luis Guerra, Troy Jensen. The Shure MXA910 Ceiling Array Microphone offers the unique advantage

AUDL GS08/GAV1 Signals, systems, acoustics and the ear. Pitch & Binaural listening

Review [tuning plots, 100 Hz to 10 kHz]. Part I: Auditory frequency selectivity. Tuning

Lecture 8: Spatial sound

EE E6820: Speech & Audio Processing & Recognition, Lecture 8: Spatial sound. Topics: 1. Spatial acoustics; 2. Binaural perception; 3. Synthesizing spatial audio; 4. Extracting spatial sounds. Dan Ellis

Sound localization under conditions of covered ears on the horizontal plane

Acoust. Sci. & Tech. 28, 5 (2007). TECHNICAL REPORT. © 2007 The Acoustical Society of Japan. Madoka Takimoto, Takanori Nishino, Katunobu

Supporting Information

Variances and biases of absolute distributions were larger in the 2-line

Failure to unlearn the precedence effect

R. Y. Litovsky, M. L. Hawley, and B. J. Fligor, Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215; P. M. Zurek, Sensimetrics Corporation, Somerville, Massachusetts, and Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

A Comparison of Baseline Hearing Thresholds Between Pilots and Non-Pilots and the Effects of Engine Noise

DOT/FAA/AM-05/12, Office of Aerospace Medicine, Washington, DC 20591. Dennis B. Beringer, Howard C.

Advanced Audio Interface for Phonetic Speech Recognition in a High Noise Environment

DISTRIBUTION STATEMENT A: Approved for Public Release; Distribution Unlimited. SBIR 99.1, TOPIC AF99-1Q3, PHASE I SUMMARY

Audibility of time differences in adjacent head-related transfer functions (HRTFs)

Hoffmann, Pablo Francisco F.; Møller, Henrik. Aalborg Universitet. Published in: Audio Engineering Society Convention Papers

Level discrimination of sinusoids as a function of duration and level for fixed-level, roving-level, and across-frequency conditions

Andrew J. Oxenham, Institute for Hearing, Speech, and Language, and Institute of Technology, Cambridge, MA. Electronic mail: