Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane. I. Psychoacoustical Data


Jonas Braasch, Klaus Hartung
Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany

Summary

The ability to localize a sound in the free field emitted from one direction (noise burst, 2-ms duration, 2-ms cos²-ramps) in the presence of another sound emitted from a different direction (noise burst, 5-ms duration, 2-ms cos²-ramps) is measured in anechoic and reverberant virtual environments using individual head-related transfer functions (HRTFs). The target is presented from 13 directions in steps of 15° in the frontal horizontal plane at different power ratios of target and distracter (T/D-ratio), measured before the signals are filtered with the HRTFs. The distracter is placed at 0°, −90°, or 90° azimuth. When the T/D-ratio is set to 0 dB, the perceived directions of the target are significantly shifted away from its actual location, in the direction opposite the distracter, for all distracter directions. This phenomenon is found for all stimuli presented in an anechoic and in a reverberant environment. With decreasing T/D-ratio, the listeners give similar responses for adjacent angles. At the lowest test condition (set individually to −12 dB or −15 dB), the listeners' answers can be grouped into the following general target positions: left, front and right. In the reverberant condition, this effect is observed at T/D-ratios of −7 dB or −10 dB. Measurements of masked detection thresholds in a further experiment show that the listeners cannot detect the target when it is presented from the direction of the distracter (0° azimuth) at T/D-ratios below −5 dB. A 2A-4IFC discrimination experiment in the anechoic condition reveals that the listeners are unable to discriminate between target angles up to 45° apart at low T/D-ratios (−12 dB).

PACS no. Dc, Pn, Qp
Received 14 May 2001, revised 16 May 2002, accepted 28 June 2002.
Now with: Bose Corporation, Framingham, Massachusetts, USA.

1. Introduction

In daily life, most sounds that human observers localize do not occur in isolation, but are part of a more complex scenario of concurrent sound sources. These concurrent sources usually affect a person's ability to localize the target sound and can be classified as coherent sounds, namely reflections of the target sound from surfaces in the environment, and incoherent sounds (distracters), namely sounds emitted from sources other than the target. Blauert gives a detailed summary of investigations on human sound localization in multiple sound-source scenarios [1]. An overview of the effect of room reflections on the localization of a single sound source is given by a series of publications by Hartmann and Rakerd [2, 3, 4, 5]. They show that the rise time of the sound plays an important role when localizing a sinusoid in a reverberant environment [4]. A recent study that investigated the influence of a distracter on localization performance in an anechoic environment was carried out by Good and Gilkey [6]. The ability to localize the target (a click train) decreases in the presence of a distracter when the target-to-distracter ratio (T/D-ratio) is lowered. Lorenzi et al. [7] made similar observations in a source-identification experiment in which the target was presented at different positions in the presence of a broadband noise distracter. Both investigations show that localization performance decreases with decreasing T/D-ratio. In particular, when the level of the target is lowered, the root-mean-square (rms) error increases and r², the proportion of variance accounted for by the best-fitting relation between perceived and actual angles, decreases. In both investigations, localization shifts of the target, induced by the distracter, were observed.
While Good and Gilkey observed pulling effects in the L/R direction, Lorenzi et al. observed both pulling and pushing effects. Pulling effects refer to cases in which the auditory event of the target is shifted toward the distracter; pushing effects refer to cases in which it is shifted away from the distracter. Recently, Langendijk et al. [8] were able to show that localization performance also degraded when target and distracter did not overlap temporally.

© S. Hirzel Verlag EAA

Several investigations have been published to find out why and how perceptual localization shifts of a sound source are induced by a second, preceding sound source [9, 10, 11, 12]. The results of these experiments show that the onset delay between a target and a distracter sound is a major factor for the perceived lateralization shift. The lateralization shift also depends on the type of sound (e.g., noise or sinusoids) and its frequency range. For short delays, the perceived lateralization in the presence of a distracter, presented at 0° azimuth or diotically, is smaller than the perceived lateralization when the target is presented alone. For longer delays, on the other hand, the perceived lateralization usually increases and can exceed the lateralization of the target measured in the absence of a distracter. Another finding is that the distracter and target do not have to overlap in spectrum and/or time. These influences of a distracter occur for signals with interaural level differences (ILDs) and/or signals with interaural time differences (ITDs). The sensation level of the target must be taken into account when investigating these effects. Threshold levels of broadband sounds in a free field were measured (i) by Kock [13] for detecting speech in broadband noise; (ii) by Saberi et al. [14] for detecting click trains in broadband noise; and (iii) by Gilkey and Good [15] for detecting band-pass filtered click trains in broadband noise. Saberi et al. [14] and Gilkey and Good [15] also measured the binaural masking level difference (BMLD)¹: Saberi et al. by using an ear plug for the monaural condition, and Gilkey and Good by using a virtual environment with HRTFs. In all three investigations, the masked threshold decreases by a value between 6 dB for a band-pass filtered click train and 14 dB for a broadband click train when the target is moved from 0° to 90° azimuth. In all cases, the distracter was presented directly from the front.
Even though several experiments have been carried out to investigate how the human ability to localize a target is influenced by the presence of one or more distracters or room reflections, the combined effect of room reflections and a distracter has not been investigated so far. The main aim of this investigation is to find out how room reflections influence human localization in the presence of a distracter at low T/D-ratios. The second major aim is to study how the previously mentioned pulling and pushing effects, observed in the presence of a distracter, change with different T/D-ratios and/or with the presence of room reflections. Do pushing and pulling effects show the same characteristics for different T/D-ratios? Is it only their magnitude that changes with the T/D-ratio and the presence of reflections, or are the observed pushing and pulling effects of a different kind? In Experiment I, we examine and compare human localization performance at different T/D-ratios in both anechoic and reverberant environments. In Experiment II, masked detection thresholds are measured to study how localization performance is related to the sensation level. In Experiment III, we test whether different target positions can be discriminated, by localization or other cues, at low T/D-ratios. Throughout the experiments, broadband noise bursts are used for both the target and distracter signals. The reason for selecting the same type of signal for both is to ensure that the two signals differ only in their localization cues. This minimizes the bias caused by differences in other cues and will, at a later point, also simplify the development of model algorithms. Despite the similarities between target and distracter, it is in general not difficult to distinguish them, since the listeners were told before the experiments that the distracter would always precede the target.
Because the variation of the magnitude of the HRTF with direction is strongly frequency dependent, there are different ways to define the target-to-distracter level ratio. One approach is to determine the ratio before both signals are filtered with the HRTFs. An alternative definition is to take the target level above the masked detection threshold. For several reasons, we use the first definition throughout the investigation. Firstly, the target-to-distracter ratio (T/D-ratio) is easier to determine this way, because the masked detection thresholds do not have to be measured in advance for all combinations of target and distracter positions, as would be necessary in the latter case. Furthermore, the target and the distracter have on average identical frequency spectra before they are filtered with the HRTFs, so that, before filtering, the T/D-ratio does not depend on frequency. Secondly, this definition has a long tradition in loudspeaker experiments, as it is essentially equivalent to determining the level ratio of target and distracter at the inputs of the loudspeakers. Keeping a consistent approach with previous studies makes it simpler to compare results with former investigations, especially [6] and [7], which will be referred to throughout this text. It is important to keep in mind that only when the target is presented from the same direction as the distracter is the target-to-distracter ratio at the left and right eardrums identical to the T/D-ratio of the source signals. For other directions, the target-to-distracter ratio at the eardrums will differ according to the magnitude transfer functions of the individual HRTFs. That is one of the reasons why we measured the masked detection thresholds for comparison purposes.
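Under this definition, the T/D-ratio is simply the broadband power ratio of the two source signals before HRTF filtering. A minimal sketch in Python (the language, signal lengths and scaling below are illustrative assumptions, not details from the paper):

```python
import numpy as np

def td_ratio_db(target, distracter):
    """T/D-ratio in dB: power ratio of the source signals,
    computed before any HRTF filtering."""
    p_target = np.mean(np.asarray(target) ** 2)
    p_distracter = np.mean(np.asarray(distracter) ** 2)
    return 10.0 * np.log10(p_target / p_distracter)

# Example: scale a noise target to a nominal -12 dB T/D-ratio.
rng = np.random.default_rng(0)
distracter = rng.standard_normal(22050)                 # unit-power noise
target = 10 ** (-12 / 20) * rng.standard_normal(8820)   # about -12 dB re distracter
```

Because both signals are drawn from the same broadband noise process, this single number characterizes the ratio at all frequencies before filtering, which is exactly the motivation given above.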
¹ The BMLD is calculated as the difference between two masked thresholds of the same test condition, one presented dichotically, the other monotically (by occluding one ear or turning off one headphone channel, [1]). In general, the monotic condition (left or right channel) with the lower detection threshold is chosen as the reference to determine the BMLD.

2. Experiment I: Localization of noise in the presence of a distracter in an anechoic and a reverberant environment

2.1. Methods

Listeners

Eleven unpaid listeners (one female, ten male) participated in the anechoic condition of the experiment. Their ages ranged from 23 to 33 years. All listeners had normal hearing (hearing loss below 20 dB at octave frequencies between 125 Hz and 8 kHz). Six of the eleven listeners, all male, aged between 23 and 26 years, participated in the reverberant condition. The 0-dB T/D-ratio condition in the reverberant environment was conducted after the other conditions had been tested. Because two of the six listeners were no longer available at this point, they were replaced by two listeners from the eleven who had taken part in the anechoic conditions. Those two listeners, L8 and L9, participated only in the single-target and the 0-dB T/D-ratio conditions.

Apparatus and stimuli

In the localization test, the distracter and the target were presented in the frontal horizontal plane: the distracter at an azimuth of 0°, the target from 13 equidistant positions between −90° and 90° azimuth in steps of 15°. Both the target and distracter were broadband noise bursts of 2-ms and 5-ms duration, respectively (2-ms cos² on- and offset ramps, 200 Hz–14 kHz frequency range). The onset delay between target and distracter was 2 ms (the distracter partly preceded the target). The sound pressure level of the distracter was set to approximately 70 dB SPL, measured at the microphones of a dummy head (Head Acoustics, KK1412) placed at the position of the listener. The SPL of the target was adjusted according to the different target-to-distracter ratios (T/D-ratios), defined as the power ratio of target and distracter before they were filtered with the HRTFs. The following T/D-ratios were tested in addition to the single-source condition (T/D-ratio = ∞): 0 dB, −10 dB, −12 dB and −15 dB in the anechoic environment, and 0 dB, −5 dB, −7 dB and −10 dB in the reverberant environment.
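The ramped noise bursts described above can be sketched as follows. The burst and ramp durations passed in below are placeholders (some digits are garbled in this copy of the text), and the sampling rate is an assumption:

```python
import numpy as np

def noise_burst(dur_ms, ramp_ms, fs=44100, seed=None):
    """White-noise burst with cos^2 (raised-cosine) on- and offset ramps."""
    n = int(round(dur_ms * fs / 1000))
    n_ramp = int(round(ramp_ms * fs / 1000))
    x = np.random.default_rng(seed).standard_normal(n)
    # cos^2-shaped onset ramp, rising smoothly from 0 to 1
    ramp = np.sin(0.5 * np.pi * np.arange(n_ramp) / n_ramp) ** 2
    env = np.ones(n)
    env[:n_ramp] = ramp
    env[-n_ramp:] = ramp[::-1]
    return x * env

# Placeholder durations; the paper's exact values are garbled in this copy.
target = noise_burst(200.0, 20.0, seed=1)
distracter = noise_burst(500.0, 20.0, seed=2)
```

The envelope starts and ends at zero, so the bursts are free of onset clicks regardless of the exact durations chosen.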
Five listeners reported problems detecting the target in the −12-dB condition in the anechoic environment. In these cases, the T/D-ratio in the last session was adjusted to −7 dB instead of −15 dB. Two of the six listeners did not participate in the −10-dB condition in the reverberant environment for the same reason. Six listeners took part in two further sessions, testing two conditions with different distracter positions: −90° and 90° azimuth (0-dB T/D-ratio, anechoic environment). For the experiments in a virtual reverberant environment, a rectangular room (6 m × 5 m × 3 m) was simulated using the mirror-image method [16]. To simulate early reflections, wall reflections of the first order and the earliest six reflections of the second order were modeled (Figure 1). Each mirror-image source was filtered with the listener's individual HRTFs (Tucker-Davis Technologies (TDT), PD1), measured at the closest available angle. The absorption coefficient (α = 1 − |R|², with R the reflectivity [17]) of each wall was 0.369, independent of frequency (2-dB SPL attenuation of the sound ray after each reflection). Only the floor was considered non-reflecting, because the HRTF catalog did not contain positions below −10° elevation. (At the time the experiments were carried out, it was not yet possible to measure HRTFs below −10° elevation with the set-up described above.) The listeners were placed in the middle of the virtual room with their ears at a height of 1.0 m (sitting position). Both the target and distracter were presented at a distance of 2 m from the listener, at the same height (1.0 m). The late reflections, simulated using a digital reverb processor (Lexicon, LXP1), started immediately after the last second-order early reflection.

Figure 1. Artificial room impulse response used for the reverberant conditions (left channel, 0° azimuth, 0° elevation).
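The stated absorption coefficient follows directly from the 2-dB per-reflection attenuation; a quick check of the arithmetic (a sketch, not the authors' code):

```python
atten_db = 2.0                       # SPL attenuation per wall reflection
R = 10 ** (-atten_db / 20)           # pressure reflectivity |R|
alpha = 1 - R ** 2                   # absorption coefficient, alpha = 1 - |R|^2
print(round(R, 3), round(alpha, 3))  # prints: 0.794 0.369
```

This reproduces the value 0.369 quoted in the text.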
The reverberation time was 0.62 s (500–1000-Hz band, T method, ISO 354), measured with a real-time analyzer (Norsonic, 840). Both the target and the distracter were presented with reflections in the reverberant condition.

Measurement and verification of the HRTFs

The HRTFs were measured in the anechoic chamber (5.13 m × 4.98 m × 4.76 m, critical frequency 110 Hz) of the Institut für Kommunikationsakustik at the Ruhr-Universität Bochum. The method, which is fully described in Keller et al. [18] for barn owls and in Djelani et al. [19] for humans, can be summarized as follows: a continuously repeated random-phase noise (period of 4096 samples, sampling rate 44.1 kHz) was generated (TDT, PD1). The signal had a flat amplitude spectrum and a random phase spectrum, uniformly distributed between −π and +π. During the measurement of the HRTFs, the subject was seated on a rotatable chair in the middle of a loudspeaker arc. The sound was presented from one of 11 loudspeakers (ITT/Nokia, 1763) mounted on the arc (diameter 4 m), at elevations from −10° to 90° in steps of 10°. HRTFs of different azimuths were measured by rotating the chair under the arc. The sound was recorded digitally (TDT, PD1) using two miniature microphones (Sennheiser, KE-4) that were placed in the subjects' ear canals, approximately 5 mm from the entrance of the external ear canal. The microphone output signals were amplified (John Hardy, M1; TDT, MA2), lowpass filtered at 20 kHz (TDT, FT6) and A/D converted (TDT, PD1), and the recorded signal was averaged synchronously. The HRTFs were measured at 122 positions

in the upper hemisphere with 10° to 15° resolution. The loudspeakers in the arc were equalized with their inverse transfer functions. For that purpose, the impulse response of the loudspeakers was measured using a microphone with a flat frequency response (Sennheiser, MKH 20 P48). The coefficients of the filter with the inverse transfer function (FIR filter, 512 coefficients) were calculated from the measured impulse response using a least-squares approximation [20]. It was shown that the error of the equalized loudspeaker, defined as the decibel difference between the ideal and the realized filter, was less than 1 dB within a frequency range from 110 Hz to 14.5 kHz. The headphones (STAX, SR-Lambda) were calibrated using the inverse transfer function between the headphones and the miniature microphones, which were inserted into the ear at the same position as for the HRTF measurement. This transfer function was measured individually for each listener directly before the measurement of the HRTFs. Before the experiments, the HRTFs were evaluated for each listener in a localization test. The listeners had to localize a single broadband noise burst (2-ms duration, 2-ms cos² on- and offset ramps, 200 Hz–14 kHz frequency range). The sound was presented in a virtual environment through headphones (STAX, SR-Lambda) using the listeners' individual HRTFs. All eleven listeners who participated in the experiment were able to localize well with their individual HRTFs and had externalized auditory events.

Figure 2. Localization performance in an anechoic environment for one listener (L6) in the L/R dimension for different T/D-ratios (target alone, 0 dB, −10 dB, −12 dB, −15 dB). The distracter was set at 0°. The brackets show the estimated groupings between adjacent angles (clusters: dashed brackets; scatterings: solid brackets).
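The least-squares design of an inverse FIR filter can be sketched as follows. This is a generic reconstruction of the approach summarized above, not the authors' code; the choice of modeling delay and the absence of regularization are assumptions:

```python
import numpy as np

def ls_inverse_fir(h, n_taps=512, delay=None):
    """FIR coefficients g minimizing ||h * g - delayed unit impulse||^2."""
    h = np.asarray(h, dtype=float)
    n_out = len(h) + n_taps - 1
    if delay is None:
        delay = n_out // 2            # modeling delay keeps the inverse causal
    # Convolution matrix C such that C @ g == np.convolve(h, g)
    C = np.zeros((n_out, n_taps))
    for k in range(n_taps):
        C[k:k + len(h), k] = h
    d = np.zeros(n_out)
    d[delay] = 1.0                    # desired response: a delayed impulse
    g, *_ = np.linalg.lstsq(C, d, rcond=None)
    return g

# Toy check: equalizing a minimum-phase two-tap "loudspeaker".
h = np.array([1.0, 0.5])
g = ls_inverse_fir(h, n_taps=64)
y = np.convolve(h, g)                 # approximately a delayed unit impulse
```

In practice such an inverse would be computed from the measured loudspeaker impulse response and verified in the frequency domain, as the text describes for the 110 Hz to 14.5 kHz range.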
The broadband noise was presented from 12 equidistant positions (30° apart) in the horizontal plane and 10 directions in the frontal median plane. Details and results are given in the Appendix.

Procedure

The experiment was divided into several sessions. In each session, the T/D-ratio was kept constant; only the azimuth of the target was varied pseudo-randomly. Each session started with a training phase in which training trials were presented to the listener without recording the responses. During the experiment, each listener was seated on a chair. The listeners were asked to keep their head still during the presentation of the stimuli. After a stimulus had been presented, the listener reported the direction of the externalized auditory event of the target using the GELP method [21]. In the GELP method, the listener indicates the auditory event on a sphere, which is placed in front of him or her, using a magnetic stylus. The position of the stylus on the sphere is measured using a Polhemus Fastrak system and transformed into a polar coordinate system with its origin at the center of the sphere. After the response, the next stimulus was presented with a delay of 2 seconds in the anechoic environment and 3 seconds in the reverberant environment. At the end of the training phase, the recording of the listener's responses began. Each stimulus was presented ten times, and each session lasted about 14 minutes. No feedback was provided to the listeners during the training or recording phases.

2.2. Results

Anechoic environment

For data analysis, the presented directions of the stimuli and the perceived directions of the listener are converted into the three-pole coordinate system (e.g. [6, 22]).² In Figure 2, the results in the L/R direction for one listener, L6, are shown for the different T/D-ratios in the anechoic condition. In each panel, the perceived L/R direction of the target is plotted against the presented L/R direction for a specific T/D-ratio condition.
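The conversion from stylus position to response angles can be sketched as follows; the axis convention (x forward, y to the listener's left, z up) and the formula for the three-pole L/R coordinate are common definitions assumed here, not taken verbatim from the paper:

```python
import numpy as np

def cartesian_to_polar(x, y, z):
    """Stylus position relative to the sphere center -> (azimuth, elevation)
    in degrees; x points forward, y to the listener's left, z up (assumed)."""
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return azimuth, elevation

def left_right_angle(azimuth_deg, elevation_deg):
    """Three-pole L/R coordinate: +90 deg at the left pole, -90 deg at the
    right pole (the sign convention used in this paper)."""
    s = np.sin(np.radians(azimuth_deg)) * np.cos(np.radians(elevation_deg))
    return np.degrees(np.arcsin(s))
```

Under this definition, any response on the median plane maps to 0° in the L/R dimension, and ±90° are the poles of the coordinate system, which is why only one-sided response distributions are possible at the limits.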
Judgement angles and target angles are grouped into 15°-wide bins. The area of each circle is proportional to the number of responses measured within the bin for the presented direction that corresponds to the center of that bin. In the reference condition, in the absence of a distracter (Figure 2, top-left panel), the perceived directions of L6 are almost identical to the presented directions. Most of the listeners show a tendency to perceive the sounds presented between −90° and −15° and between 15° and 90° at greater angles than the presented angles. In most cases, the plots are slightly s-shaped rather than perfectly straight, as can be seen in the right hemisphere of L6's responses.

² In the L/R dimension, 90° is in the left hemisphere and −90° in the right hemisphere, analogous to the azimuth of the head-related coordinate system [1].

Further, the majority of listeners does not seem to distinguish between

the angles −90° and −75°, or between 75° and 90°; the responses to these directions show the same distribution. This can be seen clearly for L6 in the left hemisphere. However, it should not be forgotten that −90° and 90° are the limits of the three-pole coordinate system, allowing only one-sided distribution functions at these points. In the test conditions (distracter present), the localization performance degrades as the T/D-ratio is decreased. In the present data, the degradation is manifested by the fact that the distributions of the responses to different target angles become more and more similar. In some conditions, the responses to distinctly different target angles are nearly identical, but the reproducibility of the responses is nevertheless high; therefore, the variance of the response distributions shown in the figures is low. This case will be referred to as clustering. In other conditions, the reproducibility of the responses is poor; the response distributions for different target angles are similar and the variance of the distributions is large. This case will be referred to as scattering. It should be noted that clustering and scattering are not two distinct effects; rather, there is a smooth transition between them. Quantitatively, the terms scattering and clustering are applied if the response distributions for two or more neighboring angles are not significantly different according to the Kolmogorov-Smirnov test (p > 0.05).

Figure 3. The root-mean-square (rms) error D (left panels) and the response standard deviation s (right panels) for different T/D-ratios and listeners for the anechoic (top panels) and the reverberant (bottom panels) conditions.
The distinction between scattering and clustering is based on the width of the distributions, measured by the median over all quartile ranges within a group of similar responses. If this value is at most 15°, the term clustering is used; for a median width of more than 15°, the term is scattering. The estimated clusters and scatterings are shown in Figure 2 by brackets (solid lines, scatterings; dashed lines, clusters). At a T/D-ratio of −15 dB, for example, L6 distinguishes only between three locations: right (scattering), for targets presented between −90° and −45°; frontal (cluster), for targets presented at the central angles; and left (scattering), for targets presented between 45° and 90° (Figure 2, bottom-left panel). All listeners have in common that their responses at very low T/D-ratios show scattering and clustering. It seems that they are only able to discriminate between a very limited number of locations, usually left, frontal and right. In all cases, this tendency is increased by lowering the T/D-ratio. In addition, the root-mean-square (rms) error D and the response standard deviation s become larger when the T/D-ratio decreases (Figure 3, top panels). Some listeners (L5, L7, L8, L10 and L11) already had severe difficulties discriminating between the presented directions of the target at a T/D-ratio of −12 dB. Relatively high localization errors can be observed for those listeners (Figure 3, top panels). In these cases, we did not test the −15-dB T/D-ratio condition, but rather the −7-dB T/D-ratio condition. There does not seem to be a general rule for when clustering or scattering occurs, although the responses to target angles near 0° are clustered for all listeners.
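The grouping criterion described above can be sketched as follows. The two-sample KS test is implemented with the standard asymptotic p-value so the sketch is self-contained, and restricting the comparison to adjacent angles is a simplification of the authors' procedure; the data in the example are synthetic:

```python
import numpy as np

def ks_2samp(x, y):
    """Two-sample Kolmogorov-Smirnov test; returns (D, asymptotic p-value)."""
    x, y = np.sort(x), np.sort(y)
    data = np.concatenate([x, y])
    d = np.max(np.abs(np.searchsorted(x, data, side="right") / len(x)
                      - np.searchsorted(y, data, side="right") / len(y)))
    if d == 0.0:
        return 0.0, 1.0
    n_eff = len(x) * len(y) / (len(x) + len(y))
    lam = (np.sqrt(n_eff) + 0.12 + 0.11 / np.sqrt(n_eff)) * d
    k = np.arange(1, 101)
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2))
    return d, float(min(max(p, 0.0), 1.0))

def group_and_label(responses, angles, alpha=0.05):
    """Merge neighboring target angles whose response distributions are not
    significantly different; label each group 'cluster' if the median
    quartile range is at most 15 degrees, otherwise 'scattering'."""
    groups = [[angles[0]]]
    for a_prev, a in zip(angles, angles[1:]):
        _, p = ks_2samp(responses[a_prev], responses[a])
        if p > alpha:
            groups[-1].append(a)          # indistinguishable: same group
        else:
            groups.append([a])            # significantly different: new group
    labeled = []
    for g in groups:
        widths = [np.percentile(responses[a], 75) - np.percentile(responses[a], 25)
                  for a in g]
        labeled.append((g, "cluster" if np.median(widths) <= 15 else "scattering"))
    return labeled
```

For example, two adjacent angles with identical, narrow response distributions merge into one cluster, while a well-separated third angle remains its own group.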

Reverberant environment

In Figure 4, the results for listener L6 in the reverberant environment are shown. In the absence of a distracter (Figure 4, top-left panel), the perceived directions of the signal are largely equivalent to the presented directions of the target. The responses in the reverberant environment are similar to those measured in the anechoic condition, including the occurrence of clustering at the outer angles. If a distracter (−5-dB T/D-ratio) is added to the target, the distribution of the responses becomes more diffuse (Figure 4, top-right panel). The responses are nearly the same if the T/D-ratio is lowered to −7 dB (Figure 4, bottom-left panel). The responses change, however, if the T/D-ratio is decreased to −10 dB (Figure 4, bottom-right panel). L6, for example, no longer perceives auditory events at the larger angles. Regarding the location of the target, three groupings are again observed: outward left (scattering), directly frontal (cluster) and outward right (scattering). The results for the other five listeners show characteristics similar to those of L6. As in the anechoic condition, the root-mean-square (rms) error D and the response standard deviation s become larger when the T/D-ratio decreases (Figure 3, bottom panels). However, the localization errors measured in the reverberant environment are larger than those measured in the anechoic environment at the same T/D-ratio (compare, e.g., the values at −10 dB).

Figure 4. Localization performance in a reverberant environment for one listener (L6) in the L/R dimension for different T/D-ratios (target alone, −5 dB, −7 dB, −10 dB). The distracter was set at 0°. The brackets show the estimated groupings between adjacent angles (clusters: dashed brackets; scatterings: solid brackets).
In general, the listeners group their responses into three discriminated directions, one outward left, one directly frontal and one outward right, if the T/D-ratio is set at or below a threshold that varies individually between −7 dB and −10 dB. In half of the cases, the listeners discriminate between two instead of three groups at the lowest T/D-ratio. There does not seem to be a general rule according to which listeners tend to cluster the perceived responses or according to which their responses are scattered. As in the anechoic condition, the rms error D and the response standard deviation s in the reverberant condition increase when the T/D-ratio decreases (Figure 3, bottom panels). However, in comparison to the anechoic condition (top panels), the strong increase of the errors is observed at higher T/D-ratios. This becomes more obvious when the median calculated over all listeners is plotted in the same graph for both the anechoic and the reverberant condition (Figure 5).

Figure 5. The root-mean-square (rms) error D (upper panel) and the response standard deviation s (lower panel) for different T/D-ratios as the median over all listeners. The dotted curves show the results for the anechoic environment; the dashed curves show the results for the reverberant environment.

In Figure 6, the results for the condition with the lowest measured T/D-ratio are shown for the five remaining listeners L1, L2, L3, L5 and L7. Scattering dominates L2's responses (top-right panel). The distribution functions of the responses can be divided into two groups: left-frontal and right. The same grouping can be made for L1, L3 and L5, with the following exceptions: L5's responses form clusters, and both clustering (left-frontal) and scattering (right) are observed for L1.
For L7, three groups, a cluster (−90° to −45°) and two scatterings (the central angles, and 45° to 90°), can be observed.

Statistical analysis

Inspections of Figures 2, 4 and 6 indicate that at low T/D-ratios, the listeners give similar responses to adjacent target angles, resulting in clusters and scatterings. In the following, the results of statistical analyses, conducted to test whether the responses to two different angles are significantly different, are reported. The measured distribution functions for the different target positions are compared with each other using the Kolmogorov-Smirnov test [23]. A nonparametric test is considered appropriate because in several cases the distributions of the responses are not Gaussian. The results for the condition with the lowest measured T/D-ratio in the reverberant environment (Figure 6 and Figure 4, bottom-right panel) are shown in Figure 7 for each listener. The gray level of each bin shows the probability that the measured answers for the two directions indicated on the x- and y-axes belong to the same distribution function. Groups of bins with light gray to white color indicate that the listener was not able to discriminate the directions within those groups. There are some individual differences between the listeners, but, in general, the extents of those groups are in agreement with the observed clusters and scatterings. Also in this form of data presentation, it is obvious that the listeners' answers can be assigned to two or three groups, within which the listener can no longer discriminate significantly between the target positions.

Figure 6. Localization performance in a reverberant environment for the five remaining listeners in the condition with the lowest measured T/D-ratio (L1: −10 dB; L2: −10 dB; L3: −7 dB; L5: −7 dB; and L7: −10 dB) in the L/R dimension. The distracter was set at 0°. The brackets show the estimated groupings between adjacent angles (clusters: dashed brackets; scatterings: solid brackets).

Figure 7. Probabilities that the responses of two target directions, indicated on the x- and y-axes, have the same distribution function.
The probabilities were measured using the Kolmogorov-Smirnov test. The data are shown for the six individual listeners at the lowest T/D-ratio measured in the reverberant environment.

This observation can be used to define a new measure to quantify localization performance. In this approach, we understand localizability as the ability to distinguish between different angles of incidence, rather than the ability to indicate the target at the presented direction. In practice, the distributions of the responses measured for all target positions are compared with each other using the Kolmogorov-Smirnov test. Two directions are counted as discriminated if their responses are significantly different from each other at a significance level of 95% (i.e., p < 0.05). The distinguishability of directions (DOD) is then the percentage of combinations of directions that are statistically different from each other. Altogether, there are 78 different combinations of target directions to be tested. If all directions are significantly different from each other, the DOD is 100%. If none of them are significantly different from each other, the DOD has the value 0%. Note that only distributions of responses obtained for the same T/D-ratio were compared with each other. The results for the DOD are shown in Figure 8. The left panel shows the results for the anechoic environment, the right panel those for the reverberant environment. It can be clearly seen that the ability to discriminate between the different target positions decreases when the T/D-ratio is lowered, especially at the low T/D-ratios.

Figure 8. Distinguishability of directions (DOD) for different T/D-ratios and listeners for the anechoic (left panel) and the reverberant condition (right panel).

Figure 9. Distinguishability of directions (DOD) as the median for all listeners. The dotted curve shows the results for the anechoic environment; the dashed curve shows the results for the reverberant environment.

In Figure 9, the DOD is plotted as the median over all listeners as a function of the T/D-ratio for the anechoic (dotted line) and the reverberant environment (dashed line). It must be noted that the number of listeners is not constant for all points. In the absence of a distracter, the listeners are able to discriminate fairly well between the different directions presented (DOD > %). In the anechoic environment, the DOD does not change much if the T/D-ratio is lowered to −7 dB. For lower T/D-ratios, it decreases only slightly (but monotonically) at the T/D-ratios of −10 dB and −12 dB, but more markedly at a T/D-ratio of −15 dB. The DOD at the lowest tested T/D-ratio is close to %. This value agrees with the phenomena of clustering and scattering described above. For comparison, if the listeners only discriminated between two groups each with six angles, a DOD of 46.15% would be obtained (78 possible combinations minus the combinations within the two groups that cannot be discriminated = 36 discriminated combinations). If the listeners discriminated between three groups each with four directions, the DOD would be 74.36% (78 possible combinations minus the combinations within the three groups that cannot be discriminated = 58 discriminated combinations). A DOD value of % is, therefore, between the case of two existing groups (clusters and/or scatterings) and three existing groups.
The shape of the DOD curve for the reverberant condition is similar to that for the anechoic condition, but the decline starts earlier. At a T/D-ratio of −10 dB, the curve for the reverberant environment has already dropped to almost %.

Even though the DOD measure shows a similar dependence on the T/D-ratio as the root-mean-square (rms) error D and the response standard deviation s, we believe that the DOD has, at least theoretically, advantages in certain conditions. In those cases where the target is no longer detected when it is presented from certain directions, the DOD could be less dependent on the listener's strategy than D and s. For example, if both the target and the distracter are presented from the front, D and s increase at low T/D-ratios when the listener guesses the positions of the target randomly instead of identifying them with the position of the distracter. As previously mentioned, both types of behavior were observed in the investigation by Lorenzi et al. [7]. The DOD, however, should be less dependent on the listeners' strategies, because the distribution of the responses should be very similar for all directions from which the target cannot be detected, and the DOD merely determines whether the responses to two different target directions are significantly different.

So far, we have analyzed how localization performance is influenced by reducing the T/D-ratio. However, we have not yet shown whether a localization shift, as observed by Heller and Trahiotis [1] or Canévet and Meunier [11], is observable for two broadband signals. Small deviations, as found in their investigations, will not be revealed if the data are grouped into 15° bins and if the variance of the responses is large compared to the deviations of the lateralization shifts. Therefore, the data for the 0-dB T/D-ratio averaged over all listeners will be used as a reference in the next section.
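The two conventional measures mentioned here can be written down directly. This is a sketch only: the paper's exact averaging conventions for D and s are not given in this excerpt, so the definitions below (pooled rms error, within-direction standard deviation averaged over directions) are assumptions.

```python
import numpy as np


def rms_error(presented, perceived):
    """Root-mean-square localization error D (in degrees) between the
    presented and the perceived azimuths, pooled over all trials."""
    presented = np.asarray(presented, dtype=float)
    perceived = np.asarray(perceived, dtype=float)
    return float(np.sqrt(np.mean((perceived - presented) ** 2)))


def response_std(responses_per_direction):
    """Response standard deviation s: the within-direction standard
    deviation of the perceived azimuths, averaged over target directions."""
    return float(np.mean([np.std(r, ddof=1) for r in responses_per_direction]))
```

Both measures grow when a listener guesses randomly at inaudible targets, which is the sensitivity to response strategy that the DOD is intended to avoid.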

Figure 10. Localization performance in the L/R dimension. Each curve shows the median over all listeners. The distracting sound was presented at a T/D-ratio of 0 dB at the following angles: /anechoic condition (top-left panel), /reverberant condition (top-right panel), /anechoic condition (bottom-left panel) and /anechoic condition (bottom-right panel). The +'s with the dashed curves mark the condition in the presence of a distracter; the x's with the solid curves mark the target-alone condition.

Lateralization shift

In Figure 10, the median calculated from all perceived responses of all listeners in the single-source condition and the 0-dB T/D-ratio condition is plotted against the presented direction of the target for the four different conditions. In all cases, the T/D-ratio is 0 dB. Unlike in the previous descriptions, the distracter is not only set at azimuth (top-left panel, anechoic condition; top-right panel, reverberant condition), but is also set at (bottom-left panel) and (bottom-right panel) azimuth in the anechoic environment in two further conditions. In each panel, the single-source condition is represented by x's and a solid curve, the condition at a T/D-ratio of 0 dB by +'s and a dashed curve. The top-left panel shows the distracter condition in the anechoic environment. Each data point in the figure is calculated from 110 measured data points (10 responses × 11 listeners). The positions of the auditory events of the listeners are pushed toward the outer angles if the distracter is present. The two curves coincide at azimuth and at azimuth. A sign test is chosen to investigate whether this effect is significant. For each individual listener, the median of the auditory events for each presented direction in the single-source condition, except for azimuth, is compared to the median of the auditory events for the same direction in the distracted condition.
It is tested in how many cases the median of the auditory events in the distracted condition is perceived as further lateralized than the median for the same direction in the single-source condition. The null hypothesis is tested with a binomial sign test (132 data points, 12 directions × 11 listeners, p < 10⁻⁶). The bottom panels in Figure 10 show the results for two other distracter positions in the anechoic condition, namely (left) and (right). In both panels, it can be clearly seen that the auditory event of the target is pushed away from the distracter. In both cases, the auditory event of the target coincides with the distracter angle when the target is presented from that angle. The curves of the single-source and the distracted condition coincide at the outer angles, independent of the direction of the distracter. For both distracter positions, 72 data points were tested (12 directions × 6 listeners). The null hypothesis can be rejected highly significantly for both distracter directions (binomial sign test, p < 10⁻⁵ and p < 10⁻³). The top-right panel in Figure 10 shows the results for the 0-dB T/D-ratio condition in the reverberant environment. Again, a pushing effect occurs, moving the auditory event of the target away from the distracter. However, the curves of the single-source and the distracted condition coincide at the angles −45° and 45°. For all other angles, the pushing effect turns out to be greater than in the anechoic condition. The responses in the single-source condition do not exceed the angles −70° and 70°, while the responses in the anechoic environment span a range of −80° to 80° in the single-source condition. 12 data points are tested in the reverberant condition (12 directions × 6 listeners, with a stimulus repetition rate of 2 for four of the six listeners in this condition). The null hypothesis can be rejected highly significantly (p < 10⁻⁵).

Discussion

Two interim conclusions are drawn.
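The sign test used here reduces to a one-sided binomial test on the number of per-listener, per-direction medians that shift outward. A sketch follows; the function name is an assumption, and the example count is hypothetical, not the measured data.

```python
from scipy.stats import binomtest


def lateralization_sign_test(n_shifted_outward, n_total):
    """One-sided binomial sign test. Under the null hypothesis, a median
    is equally likely to shift toward or away from the distracter, so the
    number of outward shifts is Binomial(n_total, 0.5); the returned
    p-value is P(X >= n_shifted_outward)."""
    return binomtest(n_shifted_outward, n_total, p=0.5,
                     alternative="greater").pvalue


# hypothetical example: 120 of 132 direction/listener medians shift outward
p_value = lateralization_sign_test(120, 132)
```

If roughly half of the medians shift outward, the p-value stays near 0.5 and the null hypothesis cannot be rejected; the more one-sided the shifts, the smaller the p-value.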
The presence of a distracter shifts the position of the auditory event of the test signal by a few degrees (in general < 15°) in the direction opposite the distracter if the T/D-ratio is set to 0 dB (localization shift), while for very low T/D-ratios (< −10 dB) clustering and scattering are dominant. The effect that the auditory events are biased if the listener is distracted by a concurrent noise has been discussed in several studies, especially in connection with auditory adaptation. The results of this work agree with those of Canévet and Meunier [11], who measured the perceived lateralization of a sinusoid in the presence of a second, distracting sinusoid ( azimuth) in the free field. They found a shift of the auditory event toward the side if the onset delay between the distracter and the target was above ms (Canévet and Meunier, Figure 3). The results of another experiment by Meunier et al. [24] showed that a lateralization shift toward the side can be observed at shorter onset delays of 1 ms if narrow-band noise bursts are used (distracter set at azimuth). This is very similar to our results obtained at an onset delay of 2 ms. The results for sinusoidal tones probably differ because the phase relation between the distracter and the target is very distinct, while the target and distracter are usually uncorrelated if bursts of noise are used.

Good and Gilkey [6] reported that the auditory events were attracted toward the distracter (biased judgements). In their experiments, they used a click train for the target and a broadband-noise burst for the distracter. Both were presented with an onset delay of 1 ms (the distracter partly preceding). It can be assumed that it was easier to distinguish the target and the distracter than is the case in our investigation, where the target and the distracter were both broadband-noise bursts. In the investigation of Good and Gilkey, the percentage of biased judgements increased when the T/D-ratio was lowered ([6], Figure 5). However, Good and Gilkey averaged the biased judgements over all dimensions in the three-pole coordinate system, whereas our data are only analyzed in the L/R dimension. They reported that for T/D-ratios around 0 dB the pushing effect in the L/R dimension competes with a pulling effect in the F/B (front-back) and U/D (up-down) dimensions. The rms errors between the presented target and the perceived signal in the U/D dimension exceed those of the other two dimensions ([6], Figure 3). A look at Good and Gilkey's raw data reveals that the judgements of the listeners were biased toward the distracter in the F/B dimension and the U/D dimension. The effect in the F/B dimension corresponds to an increase in front-back confusions. The biased judgements increased when the T/D-ratio was lowered. In the L/R dimension, however, the target is pushed away from the distracter. In Good and Gilkey's raw data, this effect is clearly visible in the 8-dB and the 2-dB T/D-ratio conditions ([6], Figure 1; they use the same definition of the T/D-ratio as in this investigation), which is consistent with our data.
Two different effects appear if a distracter ( azimuth, elevation, with a T/D-ratio near 0 dB, 1-ms onset delay) is present: in the F/B and U/D dimensions, the auditory events of the target are attracted toward the distracter, whereas in the L/R dimension, the target is perceived to be located more to the sides. The effect that the range of perceived locations is compressed in the reverberant compared to the anechoic condition can be explained by the interaural symmetry of the late reflections, which move the auditory event toward the median plane if the late reflections are regarded as additional noise. For the condition with the lowest measured T/D-ratio of −9 dB, the results of Lorenzi et al. ([7], Figure 3) show similar effects of scattering and clustering as our data, which Lorenzi et al. describe in terms of pulling and pushing effects. In their description, the focus is not on the similarity of the response distributions for adjacent target directions, but on the shifts of the auditory events toward the direction of the distracter (pulling effect) or in the opposite direction (pushing effect).

The T/D-ratios in this study were sometimes set to very low values. Comparing them to the masked detection thresholds described in the literature leads us to hypothesize that the target was no longer perceivable in some cases, in particular for target directions close to the distracter direction. Therefore, a second experiment was conducted to determine the detection threshold levels of the sounds used in the localization experiment for both the anechoic and the reverberant environment.

Figure 11. Detection thresholds for a broadband noise signal distracted by a second noise signal. Each bar shows the median of five listeners. The black bars show the results in the anechoic condition; the white bars those in the reverberant condition. The error bars show the upper and lower quartiles.
These threshold levels will enable better conclusions to be drawn about the findings concerning the scattering and clustering of the listeners' responses.

3. Experiment II: Detection thresholds in anechoic and reverberant environments

3.1. Methods

The masked detection thresholds of a broadband noise distracted by a second broadband noise were measured using an adaptive three-interval forced-choice (3-IFC) procedure with adaptive level tracking as described by Levitt [25]. A two-down one-up rule was applied with an initial step size of 8 dB. After every second reversal, the step size was halved. After the final step size of 1 dB was reached, the levels at the next 10 reversal points were recorded. The detection thresholds were averaged over the medians of the data collected in two or three runs. Five listeners, L1, L5, L7, L8 and L9, who had already participated in Experiment I, took part in this experiment. Five different conditions were tested in both the anechoic and the reverberant environment (the given angles refer to the L/R dimension): target /binaural presentation, target /monaural presentation (right ear), target /binaural presentation, target /monaural presentation (left ear), target /monaural presentation (right ear). In all cases, the distracter was presented from . In the monaural conditions, one channel was muted electronically (TDT PA4). Two or three runs of each condition were presented to each listener. The target and distracter were identical to those in Experiment I.
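The tracking rule described above can be sketched as follows. This is a schematic reimplementation under stated assumptions, not the authors' procedure: `respond` stands in for a listener's trial response (True = correct detection), and the threshold is taken as the median of the recorded reversal levels.

```python
import statistics


def two_down_one_up(respond, start_level=0.0, step=8.0,
                    final_step=1.0, n_record=10):
    """Levitt-style 2-down 1-up tracking: the level drops after two
    consecutive correct responses and rises after each incorrect one,
    converging on the 70.7%-correct point. The step size is halved after
    every second reversal until it reaches `final_step`; the levels at
    the next `n_record` reversals are recorded and their median is
    returned as the threshold estimate."""
    level = start_level
    streak = 0              # consecutive correct responses
    last_move = 0           # +1 rising, -1 falling, 0 before the first move
    reversals = 0           # reversals counted since the last halving
    recorded = []
    while len(recorded) < n_record:
        if respond(level):
            streak += 1
            if streak < 2:
                continue    # two correct responses are needed to move down
            streak, move = 0, -1
        else:
            streak, move = 0, +1
        if last_move and move != last_move:          # direction reversed
            if step <= final_step:
                recorded.append(level)
            else:
                reversals += 1
                if reversals == 2:                   # halve after every 2nd
                    step, reversals = max(step / 2.0, final_step), 0
        last_move = move
        level += move * step
    return statistics.median(recorded)
```

With a deterministic observer that always detects above a fixed level, the track oscillates around that level within one final step, so the median of the reversal levels lands close to it.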

3.2. Results

The average detection thresholds of all listeners are shown in Figure 11. The different conditions are displayed in the following order (from left to right): target /binaural condition, target /monaural condition (right ear), target /binaural condition, target /monaural condition (left ear), target /monaural condition (right ear). The black bar of each group shows the masked detection threshold for the anechoic condition, while the white bar shows that for the reverberant condition. Each bar shows the median over the turning points obtained in all runs. The estimation of each median value is based on 120 measurement points in the anechoic environment and 100 measurement points in the reverberant environment. The upper and lower quartiles of the population are given by the error bars. There is only a minimal difference in the detection threshold between the monaural and the binaural condition when both the target and distracter are set at azimuth. The detection thresholds obtained in the anechoic environment are identical to those obtained in the reverberant environment. The binaural detection threshold is much lower if the target is emitted from . In this case, the detection threshold obtained in the anechoic environment is 5 dB below the value obtained in the reverberant environment. The detection threshold increases by 7 dB in the anechoic environment and 5 dB in the reverberant environment when only the ipsilateral channel is presented. The value increases by about 20 dB in the anechoic environment and 12 dB in the reverberant environment when only the contralateral channel, instead of both channels, is presented; in that case, the detection threshold obtained in the reverberant environment is 2 dB below the value obtained in the anechoic environment.

3.3. Discussion

Our data for the binaural conditions support the results of Saberi et al. [14], who measured data in an anechoic environment.
The monaural data of Saberi et al. are comparable to our data when both the target and the distracter are presented at an angle of , but the results differ for the 27 target position. They measured a BMLD of approximately 2 dB, while the BMLD averaged over all listeners is 7 dB (for the target position) in our anechoic data. However, Saberi et al. measured the monaural condition by occluding the contralateral ear. Wightman and Kistler [22] recently showed that the attenuation caused by the occlusion is not sufficient to eliminate all binaural cues, especially interaural time differences. Furthermore, their click-train target was probably easier to detect in the monaural condition than was the case for our broadband-noise target. This may explain the deviations from our results. Good and Gilkey [15] measured a BMLD of 6 dB for a band-pass filtered click train in the low-frequency range ( kHz) when the target was presented from . This is consistent with our data. In the reverberant condition, the detection thresholds for the target, in both the binaural and the monaural condition, are similar to the results obtained in the anechoic environment. This can be easily explained: there are no binaural differences between the target and the distracter in either environment. Several reasons exist why the detection threshold decreases when the target is moved from to . First, the level of the target sound increases at the ipsilateral ear. Second, binaural cues between the distracter and the target differ in this condition. One explanation for the detection-threshold difference between the anechoic and the reverberant condition could be that the level difference between the target and the distracter at the ipsilateral ear is smaller in the reverberant environment because of the additional reflections.
Another explanation could be that the binaural cues are masked by the reflections, preventing the listeners from detecting binaural cues at low T/D-ratios. For a target position at , the detection threshold decreases in the contralateral-monaural condition when reverberation is added. In this case, the additional reflections decrease the level difference between the distracter and the target.

A comparison of the detection thresholds with the localization data of Experiment I reveals that the listeners could not detect the targets in the localization experiment at low T/D-ratios when they were presented from one of the frontal directions. This explains why the responses cluster at angles near the median plane. In these cases, the listeners seem to guess that the target direction is identical to the direction of the distracter. The listeners in the experiments by Lorenzi et al. [7] show different strategies in a similar situation: two of the listeners (CA and TO) consistently localize the target at the position of the distracter. One listener (PA) randomly guesses the direction of incidence, while listener CH changes his strategy during the experiment ([7], Figure 3, panels in the middle column). In Experiment II, evidence was found that the listeners' responses cluster or scatter in the frontal region at low T/D-ratios because the targets were presented below the detection-threshold level. However, clustering and scattering are also observed for the outer angles, where the target is audible. The discrimination experiment described in the next section is designed to reveal whether the auditory events of adjacent angles can be discriminated at low T/D-ratios by cues other than directional cues.

4. Experiment III: Discriminating different directions at low T/D-ratios

In Experiment I, it was observed at low T/D-ratios that the listeners gave very similar responses to adjacent target angles.
In these cases, their responses can be grouped into clusters and scatterings. As revealed by the Kolmogorov-Smirnov test in Experiment I (Figure 7), the listeners give very similar responses for different target positions within those clusters and scatterings, indicating that they do not distinguish between the stimuli by means of localization cues.


More information

Audibility of time differences in adjacent head-related transfer functions (HRTFs) Hoffmann, Pablo Francisco F.; Møller, Henrik

Audibility of time differences in adjacent head-related transfer functions (HRTFs) Hoffmann, Pablo Francisco F.; Møller, Henrik Aalborg Universitet Audibility of time differences in adjacent head-related transfer functions (HRTFs) Hoffmann, Pablo Francisco F.; Møller, Henrik Published in: Audio Engineering Society Convention Papers

More information

J. Acoust. Soc. Am. 114 (2), August /2003/114(2)/1009/14/$ Acoustical Society of America

J. Acoust. Soc. Am. 114 (2), August /2003/114(2)/1009/14/$ Acoustical Society of America Auditory spatial resolution in horizontal, vertical, and diagonal planes a) D. Wesley Grantham, b) Benjamin W. Y. Hornsby, and Eric A. Erpenbeck Vanderbilt Bill Wilkerson Center for Otolaryngology and

More information

Comment by Delgutte and Anna. A. Dreyer (Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA)

Comment by Delgutte and Anna. A. Dreyer (Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA) Comments Comment by Delgutte and Anna. A. Dreyer (Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA) Is phase locking to transposed stimuli as good as phase locking to low-frequency

More information

On the influence of interaural differences on onset detection in auditory object formation. 1 Introduction

On the influence of interaural differences on onset detection in auditory object formation. 1 Introduction On the influence of interaural differences on onset detection in auditory object formation Othmar Schimmel Eindhoven University of Technology, P.O. Box 513 / Building IPO 1.26, 56 MD Eindhoven, The Netherlands,

More information

William A. Yost and Sandra J. Guzman Parmly Hearing Institute, Loyola University Chicago, Chicago, Illinois 60201

William A. Yost and Sandra J. Guzman Parmly Hearing Institute, Loyola University Chicago, Chicago, Illinois 60201 The precedence effect Ruth Y. Litovsky a) and H. Steven Colburn Hearing Research Center and Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215 William A. Yost and Sandra

More information

I. INTRODUCTION. J. Acoust. Soc. Am. 113 (3), March /2003/113(3)/1631/15/$ Acoustical Society of America

I. INTRODUCTION. J. Acoust. Soc. Am. 113 (3), March /2003/113(3)/1631/15/$ Acoustical Society of America Auditory spatial discrimination by barn owls in simulated echoic conditions a) Matthew W. Spitzer, b) Avinash D. S. Bala, and Terry T. Takahashi Institute of Neuroscience, University of Oregon, Eugene,

More information

Effect of source spectrum on sound localization in an everyday reverberant room

Effect of source spectrum on sound localization in an everyday reverberant room Effect of source spectrum on sound localization in an everyday reverberant room Antje Ihlefeld and Barbara G. Shinn-Cunningham a) Hearing Research Center, Boston University, Boston, Massachusetts 02215

More information

Hearing in the Environment

Hearing in the Environment 10 Hearing in the Environment Click Chapter to edit 10 Master Hearing title in the style Environment Sound Localization Complex Sounds Auditory Scene Analysis Continuity and Restoration Effects Auditory

More information

Effect of microphone position in hearing instruments on binaural masking level differences

Effect of microphone position in hearing instruments on binaural masking level differences Effect of microphone position in hearing instruments on binaural masking level differences Fredrik Gran, Jesper Udesen and Andrew B. Dittberner GN ReSound A/S, Research R&D, Lautrupbjerg 7, 2750 Ballerup,

More information

TESTING A NEW THEORY OF PSYCHOPHYSICAL SCALING: TEMPORAL LOUDNESS INTEGRATION

TESTING A NEW THEORY OF PSYCHOPHYSICAL SCALING: TEMPORAL LOUDNESS INTEGRATION TESTING A NEW THEORY OF PSYCHOPHYSICAL SCALING: TEMPORAL LOUDNESS INTEGRATION Karin Zimmer, R. Duncan Luce and Wolfgang Ellermeier Institut für Kognitionsforschung der Universität Oldenburg, Germany Institute

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

The extent to which a position-based explanation accounts for binaural release from informational masking a)

The extent to which a position-based explanation accounts for binaural release from informational masking a) The extent to which a positionbased explanation accounts for binaural release from informational masking a) Frederick J. Gallun, b Nathaniel I. Durlach, H. Steven Colburn, Barbara G. ShinnCunningham, Virginia

More information

Spectral processing of two concurrent harmonic complexes

Spectral processing of two concurrent harmonic complexes Spectral processing of two concurrent harmonic complexes Yi Shen a) and Virginia M. Richards Department of Cognitive Sciences, University of California, Irvine, California 92697-5100 (Received 7 April

More information

Temporal offset judgments for concurrent vowels by young, middle-aged, and older adults

Temporal offset judgments for concurrent vowels by young, middle-aged, and older adults Temporal offset judgments for concurrent vowels by young, middle-aged, and older adults Daniel Fogerty Department of Communication Sciences and Disorders, University of South Carolina, Columbia, South

More information

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Published on June 16, 2015 Tech Topic: Localization July 2015 Hearing Review By Eric Seper, AuD, and Francis KuK, PhD While the

More information

The role of high frequencies in speech localization

The role of high frequencies in speech localization The role of high frequencies in speech localization Virginia Best a and Simon Carlile Department of Physiology, University of Sydney, Sydney, NSW, 2006, Australia Craig Jin and André van Schaik School

More information

HEARING AND PSYCHOACOUSTICS

HEARING AND PSYCHOACOUSTICS CHAPTER 2 HEARING AND PSYCHOACOUSTICS WITH LIDIA LEE I would like to lead off the specific audio discussions with a description of the audio receptor the ear. I believe it is always a good idea to understand

More information

Angular Resolution of Human Sound Localization

Angular Resolution of Human Sound Localization Angular Resolution of Human Sound Localization By Simon Skluzacek A senior thesis submitted to the Carthage College Physics & Astronomy Department in partial fulfillment of the requirements for the Bachelor

More information

Hearing. Juan P Bello

Hearing. Juan P Bello Hearing Juan P Bello The human ear The human ear Outer Ear The human ear Middle Ear The human ear Inner Ear The cochlea (1) It separates sound into its various components If uncoiled it becomes a tapering

More information

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor Signals, systems, acoustics and the ear Week 5 The peripheral auditory system: The ear as a signal processor Think of this set of organs 2 as a collection of systems, transforming sounds to be sent to

More information

Supporting Information

Supporting Information 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 Supporting Information Variances and biases of absolute distributions were larger in the 2-line

More information

The development of a modified spectral ripple test

The development of a modified spectral ripple test The development of a modified spectral ripple test Justin M. Aronoff a) and David M. Landsberger Communication and Neuroscience Division, House Research Institute, 2100 West 3rd Street, Los Angeles, California

More information

Study of perceptual balance for binaural dichotic presentation

Study of perceptual balance for binaural dichotic presentation Paper No. 556 Proceedings of 20 th International Congress on Acoustics, ICA 2010 23-27 August 2010, Sydney, Australia Study of perceptual balance for binaural dichotic presentation Pandurangarao N. Kulkarni

More information

Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus.

Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus. Physiological measures of the precedence effect and spatial release from masking in the cat inferior colliculus. R.Y. Litovsky 1,3, C. C. Lane 1,2, C.. tencio 1 and. Delgutte 1,2 1 Massachusetts Eye and

More information

Development of a new loudness model in consideration of audio-visual interaction

Development of a new loudness model in consideration of audio-visual interaction Development of a new loudness model in consideration of audio-visual interaction Kai AIZAWA ; Takashi KAMOGAWA ; Akihiko ARIMITSU 3 ; Takeshi TOI 4 Graduate school of Chuo University, Japan, 3, 4 Chuo

More information

Auditory System & Hearing

Auditory System & Hearing Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different

More information

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979)

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979) Hearing The nervous system s cognitive response to sound stimuli is known as psychoacoustics: it is partly acoustics and partly psychology. Hearing is a feature resulting from our physiology that we tend

More information

Sound Localization in Multisource Environments: The Role of Stimulus Onset Asynchrony and Spatial Uncertainty

Sound Localization in Multisource Environments: The Role of Stimulus Onset Asynchrony and Spatial Uncertainty Wright State University CORE Scholar Browse all Theses and Dissertations Theses and Dissertations 2011 Sound Localization in Multisource Environments: The Role of Stimulus Onset Asynchrony and Spatial

More information

Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners

Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners Speech intelligibility in simulated acoustic conditions for normal hearing and hearing-impaired listeners Ir i s Arw e i l e r 1, To r b e n Po u l s e n 2, a n d To r s t e n Da u 1 1 Centre for Applied

More information

The Effect of Analysis Methods and Input Signal Characteristics on Hearing Aid Measurements

The Effect of Analysis Methods and Input Signal Characteristics on Hearing Aid Measurements The Effect of Analysis Methods and Input Signal Characteristics on Hearing Aid Measurements By: Kristina Frye Section 1: Common Source Types FONIX analyzers contain two main signal types: Puretone and

More information

Binaural processing of complex stimuli

Binaural processing of complex stimuli Binaural processing of complex stimuli Outline for today Binaural detection experiments and models Speech as an important waveform Experiments on understanding speech in complex environments (Cocktail

More information

Dynamic-range compression affects the lateral position of sounds

Dynamic-range compression affects the lateral position of sounds Dynamic-range compression affects the lateral position of sounds Ian M. Wiggins a) and Bernhard U. Seeber MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD, United Kingdom (Received

More information

Issues faced by people with a Sensorineural Hearing Loss

Issues faced by people with a Sensorineural Hearing Loss Issues faced by people with a Sensorineural Hearing Loss Issues faced by people with a Sensorineural Hearing Loss 1. Decreased Audibility 2. Decreased Dynamic Range 3. Decreased Frequency Resolution 4.

More information

Keywords: time perception; illusion; empty interval; filled intervals; cluster analysis

Keywords: time perception; illusion; empty interval; filled intervals; cluster analysis Journal of Sound and Vibration Manuscript Draft Manuscript Number: JSV-D-10-00826 Title: Does filled duration illusion occur for very short time intervals? Article Type: Rapid Communication Keywords: time

More information

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED International Conference on Systemics, Cybernetics and Informatics, February 12 15, 2004 BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED Alice N. Cheeran Biomedical

More information

Localization of sound in rooms. V. Binaural coherence and human sensitivity to interaural time differences in noise

Localization of sound in rooms. V. Binaural coherence and human sensitivity to interaural time differences in noise Localization of sound in rooms. V. Binaural coherence and human sensitivity to interaural time differences in noise Brad Rakerd Department of Communicative Sciences and Disorders, Michigan State University,

More information

Spectral-peak selection in spectral-shape discrimination by normal-hearing and hearing-impaired listeners

Spectral-peak selection in spectral-shape discrimination by normal-hearing and hearing-impaired listeners Spectral-peak selection in spectral-shape discrimination by normal-hearing and hearing-impaired listeners Jennifer J. Lentz a Department of Speech and Hearing Sciences, Indiana University, Bloomington,

More information

INTRODUCTION J. Acoust. Soc. Am. 100 (4), Pt. 1, October /96/100(4)/2352/13/$ Acoustical Society of America 2352

INTRODUCTION J. Acoust. Soc. Am. 100 (4), Pt. 1, October /96/100(4)/2352/13/$ Acoustical Society of America 2352 Lateralization of a perturbed harmonic: Effects of onset asynchrony and mistuning a) Nicholas I. Hill and C. J. Darwin Laboratory of Experimental Psychology, University of Sussex, Brighton BN1 9QG, United

More information

DOES FILLED DURATION ILLUSION TAKE PLACE FOR VERY SHORT TIME INTERVALS?

DOES FILLED DURATION ILLUSION TAKE PLACE FOR VERY SHORT TIME INTERVALS? DOES FILLED DURATION ILLUSION TAKE PLACE FOR VERY SHORT TIME INTERVALS? Emi Hasuo, Yoshitaka Nakajima, and Kazuo Ueda Graduate School of Design, Kyushu University -9- Shiobaru, Minami-ku, Fukuoka, - Japan

More information

Systems Neuroscience Oct. 16, Auditory system. http:

Systems Neuroscience Oct. 16, Auditory system. http: Systems Neuroscience Oct. 16, 2018 Auditory system http: www.ini.unizh.ch/~kiper/system_neurosci.html The physics of sound Measuring sound intensity We are sensitive to an enormous range of intensities,

More information

THE RELATION BETWEEN SPATIAL IMPRESSION AND THE PRECEDENCE EFFECT. Masayuki Morimoto

THE RELATION BETWEEN SPATIAL IMPRESSION AND THE PRECEDENCE EFFECT. Masayuki Morimoto THE RELATION BETWEEN SPATIAL IMPRESSION AND THE PRECEDENCE EFFECT Masayuki Morimoto Environmental Acoustics Laboratory, Faculty of Engineering, Kobe University Rokko Nada Kobe 657-85 Japan mrmt@kobe-u.ac.jp

More information

Impact of the ambient sound level on the system's measurements CAPA

Impact of the ambient sound level on the system's measurements CAPA Impact of the ambient sound level on the system's measurements CAPA Jean Sébastien Niel December 212 CAPA is software used for the monitoring of the Attenuation of hearing protectors. This study will investigate

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Noise Session 3aNSa: Wind Turbine Noise I 3aNSa5. Can wind turbine sound

More information

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER ARCHIVES OF ACOUSTICS 29, 1, 25 34 (2004) INTELLIGIBILITY OF SPEECH PROCESSED BY A SPECTRAL CONTRAST ENHANCEMENT PROCEDURE AND A BINAURAL PROCEDURE A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER Institute

More information

Characterizing individual hearing loss using narrow-band loudness compensation

Characterizing individual hearing loss using narrow-band loudness compensation Characterizing individual hearing loss using narrow-band loudness compensation DIRK OETTING 1,2,*, JENS-E. APPELL 1, VOLKER HOHMANN 2, AND STEPHAN D. EWERT 2 1 Project Group Hearing, Speech and Audio Technology

More information

Frequency refers to how often something happens. Period refers to the time it takes something to happen.

Frequency refers to how often something happens. Period refers to the time it takes something to happen. Lecture 2 Properties of Waves Frequency and period are distinctly different, yet related, quantities. Frequency refers to how often something happens. Period refers to the time it takes something to happen.

More information

Effect of ear-defenders (ear-muffs) on the localization of sound

Effect of ear-defenders (ear-muffs) on the localization of sound Brit. J. Industr. Med., 9,, - Effect of ear-defenders (ear-muffs) on the localization of sound G. R. C. ATHERLEY and W. G. NOBLE epartment of Pure and Applied Physics, University of Salford and epartment

More information

Binaural hearing and future hearing-aids technology

Binaural hearing and future hearing-aids technology Binaural hearing and future hearing-aids technology M. Bodden To cite this version: M. Bodden. Binaural hearing and future hearing-aids technology. Journal de Physique IV Colloque, 1994, 04 (C5), pp.c5-411-c5-414.

More information

Perceptual Plasticity in Spatial Auditory Displays

Perceptual Plasticity in Spatial Auditory Displays Perceptual Plasticity in Spatial Auditory Displays BARBARA G. SHINN-CUNNINGHAM, TIMOTHY STREETER, and JEAN-FRANÇOIS GYSS Hearing Research Center, Boston University Often, virtual acoustic environments

More information

Two Modified IEC Ear Simulators for Extended Dynamic Range

Two Modified IEC Ear Simulators for Extended Dynamic Range Two Modified IEC 60318-4 Ear Simulators for Extended Dynamic Range Peter Wulf-Andersen & Morten Wille The international standard IEC 60318-4 specifies an occluded ear simulator, often referred to as a

More information

Perceptual Effects of Nasal Cue Modification

Perceptual Effects of Nasal Cue Modification Send Orders for Reprints to reprints@benthamscience.ae The Open Electrical & Electronic Engineering Journal, 2015, 9, 399-407 399 Perceptual Effects of Nasal Cue Modification Open Access Fan Bai 1,2,*

More information

Digital. hearing instruments have burst on the

Digital. hearing instruments have burst on the Testing Digital and Analog Hearing Instruments: Processing Time Delays and Phase Measurements A look at potential side effects and ways of measuring them by George J. Frye Digital. hearing instruments

More information

Evaluation of Auditory Characteristics of Communications and Hearing Protection Systems (C&HPS) Part III Auditory Localization

Evaluation of Auditory Characteristics of Communications and Hearing Protection Systems (C&HPS) Part III Auditory Localization Evaluation of Auditory Characteristics of Communications and Hearing Protection Systems (C&HPS) Part III Auditory Localization by Paula P. Henry ARL-TR-6560 August 2013 Approved for public release; distribution

More information

Diotic and dichotic detection with reproducible chimeric stimuli

Diotic and dichotic detection with reproducible chimeric stimuli Diotic and dichotic detection with reproducible chimeric stimuli Sean A. Davidson Department of Biomedical and Chemical Engineering, Institute for Sensory Research, Syracuse University, 61 Skytop Road,

More information

Changing expectations about speed alters perceived motion direction

Changing expectations about speed alters perceived motion direction Current Biology, in press Supplemental Information: Changing expectations about speed alters perceived motion direction Grigorios Sotiropoulos, Aaron R. Seitz, and Peggy Seriès Supplemental Data Detailed

More information

Synthesis of Spatially Extended Virtual Sources with Time-Frequency Decomposition of Mono Signals

Synthesis of Spatially Extended Virtual Sources with Time-Frequency Decomposition of Mono Signals PAPERS Synthesis of Spatially Extended Virtual Sources with Time-Frequency Decomposition of Mono Signals TAPANI PIHLAJAMÄKI, AES Student Member, OLLI SANTALA, AES Student Member, AND (tapani.pihlajamaki@aalto.fi)

More information

Auditory Scene Analysis

Auditory Scene Analysis 1 Auditory Scene Analysis Albert S. Bregman Department of Psychology McGill University 1205 Docteur Penfield Avenue Montreal, QC Canada H3A 1B1 E-mail: bregman@hebb.psych.mcgill.ca To appear in N.J. Smelzer

More information

Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening

Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening Matthew J. Goupell, a) Corey Stoelb, Alan Kan, and Ruth Y. Litovsky

More information

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag JAIST Reposi https://dspace.j Title Effects of speaker's and listener's environments on speech intelligibili annoyance Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag Citation Inter-noise 2016: 171-176 Issue

More information

Binaural interference and auditory grouping

Binaural interference and auditory grouping Binaural interference and auditory grouping Virginia Best and Frederick J. Gallun Hearing Research Center, Boston University, Boston, Massachusetts 02215 Simon Carlile Department of Physiology, University

More information

THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE.

THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE. THE EFFECT OF A REMINDER STIMULUS ON THE DECISION STRATEGY ADOPTED IN THE TWO-ALTERNATIVE FORCED-CHOICE PROCEDURE. Michael J. Hautus, Daniel Shepherd, Mei Peng, Rebecca Philips and Veema Lodhia Department

More information

INTRODUCTION TO PURE (AUDIOMETER & TESTING ENVIRONMENT) TONE AUDIOMETERY. By Mrs. Wedad Alhudaib with many thanks to Mrs.

INTRODUCTION TO PURE (AUDIOMETER & TESTING ENVIRONMENT) TONE AUDIOMETERY. By Mrs. Wedad Alhudaib with many thanks to Mrs. INTRODUCTION TO PURE TONE AUDIOMETERY (AUDIOMETER & TESTING ENVIRONMENT) By Mrs. Wedad Alhudaib with many thanks to Mrs. Tahani Alothman Topics : This lecture will incorporate both theoretical information

More information

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 6.3 PSYCHOLOGICAL EVALUATION

More information

How high-frequency do children hear?

How high-frequency do children hear? How high-frequency do children hear? Mari UEDA 1 ; Kaoru ASHIHARA 2 ; Hironobu TAKAHASHI 2 1 Kyushu University, Japan 2 National Institute of Advanced Industrial Science and Technology, Japan ABSTRACT

More information

ICaD 2013 ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES

ICaD 2013 ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES ICaD 213 6 1 july, 213, Łódź, Poland international Conference on auditory Display ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES Robert Albrecht

More information

HST.723J, Spring 2005 Theme 3 Report

HST.723J, Spring 2005 Theme 3 Report HST.723J, Spring 2005 Theme 3 Report Madhu Shashanka shashanka@cns.bu.edu Introduction The theme of this report is binaural interactions. Binaural interactions of sound stimuli enable humans (and other

More information