Improved method for accurate sound localization


Acoust. Sci. & Tech. PAPER

Improved method for accurate sound localization

Akihiro Kudo, Hiroshi Higuchi, Haruhide Hokari and Shoji Shimada
Nagaoka University of Technology, Kamitomioka-machi, Nagaoka, Japan
e-mail: kudo@audio.nagaokaut.ac.jp
(Received May; accepted for publication December)

Abstract: When implementing out-of-head sound localization with headphones, it is well known that using head-related transfer functions (HRTFs) other than the listener's own degrades sound image localization, increasing localization error and front-back confusion. Some studies have indicated that moving a sound image eases these problems. We focus on moving sound images to achieve highly accurate localization, and propose a swing sound image method that makes a sound image alternate between two locations on a horizontal plane. Listening tests reveal that the proposed method greatly reduces front-back confusion.

Keywords: Moving sound image, Switching transfer functions, Dynamic localization cues, Localization accuracy, Front-back confusion

PACS number: 43.66.Lj, 43.66.Pn, 43.66.Qp

1. INTRODUCTION

When implementing out-of-head sound localization with headphones, using nonindividualized head-related transfer functions (HRTFs) usually degrades the sound image localization accuracy [1-3], because each person has unique HRTFs due to the sound reflection and diffraction created by the head and torso. This degradation appears as auditory illusions: an angular difference between the presented and perceived azimuths of sound images, front-back confusion, and uncertainty in the elevation and distance of sound images. In particular, the similarity of the interaural differences between frontal and rear sound images is thought to cause the front-back confusion [1]. The horizontal localization cues are interaural level differences (ILDs), interaural time differences (ITDs) and spectral cues [4]. The ILDs and ITDs are regarded as cues for left-right judgment, while the spectral cues are used in front-back judgment.

Although HRTFs should be measured for each subject to achieve accurate sound localization, this is considered impractical. Many researchers have therefore attempted to develop a system that allows nonindividualized HRTFs to match the sound localization performance of individualized HRTFs. Wenzel et al. [1] examined the localization performance of 16 subjects with nonindividualized HRTFs measured from a person whose localization ability was good in both free-field and headphone listening. Using these good but not individualized HRTFs significantly increased the front-back confusion rate. Shimada et al. [2] studied the sound localization performance of eight nonindividualized HRTFs obtained from a set of individualized HRTFs. These nonindividualized HRTFs were synthesized with an algorithm that converted the individualized HRTFs to low-dimension vectors, which were classified into several clusters by the LBG method. The results indicate that the optimal nonindividualized HRTFs for accurate localization differ among subjects. Møller et al. [3] carried out localization tests in which free-field listening was compared with headphone listening using binaural signals. The results show that nonindividualized binaural signals increase localization errors on the median plane and tend to cause a frontal sound source to be perceived at the rear. The results obtained by Wenzel et al., Shimada et al. and Møller et al. [1-3] confirm that nonindividualized HRTFs degrade localization accuracy.

1.1. Need for Dynamic Localization Cues
If we want to identify the location of a sound source, we naturally turn our head. Since head rotation is equivalent to sound source movement on the circumference, our expectation is that sound source movement contributes to the accurate perception of sound source location. Wallach [5] conducted two experiments in 1940. In one, he examined the sound localization performance of a blindfolded subject who was seated and immobile but whose chair was rotated. In the other, he examined the sound localization performance of a subject seated inside a rotating screen. The results of both experiments show that actual head movement is not necessary for resolving the ambiguities of front-back judgment. The above discussion indicates that both rotating the head and moving the sound image can play a pivotal role in creating effective localization cues.

1.2. Review of Previous Studies on Dynamic Localization Cues
1.2.1. Impact of head movement on localization
Thurlow and Runge [6] examined the impact of induced head movement on sound localization. Four types of induced head movement were considered: rotation, pivot (head shaking on a transverse plane), rotation-pivot, and no motion. Inexperienced subjects participated in localization tests. The results show that rotation and rotation-pivot reduce the front-back confusion rate in the frontal direction more than pivot and no motion. They also observed the head movements of subjects localizing sound sources [7] and classified these movements into three classes: tip, pivot, and rotation. They reported that rotation is used more frequently than the other movements. Kato et al. [8] compared localization accuracies between natural listening and a test arrangement in which both conchae were plugged with a plastic molding compound, with head movements on the horizontal and median planes. They reported that 1) head movements improve the localization accuracy even with the test arrangement and 2) head rotation provides higher localization accuracy on both planes than the other movements. These results imply that horizontal head rotation overcomes the deficiencies of nonindividualized HRTFs, improving the localization accuracy. Asahi and Matsuoka [9], Boerger et al. [10], Kawaura et al. [11], Kimura and Suzuki [12], and Wightman and Kistler [13] also studied the impact of head rotation on sound localization in virtual sound source reproduction with headphones. They likewise concluded that head rotation decreases the front-back confusion.

1.2.2. Impact of moving sound images on localization
Rosenblum et al. [14] assumed that the acoustical variables of moving sound image localization are the ITDs, Doppler shifts, and changes in sound pressure level over time. They assessed the impacts of these variables in a localization test using an ambulance siren. The results show that the most important cue for determining the instant at which a moving sound image is closest to the subject is the change in sound pressure level over time, followed by the ITDs and the Doppler shifts in decreasing order of importance. The localization of a moving sound image created by a loudspeaker array with vector base amplitude panning was studied by Gröhn and coworkers [15,16]. They reported that the median absolute localization error in azimuth increases with the movement of a sound image. Robinson and Greenfield [17] performed localization tests in which headphones were used to present a moving sound image.
They used music as the stimulus, and the moving sound image was made to orbit the subject's head at a constant distance (1 m) with nonindividualized HRTFs obtained from an acoustic mannequin. Each subject was instructed to indicate the trajectory and direction of the moving sound image on a diagram showing an overhead view. The moving sound image was presented only once to each subject to remove the adaptation effect. They reported that 1) the sound image movement triggers the externalization of the sound image, 2) the perception of the moving sound image varies among subjects, and 3) the moving sound image does not affect the front-back judgment. The results obtained by Gröhn et al. and by Robinson and Greenfield indicate that sound image movement does not reduce the localization error or the front-back confusion.

Wightman and Kistler [13] conducted localization tests in which moving sound images were placed on the circumference at a constant distance. All moving sound images were individually synthesized by sequentially convolving the HRTFs; it was not stated whether interpolation was used. Two localization tests were performed, with eight subjects. In the first test, the sound image moved in azimuth within the presentation period; the direction, clockwise or counterclockwise, was determined randomly and was not revealed to the subjects, who were asked to indicate the position of the moving sound image. In the second test, the subjects were permitted to control the movement direction by pushing buttons on a keyboard. They reported that the front-back confusion is suppressed only when the subject knows the direction of the moving sound image.

Uematsu et al. [18] examined the localization accuracies of a stationary sound image and a moving sound image. The HRTFs were individually measured at 15° intervals on a horizontal plane. For comparison, alternative HRTFs were prepared as nonindividualized HRTFs by all-pole model approximation using tenth-order linear predictive coefficients. Before creating the moving sound image by time-variant convolution, the HRTFs were interpolated at 1° intervals by a simple linear interpolation method. The movement velocity was set at 30°/s. They reported that the front-back confusion rate is reduced even when the unidirectional moving sound image is synthesized with nonindividualized HRTFs and the movement direction is not revealed to the subjects. The disagreement between the results of Wightman and Kistler and those of Uematsu et al. may indicate that movement perception depends on the conditions of the moving sound image, including the initial location of the movement, the movement velocity, the type of stimulus, the trajectory of the movement [19], the direction of the movement, and the auditory ability of the subject.

1.3. Goal of Present Study
Our final goal is to create a system that allows nonindividualized HRTFs to match the performance of individualized ones. For this, we should improve the sound localization accuracy on a horizontal plane by reducing both the difference between the presented and perceived azimuth angles and the front-back confusion.

1.4. Strategy of Present Study
Although physical head movements improve the localization accuracy, special hardware (a head tracker) is required to detect head movement. The sound-image-movement approach, on the other hand, requires only a signal processing method that enables the switching of HRTFs without waveform discontinuity. We studied this approach to achieve sound image movement [20]. In the present study, we hypothesized that a clear change of localization cues in time, i.e., dynamic localization cues, makes sound image localization more accurate. Our strategy for confirming this hypothesis consisted of two steps. In the first step, a localization test was performed to compare the accuracies of individualized and nonindividualized HRTFs in localizing stationary sound images as well as horizontally unidirectional moving sound images. This test was also performed to obtain additional quantitative data on the localization of a moving sound image and to reconfirm the previous studies by Wightman and Kistler and by Uematsu et al. [13,18]. This was considered necessary to clarify whether a moving sound image improves the localization accuracy. Following Uematsu et al., the direction of the moving sound was not revealed. In the second step, we developed a novel method of presenting a sound image that alternates between two discrete locations on a horizontal plane to realize a dynamic localization cue. We call this presentation method the swing sound image method. Two experiments were conducted to examine the impact of the swing sound image method on localization. The swing sound image parameters were the swing angle, the switching time T, and the presented azimuth, as described in Sect. 4.1. In the first experiment, the parameters to be used in the second experiment were determined by assessing the subjective displacement of a swing sound image. In the second experiment, the localization accuracies of the swing and stationary sound images were examined with individualized and nonindividualized HRTFs.

2. GENERAL METHOD
2.1. Subjects
Ten males from our laboratory, five undergraduate students and five graduate students, participated in the experiments. None had any history of hearing problems.

2.2. Measurements of HRTFs
HRTFs were measured in an anechoic chamber. Our definition of the transfer functions follows that in [21]. The procedure used to measure the HRTFs is described below. Each subject sat on a seat equipped with a headrest in the anechoic chamber. Next, an M-sequence signal (bandwidth up to 13 kHz) was radiated from loudspeakers placed at 15° intervals on a horizontal plane, leveled with the subject's ears (Soundevice SD models). The sound pressure level of the signal was set to a fixed value at the entrance of the subject's ear canal. Miniature microphones (RION UC-9) were inserted a few millimeters into the entrance of the subject's ear canal. The outputs of these microphones were converted into a 16-bit linear pulse code at a sampling frequency of 48 kHz using an A/D converter (SDS DASmini MODEL-1). Finally, the impulse responses were calculated by the Hadamard conversion method. It is well known that an accurate binaural reproduction system should cancel the acoustical transfer characteristic from the headphones (SONY MDR-ED31LP) to the entrance of the canal for both ears [21,22]. The procedure we used for canceling the headphone characteristic was as follows: 1) an inverse impulse response to the headphone characteristic was obtained individually using the Levinson-Durbin algorithm in the time domain, with the impulse response of a band-pass filter as the target [25]; 2) the inverse impulse responses to the loudspeaker characteristics were calculated using the same method; 3) the inverse impulse responses of the headphones and loudspeakers were convolved, and finally, these impulse responses were convolved with the impulse responses of the HRTFs [26]. In addition, the HRTFs of a head-and-torso simulator (HATS, KOUKEN SAMRAI), whose head and torso dimensions equal those of the average Japanese, were also measured and used as the nonindividualized HRTFs.
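Steps 1) and 2) above amount to a least-squares FIR inverse design whose Toeplitz normal equations can be solved by Levinson recursion. The following Python sketch is illustrative only: the tap count, band edges and the helper name design_inverse_filter are our assumptions, not values from the paper, and scipy's solve_toeplitz (a Levinson-recursion solver) stands in for a hand-coded Levinson-Durbin routine.

    import numpy as np
    from scipy.linalg import solve_toeplitz   # Levinson-recursion Toeplitz solver
    from scipy.signal import firwin, fftconvolve

    def design_inverse_filter(h, n_taps=512, fs=48000, band=(100.0, 13000.0)):
        """Least-squares FIR inverse g of an impulse response h:
        minimize ||h * g - d||^2 for a delayed band-pass target d,
        solving the Toeplitz normal equations by Levinson recursion."""
        h = np.asarray(h, dtype=float)
        h = np.pad(h, (0, max(0, n_taps - len(h))))  # ensure len(h) >= n_taps
        # Delayed band-pass target the equalized chain h * g should approximate.
        d = np.zeros(len(h) + n_taps - 1)
        bp = firwin(255, band, fs=fs, pass_zero=False)
        d[n_taps // 2:n_taps // 2 + len(bp)] = bp
        # First column of the Toeplitz normal-equation matrix: autocorrelation of h.
        acorr = fftconvolve(h, h[::-1])[len(h) - 1:len(h) - 1 + n_taps]
        # Right-hand side: cross-correlation of the target with h.
        xcorr = fftconvolve(d, h[::-1])[len(h) - 1:len(h) - 1 + n_taps]
        return solve_toeplitz((acorr, acorr), xcorr)

Convolving the returned filter with the measured headphone response should then approximate the delayed band-pass target, which can be checked numerically.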

2.3. Sound Source Signal and Range of Localization
White noise (13 kHz bandwidth) was used as the sound source signal in all experiments. All stimuli were presented on the horizontal right half-plane, and localization on this plane was considered.

3. STEP 1: COMPARISON OF LOCALIZATION ACCURACIES OF STATIONARY AND UNIDIRECTIONAL MOVING SOUND IMAGES
3.1. Subjects
Five graduate students, who were skilled in localization testing, participated in this experiment.

3.2. Stimuli
The stimuli were digitally synthesized on a computer (Sun SparcStation) at a sampling frequency of 48 kHz and 16-bit quantization. The stationary sound image was synthesized from the white noise by ordinary convolution, and the moving sound image was synthesized by time-variant convolution based on a fade-in/fade-out method [20] (with fixed frame length and fade-in/fade-out time), using transfer functions interpolated at 1° intervals. Synthesizing a continuously moving sound image is known to require the interpolation of the impulse responses of the HRTFs, because the instantaneous amplitude of the stimulus sound must fluctuate continuously in time and space. The interpolation method proposed by Matsumoto et al. [27] was used. When synthesizing a continuously moving sound image, the following problems are encountered: a spread spectrum occurs in the frequency domain due to the many HRTF switching operations, waveform discontinuity occurs at the moment of switching, and the spread spectrum itself provides a localization cue. In this study, the fade-in/fade-out method was adopted to suppress the waveform discontinuity. Hence, the spread spectrum did not occur, and the HRTF switching operations did not affect sound localization. The stationary and moving sound images had durations of 1 s and 2 s, respectively. Sound images were created for 13 presented azimuth directions, from 0° to 180° at 15° intervals. A movement velocity of 15°/s was adopted on the basis of preliminary localization tests; that is, the moving sound image moved in one direction over 30° within 2 s. All stimuli were presented via the same headphones as those used in the transfer function measurements. The gain of the headphone amplifier was adjusted by setting the sound pressure level of the frontal stimulus, as perceived at the eardrum of the HATS, to a fixed A-weighted level. The stimuli were converted from digital to analog using a D/A converter (SDS DASBOX 1A).

3.3. Procedure
Each subject, wearing the same headphones as those used in the transfer function measurements, sat on a seat with a headrest in a test room, a soundproofed chamber that attenuates outside noise. The stimuli were presented via the headphones. Thirteen numbered plates were placed around the subject at 15° intervals on a horizontal plane, and the subject was instructed to report the location of the sound image by referencing the positions of the numbered plates. During trials, the subject's head was kept immobile, although head turning to confirm the azimuth of the stimulus was permitted in rest breaks between trials. With respect to the moving sound image, its end position was also indicated by the subject. There were four trial conditions: the combinations of HRTFs (individualized and nonindividualized) and sound images (stationary and moving). Each stimulus was presented 10 times per azimuth position; for the moving sound image, clockwise and counterclockwise movements were presented 5 times each. All stimuli were presented in random order. The total of 520 trials (13 azimuths × 2 HRTFs × 2 sound images × 10 presentations) was divided into sessions with a break between each session.

3.4. Results
The sound localization accuracy was analyzed using the front-back confusion rate and the average angular error. The front-back confusion rate was obtained by dividing the total number of front-back errors by the total number of trials. The angular error was calculated after correcting the front-back confusions [19,28]; e.g., when the presented azimuth was 30° and the perceived azimuth was 150°, the perceived azimuth was corrected to 30°. The average angular error (A.A.E.) was calculated as

A.A.E. = (1 / (M N)) Σ_{d=1}^{M} Σ_{i=1}^{N} |θ_{pre,d,i} − θ_{per,d,i}|,

where θ_pre is the presented azimuth, θ_per is the perceived azimuth, M is the number of azimuth positions, and N is the number of trials per azimuth.

Figures 1 and 2 show representative localization responses. In these figures, the horizontal and vertical axes show the presented and perceived azimuths, respectively; 0° corresponds to the frontal azimuth, and the circle size is proportional to the response frequency of a subject. "Corr. coef." and "F-B conf." denote the correlation coefficient and the front-back confusion rate, respectively. The three-letter labels starting with S are the subjects' initials.

[Fig. 1 Results of localization test for one subject: stationary and moving sound images with the subject's own and the HATS's HRTFs, with the correlation coefficient and front-back confusion rate for each condition.]
[Fig. 2 Results of localization test for a second subject, in the same format.]
[Fig. 3 Front-back confusion rates and average angular errors for all subjects.]

Figure 3 shows the front-back confusion rates and average angular errors of all the subjects. Accurate sound localization was achieved for all the subjects when individualized (subject's own) HRTFs were used; the use of nonindividualized (HATS's) HRTFs degraded the localization accuracy. The results also show that the localization accuracy with movement is highly subject dependent: three of the subjects, including SHT, could localize the unidirectional moving sound images, while the other two, including SYT, could not.

3.5. Discussion
3.5.1. Comparison of the localization accuracy of the stationary sound image with previous studies
To validate our results, it is important to compare them with the results of previous studies; unfortunately, this comparison is difficult because of the different conditions used. Table 1 compares the localization accuracies reported in the literature. With regard to the angular error, our results agree with those of Martin et al. and Senova et al. With regard to the front-back confusion rate, our results agree with those of Martin et al. However, our results do not agree with those of Wenzel et al. and Uematsu et al., indicating that front-back confusion is frequently observed even when the subject's own HRTFs are used.

3.5.2. Statistical analysis
We conducted statistical analyses to determine the relationships between the four conditions. Figure 4 shows the results of applying the two-sample test for equality of proportions; the significance level was 5%. In the figure, the front-back confusion rate decreases along the direction of the arrow. For one subject, the front-back confusion rate was significantly reduced by the sound image movement, even with the use of nonindividualized HRTFs; for another subject, a significant increase in the confusion rate was observed. For three of the five subjects, unidirectional sound image movement with nonindividualized HRTFs reduced the front-back confusion rate.
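The two accuracy measures used above can be stated compactly in code. This is a minimal sketch under stated assumptions: azimuths are in degrees on the 0°-180° half-plane, front-back confusion is taken as a front/back hemifield mismatch with mirroring about the 90° interaural axis (responses at exactly 90° are treated as rear for simplicity), and the function names are ours.

    import numpy as np

    def front_back_corrected(presented, perceived):
        """Mirror perceived azimuths about the interaural (90 deg) axis
        when they fall in the opposite front/back hemifield to the
        presented azimuth (degrees; 0 = front, range 0..180)."""
        presented = np.asarray(presented, dtype=float)
        perceived = np.asarray(perceived, dtype=float)
        fb_confused = (presented < 90) != (perceived < 90)
        corrected = np.where(fb_confused, 180.0 - perceived, perceived)
        return corrected, fb_confused

    def average_angular_error(presented, perceived):
        """A.A.E. = mean |presented - corrected perceived|, returned
        together with the front-back confusion rate."""
        corrected, fb = front_back_corrected(presented, perceived)
        return np.mean(np.abs(presented - corrected)), fb.mean()

    # Example from the text: presented 30 deg, perceived 150 deg -> corrected to 30 deg.
    aae, fb_rate = average_angular_error([30.0, 45.0], [150.0, 60.0])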

[Table 1 Comparison of localization accuracies obtained in our study with those obtained in previous studies: angular error [°] and front-back confusion rate [%], each with the subjects' own HRTFs and with nonindividualized (Non-ind.) HRTFs, for Martin et al. [29], Wenzel et al. [1], Uematsu et al. [18], Senova et al. [28], and our research (mean values).]

[Fig. 4 Front-back confusion rate: four conditions (stationary/moving sound image × own/HATS's HRTFs) for all subjects; arrows point toward significantly smaller rates.]
[Fig. 5 Average angular error: four conditions for all subjects; arrows point toward significantly smaller errors.]

Figure 5 shows the relationships between the four conditions in terms of the average angular error, using Welch's formula [30] for testing the difference between two population means when the population variances are unequal and unknown; the significance level was 5%. The average angular error decreases along the arrow. The sound image movement tended to increase the average angular error; however, one subject could exploit the movement to reduce the angular error significantly.

In general, a localization result depends on the subject's localization ability and on the parameters (movement angle, movement velocity, duration and presented azimuth) of a unidirectional moving sound image. We first employed 30°/s as the movement velocity of the sound image in a preliminary experiment. However, this velocity was too high, and the localization accuracy was degraded significantly. Consequently, a movement velocity of 15°/s was used, and the duration was set at 2 s to keep the movement angle constant at 30°. It is not clear, however, whether a movement velocity of 15°/s is the most appropriate, because we did not conduct localization tests with movement velocities lower than 15°/s or higher than 30°/s. The localization results we obtained were subject dependent. We believe the localization accuracies were not improved by unidirectional movement either because some of the subjects simply could not exploit the unidirectional movement or because the parameters of the unidirectional moving sound image were inappropriate for them; we cannot identify the cause at this moment. These results indicate that 1) the use of nonindividualized HRTFs increases the front-back confusion rate, in agreement with previous studies, 2) the front-back confusion rate with a unidirectional moving sound image differs considerably among subjects, and 3) the unidirectional movement of the sound image increases the angular error.

3.5.3. Analysis of differences between the results of previous studies and those of our study
There are differences between the localization results of previous studies and those of our study. These differences seem to depend on the task instructions given to the subjects, because an inadequate task is known to yield biased localization results. From a comparison of the tasks set in previous studies, we observed the following. 1) Since Senova et al. incorporated feedback that permitted subjects to confirm the sound source location after each presentation, their result exhibited a small angular error. 2) While Wightman and Kistler adopted a task in which blindfolded subjects orally reported the perceived azimuth, Uematsu et al. adopted a task in which non-blindfolded subjects reported the perceived azimuth using a computer system with a graphical user interface that allowed them to confirm the perceived azimuth visually on a monitor; consequently, the front-back confusion rate obtained by Wightman and Kistler was larger than that obtained by Uematsu et al., indicating that visual information decreases the error in azimuth perception.

If the subjects' own HRTFs were used and the subjects had good localization ability with a static sound image, the moving sound image also yielded high localization accuracy; this result is common to Wightman and Kistler, Uematsu et al. and our study. However, when nonindividualized HRTFs were used, Uematsu et al. concluded, unlike us, that the unidirectional moving sound image improves the localization accuracy. This indicates that unidirectional movement is not necessarily a front-back localization cue, because the localization accuracy with movement is easily affected by other factors, e.g., the movement velocity.

3.5.4. Analysis of test condition differences for the unidirectional moving sound image
Here, we analyze the test conditions to better understand the disagreement between the results of previous studies. Table 2 shows the test conditions used in our study and those used by Wightman and Kistler and by Uematsu et al. There are certain differences between the conditions, such as the type of stimulus, the movement velocity, the movement angle and the angular interval of interpolation. The bandwidths of the stimuli do not matter, because more than 90% correct front-back judgments were made with comparable bandwidths [31]. The impact of the movement angles cannot be argued, since the angles were similar in all studies. Of particular interest is the lower movement velocity used in our study than in the previous studies. The movement velocity in our study was chosen after conducting a preliminary localization test, as mentioned above. According to Perrott and Musicant [32], the dynamic minimum audible angle, which is an indicator of the acuity of auditory motion perception, becomes large when the movement velocity is either extremely high or extremely low. This indicates the existence of an optimal movement velocity. These considerations suggest that, when using a unidirectional moving sound image to improve the localization accuracy, it is necessary at least to determine and use the optimal movement velocity.

4. STEP 2: ACHIEVING THE SWING SOUND IMAGE AND ITS LOCALIZATION ACCURACY
4.1. Definition of Swing Sound Image
The swing sound image method is based on the alternating presentation of a sound image on a horizontal plane. Figure 6 shows the relationship between a swing sound image and its three parameters, namely, the swing angle, the switching time T, and the presented azimuth. The swing angle is the angular displacement of the swing sound image. The switching time T is one-half of the repetition period. The swing sound image moves over the swing angle, with the presented azimuth indicating the center of the swing. In accordance with our hypothesis, the presentation of the swing sound image is discrete, in that the image does not move continuously between the two locations. Figure 7 shows the displacement of a swing sound image; its swing follows a constant rhythm in time and space.
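As a concrete reading of this definition, the following sketch alternates a source between binaural renderings at two locations every T seconds, with a short crossfade at each switch to avoid the waveform discontinuity discussed in Sect. 3.2. All names and parameter values here are illustrative assumptions; the paper's actual OLAM/FIFO implementations are specified in Sect. 4.2.2.

    import numpy as np
    from scipy.signal import fftconvolve

    def swing_sound_image(src, hrir_a, hrir_b, fs=48000, T=1.0, fade=0.01):
        """Alternate src between two binaural renderings every T seconds.
        hrir_a, hrir_b: arrays of shape (2, taps), the left/right HRIRs
        at swing locations A and B; fade: crossfade length in seconds."""
        # Render the whole source at both locations, then switch between them.
        ya = np.stack([fftconvolve(src, h)[:len(src)] for h in hrir_a])
        yb = np.stack([fftconvolve(src, h)[:len(src)] for h in hrir_b])
        out = np.empty_like(ya)
        hop, nfade = int(T * fs), int(fade * fs)
        ramp = np.linspace(0.0, 1.0, nfade)
        use_b = False
        for s in range(0, len(src), hop):
            seg = slice(s, min(s + hop, len(src)))
            out[:, seg] = (yb if use_b else ya)[:, seg]
            if s > 0:  # crossfade from the previous location at the switch point
                prev = (ya if use_b else yb)[:, s:s + nfade]
                cur = out[:, s:s + nfade]
                n = prev.shape[1]
                out[:, s:s + n] = (1 - ramp[:n]) * prev + ramp[:n] * cur
            use_b = not use_b
        return out  # shape (2, len(src)): left/right headphone signals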
Table 2 Comparison of the test conditions with previous research using a unidirectional moving sound image ("—" marks entries lost from this transcription).

  Condition                           Our research                Wightman and Kistler [13]    Uematsu et al. [18]
  Number of subjects                  5                           7                            unknown
  Place of measurement                Anechoic chamber            Anechoic chamber             Anechoic chamber
  Angular interval of measurement     15°                         —                            15°
  Stimulus sound                      White noise                 White Gaussian noise burst   Pink noise
  Bandwidth of stimulus               up to 13 kHz                —                            —
  Duration of stimulus                2 s                         1 s                          1 s
  Movement velocity                   15°/s                       —                            30°/s
  Movement angle                      30°                         —                            30°
  HRTFs                               Subject's own and HATS's    Subject's own                Subject's own
  Processing of HRTFs                 No processing               Minimum-phase                No processing / AR-model approximation
  Angular interval of interpolation   1°                          Unknown                      1°
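The unidirectional moving stimuli compared in Table 2 rest on two operations: interpolating the measured HRIRs to a fine angular grid and switching between them frame by frame with a fade-in/fade-out overlap. The sketch below uses plain linear interpolation between neighboring impulse responses as a stand-in for the method of Matsumoto et al. [27] that the paper actually used; the frame and fade lengths and function names are illustrative assumptions.

    import numpy as np
    from scipy.signal import fftconvolve

    def interp_hrir(hrirs, azim_deg, step_deg=15.0):
        """Linearly interpolate a measured HRIR set of shape
        (n_dir, 2, taps), measured every step_deg degrees, to an
        arbitrary azimuth. (Plain linear interpolation; aligning the
        HRIRs in time beforehand improves the result.)"""
        pos = azim_deg / step_deg
        i = int(np.floor(pos)) % len(hrirs)
        j = (i + 1) % len(hrirs)
        w = pos - np.floor(pos)
        return (1.0 - w) * hrirs[i] + w * hrirs[j]

    def moving_image(src, hrirs, fs=48000, start_deg=0.0, vel_deg_s=15.0,
                     frame=3000, fade=900):
        """Time-variant convolution with fade-in/fade-out overlap between
        frames rendered at successive azimuths."""
        out = np.zeros((2, len(src)))
        win_in = np.linspace(0.0, 1.0, fade)
        for s in range(0, len(src), frame - fade):
            az = start_deg + vel_deg_s * s / fs
            h = interp_hrir(hrirs, az)
            seg = src[s:s + frame]
            y = np.stack([fftconvolve(seg, hc)[:len(seg)] for hc in h])
            n = y.shape[1]
            if s > 0:  # fade this frame in over the previous frame's tail
                m = min(fade, n)
                y[:, :m] *= win_in[:m]
                out_end = min(s + fade, len(src))
                out[:, s:out_end] *= (1.0 - win_in[:out_end - s])
            out[:, s:s + n] += y
        return out

The linear fade-in plus fade-out sums to unity across each overlap, which is what suppresses the waveform discontinuity at the switching points.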

[Fig. 6 Schematic diagram of the playback system (PC, D/A, LPF, headphone amplifier, headphones) and the relationship between a swing sound image and its parameters: sound images A and B separated by the swing angle, the switching time T, and the presented azimuth at the center of the sound images.]
[Fig. 7 Displacement of the swing sound image in time and space: the image alternates between locations A and B every switching time T.]
[Fig. 8 Layout of stimuli in the soundproof chamber; each trial consists of a test number announcement, the stationary sound, and the swing sound.]

4.2. Determination of Swing Sound Image Parameters
In this experiment, the parameters of the swing sound image were determined by assessing the subjective displacement.
4.2.1. Subjects
Five inexperienced undergraduate students participated in this experiment.
4.2.2. Stimuli created by the swing sound image method and its parameters
Following our hypothesis, swing sound images were created by the time-variant convolution of a sound source at a sampling frequency of 48 kHz without interpolation of the impulse responses of the HRTFs, to achieve a discrete presentation of the sound images; the overlap-add technique with a modified Hamming window (OLAM) and the fade-in/fade-out method (FIFO) [20] were employed to remove discontinuities at the HRTF switching points. The sound source was the same white noise used in our previous test on the unidirectional moving sound image. For OLAM, the length of the modified Hamming window, the frame length (8,192 samples) and the frame shift were fixed; for FIFO, the frame length and fade-in/fade-out time were set at the same values as in the previous test.

In total, we created swing sound images in seven presented azimuth directions (Fig. 8), from 0° to 180° at angular intervals of 30°, that is, 0°, 30°, 60°, 90°, 120°, 150° and 180°. By referencing the minimum audible movement angle (MAMA) reported in the literature [32], three swing angles were used. The switching time T was set at four values (. s, . s, 1. s and . s), derived from the minimum MAMA divided by the movement velocity [33].
4.2.3. Procedure
Each subject, wearing headphones, sat on a seat with a headrest in a test room in the same manner as in our previous test. First, a test number was announced, and then a stationary sound image and a swing sound image were presented to the subject via the headphones, as shown in Fig. 8. The subject's task was to evaluate the subjective displacement of the swing sound image by comparing the two sound images, using the five-grade scale shown in Table 3. Each stimulus was presented several times, and all stimuli were presented in random order. In each trial, the presented azimuth of the stationary sound image was equal to the center position of the swing sound image movement.

Table 3 Relationship between score and rating.

  Score   Rating of subjective displacement
  5       Imperceptible
  4       Perceptible, but not clear
  3       Slightly clear
  2       Clear
  1       Very clear

4.2.4. Results
Figure 9 shows examples of the presented azimuth versus average score for all the subjects with T values of . s and 1. s. Low average scores indicate that the displacement of the swing sound image is clearly perceived. These results show that the average score decreases as the swing angle increases.

[Fig. 9 Average score versus presented azimuth for each swing angle and switching time; the values in parentheses are the PSPs.]

4.2.5. Discussion
A dependence of the average score on the presented azimuth can be identified by testing the significance of the differences between the average scores at each presented azimuth. However, many comparisons between average scores are required, because there are 7 presented azimuth values, 4 switching times T, and 3 swing angles. For example, for a given switching time T and swing angle, the binomial coefficient 7C2 gives 21 comparison results, since there are 7 presented azimuth values. To express these dependences uniquely, we introduce the percentage of significant combinations among all possible combinations (PSP). In Fig. 9, the values in parentheses are the PSPs; Welch's test was used and the significance level was 5%. The PSPs between the swing angles and those between the switching times T were calculated in the same manner as those for the presented azimuth. These results are shown in Tables 4 and 5. The results indicate that 1) as the switching time T decreases and/or the swing angle increases, the subjective displacement of the swing sound image is more clearly perceived, 2) the dependence of the subjective displacement on the presented azimuth is not significant, and 3) the PSPs between the switching times T of . s and 1. s, and those between the switching times T of 1. s and . s, are smaller than the others. Hence, we adopted the switching times T of . s and 1. s, and T and the swing angle were kept constant at each presented azimuth.

[Table 4 PSPs between swing angles: pairwise values of .3%, 1% and 7.1% between the three swing angles.]
[Table 5 PSPs between switching times T: pairwise values of 1.9%, 1.%, 1%, .%, 1.% and 9.% between the four switching times.]

4.3. Localization Test
In this test, the localization accuracies (average angular errors and front-back confusion rates) of the swing and stationary sound images were compared with individualized and nonindividualized HRTFs.
4.3.1. Subjects
The five graduate students who participated in this test had also participated in the previous experiment on the unidirectional moving sound image.
4.3.2. Stimuli
The swing and stationary sound images were synthesized in the same manner as in the previous test; both sound images were held for a duration of 3 s. The switching times T of . s and 1. s were used, as mentioned above, and five swing angles were used.
4.3.3. Procedure
The playback system used was the same as that used in the previous experiment. The swing and stationary sound images were presented to the subjects, and the task of the subjects was to indicate the perceived azimuth of the sound image; with respect to the swing sound image, the center of the movement was to be indicated. Each stimulus was randomly presented several times. The trials (stationary sound image: 13 presented azimuths × 2 HRTFs (individualized and nonindividualized) × repeated presentations; swing sound image: 13 presented azimuths × 2 switching times T × 5 swing angles × 2 HRTFs × repeated presentations) were divided into sessions.
4.3.4. Results
The localization accuracy was analyzed using the front-back confusion rate and the average angular error. Figures 10 and 11 present the front-back confusion rates and average angular errors for all the subjects, for the swing sound images synthesized by the OLAM and FIFO methods, respectively. In these figures, the results at a swing angle of 0° correspond to those using the stationary sound images. The figures indicate that the swing sound images reduce the front-back confusion rate, particularly for the two subjects whose localization accuracy is degraded by the use of nonindividualized HRTFs. This advantage was not observed in the average angular error.
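The PSP measure introduced in Sect. 4.2.5 can be computed mechanically: run Welch's test over every pair of conditions and report the percentage of significant pairs, e.g., the 21 pairs among the 7 presented azimuths. A minimal sketch, with scipy's unequal-variance t-test standing in for Welch's formula and the data layout assumed:

    from itertools import combinations
    from scipy.stats import ttest_ind

    def psp(groups, alpha=0.05):
        """Percentage of significant pairs (PSP) among all pairwise
        Welch tests; groups is a list of score arrays, one per
        condition (e.g., one per presented azimuth)."""
        pairs = list(combinations(groups, 2))
        n_sig = sum(
            ttest_ind(a, b, equal_var=False).pvalue < alpha  # Welch's t-test
            for a, b in pairs
        )
        return 100.0 * n_sig / len(pairs)

    # Seven azimuth conditions -> C(7,2) = 21 pairwise comparisons.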

[Fig. 10 Front-back confusion rates and average angular errors for all the subjects (subject's own and HATS's HRTFs; both switching times T); the swing sound images were synthesized by the OLAM method.]
[Fig. 11 Front-back confusion rates and average angular errors for all the subjects; the swing sound images were synthesized by the FIFO method.]

4.3.5. Discussion
The impact of the swing angle on the localization accuracy was statistically analyzed for each subject and switching time T. Tables 6 and 7 show the results of the tests of significance between the front-back confusion rates of the stationary sound image synthesized with individualized HRTFs and those of the swing sound image synthesized with nonindividualized HRTFs; the two-sample test for equality of proportions was used. In these tables, two symbols indicate a significant difference at the significance levels of 5% and 1%, respectively.

[Table 6 Results of the statistical analysis of the front-back confusion rate for each subject, switching time T and swing angle; the swing sound images were synthesized by the OLAM method.]
[Table 7 Results of the statistical analysis of the front-back confusion rate; the swing sound images were synthesized by the FIFO method.]

With respect to the results obtained by the OLAM method, there were no significant differences at the larger swing angles for one subject. This indicates that the nonindividualized HRTFs yield a front-back judgment that matches that achieved with the individualized HRTFs when the swing angle is sufficiently large. For the subject SHT and one other, still larger swing angles were required to achieve the same performance. With respect to the results obtained by the FIFO method, larger swing angles were likewise required for the subject SHT and one other.

Tables 8 and 9 show the results of a statistical analysis of the average angular errors between the stationary and swing sound images created by the OLAM and FIFO methods, respectively; Welch's test was used to analyze the statistical significance. The symbols have the same meanings as those mentioned above.

[Table 8 Results of the statistical analysis of the average angular errors between the stationary and swing sound images; the swing sound images were synthesized by the OLAM method.]
[Table 9 Results of the statistical analysis of the average angular errors between the stationary and swing sound images; the swing sound images were synthesized by the FIFO method.]

Of particular interest was the significant difference for one subject: the swing sound images increased that subject's average angular error at the shorter switching time T. The angular error increased significantly with the swing angle for the subjects SHT, SYT and two others at the larger swing angles. Therefore, a moderate swing angle is the most appropriate among the parameters we employed. From Fig. 10, the front-back confusion rate at the smallest swing angle was larger than that with the stationary sound image for the subject SHT and two others. Although the subjects could perceive the swing of the sound image, this swing angle was too small to function as a front-back localization cue. Thus, since a very small swing angle makes the front-back judgment ambiguous, a suitable swing angle must be chosen to reduce the front-back confusion rate.

There remains the question of why the swing sound image improves the front-back judgment. The swing sound image consists of a discrete, repetitive presentation of the sound image, and one issue is to clarify whether it is the discrete or the repetitive presentation that improves the front-back judgment. The number of repetitions between the two locations of the swing sound image depends on the switching time T, since the stimulus durations are kept constant. For example, when the switching time is 0.25 s, the swing sound image alternates between the two locations (points A and B) over the 3 s duration as A→B→A→B→A→B→A→B→A→B→A→B. As shown in Figs. 10 and 11, the front-back confusion rate does not strongly depend on the switching time for the subjects whose front-back confusion rates increase with nonindividualized HRTFs; hence, the number of repetitions is not important in reducing the front-back confusion rate. This indicates that it is the discrete presentation of a sound image that reduces the front-back confusion rate.

5. CONCLUSION
In this study, we investigated the impact of dynamic localization cues on the localization accuracy, in two steps, toward our goal of improving the localization accuracy achievable with nonindividualized HRTFs. In the first step, we studied the localization accuracy possible with unidirectional moving sound images. The results show that 1) the use of nonindividualized HRTFs significantly increases the front-back confusion rate, 2) the front-back confusion rate with unidirectional sound image movement varies considerably among the subjects, and 3) the use of unidirectional moving sound images increases the average angular error.

In the second step, a novel sound presentation technique, the swing sound image method, was proposed as one approach to implementing the dynamic localization cue. Its parameters were described, and a subjective assessment test was conducted to determine the parameters by assessing the subjective displacement of a swing sound image. The results indicate that 1) as the switching time T becomes shorter and/or the swing angle increases, the subjective displacement of the swing sound image is more clearly perceived, 2) the subjective displacement does not depend clearly on the presented azimuth, and 3) the PSPs between the switching times T of . s and 1. s, and those between the switching times T of 1. s and . s, are relatively smaller than the others. Hence, we adopted the switching times T of . s and 1. s, and the switching time T and swing angle were kept constant at each presented azimuth. Finally, a localization test was conducted to compare the localization accuracies achieved using the stationary and swing sound images. We found that 1) the swing sound image considerably reduces the front-back confusion rate for a subject whose front-back confusion rate is greatly increased by nonindividualized HRTFs, 2) the average angular error increases significantly with the swing angle, and 3) a moderate swing angle is appropriate for suppressing the increase in the angular error. These results support our hypothesis that establishing a clear dynamic localization cue is of prime importance in allowing the use of nonindividualized HRTFs. Future goals include clarifying the auditory evidence for swing sound image localization.

REFERENCES
[1] E. M. Wenzel, M. Arruda, D. J. Kistler and F. L. Wightman, Localization using nonindividualized head-related transfer functions, J. Acoust. Soc. Am., 94, 111-123 (1993).
[2] S. Shimada, N. Hayashi and S. Hayashi, A clustering method for sound localization transfer functions, J. Audio Eng. Soc., 42, 577-584 (1994).
[3] H. Møller, M. F. Sørensen, C. B. Jensen and D. Hammershøi, Binaural technique: Do we need individual recordings?, J. Audio Eng. Soc., 44, 451-469 (1996).
[4] J. Blauert, Spatial Hearing (MIT Press, Cambridge, Mass., 1997).
[5] H. Wallach, The role of head movements and vestibular and visual cues in sound localization, J. Exp. Psychol., 27, 339-368 (1940).
[6] W. R. Thurlow and P. S. Runge, Effect of induced head movements on localization of direction of sounds, J. Acoust. Soc. Am., 42, 480-488 (1967).
[7] W. R. Thurlow, J. W. Mangels and P. S. Runge, Head movements during sound localization, J. Acoust. Soc. Am., 42, 489-493 (1967).
[8] M. Kato, H. Uematsu, M. Kashino and T. Hirahara, The effects of head motion on human sound localization, Proc. Autumn Meet. Acoust. Soc. Jpn. (2001).
[9] N. Asahi and S. Matsuoka, Effect of head rotation on sound localization, Tech. Rep. Psychol. Physiol. Acoust. Acoust. Soc. Jpn., H37-1.
[10] G. Boerger, P. Laws and J. Blauert, Stereophonic headphone reproduction with variation of various transfer functions by means of rotational head movements, Acustica, 39 (1977).
[11] J. Kawaura, Y. Suzuki, F. Asano and T. Sone, Sound localization in headphone reproduction by simulating transfer functions from the sound source to the external ear, J. Acoust. Soc. Jpn. (J), 45, 756-766 (1989).
[12] D. Kimura and Y. Suzuki, A consideration about the effect of head movement on the sound localization, IEICE Tech. Rep., EA2001 (2001).
[13] F. L. Wightman and D. J. Kistler, Resolution of front-back ambiguity in spatial hearing by listener and source movement, J. Acoust. Soc. Am., 105, 2841-2853 (1999).
[14] L. D. Rosenblum, C. Carello and R. E. Pastore, Relative effectiveness of three stimulus variables for locating a moving sound source, Perception, 16, 175-186 (1987).
[15] M. Gröhn, T. Lokki and T. Takala, Static and dynamic sound source localization in a virtual room, Proc. AES 22nd Int. Conf. on Virtual, Synthetic and Entertainment Audio, Espoo, Finland (2002).
[16] M. Gröhn, Localization of a moving virtual sound source in a virtual room, the effect of a distracting auditory stimulus, Proc. Int. Conf. on Auditory Display (ICAD 2001).
[17] D. J. M. Robinson and R. G. Greenfield, A binaural simulation which renders out of head localisation with low cost digital signal processing of head related transfer functions and pseudo reverberation, J. Audio Eng. Soc. Convention preprint.
[18] H. Uematsu, M. Kato and M. Kashino, The influence of sound source movement on the extracranial localization, Proc. Spring Meet. Acoust. Soc. Jpn.
[19] D. R. Begault, 3-D Sound for Virtual Reality and Multimedia (Academic Press, Boston, 1994).
[20] A. Kudo, H. Hokari and S. Shimada, A study on switching of the transfer functions focusing on sound quality, Acoust. Sci. & Tech.
[21] S. Yano, H. Hokari, S. Shimada and H. Irisawa, A study on the transfer functions of sound localization using binaural earphones, J. Audio Eng. Soc. Convention preprint.
[22] F. L. Wightman and D. J. Kistler, Headphone simulation of free-field listening. II: Psychophysical validation, J. Acoust. Soc. Am., 85, 868-878 (1989).
[23] H. Møller, D. Hammershøi, C. B. Jensen and M. F. Sørensen, Transfer characteristics of headphones measured on human ears, J. Audio Eng. Soc., 43, 203-217 (1995).
[24] S. Yano, H. Hokari and S. Shimada, A study on the personal difference in the transfer functions of sound localization using stereo earphones, IEICE Trans. Fundam., E83-A (2000).
[25] H. Irisawa, S. Shimada, H. Hokari and S. Hosoya, Study of a fast method to calculate inverse filters, J. Audio Eng. Soc.
[26] S. Yano, H. Hokari, S. Shimada and H. Irisawa, A study on the derivation of transfer functions for sound image localization using stereo earphones, J. Audio Eng. Soc., 47 (1999).
[27] M. Matsumoto, M. Tohyama and H. Yanagawa, A method of interpolating binaural impulse responses for moving sound images, Acoust. Sci. & Tech., 24 (2003).
[28] M. A. Senova, K. I. McAnally and R. L. Martin, Localization of virtual sound as a function of head-related impulse response duration, J. Audio Eng. Soc. (2001).
[29] R. L. Martin, K. I. McAnally and M. A. Senova, Free-field equivalent localization of virtual audio, J. Audio Eng. Soc., 49 (2001).
[30] E. L. Lehmann, Testing Statistical Hypotheses (John Wiley & Sons, New York).
[31] R. B. King and S. R. Oldfield, The impact of signal bandwidth on auditory localization: Implications for the design of three-dimensional audio displays, Hum. Factors, 39, 287-295 (1997).
[32] D. R. Perrott and A. D. Musicant, Dynamic minimum audible angle: Binaural spatial acuity with moving sound sources, J. Aud. Res., 21 (1981).
[33] K. Saberi and D. R. Perrott, Minimum audible movement angles as a function of sound source trajectory, J. Acoust. Soc. Am., 88, 2639-2644 (1990).


More information

Lecture 8: Spatial sound

Lecture 8: Spatial sound EE E6820: Speech & Audio Processing & Recognition Lecture 8: Spatial sound 1 2 3 4 Spatial acoustics Binaural perception Synthesizing spatial audio Extracting spatial sounds Dan Ellis

More information

On the Improvement of Localization Accuracy with Non-individualized HRTF-Based Sounds

On the Improvement of Localization Accuracy with Non-individualized HRTF-Based Sounds On the Improvement of Localization Accuracy with Non-individualized HRTF-Based Sounds CATARINA MENDONÇA, 1 (Catarina.Mendonca@ccg.pt) AES Associate Member, GUILHERME CAMPOS, 2 AES Full Member, PAULO DIAS

More information

Neural correlates of the perception of sound source separation

Neural correlates of the perception of sound source separation Neural correlates of the perception of sound source separation Mitchell L. Day 1,2 * and Bertrand Delgutte 1,2,3 1 Department of Otology and Laryngology, Harvard Medical School, Boston, MA 02115, USA.

More information

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms

Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms 956 969 Localization in the Presence of a Distracter and Reverberation in the Frontal Horizontal Plane: II. Model Algorithms Jonas Braasch Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany

More information

The role of high frequencies in speech localization

The role of high frequencies in speech localization The role of high frequencies in speech localization Virginia Best a and Simon Carlile Department of Physiology, University of Sydney, Sydney, NSW, 2006, Australia Craig Jin and André van Schaik School

More information

Effect of source spectrum on sound localization in an everyday reverberant room

Effect of source spectrum on sound localization in an everyday reverberant room Effect of source spectrum on sound localization in an everyday reverberant room Antje Ihlefeld and Barbara G. Shinn-Cunningham a) Hearing Research Center, Boston University, Boston, Massachusetts 02215

More information

This will be accomplished using maximum likelihood estimation based on interaural level

This will be accomplished using maximum likelihood estimation based on interaural level Chapter 1 Problem background 1.1 Overview of the proposed work The proposed research consists of the construction and demonstration of a computational model of human spatial hearing, including long term

More information

Congruency Effects with Dynamic Auditory Stimuli: Design Implications

Congruency Effects with Dynamic Auditory Stimuli: Design Implications Congruency Effects with Dynamic Auditory Stimuli: Design Implications Bruce N. Walker and Addie Ehrenstein Psychology Department Rice University 6100 Main Street Houston, TX 77005-1892 USA +1 (713) 527-8101

More information

An Auditory System Modeling in Sound Source Localization

An Auditory System Modeling in Sound Source Localization An Auditory System Modeling in Sound Source Localization Yul Young Park The University of Texas at Austin EE381K Multidimensional Signal Processing May 18, 2005 Abstract Sound localization of the auditory

More information

HEARING AND PSYCHOACOUSTICS

HEARING AND PSYCHOACOUSTICS CHAPTER 2 HEARING AND PSYCHOACOUSTICS WITH LIDIA LEE I would like to lead off the specific audio discussions with a description of the audio receptor the ear. I believe it is always a good idea to understand

More information

Binaural synthesis Møller, Henrik; Jensen, Clemen Boje; Hammershøi, Dorte; Sørensen, Michael Friis

Binaural synthesis Møller, Henrik; Jensen, Clemen Boje; Hammershøi, Dorte; Sørensen, Michael Friis Aalborg Universitet Binaural synthesis Møller, Henrik; Jensen, Clemen Boje; Hammershøi, Dorte; Sørensen, Michael Friis Published in: Proceedings of 15th International Congress on Acoustics, ICA'95, Trondheim,

More information

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization

Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Localization 103: Training BiCROS/CROS Wearers for Left-Right Localization Published on June 16, 2015 Tech Topic: Localization July 2015 Hearing Review By Eric Seper, AuD, and Francis KuK, PhD While the

More information

The basic hearing abilities of absolute pitch possessors

The basic hearing abilities of absolute pitch possessors PAPER The basic hearing abilities of absolute pitch possessors Waka Fujisaki 1;2;* and Makio Kashino 2; { 1 Graduate School of Humanities and Sciences, Ochanomizu University, 2 1 1 Ootsuka, Bunkyo-ku,

More information

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3

Abstract. 1. Introduction. David Spargo 1, William L. Martens 2, and Densil Cabrera 3 THE INFLUENCE OF ROOM REFLECTIONS ON SUBWOOFER REPRODUCTION IN A SMALL ROOM: BINAURAL INTERACTIONS PREDICT PERCEIVED LATERAL ANGLE OF PERCUSSIVE LOW- FREQUENCY MUSICAL TONES Abstract David Spargo 1, William

More information

Gregory Galen Lin. at the. May A uthor... Department of Elf'ctricdal Engineering and Computer Science May 28, 1996

Gregory Galen Lin. at the. May A uthor... Department of Elf'ctricdal Engineering and Computer Science May 28, 1996 Adaptation to a Varying Auditory Environment by Gregory Galen Lin Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of

More information

AUDL GS08/GAV1 Signals, systems, acoustics and the ear. Pitch & Binaural listening

AUDL GS08/GAV1 Signals, systems, acoustics and the ear. Pitch & Binaural listening AUDL GS08/GAV1 Signals, systems, acoustics and the ear Pitch & Binaural listening Review 25 20 15 10 5 0-5 100 1000 10000 25 20 15 10 5 0-5 100 1000 10000 Part I: Auditory frequency selectivity Tuning

More information

Noise-Robust Speech Recognition in a Car Environment Based on the Acoustic Features of Car Interior Noise

Noise-Robust Speech Recognition in a Car Environment Based on the Acoustic Features of Car Interior Noise 4 Special Issue Speech-Based Interfaces in Vehicles Research Report Noise-Robust Speech Recognition in a Car Environment Based on the Acoustic Features of Car Interior Noise Hiroyuki Hoshino Abstract This

More information

Digital. hearing instruments have burst on the

Digital. hearing instruments have burst on the Testing Digital and Analog Hearing Instruments: Processing Time Delays and Phase Measurements A look at potential side effects and ways of measuring them by George J. Frye Digital. hearing instruments

More information

Two Modified IEC Ear Simulators for Extended Dynamic Range

Two Modified IEC Ear Simulators for Extended Dynamic Range Two Modified IEC 60318-4 Ear Simulators for Extended Dynamic Range Peter Wulf-Andersen & Morten Wille The international standard IEC 60318-4 specifies an occluded ear simulator, often referred to as a

More information

How high-frequency do children hear?

How high-frequency do children hear? How high-frequency do children hear? Mari UEDA 1 ; Kaoru ASHIHARA 2 ; Hironobu TAKAHASHI 2 1 Kyushu University, Japan 2 National Institute of Advanced Industrial Science and Technology, Japan ABSTRACT

More information

ICaD 2013 ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES

ICaD 2013 ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES ICaD 213 6 1 july, 213, Łódź, Poland international Conference on auditory Display ADJUSTING THE PERCEIVED DISTANCE OF VIRTUAL SPEECH SOURCES BY MODIFYING BINAURAL ROOM IMPULSE RESPONSES Robert Albrecht

More information

HearIntelligence by HANSATON. Intelligent hearing means natural hearing.

HearIntelligence by HANSATON. Intelligent hearing means natural hearing. HearIntelligence by HANSATON. HearIntelligence by HANSATON. Intelligent hearing means natural hearing. Acoustic environments are complex. We are surrounded by a variety of different acoustic signals, speech

More information

Synthesis of Spatially Extended Virtual Sources with Time-Frequency Decomposition of Mono Signals

Synthesis of Spatially Extended Virtual Sources with Time-Frequency Decomposition of Mono Signals PAPERS Synthesis of Spatially Extended Virtual Sources with Time-Frequency Decomposition of Mono Signals TAPANI PIHLAJAMÄKI, AES Student Member, OLLI SANTALA, AES Student Member, AND (tapani.pihlajamaki@aalto.fi)

More information

OCCLUSION REDUCTION SYSTEM FOR HEARING AIDS WITH AN IMPROVED TRANSDUCER AND AN ASSOCIATED ALGORITHM

OCCLUSION REDUCTION SYSTEM FOR HEARING AIDS WITH AN IMPROVED TRANSDUCER AND AN ASSOCIATED ALGORITHM OCCLUSION REDUCTION SYSTEM FOR HEARING AIDS WITH AN IMPROVED TRANSDUCER AND AN ASSOCIATED ALGORITHM Masahiro Sunohara, Masatoshi Osawa, Takumi Hashiura and Makoto Tateno RION CO., LTD. 3-2-41, Higashimotomachi,

More information

B. G. Shinn-Cunningham Hearing Research Center, Departments of Biomedical Engineering and Cognitive and Neural Systems, Boston, Massachusetts 02215

B. G. Shinn-Cunningham Hearing Research Center, Departments of Biomedical Engineering and Cognitive and Neural Systems, Boston, Massachusetts 02215 Investigation of the relationship among three common measures of precedence: Fusion, localization dominance, and discrimination suppression R. Y. Litovsky a) Boston University Hearing Research Center,

More information

Perception of tonal components contained in wind turbine noise

Perception of tonal components contained in wind turbine noise Perception of tonal components contained in wind turbine noise Sakae YOKOYAMA 1 ; Tomohiro KOBAYASHI 2 ; Hideki TACHIBANA 3 1,2 Kobayasi Institute of Physical Research, Japan 3 The University of Tokyo,

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 2013 http://acousticalsociety.org/ ICA 2013 Montreal Montreal, Canada 2-7 June 2013 Psychological and Physiological Acoustics Session 4aPPb: Binaural Hearing

More information

Juha Merimaa b) Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany

Juha Merimaa b) Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Germany Source localization in complex listening situations: Selection of binaural cues based on interaural coherence Christof Faller a) Mobile Terminals Division, Agere Systems, Allentown, Pennsylvania Juha Merimaa

More information

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag

Effects of speaker's and listener's environments on speech intelligibili annoyance. Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag JAIST Reposi https://dspace.j Title Effects of speaker's and listener's environments on speech intelligibili annoyance Author(s)Kubo, Rieko; Morikawa, Daisuke; Akag Citation Inter-noise 2016: 171-176 Issue

More information

Using 3d sound to track one of two non-vocal alarms. IMASSA BP Brétigny sur Orge Cedex France.

Using 3d sound to track one of two non-vocal alarms. IMASSA BP Brétigny sur Orge Cedex France. Using 3d sound to track one of two non-vocal alarms Marie Rivenez 1, Guillaume Andéol 1, Lionel Pellieux 1, Christelle Delor 1, Anne Guillaume 1 1 Département Sciences Cognitives IMASSA BP 73 91220 Brétigny

More information

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080

INTRODUCTION J. Acoust. Soc. Am. 103 (2), February /98/103(2)/1080/5/$ Acoustical Society of America 1080 Perceptual segregation of a harmonic from a vowel by interaural time difference in conjunction with mistuning and onset asynchrony C. J. Darwin and R. W. Hukin Experimental Psychology, University of Sussex,

More information

Systems Neuroscience Oct. 16, Auditory system. http:

Systems Neuroscience Oct. 16, Auditory system. http: Systems Neuroscience Oct. 16, 2018 Auditory system http: www.ini.unizh.ch/~kiper/system_neurosci.html The physics of sound Measuring sound intensity We are sensitive to an enormous range of intensities,

More information

Aalborg Universitet. Control of earphone produced binaural signals Hammershøi, Dorte; Hoffmann, Pablo Francisco F.

Aalborg Universitet. Control of earphone produced binaural signals Hammershøi, Dorte; Hoffmann, Pablo Francisco F. Aalborg Universitet Control of earphone produced binaural signals Hammershøi, Dorte; Hoffmann, Pablo Francisco F. Published in: Acustica United with Acta Acustica Publication date: 211 Document Version

More information

Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA

Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA Welcome to the LISTEN G.R.A.S. Headphone and Headset Measurement Seminar The challenge of testing today s headphones USA 2017-10 Presenter Peter Wulf-Andersen Engineering degree in Acoustics Co-founder

More information

CHAPTER 1. Simon Carlile 1. PERCEIVING REAL AND VIRTUAL SOUND FIELDS

CHAPTER 1. Simon Carlile 1. PERCEIVING REAL AND VIRTUAL SOUND FIELDS 1 CHAPTER 1 AUDITORY SPACE Simon Carlile 1. PERCEIVING REAL AND VIRTUAL SOUND FIELDS 1.1. PERCEIVING THE WORLD One of the greatest and most enduring of intellectual quests is that of self understanding.

More information

Auditory System & Hearing

Auditory System & Hearing Auditory System & Hearing Chapters 9 and 10 Lecture 17 Jonathan Pillow Sensation & Perception (PSY 345 / NEU 325) Spring 2015 1 Cochlea: physical device tuned to frequency! place code: tuning of different

More information

Evaluation of Auditory Characteristics of Communications and Hearing Protection Systems (C&HPS) Part III Auditory Localization

Evaluation of Auditory Characteristics of Communications and Hearing Protection Systems (C&HPS) Part III Auditory Localization Evaluation of Auditory Characteristics of Communications and Hearing Protection Systems (C&HPS) Part III Auditory Localization by Paula P. Henry ARL-TR-6560 August 2013 Approved for public release; distribution

More information

PERCEPTION OF AUDITORY-VISUAL SIMULTANEITY CHANGES BY ILLUMINANCE AT THE EYES

PERCEPTION OF AUDITORY-VISUAL SIMULTANEITY CHANGES BY ILLUMINANCE AT THE EYES 23 rd International Congress on Sound & Vibration Athens, Greece 10-14 July 2016 ICSV23 PERCEPTION OF AUDITORY-VISUAL SIMULTANEITY CHANGES BY ILLUMINANCE AT THE EYES Hiroshi Hasegawa and Shu Hatakeyama

More information

An active unpleasantness control system for indoor noise based on auditory masking

An active unpleasantness control system for indoor noise based on auditory masking An active unpleasantness control system for indoor noise based on auditory masking Daisuke Ikefuji, Masato Nakayama, Takanabu Nishiura and Yoich Yamashita Graduate School of Information Science and Engineering,

More information

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects

Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Minimum Audible Angles Measured with Simulated Normally-Sized and Oversized Pinnas for Normal-Hearing and Hearing- Impaired Test Subjects Filip M. Rønne, Søren Laugesen, Niels S. Jensen and Julie H. Pedersen

More information

Hearing II Perceptual Aspects

Hearing II Perceptual Aspects Hearing II Perceptual Aspects Overview of Topics Chapter 6 in Chaudhuri Intensity & Loudness Frequency & Pitch Auditory Space Perception 1 2 Intensity & Loudness Loudness is the subjective perceptual quality

More information

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE

inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering August 2000, Nice, FRANCE Copyright SFA - InterNoise 2000 1 inter.noise 2000 The 29th International Congress and Exhibition on Noise Control Engineering 27-30 August 2000, Nice, FRANCE I-INCE Classification: 6.3 PSYCHOLOGICAL EVALUATION

More information

BASIC NOTIONS OF HEARING AND

BASIC NOTIONS OF HEARING AND BASIC NOTIONS OF HEARING AND PSYCHOACOUSICS Educational guide for the subject Communication Acoustics VIHIAV 035 Fülöp Augusztinovicz Dept. of Networked Systems and Services fulop@hit.bme.hu 2018. október

More information

Study of perceptual balance for binaural dichotic presentation

Study of perceptual balance for binaural dichotic presentation Paper No. 556 Proceedings of 20 th International Congress on Acoustics, ICA 2010 23-27 August 2010, Sydney, Australia Study of perceptual balance for binaural dichotic presentation Pandurangarao N. Kulkarni

More information

On the influence of interaural differences on onset detection in auditory object formation. 1 Introduction

On the influence of interaural differences on onset detection in auditory object formation. 1 Introduction On the influence of interaural differences on onset detection in auditory object formation Othmar Schimmel Eindhoven University of Technology, P.O. Box 513 / Building IPO 1.26, 56 MD Eindhoven, The Netherlands,

More information

Convention Paper 9620

Convention Paper 9620 Audio Engineering Society Convention Paper 9620 Presented at the 141st Convention 2016 September 29 October 2 Los Angeles, USA This Convention paper was selected based on a submitted abstract and 750-word

More information

Audio Engineering Society. Convention Paper. Presented at the 128th Convention 2010 May London, UK

Audio Engineering Society. Convention Paper. Presented at the 128th Convention 2010 May London, UK Audio Engineering Society Convention Paper Presented at the 128th Convention 2010 May 22 25 London, UK The papers at this Convention have been selected on the basis of a submitted abstract and extended

More information

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979)

Hearing. Figure 1. The human ear (from Kessel and Kardon, 1979) Hearing The nervous system s cognitive response to sound stimuli is known as psychoacoustics: it is partly acoustics and partly psychology. Hearing is a feature resulting from our physiology that we tend

More information

Effect of microphone position in hearing instruments on binaural masking level differences

Effect of microphone position in hearing instruments on binaural masking level differences Effect of microphone position in hearing instruments on binaural masking level differences Fredrik Gran, Jesper Udesen and Andrew B. Dittberner GN ReSound A/S, Research R&D, Lautrupbjerg 7, 2750 Ballerup,

More information

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor

Signals, systems, acoustics and the ear. Week 5. The peripheral auditory system: The ear as a signal processor Signals, systems, acoustics and the ear Week 5 The peripheral auditory system: The ear as a signal processor Think of this set of organs 2 as a collection of systems, transforming sounds to be sent to

More information

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source

21/01/2013. Binaural Phenomena. Aim. To understand binaural hearing Objectives. Understand the cues used to determine the location of a sound source Binaural Phenomena Aim To understand binaural hearing Objectives Understand the cues used to determine the location of a sound source Understand sensitivity to binaural spatial cues, including interaural

More information

Discrimination and identification of azimuth using spectral shape a)

Discrimination and identification of azimuth using spectral shape a) Discrimination and identification of azimuth using spectral shape a) Daniel E. Shub b Speech and Hearing Bioscience and Technology Program, Division of Health Sciences and Technology, Massachusetts Institute

More information

Effect of ear-defenders (ear-muffs) on the localization of sound

Effect of ear-defenders (ear-muffs) on the localization of sound Brit. J. Industr. Med., 9,, - Effect of ear-defenders (ear-muffs) on the localization of sound G. R. C. ATHERLEY and W. G. NOBLE epartment of Pure and Applied Physics, University of Salford and epartment

More information

Proceedings of Meetings on Acoustics

Proceedings of Meetings on Acoustics Proceedings of Meetings on Acoustics Volume 19, 13 http://acousticalsociety.org/ ICA 13 Montreal Montreal, Canada - 7 June 13 Engineering Acoustics Session 4pEAa: Sound Field Control in the Ear Canal 4pEAa13.

More information

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED

BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED International Conference on Systemics, Cybernetics and Informatics, February 12 15, 2004 BINAURAL DICHOTIC PRESENTATION FOR MODERATE BILATERAL SENSORINEURAL HEARING-IMPAIRED Alice N. Cheeran Biomedical

More information

Human Sensitivity to Interaural Phase Difference for Very Low Frequency Sound

Human Sensitivity to Interaural Phase Difference for Very Low Frequency Sound Acoustics 28 Geelong, Victoria, Australia 24 to 26 November 28 Acoustics and Sustainability: How should acoustics adapt to meet future demands? Human Sensitivity to Interaural Phase Difference for Very

More information

Binaural hearing and future hearing-aids technology

Binaural hearing and future hearing-aids technology Binaural hearing and future hearing-aids technology M. Bodden To cite this version: M. Bodden. Binaural hearing and future hearing-aids technology. Journal de Physique IV Colloque, 1994, 04 (C5), pp.c5-411-c5-414.

More information

Binaural Hearing for Robots Introduction to Robot Hearing

Binaural Hearing for Robots Introduction to Robot Hearing Binaural Hearing for Robots Introduction to Robot Hearing 1Radu Horaud Binaural Hearing for Robots 1. Introduction to Robot Hearing 2. Methodological Foundations 3. Sound-Source Localization 4. Machine

More information

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES

USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES USING AUDITORY SALIENCY TO UNDERSTAND COMPLEX AUDITORY SCENES Varinthira Duangudom and David V Anderson School of Electrical and Computer Engineering, Georgia Institute of Technology Atlanta, GA 30332

More information

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition

Sound Localization PSY 310 Greg Francis. Lecture 31. Audition Sound Localization PSY 310 Greg Francis Lecture 31 Physics and psychology. Audition We now have some idea of how sound properties are recorded by the auditory system So, we know what kind of information

More information

Hearing in the Environment

Hearing in the Environment 10 Hearing in the Environment Click Chapter to edit 10 Master Hearing title in the style Environment Sound Localization Complex Sounds Auditory Scene Analysis Continuity and Restoration Effects Auditory

More information

Lundbeck, Micha; Grimm, Giso; Hohmann, Volker ; Laugesen, Søren; Neher, Tobias

Lundbeck, Micha; Grimm, Giso; Hohmann, Volker ; Laugesen, Søren; Neher, Tobias Syddansk Universitet Sensitivity to angular and radial source movements in anechoic and echoic single- and multi-source scenarios for listeners with normal and impaired hearing Lundbeck, Micha; Grimm,

More information

Echo Canceller with Noise Reduction Provides Comfortable Hands-free Telecommunication in Noisy Environments

Echo Canceller with Noise Reduction Provides Comfortable Hands-free Telecommunication in Noisy Environments Canceller with Reduction Provides Comfortable Hands-free Telecommunication in Noisy Environments Sumitaka Sakauchi, Yoichi Haneda, Manabu Okamoto, Junko Sasaki, and Akitoshi Kataoka Abstract Audio-teleconferencing,

More information

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER

A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER ARCHIVES OF ACOUSTICS 29, 1, 25 34 (2004) INTELLIGIBILITY OF SPEECH PROCESSED BY A SPECTRAL CONTRAST ENHANCEMENT PROCEDURE AND A BINAURAL PROCEDURE A. SEK, E. SKRODZKA, E. OZIMEK and A. WICHER Institute

More information

Linguistic Phonetics. Basic Audition. Diagram of the inner ear removed due to copyright restrictions.

Linguistic Phonetics. Basic Audition. Diagram of the inner ear removed due to copyright restrictions. 24.963 Linguistic Phonetics Basic Audition Diagram of the inner ear removed due to copyright restrictions. 1 Reading: Keating 1985 24.963 also read Flemming 2001 Assignment 1 - basic acoustics. Due 9/22.

More information

Spatial unmasking in aided hearing-impaired listeners and the need for training

Spatial unmasking in aided hearing-impaired listeners and the need for training Spatial unmasking in aided hearing-impaired listeners and the need for training Tobias Neher, Thomas Behrens, Louise Kragelund, and Anne Specht Petersen Oticon A/S, Research Centre Eriksholm, Kongevejen

More information

Computational Perception /785. Auditory Scene Analysis

Computational Perception /785. Auditory Scene Analysis Computational Perception 15-485/785 Auditory Scene Analysis A framework for auditory scene analysis Auditory scene analysis involves low and high level cues Low level acoustic cues are often result in

More information

INTRODUCTION TO PURE (AUDIOMETER & TESTING ENVIRONMENT) TONE AUDIOMETERY. By Mrs. Wedad Alhudaib with many thanks to Mrs.

INTRODUCTION TO PURE (AUDIOMETER & TESTING ENVIRONMENT) TONE AUDIOMETERY. By Mrs. Wedad Alhudaib with many thanks to Mrs. INTRODUCTION TO PURE TONE AUDIOMETERY (AUDIOMETER & TESTING ENVIRONMENT) By Mrs. Wedad Alhudaib with many thanks to Mrs. Tahani Alothman Topics : This lecture will incorporate both theoretical information

More information

Frequency Tracking: LMS and RLS Applied to Speech Formant Estimation

Frequency Tracking: LMS and RLS Applied to Speech Formant Estimation Aldebaro Klautau - http://speech.ucsd.edu/aldebaro - 2/3/. Page. Frequency Tracking: LMS and RLS Applied to Speech Formant Estimation ) Introduction Several speech processing algorithms assume the signal

More information

Development of a new loudness model in consideration of audio-visual interaction

Development of a new loudness model in consideration of audio-visual interaction Development of a new loudness model in consideration of audio-visual interaction Kai AIZAWA ; Takashi KAMOGAWA ; Akihiko ARIMITSU 3 ; Takeshi TOI 4 Graduate school of Chuo University, Japan, 3, 4 Chuo

More information

Adapting to Remapped Auditory Localization Cues: A Decision-Theory Model

Adapting to Remapped Auditory Localization Cues: A Decision-Theory Model Shinn-Cunningham, BG (2000). Adapting to remapped auditory localization cues: A decisiontheory model, Perception and Psychophysics, 62(), 33-47. Adapting to Remapped Auditory Localization Cues: A Decision-Theory

More information

TOPICS IN AMPLIFICATION

TOPICS IN AMPLIFICATION August 2011 Directional modalities Directional Microphone Technology in Oasis 14.0 and Applications for Use Directional microphones are among the most important features found on hearing instruments today.

More information