IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION. Griffin D. Romigh, Brian D. Simpson, Nandini Iyer


IN EAR TO OUT THERE: A MAGNITUDE BASED PARAMETERIZATION SCHEME FOR SOUND SOURCE EXTERNALIZATION

Griffin D. Romigh, Brian D. Simpson, Nandini Iyer
711th Human Performance Wing, Air Force Research Laboratory
261 7th Street, WPAFB, OH 45344, USA
griffin.romigh@us.af.mil

ABSTRACT

While several potential auditory cues responsible for sound source externalization have been identified, less work has gone into providing a simple and robust way of manipulating perceived externalization. The current work describes a simple approach for parametrically modifying individualized head-related transfer function spectra that results in a systematic change in the perceived externalization of a sound source. Methods and results from a subjective evaluation validating the technique are presented, and further discussion relates the current method to previously identified cues for auditory distance perception.

1. INTRODUCTION

Most spatial auditory displays are built around the understanding that, by replicating the natural cues listeners use to determine a sound source's location, any single-channel sound source can be imbued with spatial attributes. These cues are the complex function of space, frequency, and individual listener known as a Head-Related Transfer Function (HRTF). HRTFs capture the acoustic transformation a sound undergoes as it travels from a specific location in space, interacts with a listener's head, shoulders, and outer ears, and arrives separately at the two eardrums [1]. A single-channel sound filtered with the right-ear and left-ear HRTFs corresponding to a specific location can then be presented over headphones with directional accuracy and fidelity comparable to a free-field source, provided the HRTF measurements were individualized to the listener [2, 3, 4].
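As an illustrative sketch (not the authors' implementation), the rendering step described above amounts to convolving a mono signal with the left-ear and right-ear head-related impulse responses (HRIRs) and delaying the contralateral ear by the ITD. The function name and the integer-sample ITD convention below are assumptions:

```python
def render_binaural(source, hrir_left, hrir_right, itd_samples):
    """Render a mono source to a binaural (left, right) pair.

    Illustrative only: real displays typically use FFT-based filtering
    and fractional-sample ITDs; here the ITD is an integer delay applied
    to whichever ear is contralateral to the source.
    """
    def conv(x, h):
        # Direct-form time-domain convolution.
        y = [0.0] * (len(x) + len(h) - 1)
        for i, xi in enumerate(x):
            for j, hj in enumerate(h):
                y[i + j] += xi * hj
        return y

    left = conv(source, hrir_left)
    right = conv(source, hrir_right)
    if itd_samples >= 0:
        # Positive ITD: source toward the left, right ear lags.
        right = [0.0] * itd_samples + right
        left = left + [0.0] * itd_samples
    else:
        left = [0.0] * (-itd_samples) + left
        right = right + [0.0] * (-itd_samples)
    return left, right
```

The two output channels can then be presented over headphones; equal-length padding keeps the channels sample-aligned.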
Unfortunately, due to logistical issues associated with acquiring individualized HRTF measurements, most spatial displays utilize non-individualized HRTFs or subsets of the cues contained therein (e.g., only gross binaural cues), resulting in less perceptually accurate spatial representations. A frequent bane for headphone-based displays is the problem of poor sound source externalization. Sometimes referred to as inside-the-head locatedness [5], poor externalization (or internalization) is the perceptual phenomenon whereby sound sources are perceived to originate from a location within a listener's head rather than from out in space where the display designer had intended. Blauert [5] was one of the first to thoroughly review the literature associated with externalization and suggested that the lack of externalization was merely an endpoint on the continuum of perceived auditory distance (i.e., a sound source appears so close to you that it is perceived inside your head). The theory that perceived externalization is part of auditory distance perception was backed up by the experimental evidence of Hartmann and Wittenberg [6]. They showed that perceived externalization could be manipulated systematically for a harmonic tone complex by zeroing out interaural phase differences (IPD) for tonal components above a given frequency; the lower this critical frequency, the less externalized the sound source was judged to be, up to a cutoff frequency near 1 kHz. Hartmann and Wittenberg [6] also showed that externalization was not affected by forcing a single frequency-independent interaural time difference (ITD) cue, and that both monaural magnitude spectra need to be preserved across the entire frequency range to ensure good externalization, not just the gross interaural level difference (ILD).

(This work is licensed under a Creative Commons Attribution Non-Commercial 4.0 International License. The full terms of the License are available at http://creativecommons.org/licenses/by-nc/4.0)
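Hartmann and Wittenberg's IPD manipulation can be sketched in the frequency domain as follows. This is a hypothetical reconstruction for illustration (their synthesis operated on harmonic tone complexes); the function name and the one-sided-spectrum bin convention are assumptions:

```python
import cmath

def zero_ipd_above(left_spec, right_spec, cutoff_bin):
    """Force zero interaural phase difference above cutoff_bin.

    Illustrative sketch: each right-ear bin above the cutoff is given
    the left-ear phase while keeping its own magnitude, so ILDs are
    preserved but IPDs vanish in that range. One-sided complex spectra
    with bin 0 = DC are assumed.
    """
    out = list(right_spec)
    for k in range(cutoff_bin + 1, len(right_spec)):
        out[k] = cmath.rect(abs(right_spec[k]), cmath.phase(left_spec[k]))
    return out
```

Lowering `cutoff_bin` removes IPDs over a wider frequency range, which in their data made the source progressively less externalized.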
Proper externalization (and/or distance perception) has also been linked to a number of other factors, including the use of dynamic head-motion cues [7], the ratio of direct to reverberant energy [8], and high-frequency roll-off [9]. Despite these previous efforts, it is not clear what role spectral features play in perceived externalization. At the extremes, there is a very clear relationship between the presence of monaural spectral features and externalization: an absence of spectral features, as when implementing only a frequency-independent ILD, causes poor externalization, while the full representation of the spectral features, as when implementing an individualized HRTF, produces good externalization. It is less clear, however, what is perceived with compressed spectra, where the narrowband spectral cues are present yet potentially diminished in magnitude. Based on that question, and the desire for a simple parameterization with which externalization could be effectively modulated, the current investigation aimed at determining whether externalization could be reliably controlled through simple modifications to the monaural magnitude spectrum contained in an HRTF.

DOI: 10.21785/icad16.35

2. PARAMETERIZATION

From previous literature it is clear that the two extremes of externalization can be attained using methods that differ only in the spectral representation of their spatial filters. On one hand, if spatial information is imparted on a sound source using only binaural cues (ILD and ITD), the sound source will appear as though it originates from somewhere along the interaural axis inside the listener's head [10]. On the other hand, well-localized and externalized sound
sources can be created virtually if the source is rendered with an individualized HRTF that preserves the ITD and both left-ear and right-ear monaural spectral cues [11]. These two methods of spatial presentation differ only in the way the spectrum of the sound source at each ear is modified. The parameterization detailed in this section provides a straightforward method to linearly interpolate between spectra that are known to result in good externalization and spectra that are known to result in strong internalization.

Starting from an individualized HRTF measurement, the following method systematically varies the prominence of spectral features of the left-ear log-power spectrum; an identical procedure is used to modify the right-ear spectrum. If we define H^L_{φ,θ}[k] to be the individualized left-ear log-power spectrum (i.e., decibel scale) corresponding to a location with azimuth -180° ≤ φ < 180° and elevation -90° ≤ θ < 90°, the location-specific, frequency-independent average monaural level A^L_{φ,θ} is defined as in Eq. 1:

    A^L_{φ,θ} = (1/K) Σ_k H^L_{φ,θ}[k]                          (1)

Here, k is used to represent one of K discrete frequency values in the valid positive frequency range for the HRTF. For the present work 2 ≤ k ≤ 174, making K = 173, which represents the positive frequencies from approximately 170 Hz to 15 kHz for a 512-length DFT at a sampling rate of 44.1 kHz. This average level is subtracted from the measured HRTF to give the left spectrum S^L_{φ,θ}[k] as in Eq. 2, which contains all of the spectral features:

    S^L_{φ,θ}[k] = H^L_{φ,θ}[k] - A^L_{φ,θ}                      (2)

These two components are then weighted and recombined to give the transformed spectrum Ĥ^L_{φ,θ}[k] as in Eq. 3:

    Ĥ^L_{φ,θ}[k] = α S^L_{φ,θ}[k] + A^L_{φ,θ}                    (3)

The parameter α in Eq. 3 varies the magnitude of the spectral features contained in the final transformed spectrum Ĥ^L_{φ,θ}[k], from full scale for α = 1 to nil for α = 0. The transformed spectra corresponding to several levels of the α parameter are shown in Fig. 1 for the left and right ears of a representative subject at two locations on the horizontal plane.

Figure 1: Transformed HRTF spectra for a single subject at two locations on the horizontal plane (panels: front; left 45°) as a function of the α parameter (α = 0, 0.2, 0.4, 0.6, 0.8, 1.0). Spectra were given a 30α dB gain for plotting purposes.

3. METHODS

3.1. Subjects

Eight paid listeners (5 males, 3 females) with normal audiometric thresholds participated in the subjective evaluation experiment. Listeners participated in 60 trials broken into self-paced 30-minute blocks over the course of two weeks. All listeners had previous experience with virtual spatial audio in the context of objective localization experiments conducted within the laboratory; however, all listeners were believed to be naive both to the subjective evaluation method presented below and to the formalized concept of externalization at the onset of the experiment. As such, the subjects were presented with the following verbal description of externalization at the onset of every experimental trial to familiarize them with the question at hand:

Externalization: To what extent does the virtual source sound outside your head? In this type of trial you will be asked to judge how each virtual source is positioned relative to yourself. When listening over headphones, some sounds may appear as though they originate from inside your head (completely internalized), while others may sound as though they clearly come from a physical location out in space (completely externalized); variations between these two extremes are also possible, where a sound might appear to come from on your face, head, or neck, or just outside your body.

3.2. Task Description

A single experimental trial consisted of an implementation of the Multi-Stimulus Test with Hidden Reference and Anchor (MUSHRA) [12].
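The spectral-flattening parameterization of Sec. 2 (Eqs. 1-3) reduces to a few lines of code. The sketch below is illustrative only and operates on one log-power spectrum (either ear, any location); the function name is an assumption:

```python
def flatten_spectrum(H_db, alpha):
    """Apply Eqs. 1-3 to a log-power (dB) HRTF spectrum H_db.

    Eq. 1: A = frequency-independent average level.
    Eq. 2: S = H - A, the spectral features.
    Eq. 3: transformed spectrum = alpha * S + A.
    """
    K = len(H_db)
    A = sum(H_db) / K                  # Eq. 1
    S = [h - A for h in H_db]          # Eq. 2
    return [alpha * s + A for s in S]  # Eq. 3
```

Setting alpha = 1 returns the measured spectrum unchanged, while alpha = 0 returns a flat spectrum at the average level, which (with the ITD reapplied) leaves only the gross binaural cues.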
In this task, listeners were presented with the GUI depicted in Fig. 2. By pressing the selection buttons on the GUI (labeled "REF", "A", "B", ..., "F"), listeners were able to selectively listen to each stimulus, one at a time, as many times as they desired and in any order throughout a trial. Listeners were asked to compare the various stimuli both to each other and to a reference stimulus and to provide an externalization rating for each stimulus according to the scale in Table 1, which was always visible to the subjects at the left of the GUI. They were also provided written instructions on the use of the GUI and informed that the reference stimulus should correspond to a rating of 100 on the provided scale. At any time during a trial they could reexamine the verbal description of externalization, adjust the overall stimulus level, and leave comments using the GUI.

3.3. Stimuli

On a given trial seven different stimuli were employed: the reference stimulus and six test stimuli. The reference stimulus always consisted of a virtual sound source rendered with a full HRTF,
while the test stimuli consisted of virtual sound sources rendered with HRTFs that had been transformed as described in Sec. 2 for α values of 0, 0.2, 0.4, 0.6, 0.8, and 1.0. The α = 1 stimulus was identical to the reference stimulus, thereby acting as the hidden reference, and the α = 0 stimulus acted as the hidden anchor. All individualized HRTFs had been previously measured on each listener with the methods described in [3], and consisted of 256 minimum-phase DFT coefficients (sampled at 44.1 kHz) for each ear and a corresponding ITD value at two spatial locations (in front of and 45° to the left of the subject). HRTFs were converted to the log-power (decibel) domain, transformed, and ultimately converted back to 256-tap minimum-phase filters.

Test stimuli, each 5 s in duration, were generated by convolving one of three single-channel base signals (broadband noise, music, spoken English) with the resulting right and left filters and delaying the contralateral ear by the ITD value. The music and speech samples were taken from the Sound Quality Assessment Material recordings for subjective tests (Track 7 and Track 5, respectively) [13]. All stimuli were normalized post-filtering to have the same average initial level of 60 dB SPL. Each subject participated in ten trials for each location and stimulus type, for a total of 60 trials.

Figure 2: Graphical user interface for the MUSHRA task used during the subjective evaluations.

Rating    Description
80-100    Outside arm's reach
60-80     Within arm's reach
40-60     Just outside my body
20-40     On the surface of my face, head, or neck
0-20      Inside my head

Table 1: Verbal descriptions of different levels of externalization and the corresponding rating values used for the subjective evaluation.

4. RESULTS

Across all three base stimuli and both azimuths, two subjects showed a negative correlation with the average trend (r = -.78, r = -.81) computed with their data removed. While it is conceivable that their reference HRTFs produced poorly externalized stimuli through some type of measurement artifact (thus resulting in a low externalization rating for α = 1), the near-perfect reversed ratings they exhibited, including a high externalization rating for the anchor, which contained only gross ITD and ILD cues, lead the authors to believe that these subjects misunderstood the task or the rating scale. These two outliers were therefore removed from the remaining data and analysis.

Figure 3: Average externalization ratings for the filtered noise base stimuli at the left 45° position by subject. Error bars represent 95% confidence intervals for the pooled data.

Figure 3 shows the average ratings for the remaining subjects at the left 45° azimuth location, averaged across the three base stimuli, as a function of the α level. It is clear from Fig. 3 that subjects showed a systematic linear relationship between the α parameter and their externalization ratings. Repeated-measures ANOVA results indicate a significant main effect of the α parameter on the externalization rating (F(5,5) = 9.73, p < .001). In general, subjects indicated that the α = 0 condition (containing only ITD and ILD cues) was "Inside (the) head" or "On the surface". Less consensus is seen in the slopes of the functions, however, and likewise in the ratings given to the full-HRTF condition (α = 1), where ratings varied from "Outside arm's reach" to "Just outside (the) body". In contrast, ANOVA results did not indicate a statistically significant main effect of base stimulus type (F(2,2) = 2.35, p = .15). Results comparing the ratings for the three base stimulus types, averaged across subjects and location, are shown in Fig. 4.
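The post-filtering level normalization can be sketched as a digital RMS adjustment. This is an assumption-laden illustration: mapping a digital RMS value to an absolute SPL requires a measured headphone calibration constant, which is omitted here, so `ref_rms` simply defines what counts as 0 dB:

```python
import math

def normalize_to_db(signal, target_db, ref_rms=1.0):
    """Scale a signal so its RMS level equals target_db (dB re ref_rms).

    Illustrative sketch only: absolute SPL calibration is not modeled;
    ref_rms is an arbitrary digital reference standing in for 0 dB.
    """
    rms = math.sqrt(sum(x * x for x in signal) / len(signal))
    gain = (10 ** (target_db / 20.0)) * ref_rms / rms
    return [gain * x for x in signal]
```

Applying the same target level to every filtered stimulus removes overall loudness differences introduced by the HRTF filtering, so that level cannot serve as a cue in the ratings.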
While not statistically significant, the average data do show a slight trend for externalization ratings to be highest in the speech condition, compared to the music and bandpass-filtered noise conditions. Figure 5 shows the ratings for the two locations as a function of the α parameter, averaged across subjects and stimulus type. Clearly evident in the figure is an interaction between the stimulus location and the α level; the ratings start lower but increase more rapidly for the left 45° location compared to the front. This observation is backed up by ANOVA results, which show a significant interaction between α and location (F(5,5) = 4.891, p = .03), but
no significant main effect of location itself (F(1,1) = 3.943, p = .121).

Figure 4: Average externalization ratings pooled over subject and location. Marker color represents base stimulus type. Error bars represent 95% confidence intervals for the pooled data.

Figure 5: Average externalization ratings pooled over subject and base stimulus type. Marker color represents location. Error bars represent 95% confidence intervals for the pooled data.

5. DISCUSSION

The current results agree with the existing literature for the two extreme α conditions. The internalized ratings given to the α = 0 condition agree with previous studies utilizing only gross binaural cues, and the full-HRTF condition (α = 1) showed the expected externalized ratings. More interestingly, the intermediate results suggest that manipulating the strength of narrowband spectral features clearly affects perceived externalization in a systematic fashion. This implies that the parameterization technique introduced here might be adequate for an externalization- or distance-based display technology.

Somewhat surprisingly, no effect was found for the different types of base stimuli, despite the fact that they differed significantly in terms of their long-term average spectral profiles. Based on those differences, the results of Little et al. [9] would suggest that the high-frequency roll-off seen for the speech, and to a lesser extent the music, should result in higher externalization ratings compared to the flat-spectrum noise. This discrepancy could be explained by the blocked nature of the experiment, since different base stimuli were never compared directly. It is also likely that the high-frequency roll-off cue is only used when the different stimuli are assumed to be from the same original sound source, a situation clearly not applicable across base stimuli.
The interaction between the α parameter and the stimulus location may be additional evidence of a relationship between externalization and distance perception. Examining the spectral profiles illustrated in Fig. 1, we can clearly see known low-frequency ILD distance cues present for the lateral location that are not available for the front location. The left 45° profiles clearly show an increase in low-frequency ILD as α is decreased, similar to the near-field HRTF cues observed by Brungart et al. [14], and an increase in spectral roll-off as α is increased, similar to the propagation-related distance cue described by Little [9]. The front location contains only the roll-off cue, due to its lack of ILD.

In addition to the current positive results, to be valuable as a display technology the parameterization should also preserve both the perceived sound quality and localization accuracy. Further research will have to be completed in order to investigate these factors.

6. SUMMARY

The current work describes a simple parameterized method for controlling the perceived externalization of a sound source based on flattening of the monaural HRTF spectra. Examination shows that this method can be related to previously observed cues for auditory distance perception, and a subjective evaluation demonstrates that the technique is capable of producing the desired perceptual results.

7. REFERENCES

[1] S. Mehrgardt and V. Mellert, "Transformation of the external human ear," J. Acoust. Soc. Am., vol. 61, pp. 1567-1576, 1977.
[2] A. W. Bronkhorst, "Localization of real and virtual sound sources," J. Acoust. Soc. Am., vol. 98, pp. 2542-2553, 1995.
[3] D. S. Brungart, G. D. Romigh, and B. D. Simpson, "Rapid collection of HRTFs and comparison to free-field listening," in International Workshop on the Principles and Applications of Spatial Hearing, 2009.
[4] R. Martin, K. McAnally, and M. Senova, "Free-field equivalent localization of virtual audio," J. Audio Eng. Soc., vol. 49, pp. 14-22, 2001.
[5] J. Blauert, Spatial Hearing. The MIT Press, 1997.
[6] W. M. Hartmann and A. Wittenberg, "On the externalization of sound images," J. Acoust. Soc. Am., vol. 99, pp. 3678-3688, 1996.
[7] W. O. Brimijoin, A. W. Boyd, and M. A. Akeroyd, "The contribution of head movement to the externalization and internalization of sounds," PLoS ONE, vol. 8, 2013.

[8] P. Zahorik, D. S. Brungart, and A. W. Bronkhorst, "Auditory distance perception in humans: A summary of past and present research," Acta Acustica united with Acustica, vol. 91, pp. 409-420, 2005.
[9] A. D. Little, D. H. Mershon, and P. H. Cox, "Spectral content as a cue to perceived auditory distance," Perception, vol. 21, pp. 405-416, 1992.
[10] A. W. Mills, "Lateralization of high-frequency tones," J. Acoust. Soc. Am., vol. 32, pp. 132-134, 1960.
[11] D. J. Kistler and F. L. Wightman, "A model of head-related transfer functions based on principal components analysis and minimum-phase reconstruction," J. Acoust. Soc. Am., vol. 91, pp. 1637-1647, 1992.
[12] G. A. Soulodre and M. C. Lavoie, "Subjective evaluation of large and small impairments in audio codecs," in AES International Conference, Florence, 1999.
[13] EBU TECH 3253: Sound Quality Assessment Material recordings for subjective tests.
[14] D. S. Brungart and W. M. Rabinowitz, "Auditory localization of nearby sources: Head-related transfer functions," J. Acoust. Soc. Am., vol. 106, pp. 1465-1479, 1999.