COGNITIVE NEUROSCIENCE AND NEUROPSYCHOLOGY

Organization of sequential sounds in auditory memory

Elyse S. Sussman (CA) and Valentina Gumenyuk

Department of Neuroscience, Albert Einstein College of Medicine, 1410 Pelham Parkway S, Bronx, NY 10461, USA

(CA) Corresponding author: esussman@aecom.yu.edu

Received June 2005; accepted 9 June 2005

A repeating five-tone pattern was presented at several stimulus rates (200, 400, 600, and 800 ms onset-to-onset) to determine at what temporal proximity the five-tone repeating unit would be represented in memory. The mismatch negativity (MMN) component of event-related brain potentials was used to index how the sounds were organized in memory when participants had no task with the sounds. Only at the 200-ms onset-to-onset pace was the five-tone sequence unitized in memory. At presentation rates of 400 ms and above, the regularity (a different-frequency tone occurring as every fifth tone) was not detected, and MMN was elicited by these tones in the sequence. The results show that temporal proximity plays a role in unitizing successive sounds in auditory memory. These results also suggest that global relationships between successive sounds are represented at the level of the auditory cortices. NeuroReport 2005; 16:1519-1523. © 2005 Lippincott Williams & Wilkins.

Key words: Auditory; Event-related potentials; Grouping; Interstimulus interval; Mismatch negativity; Sensory memory

INTRODUCTION

How do we organize sound elements into perceptual wholes? In the auditory system, the relationship between individual sounds or sound elements is crucial for perceiving sound events. We understand auditory objects as they unfold in time. For example, recognition of a series of footsteps, of a melodic theme in music, or of a series of words in a sentence depends on the order of events and their relationship to each other over time.

A fundamental question in cognitive neuroscience is how stimulus elements are combined to form perceptual units. The Gestalt theorists, describing processes of the visual system, recognized that the whole is greater than the sum of its parts [1]. This characteristic statement suggests that simple evaluation of stimulus elements is not enough to form perceptual wholes. According to Gestalt theory, a number of basic principles (or 'laws') describe how we perceive stimulus features as whole objects in the environment. For example, the law of proximity states that elements occurring close together are likely to be seen as belonging together. Applying this law to the auditory domain, the current study investigated whether the stimulus-driven factor of temporal proximity (tones presented successively in a single stream) could itself alter the grouping and storage of sounds in memory.

The current study is based on previous work by Sussman and colleagues [2], who showed that the underlying neural representation of successive sounds was altered by a temporal change from slow to fast presentation of a recurring five-tone pattern (AAAABAAAAB..., where A represents a tone of one frequency and B represents a tone of a different frequency). The five-tone repeating pattern was presented separately at two different paces: in one condition tones were presented at a slow pace (1.3 s onset-to-onset) and in the other at a fast pace (100 ms onset-to-onset). The organization of the sounds changed from a single repeating tone at the slow pace to a five-tone repeating pattern at the fast pace. The switch in organization was indexed by whether the mismatch negativity (MMN) component of event-related brain potentials (ERPs) was elicited by the infrequently occurring B tone. The MMN reflects detection of deviance from an ongoing representation of a given regularity (or 'standard') in a sequence [3]. Thus, the B tone would be detected as differing in frequency from the A tone, and would elicit MMN, only when the standard regularity was represented as a single repeating tone. In Sussman et al. [2], MMN was elicited when the tones were presented at the slow pace. This could mean that the individual tones were too far apart to be automatically grouped together, regardless of the regularity in the sequence (the B tone occurred as every fifth tone). In contrast, at the fast pace, no MMN was elicited by the B tones. This was interpreted as a change in the organization of the sounds on the basis of temporal proximity: simply speeding up the presentation rate of the stimulus sequence altered the regularity extracted from it. At the fast pace, the brain automatically detected the five-tone sequential pattern as a single repeating unit. The B tone was part of the repeating five-tone pattern; it was not deviant (consequently, no MMN). Thus, we showed that temporal proximity was a factor in the automatic grouping of sequential sounds.

The purpose of the current study was to determine at what temporal proximity the neural representation of the sequential tone pattern would automatically switch from a single repeating tone to a five-tone repeating unit. In other words, at what pace would the five-tone unit be represented in memory? We used the paradigm of Sussman et al. [2]: the five-tone repeating sequence (AAAABAAAAB...), presented at four different paces [200, 400, 600, and 800 ms stimulus onset asynchrony (SOA), onset-to-onset]. MMN should be elicited by the B tone when the single repeating tone is the unit represented in memory, whereas no MMN should be elicited by the B tone when the five-tone pattern is represented in memory as the unit.

MATERIALS AND METHODS

Nine healthy adults (six women) between 20 and 40 years of age (mean age 30 years), with no history of neurological problems, were paid to participate in the study. All participants passed a hearing screening and gave informed consent after the experimental protocol was explained to them.

Two stimuli, 50-ms-duration pure tones (7.5 ms rise/fall time, 85 dB sound pressure level), were presented binaurally through insert earphones. One tone was 880 Hz (the A tone) and the other was 988 Hz (the B tone). The two tones were presented in a five-tone repeating pattern (AAAABAAAABAAAAB...) continuously throughout each sequence; thus, the B tone occurred as every fifth tone. This sequence was presented at four different stimulus rates, separately in four conditions denoted by their onset-to-onset pace: 200, 400, 600, and 800 ms (grouped conditions). A control for the 200-ms condition was also used (200-ms random condition), in which the A and B tones were presented in random order with the same probabilities as in the regular sequences (i.e. 80% for the A tone and 20% for the B tone). The purpose of the control condition was twofold: to show that MMN could be elicited when no pattern was present to be detected, and to show that the fast pace itself does not preempt MMN elicitation. In all, 1200 tones were presented in each condition, yielding 240 B tones and 960 A tones per condition. The order of the runs was counterbalanced across participants. Participants had no task with the sounds; they were instructed to ignore the auditory stimuli and watch a self-selected captioned video.

Electroencephalogram recording and analyses: The electroencephalogram (EEG) was recorded with a 32-channel electrode cap (10-20 International System) plus electrodes placed over the left and right mastoids. The F7 and F8 electrode sites were used to monitor the horizontal electrooculogram, and Fp1 and an electrode placed below the left eye were used to monitor the vertical electrooculogram. The tip of the nose served as the reference during recording. The EEG and electrooculogram were digitized (Neuroscan Synamps amplifier; Neuroscan, El Paso, Texas, USA) at 500 Hz (0.05-100 Hz bandpass) and digitally filtered offline between 1 and 15 Hz. Artifact rejection was set to exclude activity exceeding ±75 µV. ERPs were averaged separately for the standards (A tones) and the deviants (B tones) in each condition, and difference waveforms were obtained by subtracting the ERPs elicited by the standards from those elicited by the deviants. The peak latency of the MMN was identified at Fz in the grand-mean difference waveforms: 116 ms for the 200-ms random and 200-ms grouped SOA conditions, 114 ms for the 400-ms SOA condition, 154 ms for the 600-ms SOA condition, and 148 ms for the 800-ms SOA condition. A 40-ms time window, centered on the grand-mean peak, was used to obtain the mean voltage for each participant. A three-way repeated-measures analysis of variance (ANOVA), with factors of condition × stimulus type [standard (A) vs. deviant (B)] × electrode site (Fz, F3, F4, FC1, FC2), was used to determine the presence of MMN in the grouped sequence conditions. The Tukey HSD (honestly significant difference) test was used for post-hoc analyses.
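To make the sequence construction concrete, here is a minimal sketch (ours, not the authors' stimulus software; Python/NumPy and all names are illustrative assumptions) of how the grouped pattern and the random control with the probabilities given above could be generated.

```python
import numpy as np

A_HZ, B_HZ = 880.0, 988.0        # A- and B-tone frequencies (from Methods)
SOAS_MS = (200, 400, 600, 800)   # onset-to-onset paces of the grouped conditions
N_TONES = 1200                   # tones per condition (960 A, 240 B)

def grouped_sequence(n_tones=N_TONES):
    """Five-tone repeating pattern AAAAB: the B tone is every fifth tone."""
    pattern = (A_HZ, A_HZ, A_HZ, A_HZ, B_HZ)
    return [pattern[i % 5] for i in range(n_tones)]

def random_sequence(n_tones=N_TONES, p_b=0.20, seed=0):
    """200-ms random control: same 80%/20% A/B probabilities, random order."""
    rng = np.random.default_rng(seed)
    return [B_HZ if rng.random() < p_b else A_HZ for _ in range(n_tones)]

# Tone-onset times (in seconds) for each grouped condition.
onsets = {soa: np.arange(N_TONES) * soa / 1000.0 for soa in SOAS_MS}
```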
Additionally, we analyzed the topography of the recorded ERP components using a reference-free measure, the scalp current density (SCD). Maps showing the scalp voltage topography and the SCD were computed on the mean-amplitude difference waveforms for each condition at the peak latency of the MMN. The SCD analysis, which is an estimate of the second spatial derivative of the voltage potential (the Laplacian), was performed using BESA FOCUS software. SCD analysis sharpens the differences in the scalp fields, providing better information about the cortical generators and about the occurrence of components in each hemisphere.

RESULTS

Figure 1 displays the ERPs and the corresponding difference waveforms. The N1 component (the obligatory response to sound onset) was elicited by standard and deviant stimuli in each condition (middle column). The amplitude of the N1 was affected by stimulus rate, increasing with longer onset-to-onset times. A three-way ANOVA revealed an interaction between stimulus type and condition (F[3,24]=3.4, p<0.04) and no other interactions. Post-hoc analyses showed that the ERPs corresponding to the standards differed significantly from the ERPs corresponding to the deviants in the latency range of the MMN at the designated frontocentral electrode sites in all grouped conditions except the 200-ms grouped condition. This result showed that the MMN component was elicited by the B tones when the temporal proximity of the tones was 400, 600, or 800 ms, with no MMN elicited by the B tone at the fastest SOA (200 ms) of the grouped sequences. It should be noted that MMN was elicited by the B tones at the fastest pace when the B tones in the sound sequence were presented randomly (F[1,8]=16.55, p<0.01). This result shows that refractoriness of the MMN generator cannot explain the absence of MMN in the grouped 200-ms condition, and it supports the conclusion that the five-tone pattern was detected only at the fastest presentation rate.

Figure 1 also displays the scalp voltage and SCD maps of the MMN at its respective peak latency or expected peak latency (for the 200-ms grouped condition, the peak latency was chosen according to the 200-ms random condition). The voltage maps are characterized by negative fields that are maximal at frontocentral sites in the conditions in which MMN was elicited (200-ms random and 400-800-ms grouped conditions), with positive field potentials at sites below the Sylvian fissure. The SCD analysis shows the frontocentral negative field as having bilateral current sinks at the F3-FC1 and F4-FC2 electrodes, with the right focus slightly more frontal than the left. This is the typical distribution for MMN, with bilateral generators in the auditory cortices [4]. This scalp distribution was not observed for the 200-ms grouped condition, in which no significant difference was found between the ERP responses elicited by the A and B tones.
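For reference, the quantity underlying the SCD maps just described is, in standard notation (a textbook definition, not quoted from the paper, which computed SCD with BESA FOCUS), proportional to the negative surface Laplacian of the scalp potential V over the scalp coordinates:

$$\mathrm{SCD}(x, y) \;\propto\; -\nabla^{2} V(x, y) \;=\; -\left( \frac{\partial^{2} V}{\partial x^{2}} + \frac{\partial^{2} V}{\partial y^{2}} \right)$$

Because the Laplacian suppresses the spatially broad, reference-dependent part of the field, local extrema of V show up as focal current sources and sinks; this is why the bilateral frontocentral sinks described above can index MMN generators in each hemisphere separately.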

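The B-A difference-wave measurement displayed in Fig. 1 below, and quantified as described in the Methods, can be sketched in a few lines. This is a minimal illustration with synthetic single-channel data, not the authors' pipeline; the epoch timing and variable names are assumptions.

```python
import numpy as np

FS = 500               # sampling rate in Hz (from Methods)
EPOCH_START_MS = -100  # assumed epoch onset relative to tone onset

def mean_amplitude(wave, peak_ms, win_ms=40, t0_ms=EPOCH_START_MS, fs=FS):
    """Mean voltage over a win_ms window centered on peak_ms (the 40-ms
    window around the grand-mean MMN peak used in the paper)."""
    lo = int(round((peak_ms - win_ms / 2 - t0_ms) * fs / 1000))
    hi = int(round((peak_ms + win_ms / 2 - t0_ms) * fs / 1000))
    return wave[lo:hi].mean()

# Placeholder Fz averages; real data would be the artifact-free epoch
# averages for the A (standard) and B (deviant) tones in one condition.
rng = np.random.default_rng(1)
erp_standard = rng.normal(0.0, 1.0, 300)  # 600-ms epoch at 500 Hz
erp_deviant = rng.normal(0.0, 1.0, 300)

diff_wave = erp_deviant - erp_standard             # B - A difference wave
mmn_amp = mean_amplitude(diff_wave, peak_ms=116)   # 200-ms conditions' peak
```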
[Figure 1 appears here: for each condition (randomized and grouped at 200, 400, 600, and 800 ms), ERP traces at Fz and top-view scalp voltage and SCD maps.]

Fig. 1. Grand-mean event-related brain potentials elicited by the A tones (dashed line) and the B tones (thin solid line), and the corresponding B-A difference waveforms (thick solid line), are displayed for the Fz electrode in the middle column for all conditions, randomized and grouped (denoted in the first column). The right column displays the top view of the scalp voltage distribution and the scalp current density (SCD) maps for the MMN (measured from the B-A difference waveforms at their corresponding peak latencies). The blackened circle denotes the location of the Fz electrode. Note that no MMN was elicited by the B tone in the 200-ms grouped condition.

DISCUSSION

Temporal proximity of sequential sounds influences how they are grouped and stored in auditory memory. At a fairly rapid pace (200 ms, grouped condition), no MMN was elicited by the B tones, demonstrating that they were automatically grouped into a five-tone unit (AAAAB).

At stimulus rates of 400 ms and above, MMNs were elicited by the B tones in the grouped conditions (400-800 ms SOA), demonstrating that the sounds were represented individually, as a single repeating tone, from which the B tone was detected as having a different frequency. The regularity of the B tone was not detected when the participants had no task with the sounds.

The results of the study show that automatic grouping of successive tones can be influenced by the temporal proximity of the sounds, but only when the elements come in fairly close succession: a fairly fast pace appears to be needed for sequential sounds to be unitized automatically. Interestingly, the switch in representation of the standard as a five-tone unit or a single repeating tone does not appear to be bound to the limits of sensory memory. An alternative interpretation of the results of Sussman et al. [2] was that the five-tone pattern was not unitized at the slow presentation rate because not enough repetitions of the standard unit could occur within the memory span (estimated to be about 10 s [5]) before a deviant occurred. That is, the difference in MMN elicitation would be based not on grouping but on a limitation of the memory. In the current study, however, this interpretation does not hold: at the 400-ms pace, repetition of the five-tone unit still falls well within the limits of sensory memory. Although it is not known how many repetitions of a standard are needed when the standard consists of multiple tones, if one assumes that at least three repetitions of the standard unit are needed before MMN can be elicited by a deviant [6], then the approximately 5 s that elapse during the presentation of three standards and a deviant in the 400-ms condition would still be well within the limits of auditory sensory memory for eliciting MMN. Thus, the grouping of the sounds appears not to be made simply by detection of whichever unit falls within the memory span; the factor of proximity influences what is automatically detected as belonging together.

The results of the current study are consistent with other studies showing that temporal proximity has an effect on how sounds are stored in memory [2,7-14]. Atienza et al. ([7], Experiment 1) presented six-tone trains that alternated between two frequencies (500 and 1000 Hz) at an interstimulus interval of 100 ms (half the trains had the pattern ABABAB and the other half BABABA), with varying intertrain intervals (150, 180, 240, and 360 ms). When a B-tone train followed an A-tone train, the B tone elicited an MMN at all intertrain intervals except 360 ms. An interpretation of these results is that MMN was elicited at the shorter intertrain intervals because the tones of successive trains were grouped together with the other tones; they were not separated into discrete train units. At the longest intertrain interval, by contrast, the trains may have been processed as separate trains of tones separated by a silent period. This interpretation assumes that the sounds did not segregate into two streams on the basis of frequency proximity and that the alternation of the two tones was detected as such. The notion would be that the last tone of one train joined together with the first tone of the next, proximal train at the short intervals, but not when the longer 360-ms interval separated the trains.
In the current study, at the shortest interval the tones were grouped into five-tone units (without any additional silence to demarcate the units within the sequence); however, they were not automatically grouped when 400 ms separated the individual tones. The timing governing the grouping of successive tones in Atienza et al. [7] is thus consistent with the timing found in the current study.

Van Zuijen et al. [12] tested the grouping of sequential sounds by the Gestalt principles of pitch similarity and good continuation of pitch, to determine whether four-tone patterns would be detected similarly by musicians and nonmusicians. Deviants elongated the tone group but did not violate the Gestalt principle. Musicians detected deviants in both conditions (MMNs were elicited by the deviants); the nonmusicians, however, detected deviants, eliciting MMN, only in the pitch-similarity condition. Thus, whereas musical expertise can influence the manner in which sounds are grouped in memory, certain grouping processes operate independently of musical expertise.

The current results extend the findings of these previous studies. We show automatic grouping of sequential sounds in memory by demonstrating that the timing between sounds, their temporal proximity, can govern how sounds are unitized in memory for longer sequences (five-tone sound patterns). Moreover, the current data show that a repeating regularity in a sound sequence does not, in and of itself, guarantee that the brain will pick up the regularity automatically. At the longer interstimulus intervals, attention to a task that involves noticing the regularity is needed for the regularity to be stored as a unit in memory [15]. The rapid rate required for successive sounds to be heard as belonging together may reflect the lower ambiguity of sound organization at faster paces. When there is ambiguity (e.g. due to temporal or frequency proximity), attention may be required to resolve the organization one way or the other. At any given moment, the auditory system must take into account the characteristics of the total mixture of sounds (e.g. frequency, timbre, spatial location, timing) to determine the likely organization of the sound input.

The results of the current study show that stimulus-driven factors act on the grouping of sequential sounds in a single stream. Temporal proximity, however, is only one factor influencing the storage of sounds. Deutsch [16], for example, found that frequency proximity dominated over spatial location in sound grouping. Deutsch alternated tones of two different frequencies (400 and 800 Hz) dichotically between the left and right ears simultaneously. Listeners perceived the input as a sequence of 400-Hz tones heard in one ear and a sequence of 800-Hz tones heard in the other ear. Thus, the sounds grouped into an illusory percept of two streams organized on the basis of frequency proximity, despite the simultaneous input of the two sounds to both ears. Attention can also be used to switch the representation from a single-tone unit to a unit consisting of five tones [15]. Certainly, at a pace of 400 ms, the five-tone pattern could be detected with attention to the sounds. Sussman et al. [15] showed that maintaining the five-tone pattern with attention influenced the regularity held in memory that underlies MMN elicitation. Taken together, the results emphasize that the representation of the repeating regularity determines what is detected as deviant and thus whether MMN is elicited.

CONCLUSION

We tested the role of temporal proximity in the automatic grouping of sequential sounds to determine the timing between sounds that is needed to unitize them in auditory memory. A fairly rapid pace (200 ms onset-to-onset) was needed for the sounds to be represented in memory as a single repeating five-tone unit. At a 400-ms pace, the unit in memory used in the deviance-detection process was a single repeating tone. The results demonstrate that temporal proximity plays a role in unitizing successive sounds in auditory memory. In the visual system, the perceptual organization of whole objects is thought to depend on various Gestalt grouping processes that operate even in the absence of attention [17,18]. The results of the current study, along with those of other studies showing automatic grouping of sequential sounds in sensory memory [2,7-14], are consistent with this theoretical position.

REFERENCES

1. Koffka K. Principles of Gestalt Psychology. New York: Harcourt, Brace & Co.; 1935.
2. Sussman E, Ritter W, Vaughan HG Jr. Stimulus predictability and the mismatch negativity system. Neuroreport 1998; 9:4167-4170.
3. Näätänen R, Tervaniemi M, Sussman E, Paavilainen P, Winkler I. Pre-attentive cognitive processing ('primitive intelligence') in the auditory cortex as revealed by the mismatch negativity (MMN). Trends Neurosci 2001; 24:283-288.
4. Alho K. Cerebral generators of mismatch negativity (MMN) and its magnetic counterpart (MMNm) elicited by sound changes. Ear Hear 1995; 16:38-51.
5. Sams M, Hari R, Rif J, Knuutila J. The human auditory sensory memory trace persists about 10 s: neuromagnetic evidence. J Cogn Neurosci 1993; 5:363-370.
6. Cowan N, Winkler I, Teder W, Näätänen R. Short- and long-term prerequisites of the mismatch negativity in the auditory event-related potential (ERP). J Exp Psychol Learn Mem Cogn 1993; 19:909-921.
7. Atienza M, Cantero JL, Grau C, Gomez C, Dominguez-Marin E, Escera C. Effects of temporal encoding on auditory object formation: a mismatch negativity study. Cogn Brain Res 2003; 16:359-371.
8. Deike S, Gaschler-Markefski B, Brechmann A, Scheich H. Auditory stream segregation relying on timbre involves left auditory cortex. Neuroreport 2004; 15:1511-1514.
9. Kanoh S, Futami R, Hoshimiya N. Sequential grouping of tone sequence as reflected by the mismatch negativity. Biol Cybern 2004; 91:388-395.
10. Sussman E, Ritter W, Vaughan HG Jr. An investigation of the auditory streaming effect using event-related brain potentials. Psychophysiology 1999; 36:22-34.
11. Takegata R, Roggia SM, Winkler I. Effects of temporal grouping on the memory representation of inter-tone relationships. Biol Psychol 2005; 68:41-60.
12. Van Zuijen T, Sussman E, Winkler I, Näätänen R, Tervaniemi M. Grouping of sequential sounds: an event-related potential study comparing musicians and non-musicians. J Cogn Neurosci 2004; 16:331-338.
13. Yabe H, Asai R, Hiruma T, Sutoh T, Koyama S, Kakigi R et al. Sound perception affected by nonlinear variation of accuracy in memory trace. Neuroreport 2004; 15:813-817.
14. Yabe H, Matsuoka T, Sato Y, Hiruma T, Sutoh T, Koyama S et al. Time may be compressed in sound representation as replicated in sensory memory. Neuroreport 2005; 16:95-98.
15. Sussman E, Winkler I, Huotilainen M, Ritter W, Näätänen R. Top-down effects on the initially stimulus-driven auditory organization. Cogn Brain Res 2002; 13:393-405.
16. Deutsch D. An auditory illusion. Nature 1974; 251:307-309.
17. Humphreys GW. Neural representation of objects in space: a dual coding account.
Philos Trans R Soc Lond B 1998; 353:1341-1351.
18. Mattingley JB, Davis G, Driver J. Pre-attentive filling-in of visual surfaces in parietal extinction. Science 1997; 275:671-674.

Acknowledgements: This research was supported by the National Institutes of Health (R01 DC004263).