Learning to find a shape

M. Sigman and C. D. Gilbert
The Rockefeller University, 1230 York Avenue, New York, New York 10021-6399, USA
Correspondence should be addressed to C.D.G. (gilbert@rockvax.rockefeller.edu)

We studied the transition of stimuli from novel to familiar in visual search and in the guidance of attention to a particular object. Ability to identify an object improved dramatically over several days of training. The learning was specific for the object's position in the visual field, orientation and configuration. Improvement was initially localized to one or two positions near the fixation spot and then expanded radially to include the full area of the stimulus array. Characteristics of this learning process may reflect a shift in the cortical representation of complex features toward earlier stages in the visual pathway.

In a visual search task, a target must be detected within a field of distractors. The target can be defined by various attributes, such as color, orientation or form. Depending on the combination of target and distractors, search efficiency may be influenced by the number of distractors 1–4. According to feature integration theory, the difficulty of visual search is determined by a target's uniqueness in the map of some elementary feature 1. Visual input is processed in two stages. The first stage uses a set of retinotopically organized maps, each coding for an elementary attribute such as color or orientation. This stage operates in parallel across visual space, but it produces no information about conjunctions of elementary features. Detection of conjunctions represents a second stage that operates serially to produce the percept of a whole object. This theory considers a shape to be a conjunction of elementary features or strokes 3. Identifying a shape would therefore require serial search, and the ability to identify it should diminish with the number of distractors. However, this degradation of performance can be strongly counteracted by familiarity, even when the low-level features of targets and distractors are held constant. For instance, recognizing the digit 2 among an array of the digit 5 becomes much harder when rotating the entire image by 90° renders the characters less familiar 5. Under some circumstances, performance in a visual search task can also be improved by priming. The effect of priming is thought to be limited to simple visual attributes and to be passive and automatic 6,7. Here we show that perceptual learning extends the range of priming effects and is important in the ability to guide attention to a particular object. We chose a visual search task that involved searching for a triangle (a target defined by form) among an array of distractors, in this case triangles of other orientations (Fig. 1a). We show that perceptual learning dramatically increases the ability to find a shape. Moreover, we show specificity of this learning for visuotopic position, object orientation and object configuration.

RESULTS

Performance before training
We used a visual search task in which the observer was required to find a target embedded in an array of distractors. The target consisted of a triangle of one of four possible orientations (up, left, right or down), surrounded by triangles of the other three orientations. The triangles were presented in a 5 × 5 stimulus array with a centrally positioned fixation spot. A screen, in which the target was either present at a randomized position or absent, was presented every 3 seconds for a duration of 300 milliseconds (Fig. 1a). The subject's task was to report whether the target was present. We measured the percentage of correct responses for a fixed presentation time. To compensate for guessing, 20% of the trials were a null condition in which no target was present. A separate false-positive rate was calculated for each experiment. This false-positive rate, fp, was used to adjust the percentage of positive responses, p, according to the formula p′ = (p − fp)/(1 − fp), yielding p′, the true-positive rate, which we averaged over all subjects performing each experiment. The false-positive rate was below 3% for all subjects. Tests of significance were carried out using a two-tailed t-test over the data collected from all subjects; error bars correspond to standard deviations.
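As a worked illustration of the guessing correction just described, here is a minimal sketch; the function name and the example numbers are ours, not from the paper.

```python
def true_positive_rate(p: float, fp: float) -> float:
    """Adjust the raw fraction of 'target present' responses on
    target-present trials, p, for guessing, using the false-positive
    rate fp measured on the null (target-absent) trials:
    p' = (p - fp) / (1 - fp)."""
    return (p - fp) / (1.0 - fp)

# Hypothetical example: 76% hits with a 2% false-positive rate.
print(round(true_positive_rate(0.76, 0.02), 3))  # 0.755
```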
In certain types of search tasks, the target attracts attention even when the observer has no knowledge about its characteristics. For example, even without a previous cue, a red object embedded in an array of blue distractors will draw the viewer's attention 6. Our search task, however, required that the viewer have explicit knowledge of the object sought. Subjects could not perform the task unless they were instructed to find a triangle of a particular orientation. Experiments were run in blocks of 150 trials with a target of a single orientation. Before each block, subjects were informed of the orientation of the target. Each session consisted of eight different blocks. In two sessions before training, we tested performance on detecting triangles of the four different orientations. Naive subjects showed an average performance below 20% for all orientations.

Effects of training
After having measured performance levels before training, we chose a single orientation, and the subject was trained by repeating blocks with targets of that orientation. Different orientations were used as targets for different subjects. All subjects substantially improved performance over the training period. Training stopped when subjects reached threshold, which we arbitrarily set in the range of 70–80% correct responses (Fig. 1b). For different subjects, the time to reach threshold varied between 4 and 6 days, corresponding to 5000–7000 trials. To measure the change in performance for the trained and untrained orientations, we then repeated the two test sessions in which subjects were tested on triangles of the four orientations.

Subjects showed an average fivefold increase in performance in detecting triangles of the trained orientation (p′ = 15.4 ± 5.3% before training; p′ = 74.0 ± 2.9% after training; significance, p < 10⁻⁶) but no significant increase in detection of triangles of the untrained orientations (p′ = 19.5 ± 4.7% before training, p′ = 21.3 ± 5.0% after training; significance, p > 0.3; average over 4 subjects; Fig. 1c). Once training for one orientation was completed, we waited one month and repeated the test session to examine retention of the improvement. There was no change in the effect after the 1-month hiatus (p′ = 74 ± 2.5% after learning and p′ = 77 ± 1.9% after the hiatus, average over 2 subjects, p > 0.2), indicating that the improvement showed no extinction over time (Fig. 1d). After training on triangles of a second orientation, however, performance on the initially trained orientation declined, dropping from p′ = 74.0 ± 2.9% after the first training to p′ = 57.0 ± 3.5% after subjects were trained for 7 days on a second orientation (average over 2 subjects; significance, p < 0.05; Fig. 1d).

Fig. 1. Training on triangles of a particular orientation resulted in improvements in detection specific to the object at the trained orientation. (a) A stimulus consisting of a 5 × 5 array composed of triangles of four possible orientations (right, left, up or down) was presented for 300 ms. The target, a triangle of a particular orientation, was present in 80% of the cases. The distractors were triangles in the three remaining orientations. (b) One subject's progress through the course of learning. (c) Averaged responses (four subjects) for the trained orientation. Performance improved fivefold after training. No change was seen for the untrained orientations. (d) Improvement (averaged over two subjects) lasted for at least one month without practice, with no degradation in performance. Two subjects were then trained on a second orientation; subsequently, performance for the first trained orientation degraded, reflecting negative transfer in the orientation domain.

Spatial dependence of the learning
The results above were averaged over all spatial locations at which the stimuli appeared. One can examine the visuotopic specificity of the effect by comparing the change in performance at specific locations within the array. In particular, we wanted to determine whether the learning occurred sequentially in different locations of the visual field or whether the improvement resulted from a globally and uniformly increased ability at all locations of the array. Learning tended to occur sequentially in different locations of the visual field, expanding from the fovea to the periphery, and the spatial pattern of performance levels was very similar for consecutive or nearly consecutive blocks (Fig. 2a). Furthermore, the expansion in the spatial coverage of the learning tended to occur between adjacent sites in the array. This spatial correlation was not exclusively a function of eccentricity. That is, if a subject was more likely to detect a target in one particular position in the array than in another, he would more easily detect a target in the same and neighboring locations in subsequent trials. To quantify this, we measured the Euclidean distance between blocks. Put simply, distance is a quantitative measure of the spatial differences in performance level between consecutive blocks, D1, or blocks separated by greater intervals (D2, D3, ..., Dn). We plotted distances as a function of block separation under three different conditions: the real data, shuffled data (in which the responses were scrambled across the different positions for each block of the original data) and angularly scrambled data (in which positions were shuffled but all points retained their initial radial distance from the fixation point) (Fig. 2b).
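The quantification just described can be sketched as follows. This is our illustrative reconstruction, not the authors' analysis code: the array shapes, function names and NumPy usage are assumptions, and the distance follows the sum-of-squared-differences definition given in Methods.

```python
import numpy as np

def block_distance(m_a: np.ndarray, m_b: np.ndarray) -> float:
    """Distance between two blocks, each a 5 x 5 map of per-location
    performance, as defined in Methods: the sum over positions of the
    squared difference in performance level."""
    return float(np.sum((m_a - m_b) ** 2))

def mean_distance(blocks: np.ndarray, n: int) -> float:
    """D_n: distance between blocks separated by n, averaged over all
    pairs (i, i + n); `blocks` has shape (n_blocks, 5, 5)."""
    return float(np.mean([block_distance(blocks[i], blocks[i + n])
                          for i in range(len(blocks) - n)]))

def scramble_positions(blocks: np.ndarray, rng) -> np.ndarray:
    """Shuffled-data control: permute each block's values across array
    positions, preserving the total rate of correct responses per block."""
    out = blocks.reshape(len(blocks), -1).copy()
    for row in out:
        rng.shuffle(row)
    return out.reshape(blocks.shape)

def scramble_within_eccentricity(blocks: np.ndarray, rng) -> np.ndarray:
    """Angularly scrambled control: permute values only among positions
    lying at the same radial distance from the central fixation spot."""
    ij = np.array([(i, j) for i in range(5) for j in range(5)])
    radius = np.round(np.hypot(ij[:, 0] - 2, ij[:, 1] - 2), 3)
    out = blocks.reshape(len(blocks), -1).copy()
    for row in out:
        for r in np.unique(radius):
            idx = np.flatnonzero(radius == r)
            row[idx] = row[rng.permutation(idx)]
    return out.reshape(blocks.shape)

# Hypothetical usage with simulated per-block performance maps.
rng = np.random.default_rng(0)
blocks = rng.random((40, 5, 5))          # e.g., 40 blocks of 150 trials each
seps = range(1, 11)
real = [mean_distance(blocks, n) for n in seps]
shuffled = [mean_distance(scramble_positions(blocks, rng), n) for n in seps]
angular = [mean_distance(scramble_within_eccentricity(blocks, rng), n) for n in seps]
```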

The increase in Euclidean distance as a function of block separation in the original data demonstrated a strong correlation between successive blocks. The results for the scrambled data show that this correlation was not fully accounted for by the increase in the mean rate of correct responses, because scrambling the responses across the different positions (keeping the total rate of responses constant across blocks) increased the distance between neighboring blocks considerably. The results for the angularly scrambled data showed that the correlation was not simply a radial correlation due to the progression of learning from fovea to periphery; rather, they demonstrated correlation of precise locations throughout the course of learning.

It is important to remember that the target was presented with equal probability at each location within the trained array. Thus, the observed visuotopic specificity was not due to training on particular locations within the array. To exclude the possibility that the improvement might result merely from an increase in the speed of deciding whether a particular shape matched the target, regardless of its position, we tested the trained subjects in a condition in which both the target and a variable number of distractors were presented outside the area of the training array. We then compared performance levels for various numbers of distractors outside the array with those for the same number of distractors presented along with targets within the array. Within the area of the training array, performance levels did not change with the number of distractors when the target was at the trained orientation, but did change when it was at an untrained orientation. Outside of the array, performance for both trained and untrained orientations decreased with increasing number of distractors (Fig. 3). This shows that the improvement was specific for a particular shape and for a particular region of the visual field on which subjects were trained.

Fig. 2. Learning showed visuotopic specificity. It progressed serially from fovea to periphery; positions showing improvement were correlated from trial to trial. (a) Percent of correct responses, for one subject, as a function of position for different blocks during the learning period. Each square corresponds to one block of 150 trials; within each square, the gray-scale value of each circle represents the performance level at a particular location within the 5 × 5 array. The fixation spot in the center of the array is indicated with a small black circle. (b) The learning showed spatial specificity. Average distances between blocks for 4 subjects (see Methods) were significantly smaller for the measured data than for scrambled data in two different conditions, either with all 24 positions scrambled or with only those positions equidistant from the fovea scrambled.

Form specificity of the learning
The last series of experiments was designed to test whether our search task involved solely what are considered low-level mechanisms, such as orientation discrimination or texture segmentation. We tested subjects who had been trained on our search task with two novel stimulus configurations. In the first configuration, triangles were replaced by arrowheads, which were still clearly recognizable as pointing left, right, up or down, but which did not have the property of closure (Fig. 4b). The learning effect was measured as the ratio between performance levels for the trained and untrained orientations. The orientation specificity in the levels of performance for closed triangles did not transfer to the arrowheads. The means for arrowheads were p′ = 36.2 ± 8.0% in the trained orientation and p′ = 35.0 ± 5.5% in the untrained orientation (averaged over 3 subjects; significance, p > 0.5; Fig. 4d). This shows that subjects learned not to discriminate orientation but actually to find an object at a particular orientation. In the second configuration, the target was not changed, but the field of distractors was completely novel. This was done to determine whether the learning was specific for the target itself or for a more generalized textural difference between foreground and background. The new figures used as distractors did not include triangles in the other three orientations (Fig. 4c) and were presented at different contrasts to increase distractor variability, making the task more difficult 3.

We observed that the specificity of the training extended to this new background: for the trained orientation, p′ = 68.1 ± 11.0%, compared with p′ = 37.9 ± 11% for the untrained orientation (averaged over 3 subjects; significance, p < 10⁻⁵; Fig. 4d).

Fig. 3. Performance as a function of number of distractors within and outside the training region. After training in the 5 × 5 array (gray square), performance was tested outside the training region. The target and either 7 (a) or 23 (b) distractors were presented at eccentricities ranging from 3.5° to 4.7°. (c) Performance within the training region was tested for a 3 × 3 array (7 distractors) and a 5 × 5 array (25 distractors), averaged over 3 subjects. Within the training region, performance for the untrained orientation, but not for the trained orientation, declined with the number of distractors. Outside the training region, performance for both trained and untrained orientations declined with the number of distractors.

Fig. 4. Learning was specific for object configuration and transferred to a new background. (a) The training array. (b) Test using arrowheads as target and distractors. (c) Target used in training (triangle) in a new field of distractors. For the new background, we used open figures, circles, squares, diamonds and semicircles as distractors. The target was presented at the same luminance (60 cd per m²) used in the other experiments; distractors were randomly assigned luminances of 33–91 cd per m². (d) Performance averaged over three subjects for triangle targets and for arrowheads (which lacked the feature of closure but were still oriented figures) in trained and untrained orientations. Performance did not differ for arrowheads in trained and untrained orientations. Performance in the new background (average over three subjects) was better for the trained orientation, demonstrating object specificity of the learning.

DISCUSSION
We studied the effects of training on a search task in which the target was defined by form. In this task, search efficiency was significantly increased as a consequence of learning. This learning was object specific and resulted from a progressive acquisition of the ability to identify the given object at different locations in the visual field. The results suggest that learning in visual search can be targeted to a specific object. Although it has been suggested that learning in visual search involves a general improvement in performing searches 8, other studies show orientation dependence of learning in pop-out detection 9. The task used in our experiments did not involve texture segmentation or orientation discrimination, but identification of an oriented object. This is supported by three observations. First, the subjects could not perform the task if they did not have previous knowledge of the target characteristics. Triangles of a particular orientation embedded in triangles of other orientations could not be detected as unique objects. Second, we showed that learning in this task was specific for the target and transferred to different backgrounds. In contrast, learning effects in texture discrimination are specific for the field of distractors but not for the target 10. Third, the failure of training effects to transfer to arrowheads suggests that this task was not a simple orientation-discrimination task. Interestingly, after training, all subjects claimed that there was no conscious perceptual distinction between different triangles, even though they performed considerably better for target triangles of one orientation than for the other orientations.

The effect of learning on this task represents an extension of findings on the involuntary nature of priming, which is thought to be limited to simple visual (elementary) attributes as opposed to form 6. The degradation of performance on figures of new orientations after learning one orientation further extends the analogies with priming effects for position, which show distractor inhibition 7, and might result from a difficulty in ignoring targets whose processing has been automated as a consequence of perceptual learning 11–13.

Learning in this task was not just a consequence of perceptual exposure, but must have involved top-down influences 13–21. This is clear because learning occurred only for the target orientation, even though the subject was exposed seven times more often to triangles in each of the untrained orientations during the whole course of learning. Identification of form demands attention 22–26, as suggested by the decrease in performance with increasing numbers of distractors. This implies not only that attention is required to obtain learning, but that, conversely, learning is required to rapidly direct the attentional mechanism toward a particular object.

Visual search tasks are usually classified as parallel or serial based on whether performance depends on the number of distractors 1, though it has been suggested that this classification represents not a real dichotomy but two extreme cases of a continuum 3,27,28. Here we show that increasing the number of distractors within the training region did not change search efficiency for the trained orientation. However, search efficiency diminished when distractors were added outside of the training region. This showed that the dependence of search efficiency on the number of distractors may be a function of distractor position as a consequence of perceptual learning.

Another characteristic of learning was that the inherent dependence of performance on position within the visual field was greatly reduced with training, as for the detection of oriented gratings 9. In our experiments, the target could appear at any position within the array. Therefore, the spatial dependence of the learning observed within the training region was not due to the location of the stimulus, as is the case when localized improvement results from practice at a fixed position in the visual field 10,19,29,30. This specificity results, therefore, from intrinsic mechanisms that may reflect the sequence of sites targeted by the search strategy.

It could be argued that the improvement resulted from an increase in the speed with which the subjects, independent of visuotopic position, could determine whether a given item was the target. This, combined with a search strategy in which subjects started scanning close to the fovea and proceeded to the periphery, could account for the spatial specificity we found within the training region. If this were the case, a subject trained on left triangles should perform better for left than for right triangles outside of the area used for training. However, we showed that when target and distractors were presented outside of the training region, detectability was no better for the trained orientation. The poorer performance for targets of the trained orientation presented outside the training region, therefore, showed that the improvement was localized to a particular region of the visual field.

Based on its spatial specificity, one may speculate that early cortical processing might be involved in this process. The progression of learning across the visual field suggests that representations of the trained object may be built repeatedly for different positions across the cortical area. Even within V1, cells are selective for much more complex stimulus configurations than originally believed 31–34, suggesting a role for V1 in the identification of complex forms. Connections within V1 are plastic 35,36, and modification of these connections may contribute to the plasticity of elementary-feature maps. Representation of more complex features at earlier levels may enhance the efficiency and rapidity of recognizing these features in a complex background, at the expense of requiring multiple shape representations in areas with smaller receptive fields and greater visuotopic order.
METHODS
Psychophysical experiments on human observers (male and female, 23–27 years of age) were designed to study the effects of learning in a visual search task. All subjects gave written informed consent in accordance with procedures and protocols approved by the Rockefeller University Institutional Review Board. Stimuli were presented on a NEC 5FGp monitor refreshed at a rate of 60 Hz and were observed at a distance of 150 cm with both eyes, with normal pupil apertures and without head restraint. Each trial consisted of a 3000-ms cycle. A 5 × 5 array consisting of a central fixation spot and 24 shapes in the remaining locations was presented for 300 ms; a response was recorded during the subsequent 2700-ms interstimulus interval. As an auditory cue to alert the observer, a short beep was sounded at the onset of the visual stimulus. The experiments investigated the observer's ability to identify a target triangle among an array of distractors. Both target and distractors were presented at high contrast (60 cd per m²) against a uniform background (2 cd per m²). The target randomly appeared at any location within the array. After each presentation, the subject indicated whether or not a target shape was present by pressing the appropriate button of a computer mouse. The array subtended 4.2° × 4.2°, and a small fixation spot of one arcmin radius was positioned in its center. Figures used as targets or distractors in the different experiments were equilateral triangles, squares, diamonds or arrowheads. The sides of all shapes were 27 arcmin in length, and their centers were separated by 54 arcmin.
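As a rough illustration of this trial structure, here is a minimal sketch in Python; the orientation labels, constants and placement logic are our assumptions for illustration, not the authors' stimulus code.

```python
import random

ORIENTATIONS = ["up", "down", "left", "right"]   # triangle pointing directions
ARRAY_SIZE = 5                                   # 5 x 5 grid, center = fixation
TARGET_PRESENT_PROB = 0.8                        # 20% null (target-absent) trials

def make_trial(target_orientation: str, rng: random.Random):
    """Return a dict mapping grid position -> triangle orientation for one
    300-ms presentation, plus whether the target is present.
    The central position is reserved for the fixation spot; the other 24
    positions hold distractor triangles of the three untrained orientations,
    and the target (if present) replaces one randomly chosen distractor."""
    center = (ARRAY_SIZE // 2, ARRAY_SIZE // 2)
    positions = [(r, c) for r in range(ARRAY_SIZE) for c in range(ARRAY_SIZE)
                 if (r, c) != center]
    distractor_orients = [o for o in ORIENTATIONS if o != target_orientation]
    array = {pos: rng.choice(distractor_orients) for pos in positions}
    target_present = rng.random() < TARGET_PRESENT_PROB
    if target_present:
        array[rng.choice(positions)] = target_orientation
    return array, target_present

# Hypothetical block: 150 trials with a single trained target orientation.
rng = random.Random(1)
block = [make_trial("left", rng) for _ in range(150)]
```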

Average distance as a function of block separation, D_n, was calculated as follows. From each block we calculated a 5 × 5 matrix (M_a is the matrix corresponding to block a), in which each entry, m^a_{i,j}, is defined as the level of performance at the corresponding location of the visual field in that block (Fig. 2a). We then have an ordered array of matrices and can consider the distance between any two of them,

d(M_a, M_b) = Σ_{i=1..5} Σ_{j=1..5} (m^a_{i,j} − m^b_{i,j})².

We then define the average distance at block separation n as

D_n = ⟨d(M_i, M_{i+n})⟩_i,

the average being taken over all pairs of blocks separated by n.

With the exception of one subject (author M.S.), all subjects were naive and were told only what the target was for each block. A one-day session consisted of eight blocks, each comprising 150 trials. All results correspond to average values over all subjects performing each experiment, and two-tailed t-tests over the data collected from all subjects were used as tests of significance. All errors plotted correspond to standard deviations. Individual tests of significance gave comparable results.
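A minimal sketch of this across-subject comparison, assuming the before- and after-training true-positive rates are treated as paired samples; the per-subject numbers and the use of SciPy's two-sided paired test are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject adjusted true-positive rates (fractions).
before = np.array([0.10, 0.18, 0.17, 0.16])   # before training
after = np.array([0.71, 0.77, 0.74, 0.74])    # after training

result = stats.ttest_rel(after, before)        # two-tailed paired t-test
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
print(f"error bar (s.d. after training): {after.std(ddof=1):.3f}")
```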
ACKNOWLEDGEMENTS
We thank R. Crist for discussions and comments on the manuscript. This work was supported by NIH grant EY07968 and a Burroughs Wellcome fellowship to M.S.

RECEIVED 7 JULY 1999; ACCEPTED 4 JANUARY 2000

1. Treisman, A. & Gelade, G. A feature-integration theory of attention. Cognit. Psychol. 12, 97–136 (1980).
2. Sagi, D. & Julesz, B. Where and what in vision. Science 228, 1217–1219 (1985).
3. Duncan, J. & Humphreys, G. W. Visual search and stimulus similarity. Psychol. Rev. 96, 433–458 (1989).
4. Rubinstein, B. S. & Sagi, D. Spatial variability as a limiting factor in texture-discrimination tasks: implications for performance asymmetries. J. Opt. Soc. Am. A 7, 1632–1643 (1990).
5. Wang, Q., Cavanagh, P. & Green, M. Familiarity and pop-out in visual search. Percept. Psychophys. 56, 495–500 (1994).
6. Maljkovic, V. & Nakayama, K. Priming of pop-out: I. Role of features. Mem. Cognit. 22, 657–672 (1994).
7. Maljkovic, V. & Nakayama, K. Priming of pop-out: II. Role of position. Percept. Psychophys. 58, 977–991 (1996).
8. Sireteanu, R. & Rettenbach, R. Perceptual learning in visual search: fast, enduring but non-specific. Vision Res. 35, 2037–2043 (1995).
9. Efron, R. & Yund, E. W. Guided search: the effects of learning. Brain Cogn. 31, 369–386 (1996).
10. Karni, A. & Sagi, D. Where practice makes perfect in texture discrimination: evidence for primary visual cortex plasticity. Proc. Natl. Acad. Sci. USA 88, 4966–4970 (1991).
11. Schneider, W. & Shiffrin, R. M. Controlled and automatic human information processing: I. Detection, search and attention. Psychol. Rev. 84, 1–66 (1977).
12. Shiffrin, R. M. & Schneider, W. Controlled and automatic human information processing: II. Perceptual learning, automatic attending and a general theory. Psychol. Rev. 84, 127–191 (1977).
13. Treisman, A., Vieira, A. & Hayes, A. Automaticity and preattentive processing. Am. J. Psychol. 105, 341–362 (1992).
14. Ahissar, M. & Hochstein, S. Learning pop-out detection: specificities to stimulus characteristics. Vision Res. 36, 3487–3500 (1996).
15. Braun, J. Vision and attention: the role of training. Nature 393, 424–425 (1998).
16. Ahissar, M. & Hochstein, S. Attentional control of early perceptual learning. Proc. Natl. Acad. Sci. USA 90, 5718–5722 (1993).
17. Ito, M., Westheimer, G. & Gilbert, C. D. Attention and perceptual learning modulate contextual influences on visual perception. Neuron 20, 1191–1197 (1998).
18. Fahle, M. & Morgan, M. No transfer of perceptual learning between similar stimuli in the same retinal position. Curr. Biol. 6, 292–297 (1996).
19. Crist, R. E., Kapadia, M., Westheimer, G. & Gilbert, C. D. Perceptual learning of spatial localization: specificity for orientation, position and context. J. Neurophysiol. 78, 2889–2894 (1997).
20. Shiu, L. P. & Pashler, H. Improvement in line orientation discrimination is retinally local but dependent on cognitive set. Percept. Psychophys. 52, 582–588 (1992).
21. Ahissar, M. & Hochstein, S. Task difficulty and the specificity of perceptual learning. Nature 387, 401–406 (1997).
22. Bravo, M. J. & Nakayama, K. The role of attention in different visual search tasks. Percept. Psychophys. 51, 465–472 (1992).
23. Wolfe, J. M. in Current Directions in Psychological Science 124–128 (Cambridge Univ. Press, Cambridge, 1992).
24. Wolfe, J. M., Cave, K. R. & Franzel, S. L. Guided search: an alternative to the feature integration model of visual search. J. Exp. Psychol. Hum. Percept. Perform. 15, 419–433 (1989).
25. Joseph, J. S., Chun, M. M. & Nakayama, K. Attentional requirements in a preattentive feature search task. Nature 387, 805–807 (1997).
26. Chun, M. M. & Jiang, Y. Contextual cueing: implicit learning and memory of visual context guides spatial attention. Cognit. Psychol. 36, 28–71 (1998).
27. Braun, J. & Sagi, D. Vision outside the focus of attention. Percept. Psychophys. 48, 45–58 (1990).
28. Nakayama, K. & Joseph, J. S. in The Attentive Brain (ed. Parasuraman, R.) 279–298 (MIT Press, Cambridge, Massachusetts, 1997).
29. Fiorentini, A. & Berardi, N. Learning in grating waveform discrimination: specificity for orientation and spatial frequency. Vision Res. 21, 1149–1158 (1981).
30. Nazir, T. A. & O'Regan, J. K. Some results on translation invariances in the human visual system. Spat. Vis. 5, 81–100 (1990).
31. Kapadia, M. K., Ito, M., Gilbert, C. D. & Westheimer, G. Improvements in visual sensitivity by changes in local context: parallel studies in human observers and in V1 of alert monkeys. Neuron 15, 843–856 (1995).
32. Posner, M. I. & Gilbert, C. D. Attention and primary visual cortex. Proc. Natl. Acad. Sci. USA 96, 2585–2587 (1999).
33. Sillito, A. M., Grieve, K. L., Jones, H. E., Cudeiro, J. & Davis, J. Visual cortical mechanisms detecting focal orientation discontinuities. Nature 378, 492–496 (1995).
34. Das, A. & Gilbert, C. D. Topography of contextual modulations mediated by short-range interactions in primary visual cortex. Nature 399, 655–661 (1999).
35. Darian-Smith, C. & Gilbert, C. D. Axonal sprouting accompanies functional reorganization in adult cat striate cortex. Nature 368, 737–740 (1994).
36. Gilbert, C. D., Das, A., Ito, M., Kapadia, M. & Westheimer, G. Spatial integration and cortical dynamics. Proc. Natl. Acad. Sci. USA 93, 615–622 (1996).