Running head: PERCEPTUAL GROUPING AND SPATIAL SELECTION

The attentional window configures to object boundaries

University of Iowa

ABSTRACT

When searching your desk for a pen, how do you constrain your attention to the desk and avoid being distracted by your office partner's bright shoes? The attention literature posits that people constrain attention to relevant locations with an attentional window: visual information inside the window is filtered in, and information outside of it is filtered out. For example, if your attentional window is configured to your desk, your visual system will process the objects on the desk, but not the objects on the floor. One unanswered question about the attentional window is whether it can configure to objects or whether it instead acts as a spotlight over imprecise regions of space (i.e., a zoom lens). To answer this question, observers completed a simple search task containing a salient distractor and were cued to search for the target either at a set of locations or on an object. When locations were cued, salient distractors at both cued and uncued locations captured attention, indicating imprecise configuration of the attentional window. When objects were cued, only salient distractors on the cued object captured attention, indicating that the attentional window configures to specific objects, not imprecise regions of space. This research demonstrates that when searching your desk for a pen, you are able to restrict your search to the items on the desk rather than merely to the desk's general location.

Research on the ability to avoid distraction (i.e., attentional capture) has shown that irrelevant, salient distractors prevent us from efficiently finding targets in some situations but not others. The attentional window hypothesis (Belopolsky et al., 2007) offers one explanation for why the effect of distractors is situation dependent. On this hypothesis, observers' attention can be spatially diffuse or focused, and only irrelevant events within this window-like region are distracting; irrelevant events outside of the window are not. One unanswered question is whether the attentional window can be precisely configured to a single object, or whether it functions as a zoom lens that must maintain a diffuse, location-based, spotlight-like distribution (Eriksen & St. James, 1986). Object-based attention research has shown that we are faster to find two items within a single object than two items split between objects, even when the distance between the items is held constant (e.g., Egly et al., 1994; Vecera, 1994). Also, Cosman and Vecera (2012) demonstrated that distractors on the same object as the target are more distracting than distractors on a different object. These findings suggest that observers' attentional windows may naturally configure to objects. For example, Figure 1B represents a situation in which observers are told (i.e., cued) which object the target will appear upon. If the attentional window functions like a zoom lens and is unable to configure to object boundaries, then salient distractors on both the cued and uncued object (all of the locations) will slow our ability to find the target (i.e., capture attention). On the other hand, if the attentional window can configure to object boundaries, then only salient distractors on the cued object will capture attention. Kerzel, Born, and Schönhammer (2012) recently found that observers were able to constrain the attentional window to an inner or outer ring of items, but observers were not able to constrain capture to a set of items without this spatial separation between the relevant groups. We suggest that observers were not able to constrain their attention to the relevant groups because the perceptual grouping cues (e.g., object boundaries) were not strong enough; the items did not group together via Gestalt grouping cues.

In the current experiments, to test whether observers can constrain their attention to cued objects or cued locations, we used the additional singleton paradigm (Theeuwes, 1992), in which observers search for a shape singleton among homogeneously shaped distractors and respond to the orientation of a line within the target shape. Critically, on half of the trials, one of the distractors is a different color (i.e., a color singleton, or salient distractor). This additional singleton distractor is irrelevant to the search task because the color singleton can never be the target. Nonetheless, observers' response times (RTs) to the target are slowed by the presence of the color singleton (i.e., attentional capture). Importantly, according to the attentional window hypothesis, if color singletons capture attention at both cued and uncued locations, then the attentional window covers both cued and uncued locations. Conversely, if color singletons capture attention only at cued locations, then the window is constrained to those cued locations. In Experiment 1A, replicating Kerzel et al. (2012), before each trial observers were cued as to which four locations (of eight) were possible target locations (see Figure 1A). Because there are no perceptual grouping cues, we predict that color singletons at both cued and uncued locations will slow RTs to the target (i.e., capture attention), indicating that the attentional window covers both cued and uncued locations. In Experiment 1B, we tested whether the attentional window can be constrained to object boundaries by informing observers in each block which one of two objects the target would appear upon (see Figure 1B). If the attentional window is constrained by object boundaries, then only singleton distractors on the cued object, and not singleton distractors on the uncued object, will slow RTs to the target. This would indicate that the attentional window covers only locations on the cued object.

Methods

Twelve University of Iowa undergraduates participated in both experiments. All observers completed the additional singleton paradigm with a red target circle among seven red distractor diamonds. On trials with a color singleton, one distractor was green. On half of the trials the target contained a vertical line, and on the other half the target contained a horizontal line. Participants reported the line orientation via a button response. Stimuli appeared equally spaced around an imaginary circle with a radius of 8.12° of visual angle. The items were all roughly 1.4°, and the lines within them were 0.65° × 0.15°. All items appeared on a grey background for 2000 ms or until response. After completing 64 practice trials, observers completed 8 blocks of 112 trials. In Experiment 1A, a fixation dot appeared for 500 ms, followed by four arrow cues, which appeared for 500 ms along with the fixation point (see Figure 1A). Following the arrow cues, the search display (eight items) appeared and remained on the screen with the fixation dot and arrows until the participants responded. In half of the blocks the arrows pointed at the four cardinal locations, and in the other half they pointed at the four diagonal locations. Block order was counterbalanced. In Experiment 1B, the fixation point appeared for 1000 ms, followed by the search display superimposed upon a large cross and circle (see Figure 1B). The orientation of the cross ("plus" or "x") and the colors of the two objects changed randomly from trial to trial. The target appeared on the cross in half of the blocks and on the circle in the other half. Again, block order was counterbalanced.
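To make the design concrete, here is a minimal sketch of the Experiment 1A trial structure in plain Python. It is an illustration rather than the authors' implementation: the 8-item display, 8.12° radius, 50% singleton-present rate, and timing constants come from the Methods above, while the randomization of singleton placement and all function, class, and variable names are assumptions introduced for the example.

```python
# Illustrative sketch of the Experiment 1A design (not the authors' code).
# Geometry, timing, and the 50/50 singleton-present rate come from the text;
# the singleton placement scheme and all names are assumptions.
import math
import random
from dataclasses import dataclass

N_ITEMS = 8                  # search items around an imaginary circle
RADIUS_DEG = 8.12            # circle radius in degrees of visual angle
FIXATION_MS = 500            # fixation dot duration
CUE_MS = 500                 # arrow-cue duration
RESPONSE_DEADLINE_MS = 2000  # display remains until response or deadline

CARDINAL = [0, 2, 4, 6]      # item indices at the four cardinal positions
DIAGONAL = [1, 3, 5, 7]      # item indices at the four diagonal positions


def item_position(index: int) -> tuple[float, float]:
    """(x, y) in degrees for one of the 8 equally spaced item locations."""
    angle = 2 * math.pi * index / N_ITEMS
    return (RADIUS_DEG * math.cos(angle), RADIUS_DEG * math.sin(angle))


@dataclass
class Trial:
    cued_locations: list[int]       # the 4 cued indices for this block
    target_location: int            # target always appears at a cued location
    target_line: str                # 'vertical' or 'horizontal'
    singleton_location: int | None  # None = color singleton absent


def make_trial(cued_locations: list[int]) -> Trial:
    target = random.choice(cued_locations)
    # Singleton absent on half of trials; when present, it may fall at a
    # cued or an uncued location, but never at the target itself.
    if random.random() < 0.5:
        singleton = None
    else:
        singleton = random.choice([i for i in range(N_ITEMS) if i != target])
    return Trial(cued_locations, target,
                 random.choice(["vertical", "horizontal"]), singleton)


if __name__ == "__main__":
    print(f"timeline: fixation {FIXATION_MS} ms, cues {CUE_MS} ms, "
          f"search until response or {RESPONSE_DEADLINE_MS} ms")
    trial = make_trial(CARDINAL)
    print(trial, item_position(trial.target_location))
```

Experiment 1B would differ only in that the relevant set of items is defined by which object (the cross or the circle) they fall upon, rather than by arrow cues.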

Results & Discussion

A one-way repeated measures analysis of variance (ANOVA) with three levels of the color-singleton factor (absent, present at a cued location [on the cued object in Experiment 1B], and present at an uncued location [on the uncued object in Experiment 1B]) was performed on correct RTs of less than 2000 ms. In Experiment 1A, the ANOVA found a significant effect, F(2, 22) = 5.88, p < .01 (see Figure 1C). As expected, planned comparisons using the pooled error variance from the ANOVA found that search RTs were slower when the color singleton appeared at a cued location than when it was absent, p < .005. Planned comparisons did not find slower RTs when the color singleton was present at an uncued location than when it was absent, p > .1, nor when the color singleton appeared at cued rather than uncued locations, p > .09. Thus, Experiment 1A failed to find evidence of capture by distractors at uncued locations, but it also failed to find evidence that there was less capture by distractors at uncued locations than by distractors at cued locations, leaving us uncertain about the distribution of the attentional window at uncued locations. In Experiment 1B, the ANOVA also found a significant effect, F(2, 22) = 8.02, p < .005 (see Figure 1D). Again, planned comparisons using the pooled error variance from the ANOVA found that RTs were significantly slower when the color singleton was present on the cued object than when it was absent, p < .005. In contrast to Experiment 1A, and supporting the hypothesis that objects allow the attentional window to be configured to object boundaries, planned comparisons confirmed that RTs were not significantly slower when the color singleton appeared on the uncued object than when it was absent, p > .75. Also, search RTs to the target were significantly slower when the color singleton appeared on the cued object than when it appeared on the uncued object, p < .005. In effect, when the color singleton distractor appeared on the uncued object, it was as if the distractor were not present in the display. These results demonstrate that, given sufficient perceptual grouping cues, observers are able to constrain attention to relevant noncontiguous locations. In other words, observers are able to configure their attentional window specifically to objects, and not merely to the general location of those objects.
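For completeness, the sketch below shows one way the analysis described above could be carried out in Python with pandas, statsmodels, and SciPy. It is an outline under assumptions: the trial-level column names are hypothetical, and the follow-up comparisons are approximated with paired t-tests rather than with planned contrasts that reuse the ANOVA's pooled error term, so it should not be read as the authors' analysis code.

```python
# Illustrative analysis sketch; column names and the paired-t-test follow-ups
# are simplifying assumptions, not the authors' actual procedure.
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM


def analyze(trials: pd.DataFrame) -> None:
    """Run the RT analysis for one experiment (1A or 1B)."""
    # Keep correct responses faster than the 2000 ms response deadline.
    data = trials[(trials["correct"] == 1) & (trials["rt"] < 2000)]

    # Mean RT per subject in each singleton condition:
    # 'absent', 'cued' (cued location/object), 'uncued' (uncued location/object).
    cell_means = (data.groupby(["subject", "condition"], as_index=False)["rt"]
                      .mean())

    # One-way repeated measures ANOVA over the three-level singleton factor.
    anova = AnovaRM(cell_means, depvar="rt", subject="subject",
                    within=["condition"]).fit()
    print(anova.anova_table)

    # Pairwise follow-up comparisons (approximated here with paired t-tests).
    wide = cell_means.pivot(index="subject", columns="condition", values="rt")
    for a, b in [("cued", "absent"), ("uncued", "absent"), ("cued", "uncued")]:
        t, p = ttest_rel(wide[a], wide[b])
        print(f"{a} vs. {b}: t = {t:.2f}, p = {p:.3f}")
```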

References

1. Belopolsky, A. V., Zwaan, L., Theeuwes, J., & Kramer, A. F. (2007). The size of an attentional window modulates attentional capture by color singletons. Psychonomic Bulletin & Review, 14(5), 934-938.

2. Cosman, J. D., & Vecera, S. P. (2012). Object-based attention dominates perceptual load to modulate visual distraction. Journal of Experimental Psychology: Human Perception and Performance, 38(3), 576-579.

3. Egly, R., Driver, J., & Rafal, R. D. (1994). Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2), 161-177.

4. Eriksen, C. W., & St. James, J. D. (1986). Visual attention within and around the field of focal attention: A zoom lens model. Perception & Psychophysics, 40(4), 225-240.

5. Kerzel, D., Born, S., & Schönhammer, J. (in press). Perceptual grouping allows for attention to cover noncontiguous locations and suppress capture from nearby locations. Journal of Experimental Psychology: Human Perception and Performance.

6. Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51, 599-606.

7. Vecera, S. P. (1994). Grouped locations and object-based attention: Comment on Egly, Driver, and Rafal (1994). Journal of Experimental Psychology: General, 123(3), 316-320.

Figure 1