Attention is required for the perceptual integration of action object pairs


DOI 10.1007/s00221-015-4435-1

RESEARCH ARTICLE

Attention is required for the perceptual integration of action object pairs

Nicolas A. McNair (1), Irina M. Harris (1)

Received: 29 April 2015 / Accepted: 30 August 2015 / Published online: 10 September 2015
© Springer-Verlag Berlin Heidelberg 2015

Abstract

Previous studies have demonstrated that functionally related objects are perceptually grouped during visual identification if they are depicted as if interacting with each other (Green and Hummel in J Exp Psychol Hum Percept Perform 32(5):1107-1119, 2006). However, it is unclear whether this integration requires attention or occurs pre-attentively. Here, we used a divided-attention task with variable attentional load to address this question. Participants matched a word label to a target object that was immediately preceded by a briefly presented, task-irrelevant tool that was either functionally related or unrelated to the word label (e.g., "axe-log" or "hammer-log"). The tool was either positioned to interact with the target object or faced away from it. The amount of attention available to process the tool was manipulated by asking participants to make a concurrent perceptual discrimination of varying difficulty on a surrounding frame stimulus. The previously demonstrated advantage for the related-and-interacting condition was replicated under conditions of no or low attentional load. This benefit disappeared under high competing attentional load, indicating that attention is required to integrate functionally related objects into a single perceptual unit.

Keywords: Tools; Action affordance; Perceptual grouping; Context effects; Object recognition; Attentional load

* Nicolas A. McNair, nicolas.mcnair@sydney.edu.au
1 School of Psychology, University of Sydney, Sydney, NSW 2006, Australia

Introduction

Recent research has demonstrated that the functional relationships between objects play an important role in their perceptual processing.
A striking example comes from patients with visual extinction, who experience difficulty perceiving multiple stimuli when they are presented simultaneously in both visual fields. Typically, the item presented in the visual field contralateral to the patient's brain lesion is extinguished from consciousness by the item presented in the ipsilesional visual field. However, this impairment is ameliorated if the objects in question are functionally related and are positioned such that they appear to be interacting (e.g., a corkscrew pointing towards the top of a wine bottle; Riddoch et al. 2003). This improvement does not appear to be related to the semantic association between the objects, as it is not found with word stimuli or with stimuli that are associatively, but not functionally, related (e.g., fork and spoon). Instead, Humphreys and colleagues proposed that these effects arise as a result of the objects being integrated into a single perceptual unit through repeated experience (Humphreys and Riddoch 2007; Riddoch et al. 2006). In other words, these functionally related items become unitary perceptual objects that are then selected together by attentional mechanisms, in much the same way as other perceptually grouped stimuli (Gilchrist et al. 1996; Kimchi and Razpurker-Apfeld 2004; Kimchi et al. 2007; Pomerantz 2003).

Support for the perceptual integration of functionally related objects comes from a study with healthy participants conducted by Green and Hummel (2006). In their study, subjects were required to match a name with a target object (e.g., the word "glass" to a picture of a glass). They found that matching performance (i.e., target identification)

was facilitated when an irrelevant distractor object, which was presented either immediately before or after the target, was both functionally related to the word label (e.g., "jug" and "glass") and was positioned such that the target and distractor objects appeared to be interacting with each other (e.g., the jug pouring into the glass). Importantly, by varying the temporal asynchrony of the appearance of the objects (SOA), Green and Hummel were able to demonstrate that the naming benefit for related-and-interacting objects (henceforth referred to as the "related-interacting advantage") only occurred with a short temporal asynchrony (100 ms), but not with a longer asynchrony (250 ms). It is well established that when visual stimuli are presented in close temporal succession, they are experienced as an integrated percept under certain parameters of stimulus duration and temporal separation: typically, a brief duration of the leading stimulus and a short SOA that does not exceed about 120 ms (Allport 1968; Di Lollo 1980; Hogben and Di Lollo 1974). Therefore, Green and Hummel suggested that the identification advantage for the related-and-interacting stimuli was due to perceptual integration of the items, rather than post-perceptual processing of their semantic association. More specifically, they concluded that target identification was facilitated when the perceptually integrated stimuli depicted a familiar configuration of two objects related through the manner in which they are used together.

Fig. 1 Trial sequence used in the experiment. The diamond stimulus was shown for 700 ms before being joined by the distractor tool, facing either left or right, for a further 50 ms. Following a 50-ms blank screen, the target action recipient was shown, offset to either the left or right of fixation. After another 50-ms blank screen, a word label consisting of the name of an action recipient was displayed in the centre of the screen for up to 2500 ms or until the subject responded.
They likened this to the word superiority effect, whereby a letter is identified more easily when it occurs within a familiar word than in a nonword (Green and Hummel 2006).

Objects can be functionally related to each other in a variety of ways. For example, a chair and a table are used together contextually, as one sits on a chair at a table. Alternatively, a hammer is related to a nail through a direct action that the hammer typically exerts on the nail (the hammer is used to pound a nail). Moreover, in the latter example, the relationship is defined by the active object (hammer) being associated with particular motor programs that facilitate its action on the other object; i.e., a hammer is grasped in a particular functional way that is appropriate for pounding a nail. In their studies of patients with visual extinction, Riddoch and colleagues (Riddoch et al. 2003) showed that the action relationship between objects plays a special role in alleviating the extinction. While objects that formed related-and-interacting functional pairs generally showed reduced extinction, when only one object was detected this was more likely to be the active object in the pair. This was the case even when the active object was presented on the side contralateral to the lesion. In contrast, when the objects were not interacting, patients exhibited the usual bias towards the ipsilesional object. This suggests that the action relationship between the two objects is coded pre-attentively.

That the action properties of an object may be processed pre-attentively, and even in the absence of awareness, has been suggested previously in a number of contexts. For example, studies that used continuous flash suppression to suppress objects from awareness (Almeida et al. 2008, 2010) found semantic priming from tools, but not from other categories such as animals.
Tools and the recipients of their actions also show enhanced processing during periods of disrupted attentional engagement, such as the attentional blink, when they occur as functionally congruent pairs of targets (Adamo and Ferber 2009; McNair and Harris 2014). Given this, the perceptual integration of a related-and-interacting pair of objects might also form pre-attentively. We address this question in the present study using a task closely modelled on that used by Green and Hummel (2006). Accordingly, we conceive of a functional group as a familiar configuration of objects positioned appropriately for being used together (e.g., a hammer pointing

towards a nail, ready to pound on it, or a jug pouring into a glass; see Fig. 1). We used a slightly tighter definition of functional relationship than that of Green and Hummel by restricting our functionally related objects to pairs of objects linked through a specific action relationship (i.e., a tool and a recipient of that tool's action; we use these terms throughout to refer to the objects). Two objects were presented in rapid succession (SOA of 100 ms). The first object was a tool, and the second was either an appropriate action recipient of the tool (e.g., a hammer followed by a nail) or an unrelated action recipient (e.g., a key followed by a nail). These objects were either positioned as if they were interacting with each other, or the tool faced away from the subsequent action recipient. Participants were required to attend to the second object (the action recipient) and ignore the first object (the tool). The object sequence was followed by a word label which the participants had to match to the preceding action recipient. Under the same conditions, Green and Hummel found that identification of the target object was facilitated when the label was related to the tool distractor and the two objects were interacting with each other, relative to when they were not interacting or were unrelated. Based on Green and Hummel's findings, and the results of patients with visual extinction, we therefore expected that identification of action recipients occurring within such functional groups (i.e., the related-interacting condition) would be enhanced.

Additionally, we varied whether the target object appeared to the left or right of fixation. This was done in Green and Hummel's study, but they did not report the results broken down by side of target.
We were interested in whether the positioning of the target would make a difference, especially in the related-interacting condition, because it would result in the tool being positioned such that the handle pointed towards the right or the left hand. This could affect the tool's perceived readiness for action and potentially influence the perceptual integration of the objects and their identification. For example, Yoon et al. (2010) found that when participants had to determine whether two objects were functionally related, responses were faster when the active object in the pair (i.e., the tool) was positioned for use with the right hand, and a similar result was found when subjects had to identify the objects in such pairings (Roberts and Humphreys 2011a). To anticipate our results, we did indeed find a related-interacting advantage that was only present when the target object was to the left of fixation and, therefore, the tool was positioned for use with the right hand.

The critical manipulation of interest in our study was that we varied the amount of attention available to the tool by requiring participants to perform a concurrent perceptual discrimination task on an additional stimulus while the tool was presented on the screen. In a preliminary experiment (Experiment 1a), we first established that the mere addition of this stimulus did not interfere with the related-interacting performance advantage previously reported by Green and Hummel (2006). In a subsequent experiment (Experiment 1b), we had participants directly attend to the extraneous stimulus and manipulated the attentional load of the concurrent discrimination by varying its degree of difficulty. To anticipate the results of this manipulation, we found that the related-interacting advantage was still present under low attentional-load conditions, but that it was abolished when the attentional demands of the extraneous task were increased.
General methods

Participants

Twenty-four participants (11 females; mean age = 26.4 years; all right-handed) with normal or corrected-to-normal vision were recruited for the study. They performed both experiments: Experiment 1a first and then Experiment 1b at a later date within the following week. Experimental procedures were approved by the Human Research Ethics Committee of the University of Sydney, and all participants gave their informed consent.

Stimuli

Object stimuli consisted of line drawings presented on a white background, subtending approximately 2.3° of visual angle. They were obtained from Snodgrass and Vanderwart (1980) except where otherwise noted. Eight sets of tool/action-recipient associated pairs were used (16 stimuli in total; see Table 1).

Table 1 Stimulus sets used in the experiment

Set  Tool         Action recipient
1    Hammer       Nail
2    Screwdriver  Screw
3    Saw          Plank (a)
4    Scissors     Paper (a)
5    Axe          Log (a)
6    Wrench       Nut (steel)
7    Jug          Glass (a)
8    Key          Lock (a)

(a) Drawing was created for the experiment, rather than taken from Snodgrass and Vanderwart (1980)

The attentional-load stimuli consisted of a diamond-shaped frame, subtending 13.12° of visual angle, with gaps in each of the four sides. These stimuli have been used previously to manipulate attentional load (Chong et al. 2008; Mattingley et al. 2006; Rorden et al. 2008) and have the advantage that attentional demand can be manipulated without changing the physical display or the response requirements. Two of the opposing sides (e.g., lower left and upper right) contained gaps that could be easily discriminated (2.06° vs. 5.14°). The other two opposing sides had gaps that were much harder to discriminate (3.21° vs. 4.00°). The placement axis of each gap pair, as well as the position of the larger gap of each pair, was systematically varied to create eight such diamonds. Low attentional-load and High attentional-load conditions were created by directing participants' attention towards the easily discriminated gaps and the more difficult pair of gaps, respectively. This attentional-load task has been shown to engage brain areas associated with the right-hemisphere attentional network (Chong et al. 2008).

Stimuli were displayed on a 17-inch CRT monitor (85 Hz refresh) at a viewing distance of ~57 cm. Presentation was controlled via MATLAB, using the Psychophysics Toolbox (version 3; Brainard 1997; Pelli 1997; Kleiner et al. 2007), running on an Apple Mac Mini computer.

Fig. 2 Examples of the different conditions used in the experiment. The distractor tool was positioned as if either interacting or not interacting with the target action recipient. Note that Related/Unrelated refers to whether the distractor tool is semantically/functionally related to the word label (here the label is "nut"). Adapted from Green and Hummel (2006)

Procedure

The procedure followed that of Green and Hummel (2006) with the addition of the diamond frame and task. Trials were self-initiated. Upon pressing a key, a cross was presented in the centre of the screen. Participants were required to maintain their fixation on the centre of the screen throughout the entire trial. After 750 ms, the cross was replaced by one of the diamond stimuli (see Fig. 1).
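The display parameters reported above (stimulus sizes in degrees of visual angle at a ~57 cm viewing distance, an 85 Hz refresh rate) imply two small conversions that any reimplementation needs. The original experiment was programmed in MATLAB with the Psychophysics Toolbox; the following is only an illustrative Python sketch of the underlying geometry and timing arithmetic, not the authors' code:

```python
import math

def deg_to_cm(deg, distance_cm=57.0):
    """On-screen extent (cm) of `deg` degrees of visual angle
    at the given viewing distance."""
    return 2 * distance_cm * math.tan(math.radians(deg) / 2)

def ms_to_frames(duration_ms, refresh_hz=85.0):
    """Nearest whole number of screen refreshes for a nominal duration;
    the duration actually displayed is frames / refresh_hz."""
    return round(duration_ms * refresh_hz / 1000.0)

# At ~57 cm viewing distance, 1 degree is almost exactly 1 cm on screen,
# which is why this distance is a common convention:
print(round(deg_to_cm(1.0), 3))    # -> 0.995 cm
print(round(deg_to_cm(2.3), 2))    # object stimuli: ~2.29 cm
print(round(deg_to_cm(13.12), 2))  # diamond frame: ~13.11 cm

# A nominal 50 ms exposure is 4.25 refreshes at 85 Hz, so on the CRT it is
# realised as 4 frames (~47.1 ms) or 5 frames (~58.8 ms), not exactly 50 ms.
print(ms_to_frames(50))            # -> 4
```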
In Experiment 1a, participants were told that they could completely ignore this stimulus. However, during Experiment 1b, participants were instructed to covertly attend to the gaps on two opposing sides of the diamond (e.g., upper left/lower right). The particular opposing sides to which they were to attend were given at the beginning of each block.

The diamond was shown by itself for 700 ms before being joined by a tool presented at fixation for a further 50 ms. We arrived at this timing after pilot testing, in order to allow participants to perform both tasks to a satisfactory level. Participants were told to ignore this stimulus. Then both the diamond and the tool disappeared for 50 ms before another object (the action recipient) was presented approximately 4.5° to the left or right of fixation. After 50 ms, this was replaced with a word label (24-pt Arial font) at fixation consisting of the name of an action recipient. Participants were given 2500 ms to respond, as quickly and as accurately as possible, whether this label matched the previously shown action recipient. Participants pressed either the "z" or "/?" keys to make their response, using the left and right hands, respectively. The mapping of each key to match and mismatch responses was counterbalanced between participants. In Experiment 1b, after their matching response, participants then made an unspeeded response to indicate which of the two gaps they had attended to was larger (Gap discrimination). The up arrow key was pressed for the upper gap, and the down arrow key for the lower gap. Both of these responses were made with the right hand.

Each attentional-load condition (No attentional-load condition in Experiment 1a; Low and High attentional-load conditions in Experiment 1b) consisted of four blocks. For the Low and High attentional-load conditions, in two of the blocks participants were required to attend to the lower-left/upper-right gaps of the diamond.
In the other two blocks, they attended to the upper-left/lower-right gaps of the diamond. The order of the Low and High attentional-load blocks was randomised. Each block began with 10 practice trials followed by 64 experimental trials. This resulted in a total of 256 experimental trials per attentional-load condition. Across these trials, we systematically varied the following factors in a fully crossed design: (1) whether the tool distractor and label were related (Related/Unrelated); (2) whether the tool distractor and action recipient target were presented as if interacting with each other (Interacting/Non-interacting); (3) whether the action recipient target was shown to the left or right of fixation (Left/Right); and (4) whether the label and action recipient target matched or not (Match/Mismatch). See Fig. 2 for a depiction of the different conditions.
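The fully crossed design described above can be made concrete with a short sketch. This is a hypothetical reconstruction: the factor names are paraphrased from the text, and the 16 repetitions per cell are inferred from the 256 trials divided over the 2 × 2 × 2 × 2 = 16 cells.

```python
import itertools
import random

# The four fully crossed factors described in the Procedure
# (level names paraphrased from the text).
FACTORS = {
    "relationship": ("related", "unrelated"),
    "alignment": ("interacting", "non-interacting"),
    "target_location": ("left", "right"),
    "label": ("match", "mismatch"),
}

def build_trial_list(n_trials=256, seed=None):
    """One attentional-load condition: every cell of the 2x2x2x2 design
    repeated equally often, then shuffled into a random trial order."""
    cells = list(itertools.product(*FACTORS.values()))  # 16 cells
    reps, remainder = divmod(n_trials, len(cells))
    assert remainder == 0, "trial count must divide evenly across cells"
    trials = cells * reps                               # 16 repetitions each
    random.Random(seed).shuffle(trials)
    return trials

trials = build_trial_list(seed=0)
print(len(trials))  # -> 256
```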

Analysis

Participants' performance on action recipient-name matching was assessed using d′ (a bias-free measure of sensitivity) as well as reaction time data from trials with correct responses. Note that the d′ measure incorporates performance across both the match and mismatch conditions. The d′ data from Experiment 1a (No attentional load) and Experiment 1b (Low and High attentional load) were each analysed using within-subjects ANOVAs with Relationship between the tool distractor and label (two levels: Related/Unrelated), Alignment between distractor and target (two levels: Interacting/Non-interacting), and Target Location (two levels: Left/Right) as factors, and with the additional factor of Load in Experiment 1b. Our effect of interest is greater sensitivity on trials where the tool and label are Related and the tool and action recipient are Interacting (i.e., the interaction between Relationship and Alignment), and how this in turn may interact with the side of target presentation and attentional load. Participants' performance in the gap discrimination task in Experiment 1b was analysed using an ANOVA with the same factors (Relationship, Alignment, Target Location) and the additional factor of Load (Low/High). Finally, reaction time was analysed using Relationship, Alignment, Target Location, and Load, as well as the additional factor of Matching (two levels: Match/Mismatch).

Experiment 1a

Results

A table detailing the Hits, False Alarms, d′ and RTs broken down for all conditions can be found in the Appendix. The following analyses focus on d′ and RT.

Sensitivity

Figure 3 shows d′ scores in the different conditions averaged across participants. We found a significant main effect of Relationship [F(1,23) = 4.31, p = .049, η² = .158], whereby matching accuracy was higher when the tool distractor was related to the word label than when it was unrelated.
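The d′ measure referred to above is standard signal-detection sensitivity: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch of the computation follows; note that the paper does not state what correction, if any, was applied to extreme rates, so none is shown here.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA), where z is the inverse of the
    standard normal CDF. Rates of exactly 0 or 1 must be adjusted
    beforehand (e.g. by 1/(2N)), since z is undefined there."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 95% hits on Match trials, 10% false alarms on Mismatch trials
print(round(d_prime(0.95, 0.10), 2))  # -> 2.93
```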
Importantly, there was a significant interaction between Relationship and Alignment [F(1,23) = 5.8, p = .024, η² = .203] and a three-way interaction between Relationship, Alignment, and Target Location [F(1,23) = 5.21, p = .032, η² = .185]. Post hoc tests revealed that when the tool distractor was related to the word label and the action recipient was located to the left of fixation, accuracy was higher when the tool was positioned as if interacting with the action recipient, compared to when the tool and recipient were not interacting (Related-Interacting d′: 3.11, Related-Non-interacting d′: 2.78, p < .001; see Fig. 3). Note that when the recipient target object is positioned to the left of fixation, the Related-Interacting condition depicts the most familiar and ready-for-action object configuration, with the tool handle positioned towards the right hand and pointing towards an appropriate action recipient (positioned on the left), whereas this familiar configuration is broken in the Related-Non-interacting condition (see Fig. 2). Interestingly, this benefit for interacting objects was not seen when the action recipient was located to the right, meaning that the tool was now positioned with the handle towards the left (Related-Interacting d′: 2.80, Related-Non-interacting d′: 2.81, p = .926). Furthermore, there were no differences in matching accuracy when the tool was unrelated to the word label, regardless of the location of the target action recipient (Left: Unrelated-Interacting d′: 2.71, Unrelated-Non-interacting d′: 2.81, p = .258; Right: Unrelated-Interacting d′: 2.72, Unrelated-Non-interacting d′: 2.67, p = .653).

Fig. 3 Mean sensitivity (d′) for matching of target object with word label in Experiment 1a. Error bars indicate ±1 SEM

Fig. 4 Mean reaction time (ms) for matching of target object with word label in Experiment 1a. Error bars indicate ±1 SEM

Reaction time

Reaction times for correct responses were significantly faster when the tool distractor was related to the word label than when it was unrelated [F(1,23) = 8.02, p = .01, η² = .267]; see Fig. 4. They were also faster when the word label matched the target object than when it did not [F(1,23) = 29.55, p < .001, η² = .573]. These effects were moderated by a significant interaction [F(1,23) = 9.06, p = .006, η² = .292]. Overall, the RT congruency effect (Match < Mismatch) was greater when the tool distractor was related to the word label (Match: 694 ms, Mismatch: 782 ms, p < .001) than when it was unrelated (Match: 731 ms, Mismatch: 780 ms, p = .005; see Fig. 4). There were no significant effects involving Alignment in the case of RTs.

Discussion

The results of this experiment broadly replicated those of Green and Hummel (2006), showing an identification advantage for the related-interacting condition, in which the tool and label were functionally related and the tool and the action recipient were positioned together for appropriate action. Our results also provide an important extension to Green and Hummel's findings. By separating the trials into those in which the action recipient appeared on the left versus the right side of the display, we ascertained that the related-interacting advantage only occurred if the action recipient was located in the left visual field. In this particular condition, the handle of the tool falls in the right visual field, where it is readily available to be grasped by the right hand.
Thus, this particular configuration is the most commonly experienced arrangement for a functionally interacting pair of objects and the one most conducive to acting on the recipient object with the tool. Given this, our results strongly support the idea that the tool and the action recipient form a perceptually integrated group by virtue of their co-occurrence throughout our experience with this object configuration (Riddoch et al. 2006). We will expand on the theoretical implications of this finding in the General discussion, after presenting the results of Experiment 1b.

We also found that reaction times were faster for match compared to mismatch responses, and this effect was stronger on trials in which the tool was related to the word (and the action recipient). Given that this reaction time advantage for related tools was found for both interacting and non-interacting trials, this pattern of results most likely reflects a general priming effect from the tool to the word label. This seems to be independent of the perceptual integration of the tool and action recipient, which occurs on the basis of the readiness for action between a tool and its action recipient. In Experiment 1b, we investigated whether this perceptual integration is disrupted by a concurrent, attentionally demanding perceptual discrimination.

Experiment 1b

Results

Gap discrimination task

Gap discrimination in the High attentional-load condition (84.5%) was worse than in the Low attentional-load condition (96.2%; F(1,23) = 23.13, p < .001, η² = .501), with accuracy similar to previous studies employing the same stimuli (Chong et al. 2008; Mattingley et al. 2006), confirming that the attentional-load manipulation was successful. No other significant effects or interactions were found.

Sensitivity

The d′ values for this experiment are depicted in Fig. 5.
As a first step, these were entered in a four-way repeated-measures ANOVA with the factors Attentional Load (Low/High), Relationship (Related/Unrelated), Alignment (Interacting/Non-interacting) and Target Location (Left/Right). Although there was no overall effect of Load, there were significant interactions between Load and Alignment [F(1,23) = 7.73, p = .011, η² = .252], Load, Relationship, and Alignment [F(1,23) = 6.86, p = .015, η² = .230], and Load, Alignment, and Target Location [F(1,23) = 15.52, p = .001, η² = .403]. Given these interactions, we conducted separate analyses for the Low and High attentional-load conditions, mirroring the analysis of Experiment 1a, with a view to investigating the fate of the related-interacting effect under differing load conditions.

In the Low attentional-load condition, we replicated the significant interaction between Relationship and Alignment [F(1,23) = 5.81, p = .024, η² = .202] seen in Experiment 1a without attentional load (see Fig. 5). Post hoc tests revealed that sensitivity was higher when related objects were depicted as interacting (d′: 3.17) than when the tool was facing away from the action recipient (d′: 3.00; p = .018; see Fig. 5). There was no difference in sensitivity when the tool and action recipient were unrelated (Interacting d′: 3.02,

Non-interacting d′: 3.03; p = .835). As observed in Experiment 1a, this related-interacting advantage was larger when the recipient was positioned to the left of fixation; however, the differences were not large enough to yield a three-way interaction with Target Location.

Fig. 5 Mean sensitivity (d′) for matching of target object with word label in each experimental condition for each attentional-load condition in Experiment 1b. Error bars indicate ±1 SEM

In marked contrast, the benefit for related-interacting trials was not present in the High attentional-load condition (see Fig. 5), suggesting a breakdown of the perceptual integration of related-and-interacting objects. There was a significant interaction between Alignment and Target Location [F(1,23) = 10.37, p = .004, η² = .311]. Participants showed significantly lower performance on trials in which the tool and action recipient objects appeared to interact and the recipient was located to the left of fixation (Interacting d′: 2.94, Non-interacting d′: 3.18; p = .001; see Fig. 5), but no difference between the interacting and non-interacting conditions when the action recipient was located to the right of fixation (Interacting d′: 3.14, Non-interacting d′: 3.04; p = .101). As can be seen in Fig. 5, this pattern of performance was the same whether the tool was related or unrelated to the word label.

Reaction time

Reaction times were significantly faster on trials where the word label matched the target object than when they did not match [F(1,23) = 30.74, p < .001, η² = .572], and there was a significant interaction between Matching and Relationship [F(1,23) = 29.51, p < .001, η² = .562]. As observed under No attentional load in Experiment 1a, the RT congruency effect was greater when the tool was related to the action recipient and label (Match: 706 ms, Mismatch: 809 ms, p < .001) than when it was unrelated (Match: 735 ms, Mismatch: 786 ms, p = .001; see Fig. 6). There was no significant effect of Load or any other interactions.

Fig. 6 Reaction time (ms) for matching of target object with word label in each experimental condition for each attentional-load condition in Experiment 1b. Error bars indicate ±1 SEM
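All of the F statistics reported above have 1 and 23 degrees of freedom, i.e. two-level within-subject contrasts across 24 participants. For such a contrast, the repeated-measures F is simply the squared paired-samples t, and partial η² follows as F/(F + df_error). A sketch under these assumptions, with illustrative data (not the study's):

```python
from math import sqrt
from statistics import mean, stdev

def rm_f_two_level(cond_a, cond_b):
    """Repeated-measures F for a two-level within-subject factor.

    Equivalent to the squared paired-samples t statistic.
    Returns (F, df1, df2, partial eta squared).
    """
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))  # paired t
    f = t ** 2
    df2 = n - 1
    return f, 1, df2, f / (f + df2)  # partial eta^2 = F / (F + df_error)

# Illustrative per-participant d' scores in two conditions (n = 24):
related = [3.1 + 0.02 * i for i in range(24)]
unrelated = [2.9 + 0.025 * i for i in range(24)]
f, df1, df2, eta_p2 = rm_f_two_level(related, unrelated)
print(f"F({df1},{df2}) = {f:.2f}, partial eta^2 = {eta_p2:.3f}")
```

This equivalence only holds for two-level factors; the higher-order interactions reported above require a full factorial repeated-measures ANOVA.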

Discussion

The first thing to note is that the gap discrimination task clearly engaged participants' attention, especially in the High attentional-load condition, where gap discrimination accuracy was considerably lower than in the Low attentional-load condition. This decrement cannot be attributed to differences in perceptual load or response requirements, as these were identical in the two conditions. The only aspect that differed was the difficulty of the gap discrimination; hence, attentional resources had to be dedicated to this task. It could be argued that the increased difficulty of the High attentional-load condition caused participants to adopt a different strategy, such as making eye movements to the difficult gaps, which could have impaired their ability to perceive the objects. We did not measure eye movements, so we cannot completely discount this possibility. However, we do not think it likely, given that there was no main effect of Load on either accuracy or reaction times. If subjects were making additional eye movements in the High attentional-load condition, or engaging extra cognitive processes, we would expect an overall slowing of their decisions in the object task and/or more errors overall. Instead, the effect of Load was apparent only in accuracy, and only at the level of specific interactions. We believe this is more consistent with attentional load interfering with specific aspects of the object identification task, as detailed below.
The results in the object verification task obtained under Low attentional load mirrored those obtained without any attentional load in Experiment 1a: name verification was better when the tool was functionally related to the word label and the tool and the action recipient were positioned to interact with each other. As in Experiment 1a, this advantage was effectively confined to trials in which the target object was presented on the left side, and consequently the tool was positioned with its handle pointing to the right, although in this experiment this was only a trend. The lack of a significant interaction with target location is most likely attributable to the overall higher accuracy in the Low attentional-load condition, probably as a result of practice (given that this condition always followed the No-load condition of Experiment 1a). This indicates that the perceptual integration of the tool and action recipient was maintained under divided attention when the attentional demands of the competing task were minimal.

A completely different pattern was obtained under High attentional load. Most importantly, the related-interacting advantage was no longer present, suggesting that loading attention interfered with the perceptual integration of the tool and the action recipient. Instead, the results revealed an interaction between the visual field in which the action recipient was positioned and whether the tool was interacting with it. This interaction is somewhat difficult to interpret, but appears to reflect better sensitivity for non-interacting object pairs relative to interacting pairs when the action recipient is presented on the left, and no difference in sensitivity between interacting and non-interacting pairs (or even a trend towards the opposite pattern) when the action recipient is presented on the right.
The most parsimonious summary of this pattern is that identification of the action recipient object is worse when the tool is positioned with its handle pointing towards the right (which happens for interacting pairs with the action recipient on the left, and for non-interacting pairs with the action recipient on the right; this can be readily appreciated in the top row of Fig. 2). This pattern of results can be interpreted in the following way. If the perceptual integration of the objects breaks down under high attentional load, the two objects are processed relatively independently and are, therefore, subject to attentional competition. When the tool distractor is positioned with its handle towards the right hand, it is processed more efficiently, owing to its readiness for action, and this gives the tool a competitive advantage over the action recipient, resulting in poorer name verification for the action recipient target. This explanation is consistent with the results obtained by Riddoch et al. (2003) in patients with extinction, who tend to show a bias towards the active, tool-like object when they fail to perceive both objects, as well as with similar results in healthy participants, who also demonstrate a bias towards processing the active object in functionally related object pairs (Roberts and Humphreys 2010, 2011a, b). The disadvantage suffered by the action recipient target under these conditions was the same whether the tool was related or unrelated to the word label, suggesting that it is not modulated by the semantic properties of the objects. Once the perceptual integration has broken down, objects seem to compete for attention regardless of their relationship.

As in Experiment 1a, participants had significantly faster RTs for match compared to mismatch responses, and this RT congruency effect was more pronounced on trials in which the tool was related to the word label.
This pattern of results was the same for Low and High attentional-load conditions, suggesting that the tool picture primed semantically congruent responses regardless of the attentional load placed by the concurrent task. This provides further support for our proposal that high attentional load interferes specifically with the perceptual integration of related-interacting object configurations, rather than affecting the semantic representation (including the action-related properties) of the tool itself.

General discussion

In this study, we investigated the role of attention in the perceptual integration of functionally associated objects. We found that under No or Low attentional load, matching a word label to a target object was facilitated when the target was preceded by a related tool correctly positioned to interact with it, but this facilitation was absent under High attentional load. Our basic finding of better identification for related-interacting object pairs replicates results obtained by Green and Hummel (2006) using a similar paradigm, and is consistent with findings of attentional enhancement for related-interacting object pairs in patients with visual extinction (Riddoch et al. 2003, 2006) and in normal participants (Adamo and Ferber 2009; McNair and Harris 2014; Roberts and Humphreys 2011a, b).

The present results provide an important extension to Green and Hummel's (2006) findings by showing that the related-interacting advantage was only present when the action recipient was positioned to the left of fixation. In our experiment, when the action recipient is located in the left visual field, the handle of the tool distractor extends into the right visual field and towards the right hand. Some recent studies have demonstrated a right visual-field advantage in processing action objects (Garcea et al. 2012; Verma and Brysbaert 2011), consistent with faster access to a left-hemisphere-lateralised tool network [see Johnson-Frey (2004) for a review]. Given that the handle is the most salient part of manipulable objects (e.g., Riggio et al. 2006), having the handle extend into the right visual field could contribute to the lateralised effect we observed. This asymmetry favouring situations in which the tool's handle points to the right is also consistent with findings of spatial compatibility effects for graspable objects (e.g., Symes et al. 2007).
These effects occur when the handles of manipulable objects are presented such that they are more accessible to one hand than the other, resulting in an enhanced response from the compatible hand, in terms of both reaction times and motor-evoked potentials (Buccino et al. 2009; Tucker and Ellis 2004). This has been shown not to be merely a Simon effect (i.e., faster right-hand responses for right visual-field stimuli and vice versa), but instead to derive from the functional properties of the object (Buccino et al. 2009; Makris et al. 2011; Pellicano et al. 2010). For instance, it does not hold if objects have broken handles that still extend towards the left or right but no longer afford a functional grasp (Buccino et al. 2009). Symes et al. (2007) found that the spatial compatibility effect is stronger when the handle is in a position to be grasped by the right hand (i.e., pointing to the right). Even more relevant to the present paradigm, Yoon et al. (2010) found that when participants had to determine whether two objects were functionally related, responses were faster when the active object in the pair (i.e., the tool) was positioned for use with the right hand. A similar result was found when subjects had to identify the objects in such pairings (Roberts and Humphreys 2011a). All this evidence points strongly to an enhancement of object identification when action objects are presented as ready for action, both by themselves and in a functionally related group.

It could be argued that the advantage for related-interacting objects is due to the tool acting as a simple spatial cue directing attention towards the location of the subsequent, apparently interacting, action recipient, coupled with a semantic priming effect from the tool to the word label. However, the pattern of results here, as well as in other studies, is not consistent with this argument. Firstly, we did not see any evidence that the alignment of the objects (interacting vs.
non-interacting) or the relationship between the tool and the label independently affected performance (see Figs. 3, 5). Rather, identification accuracy was higher only when both of these conditions were satisfied and, furthermore, only when the tool was positioned to be used by the right hand. Secondly, while it has been shown that an unattended tool can act as a spatial cue towards a subsequent target recipient (Roberts and Humphreys 2011b), this was a slow-developing process that emerged approximately 400 ms post-cue onset: far too slow to produce effective cueing in the paradigm we employed. Moreover, this cueing effect was insensitive to the relationship between the tool and the recipient (i.e., it occurred for unrelated tool-recipient pairs as well). Therefore, we would argue that the related-interacting advantage, which we found to be specific to situations in which the object pair is ideally positioned for action, provides strong support for the idea that the tool and action recipient are perceptually integrated by virtue of the action relationship afforded by their co-occurrence in this familiar configuration.

The second important finding of our study is that this perceptual integration was disrupted under High attentional load, indicating that attention is necessary for perceptually integrating the objects into a functional unit. A related finding was reported by Roberts and Humphreys (2011a), who found that redirecting spatial attention away from the object pair as a whole, by cueing only a single object or an empty location, reduced the accuracy advantage for naming the passive object (the action recipient, in our terminology) when the objects were related and shown as interacting. Roberts and Humphreys concluded that altering the distribution of spatial attention disrupted the grouping between the objects. Our results provide complementary evidence that reducing the availability

of attentional resources, even without changing the spatial distribution of attention (which was the same in our Low and High attentional-load conditions), has a similar effect.

Interestingly, there was no effect of attentional load on reaction times for the match/mismatch decision. Across all three attentional-load conditions, we found a greater reaction time advantage for match compared to mismatch responses when the tool was related to the matching word/action recipient. This most likely represents a semantic congruency effect, as it was not modulated by the spatial configuration or positioning of the objects. That this semantic congruency was unaffected by attentional load suggests that the tool distractor was still processed sufficiently to extract semantic information and prime the subsequent word, even under high attentional load. Interestingly, the results obtained in the High attentional-load condition suggest that processing of the motor properties of the tool was also intact, because a ready-for-action tool (i.e., with the handle pointing towards the right hand) seemed to bias attention towards itself, at the expense of the action recipient. This finding is consistent with the proposal by Creem and Proffitt (2001) that motor properties form part of the semantic representation of a tool. Therefore, the present results indicate that loading attention interfered specifically with the perceptual integration of functionally related objects, rather than with the semantic representations of the individual objects themselves. This conclusion is strengthened by the fact that the perceptual integration depends on perceptual factors, such as the spatial configuration and specific positioning of the objects, while the semantic priming effect is unaffected by these factors.
At first blush, our finding that reduced attention impairs the perceptual integration of functional object pairs appears to contradict findings from visual extinction patients, who demonstrate recovery from extinction with functionally related, interacting action object pairs. However, there is now extensive evidence that perceptual and semantic processing of extinguished objects is relatively intact (e.g., Baylis et al. 1993; Vuilleumier and Sagiv 2001) and that the impairment in these patients occurs at the level of resolving attentional and response competition (Rafal et al. 2002). Thus, if the initial perception of the objects is largely intact, the familiar configuration of a functional group could reduce the competition, as these items are represented as an integrated perceptual unit rather than as two objects competing for attention. In our experiment, on the other hand, depleting attentional resources during initial perceptual encoding appears to prevent the perceptual integration from occurring in the first place.

A final point that merits discussion is an aspect of Green and Hummel's (2006) results that was not apparent in the present experiment. Across all four of their experiments, Green and Hummel found that, alongside the identification advantage for related-interacting object pairs, participants also showed poorer performance for unrelated-interacting objects compared to when the objects were not interacting. This was driven primarily by elevated false alarm rates in the unrelated-interacting condition. Green and Hummel were unable to provide a satisfactory explanation for this anomalous result. We observed no such effect in our experiment, which leads us to suspect that it may have been peculiar to the stimuli used in their study.
Our experiment used a more specific definition of a functional group, one pertaining to the action exerted by the leading object on a second object that is the usual recipient of that action. It has been demonstrated that there is a dissociation between objects that simply co-occur in a familiar spatial configuration and those which additionally have a direct action relationship (Humphreys et al. 2006). The relationship between our stimuli was therefore more tightly focussed on their joint action, and this may be responsible for the absence of any effects in the unrelated conditions, as these items did not share any action associations.

In conclusion, the present study provides evidence that pairs of functionally related objects are perceptually integrated when they occur in a configuration that depicts the functional interaction between them, and that this integration is disrupted under high attentional load. These results demonstrate that attention plays an important role in integrating functionally related objects into a meaningful configuration.

Acknowledgments This research was supported by ARC Grant DP0879206 and ARC Future Fellowship FT0992123 awarded to IMH.

Appendix

See Table 2.

Table 2 Accuracy (Hits, False alarms, d′) measures and reaction times across all conditions. Each row gives overall accuracy (%), hit and false-alarm (FA) rates, d′ and mean RT (ms), followed by accuracy (%) and RT (ms) separately for Mismatch and Match trials

No Load
Unrelated, Non-interacting
  Left:  Acc 90.9 %, Hits .881, FA .080, d′ 2.81, RT 766 ms (Mismatch: 93.2 %, 794 ms; Match: 88.6 %, 739 ms)
  Right: Acc 89.9 %, Hits .875, FA .091, d′ 2.67, RT 763 ms (Mismatch: 91.8 %, 792 ms; Match: 88.0 %, 734 ms)
Unrelated, Interacting
  Left:  Acc 90.2 %, Hits .889, FA .104, d′ 2.71, RT 744 ms (Mismatch: 90.7 %, 769 ms; Match: 89.7 %, 718 ms)
  Right: Acc 90.2 %, Hits .893, FA .105, d′ 2.72, RT 750 ms (Mismatch: 90.5 %, 767 ms; Match: 89.9 %, 733 ms)
Related, Non-interacting
  Left:  Acc 91.3 %, Hits .906, FA .093, d′ 2.78, RT 740 ms (Mismatch: 91.6 %, 783 ms; Match: 91.0 %, 697 ms)
  Right: Acc 91.3 %, Hits .898, FA .095, d′ 2.81, RT 745 ms (Mismatch: 91.6 %, 796 ms; Match: 91.0 %, 694 ms)
Related, Interacting
  Left:  Acc 94.2 %, Hits .939, FA .085, d′ 3.11, RT 732 ms (Mismatch: 93.1 %, 776 ms; Match: 95.4 %, 688 ms)
  Right: Acc 91.3 %, Hits .909, FA .099, d′ 2.80, RT 733 ms (Mismatch: 91.0 %, 771 ms; Match: 91.6 %, 696 ms)

Low Load
Unrelated, Non-interacting
  Left:  Acc 92.7 %, Hits .899, FA .069, d′ 2.97, RT 759 ms (Mismatch: 94.5 %, 786 ms; Match: 90.9 %, 732 ms)
  Right: Acc 93.8 %, Hits .918, FA .065, d′ 3.09, RT 760 ms (Mismatch: 94.7 %, 780 ms; Match: 93.0 %, 740 ms)
Unrelated, Interacting
  Left:  Acc 92.8 %, Hits .914, FA .084, d′ 2.99, RT 753 ms (Mismatch: 93.0 %, 782 ms; Match: 92.7 %, 724 ms)
  Right: Acc 93.7 %, Hits .912, FA .060, d′ 3.06, RT 748 ms (Mismatch: 95.6 %, 777 ms; Match: 91.9 %, 718 ms)
Related, Non-interacting
  Left:  Acc 91.8 %, Hits .911, FA .100, d′ 2.91, RT 753 ms (Mismatch: 91.3 %, 796 ms; Match: 92.3 %, 711 ms)
  Right: Acc 93.8 %, Hits .933, FA .085, d′ 3.08, RT 762 ms (Mismatch: 92.9 %, 808 ms; Match: 94.8 %, 715 ms)
Related, Interacting
  Left:  Acc 95.2 %, Hits .954, FA .083, d′ 3.23, RT 758 ms (Mismatch: 93.0 %, 814 ms; Match: 97.4 %, 703 ms)
  Right: Acc 94.0 %, Hits .936, FA .083, d′ 3.10, RT 738 ms (Mismatch: 93.0 %, 796 ms; Match: 95.0 %, 681 ms)

High Load
Unrelated, Non-interacting
  Left:  Acc 94.1 %, Hits .924, FA .063, d′ 3.14, RT 784 ms (Mismatch: 95.1 %, 796 ms; Match: 93.8 %, 743 ms)
  Right: Acc 92.7 %, Hits .905, FA .078, d′ 2.93, RT 728 ms (Mismatch: 93.1 %, 771 ms; Match: 91.6 %, 713 ms)
Unrelated, Interacting
  Left:  Acc 93.9 %, Hits .898, FA .072, d′ 2.91, RT 795 ms (Mismatch: 94.0 %, 790 ms; Match: 90.5 %, 757 ms)
  Right: Acc 92.4 %, Hits .928, FA .076, d′ 3.10, RT 735 ms (Mismatch: 93.8 %, 801 ms; Match: 94.3 %, 713 ms)
Related, Non-interacting
  Left:  Acc 92.8 %, Hits .937, FA .070, d′ 3.20, RT 815 ms (Mismatch: 94.6 %, 819 ms; Match: 95.4 %, 712 ms)
  Right: Acc 96.2 %, Hits .950, FA .094, d′ 3.09, RT 703 ms (Mismatch: 91.0 %, 812 ms; Match: 97.0 %, 694 ms)
Related, Interacting
  Left:  Acc 92.9 %, Hits .920, FA .085, d′ 2.92, RT 814 ms (Mismatch: 92.1 %, 808 ms; Match: 92.9 %, 712 ms)
  Right: Acc 93.9 %, Hits .934, FA .078, d′ 3.15, RT 707 ms (Mismatch: 93.8 %, 819 ms; Match: 94.8 %, 703 ms)

References

Adamo M, Ferber S (2009) A picture says more than a thousand words: behavioural and ERP evidence for attentional enhancements due to action affordances. Neuropsychologia 47:1600–1608
Allport DA (1968) Phenomenal simultaneity and the perceptual moment hypothesis. Br J Psychol 59:395–406
Almeida J, Mahon BZ, Nakayama K, Caramazza A (2008) Unconscious processing dissociates along categorical lines. Proc Natl Acad Sci USA 105:15214–15218
Almeida J, Mahon BZ, Caramazza A (2010) The role of the dorsal visual processing stream in tool identification. Psychol Sci 21(6):772–778
Baylis GC, Driver J, Rafal RD (1993) Visual extinction and stimulus repetition. J Cogn Neurosci 5(4):453–466
Brainard DH (1997) The psychophysics toolbox. Spat Vis 10:433–436
Buccino G, Sato M, Cattaneo L, Rodà F, Riggio L (2009) Broken affordances, broken objects: a TMS study. Neuropsychologia 47:3074–3078
Chong TT-J, Williams MA, Cunnington R, Mattingley JB (2008) Selective attention modulates inferior frontal gyrus activity during action observation. Neuroimage 40:298–307
Creem SH, Proffitt DR (2001) Grasping objects by their handles: a necessary interaction between cognition and action. J Exp Psychol Hum Percept Perform 27(1):218–228
Di Lollo V (1980) Temporal integration in visual memory. J Exp Psychol Gen 109(1):75–97
Garcea FE, Almeida J, Mahon BZ (2012) A right visual field advantage for visual processing of manipulable objects. Cogn Affect Behav Neurosci 12:813–825
Gilchrist ID, Humphreys GW, Riddoch MJ (1996) Grouping and extinction: evidence for low-level modulation of selection. Cogn Neuropsychol 13(8):1223–1249
Green C, Hummel JE (2006) Familiar interacting object pairs are perceptually grouped. J Exp Psychol Hum Percept Perform 32(5):1107–1119
Hogben JH, Di Lollo V (1974) Perceptual integration and perceptual segregation of brief visual stimuli. Vis Res 14:1059–1069
Humphreys GW, Riddoch MJ (2007) How to define an object: evidence from the effects of action on perception and attention. Mind Lang 22(5):534–547
Humphreys GW, Riddoch MJ, Fortt H (2006) Action relations, semantic relations, and familiarity of spatial position in Balint's syndrome: crossover effects on perceptual report and on localization. Cogn Affect Behav Neurosci 6(3):236–245
Johnson-Frey SH (2004) The neural bases of complex tool use in humans. Trends Cogn Sci 8(2):71–78
Kimchi R, Razpurker-Apfeld I (2004) Perceptual grouping and attention: not all groupings are equal. Psychon Bull Rev 11(4):687–696
Kimchi R, Yeshurun Y, Cohen-Savransky A (2007) Automatic, stimulus-driven attentional capture by objecthood. Psychon Bull Rev 14(1):166–172
Kleiner M, Brainard D, Pelli D (2007) What's new in Psychtoolbox-3? Tutorial session presented at the 30th European Conference on Visual Perception, Arezzo, Italy
Makris S, Hadar AA, Yarrow K (2011) Viewing objects and planning actions: on the potentiation of grasping behaviours by grasping objects. Brain Cogn 77(2):257–264
Mattingley JB, Payne JM, Rich A (2006) Attentional load attenuates synaesthetic priming effects in grapheme-colour synaesthesia. Cortex 42(2):213–221
McNair NA, Harris IM (2014) The conceptual action relationship between a tool and its action recipient modulates their joint perception. Atten Percept Psychophys 76:214–229
Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat Vis 10:437–442
Pellicano A, Iani C, Borghi AM, Rubichi S, Nicoletti R (2010) Simon-like and functional affordance effects with tools: the effects of object perceptual discrimination and object action state. Q J Exp Psychol 63(11):2190–2201
Pomerantz JR (2003) Wholes, holes, and basic features in vision. Trends Cogn Sci 7(11):471–473