Meaning-based guidance of attention in scenes as revealed by meaning maps


SUPPLEMENTARY INFORMATION
Letters DOI: 10.1038/s41562-017-0208-0
In the format provided by the authors and unedited.

Meaning-based guidance of attention in scenes as revealed by meaning maps

John M. Henderson 1,2* and Taylor R. Hayes 2

1 Department of Psychology, University of California, Davis, CA 95618, USA. 2 Center for Mind and Brain, University of California, Davis, CA 95618, USA. *e-mail: johnhenderson@ucdavis.edu

Nature Human Behaviour | www.nature.com/nathumbehav

2017 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.


Supplementary Methods

Analyses Excluding Map Centers

To ensure that the advantage of meaning maps over saliency maps in predicting attention was not due to a center bias advantage for the meaning maps, we conducted a supplementary set of analyses in which the data from the central 3 degrees of each map were removed from consideration. Differences in map success in this analysis could therefore not be attributed to differences in the ability of meaning maps to predict central fixations. The results of these analyses were qualitatively and quantitatively very similar to those of the complete analyses.

Supplementary Figure 1 shows the correlation of meaning and salience for each scene. On average across all scenes the correlation was .81 (SD = .07). A one-sample t-test confirmed that the correlation was significantly greater than zero, t(39) = 69.5, p < .001, 95% CI [.79, .84]. Meaning and salience also each accounted for unique variance (i.e., 34% of the variance was not shared).

Supplementary Figure 2 presents the linear correlation data used to assess the degree to which meaning maps and saliency maps accounted for shared and unique variance in the attention map data for each scene. Each data point shows the R² value for the prediction maps (meaning and saliency) and the observed attention maps for saliency (blue) and meaning (red). The top of Supplementary Figure 2 shows the squared linear correlations. On average across the 40 scenes, meaning accounted for 52% of the variance in fixation density (M = .52, SD = .11) and saliency accounted for 37% of the variance in fixation density (M = .37, SD = .13). A two-tailed t-test revealed this difference was statistically significant, t(78) = 5.53, p < .001, 95% CI [.09, .20]. To examine the unique variance in attention explained by meaning and salience when controlling for their shared variance, we computed squared semi-partial correlations.
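The squared linear and squared semi-partial correlations used throughout these analyses can be sketched as below. This is a minimal illustration, not the authors' code; the flattened map vectors and function names are hypothetical.

```python
import numpy as np

def squared_linear(attention, prediction):
    """Squared Pearson correlation (R^2) between an attention map
    and a prediction map, each flattened to a 1-D vector."""
    r = np.corrcoef(attention, prediction)[0, 1]
    return r ** 2

def squared_semipartial(attention, prediction, other):
    """Squared semi-partial correlation: the variance in `attention`
    uniquely explained by `prediction` after removing the variance
    that `prediction` shares with the competing map `other`."""
    r_ap = np.corrcoef(attention, prediction)[0, 1]
    r_ao = np.corrcoef(attention, other)[0, 1]
    r_po = np.corrcoef(prediction, other)[0, 1]
    sp = (r_ap - r_ao * r_po) / np.sqrt(1.0 - r_po ** 2)
    return sp ** 2
```

For each scene, `squared_semipartial(attn, meaning, saliency)` would then give the unique variance credited to meaning, and swapping the last two arguments the unique variance credited to saliency.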
These correlations, shown in the bottom of Supplementary Figure 2, revealed that across the 40 scenes, meaning captured more than 4 times as much unique variance (M = .19, SD = .10) as saliency (M = .04, SD = .04). A two-tailed t-test confirmed that this difference was statistically significant, t(78) = 8.27, p < .001, 95% CI [.11, .18]. These results confirm those of the complete analysis and indicate that meaning was better able than salience to explain the distribution of attention over scenes even when scene centers were excluded.

Supplementary Figure 3 shows the temporal time-step analyses with the map centers removed. Linear correlation and semi-partial correlation analyses were conducted as in the main time-step analyses, based on a series of attention maps generated from each sequential eye fixation (1st, 2nd, 3rd, etc.) in each scene. Using the same testing and false discovery rate correction as in the main analyses, all 38 time points were significantly different in both the linear and semi-partial analyses (FDR < .05). In the linear correlation analysis (top of Supplementary Figure 3), meaning accounted for 29.8%, 32.2%, and 31.0% of the variance in the first 3 fixations, whereas salience accounted for only 9.6%, 15.7%, and 17.6% of the variance in the first 3 fixations. When controlling for the correlation between the two prediction maps with semi-partial correlations, the advantage for the meaning maps observed in the overall analyses was also found to hold across time steps, as shown in the bottom of Supplementary Figure 3, with meaning accounting for 23.3%, 21.6%, and 17.8% of the unique variance in the first 3 fixations, whereas salience accounted for 3.3%, 5.0%, and 4.4% of the unique variance in the first 3 fixations, respectively.

Replication with Aesthetic Judgment Task

To ensure that the observed results replicate over viewing instruction, we ran a second experiment using twelve of the original scenes under two viewing instructions. The twelve scenes were selected at random before the results of the main experiment were known.
Subjects were instructed to memorize the scenes (memorization task) or to indicate how much they liked each scene on a 1-3 scale (aesthetic judgment task). Forty-six subjects viewed the twelve scenes, with each subject seeing six scenes in each of the instruction conditions. Each subject saw each scene once, with assignment of scene to task counterbalanced across subjects, so the data for each scene in each condition were based on 23 subjects. Order of task

was counterbalanced across subjects. Scenes were each viewed for twelve seconds, as in the main experiment. Attention maps generated from these subjects were then compared to the meaning and saliency maps as described in the main experiment.

The results for the memorization and aesthetic judgment tasks are shown in Supplementary Figures 4 and 5, respectively. The new data in each figure are shown as individual data points superimposed on the data figures from the original experiment. Each data point shows the R² value for each prediction map (meaning and saliency) and the observed attention maps.

The top panels of Supplementary Figures 4 and 5 show the squared linear correlations. For the memorization task, on average across the 12 scenes, meaning accounted for 52% of the variance in fixation density (M = .52, SD = .12) and saliency accounted for 30% of the variance in fixation density (M = .30, SD = .12). A two-tailed t-test revealed this difference was statistically significant, t(22) = 4.57, p < .001, 95% CI [.12, .33]. For the aesthetic judgment task, on average across the 12 scenes, meaning accounted for 57% of the variance in fixation density (M = .57, SD = .09) and saliency accounted for 30% of the variance in fixation density (M = .30, SD = .12). A two-tailed t-test revealed this difference was statistically significant, t(22) = 6.27, p < .001, 95% CI [.18, .37].

The bottom panels of Supplementary Figures 4 and 5 show the squared semi-partial correlations, which examine the unique variance in attention explained by meaning and salience when controlling for their shared variance. For the memorization task, these correlations revealed that across the 12 scenes, meaning captured more than 20 times as much unique variance (M = .24, SD = .10) as saliency (M = .01, SD = .02). A two-tailed t-test revealed this difference was statistically significant, t(22) = 7.5, p < .001, 95% CI [.16, .28].
For the aesthetic judgment task, across the 12 scenes meaning captured more than 25 times as much unique variance (M = .29, SD = .11) as saliency (M = .01, SD = .02). A two-tailed t-test confirmed that this difference was statistically significant, t(22) = 8.35, p < .001, 95% CI [.21, .34]. These results confirm those of the main experiment and indicate that meaning was better able than salience to explain the distribution of attention over scenes under both viewing instructions.
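The degrees of freedom reported in these comparisons (t(78) for 40 scenes, t(22) for 12 scenes) match an independent-samples t-test with pooled variance, where df = 2n - 2. A minimal sketch of that test follows, under the assumption that this is the comparison performed; the function name is illustrative and the data passed in would be the per-scene R² values for meaning and saliency.

```python
import math

def independent_t(a, b):
    """Two-tailed independent-samples t statistic with pooled variance,
    df = len(a) + len(b) - 2, as the reported df values suggest."""
    n1, n2 = len(a), len(b)
    m1 = sum(a) / n1
    m2 = sum(b) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```

Comparing the 40 per-scene meaning R² values against the 40 saliency R² values would then yield a t statistic with 78 degrees of freedom, consistent with the values reported above.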

Supplementary Figures

Supplementary Figure 1. Correlation between saliency and meaning maps excluding map centers. The line plot shows the correlation between saliency and meaning maps for each scene. The scatter box plot on the right shows the corresponding grand mean (black horizontal line), 95% confidence intervals (colored box), and 1 standard deviation (black vertical line) across all 40 scenes.

Supplementary Figure 2. Squared linear correlation and semi-partial correlation by scene and across all scenes excluding map centers. The line plots show the linear correlation (top) and semi-partial correlation (bottom) between fixation density and meaning (red) and salience (blue) by scene. The scatter box plots on the right show the corresponding grand mean (black horizontal line), 95% confidence intervals (colored box), and 1 standard deviation (black vertical line) for meaning and salience across all 40 scenes.

Supplementary Figure 3. Squared linear correlation and squared semi-partial correlation as a function of fixation number excluding map centers. The top panel shows the squared linear correlation between fixation density and meaning (red) and salience (blue) as a function of fixation order across all 40 scenes. The bottom panel shows the corresponding semi-partial correlation as a function of fixation order across all 40 scenes. Error bars represent the standard error of the mean.

Supplementary Figure 4. Squared linear correlation and semi-partial correlation by scene and across scenes for the memorization condition of the replication experiment (dark symbols) superimposed on the data from the original experiment (light lines). The plots show the linear correlation (top) and semi-partial correlation (bottom) between fixation density and meaning (red) and salience (blue) by scene. The scatter box plots on the right show the corresponding grand mean (black horizontal line), 95% confidence intervals (colored box), and 1 standard deviation (black vertical line) for meaning and salience across all 12 scenes.

Supplementary Figure 5. Squared linear correlation and semi-partial correlation by scene and across scenes for the aesthetic judgment condition of the replication experiment (dark symbols) superimposed on the data from the original experiment (light lines). The plots show the linear correlation (top) and semi-partial correlation (bottom) between fixation density and meaning (red) and salience (blue) by scene. The scatter box plots on the right show the corresponding grand mean (black horizontal line), 95% confidence intervals (colored box), and 1 standard deviation (black vertical line) for meaning and salience across all 12 scenes.

Supplementary Figure 6. All scenes and corresponding attention, meaning, and saliency maps. Figure by Henderson & Hayes, 2017; available at http://dx.doi.org/10.6084/m9.figshare.536572 under a CC-BY 4.0 license.