The "Aha! moment: How prior knowledge helps disambiguate ambiguous information. Alaina Baker. Submitted to the Department of Psychology

Similar documents
the remaining half of the arrays, a single target image of a different type from the remaining

(SAT). d) inhibiting automatized responses.

SENSATION AND PERCEPTION KEY TERMS

Differences of Face and Object Recognition in Utilizing Early Visual Information

The Effects of Social Reward on Reinforcement Learning. Anila D Mello. Georgetown University

{djamasbi, ahphillips,

Consciousness The final frontier!

The Color of Similarity

Running head: EFFECTS OF EMOTION ON TIME AND NUMBER 1. Fewer Things, Lasting Longer: The Effects of Emotion on Quantity Judgments

Introduction to Computational Neuroscience

UBC Social Ecological Economic Development Studies (SEEDS) Student Report

Test review. Comprehensive Trail Making Test (CTMT) By Cecil R. Reynolds. Austin, Texas: PRO-ED, Inc., Test description

Phil 490: Consciousness and the Self Handout [16] Jesse Prinz: Mental Pointing Phenomenal Knowledge Without Concepts

Neurophysiology and Information

(Visual) Attention. October 3, PSY Visual Attention 1

Supplementary experiment: neutral faces. This supplementary experiment had originally served as a pilot test of whether participants

Conflict-Monitoring Framework Predicts Larger Within-Language ISPC Effects: Evidence from Turkish-English Bilinguals

Chapter 6. Attention. Attention

Running Head: TRUST INACCURATE INFORMANTS 1. In the Absence of Conflicting Testimony Young Children Trust Inaccurate Informants

Frank Tong. Department of Psychology Green Hall Princeton University Princeton, NJ 08544

Experimental Design I

CRITICALLY APPRAISED PAPER (CAP)

Optimal Flow Experience in Web Navigation

Views of autistic adults on assessment in the early years

Examining Effective Navigational Learning Strategies for the Visually Impaired

Today s Agenda. Human abilities Cognition Review for Exam1

Sensation and Perception

Augmented Cognition to enhance human sensory awareness, cognitive functioning and psychic functioning: a research proposal in two phases

Are In-group Social Stimuli more Rewarding than Out-group?

Supplemental Materials for Learning absolute meaning from variable exemplars. 1. Additional analyses of participants responses in Experiments 1 and 2

Optimal exploration strategies in haptic search

The Heart Wants What It Wants: Effects of Desirability and Body Part Salience on Distance Perceptions

PERSON PERCEPTION September 25th, 2009 : Lecture 5

Competing Frameworks in Perception

Competing Frameworks in Perception

Black 1 White 5 Black

Admission Test Example. Bachelor in Law + Bachelor in Global Governance - BIG

The Effects of Voice Pitch on Perceptions of Attractiveness: Do You Sound Hot or Not?

The Clock Ticking Changes Our Performance

Social Psychology of Networks: Influence of Emotion on Perception of Personal and. Professional Networks. Sara B. Soderstrom. Northwestern University

The Effects of Action on Perception. Andriana Tesoro. California State University, Long Beach

Ingredients of Difficult Conversations

New Mexico TEAM Professional Development Module: Autism

Orientation Specific Effects of Automatic Access to Categorical Information in Biological Motion Perception

SENSORY FUNCTIONING CHAPTER 44

UNIT. Experiments and the Common Cold. Biology. Unit Description. Unit Requirements

Eye movements, recognition, and memory

The Wellbeing Course. Resource: Mental Skills. The Wellbeing Course was written by Professor Nick Titov and Dr Blake Dear

HTS Report EPS. Emotional Processing Scale. Technical Report. Anna Patient ID Date 29/09/2016. Hogrefe Ltd, Oxford

Modeling the Influence of Situational Variation on Theory of Mind Wilka Carvalho Mentors: Ralph Adolphs, Bob Spunt, and Damian Stanley

Prof. Greg Francis 7/31/15

HUMANITIES 001: CREATIVE MINDS W E E K 3

HSPC/IRB Description of Research Form (For research projects involving human participants)

Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification. and. Evidence for a Face Superiority Effect. Nila K Leigh

Supplementary Material for The neural basis of rationalization: Cognitive dissonance reduction during decision-making. Johanna M.

Schizophrenia. This factsheet provides a basic description of schizophrenia, its symptoms and the treatments and support options available.

Culture Differences in an Inattentional Blindness Study

Lecture 2.1 What is Perception?

Cultural Differences in Cognitive Processing Style: Evidence from Eye Movements During Scene Processing

Theoretical Neuroscience: The Binding Problem Jan Scholz, , University of Osnabrück

Affective Priming: Valence and Arousal

SELECTIVE ATTENTION AND CONFIDENCE CALIBRATION

Viewpoint dependent recognition of familiar faces

The Top Seven Myths About Hypnosis And the real truth behind them!

Statistics Anxiety among Postgraduate Students

EXECUTIVE FUNCTIONING AND GRADE POINT AVERAGE IN COLLEGE STUDENTS. Keli Fine

AMERICAN JOURNAL OF PSYCHOLOGICAL RESEARCH

FAILURES OF OBJECT RECOGNITION. Dr. Walter S. Marcantoni

Evaluating the Evidence for Paranormal Phenomena

Intelligent Object Group Selection

Everyday Problem Solving and Instrumental Activities of Daily Living: Support for Domain Specificity

Running head: EFFECTS OF COLOR, CONGRUENCY AND INTERFERENCE

Task Preparation and the Switch Cost: Characterizing Task Preparation through Stimulus Set Overlap, Transition Frequency and Task Strength

Why is dispersion of memory important*

!!!!!!! !!!!!!!!!!!!

Laura N. Young a & Sara Cordes a a Department of Psychology, Boston College, Chestnut

(In)Attention and Visual Awareness IAT814

The Role of Modeling and Feedback in. Task Performance and the Development of Self-Efficacy. Skidmore College

The Helping Relationship

Introduction to PSYCHOLOGY

Visual Context Dan O Shea Prof. Fei Fei Li, COS 598B

IT S A WONDER WE UNDERSTAND EACH OTHER AT ALL!

Interaction Between Social Categories in the Composite Face Paradigm. Wenfeng Chen and Naixin Ren. Chinese Academy of Sciences. Andrew W.

Your Safety System - a User s Guide.

ENGAGE: Level of awareness activity

Perceived Stress Factors and Academic Performance of the Sophomore IT Students of QSU Cabarroguis Campus

Sensation and Perception

Testing the Persuasiveness of the Oklahoma Academy of Science Statement on Science, Religion, and Teaching Evolution

CHAPTER THIRTEEN Managing Communication

PSYCHOLOGY 300B (A01) One-sample t test. n = d = ρ 1 ρ 0 δ = d (n 1) d

ID# Exam 1 PS 325, Fall 2004

Practice Test Questions

Myth One: The Scientific Method

Running head: EFFECT OF HIGH ATTRACTIVENESS ON PERCEIVED INTELLIGENCE 1

Fundamentals of Cognitive Psychology, 3e by Ronald T. Kellogg Chapter 2. Multiple Choice

2. includes facts about situations and people, and includes how we categorize, judge, and infer, solve problems, and perform actions.

Experimental Testing of Intrinsic Preferences for NonInstrumental Information

The Effects of Color, Congruency and Distractors on Short Term Memory. Jenny Braun. Hanover College

Effects of Cognitive Load on Processing and Performance. Amy B. Adcock. The University of Memphis

Cue Saliency and Age as Factors Affecting Performance in a Card Sorting Task

Transcription:

The "Aha!" moment: How prior knowledge helps disambiguate ambiguous information

Alaina Baker

Submitted to the Department of Psychology of Northeastern University for the degree of Bachelor of Science in Psychology with Honors in the Discipline

Lisa Feldman Barrett, PhD, Honors Project Faculty Advisor

April 2017

Abstract

We encounter ambiguous information every day. Previous research suggests that prior knowledge is necessary for making sense of this information and that such meaning-making is effortless and automatic (Barrett, 2017; Barrett & Bar, 2009). The present study sought to confirm and extend this idea in the visual domain. We hypothesized that participants would be able to disambiguate ambiguous visual information more easily and quickly after exposure to relevant perceptual knowledge. Moreover, we predicted that older participants would have greater success in disambiguating information, even before presentation of relevant perceptual information, given their greater exposure to relevant perceptual experiences over their lifetimes. We recruited 68 participants at the Museum of Science, Boston, who rated their ability to see objects in a series of ambiguous (distorted) images both before and after being exposed to a clear (non-distorted) version of each image. In line with our hypotheses, participants rated the ambiguous images as less ambiguous after exposure to the corresponding clear (original) images, and did so more quickly. Further, we found that neutral images were more easily and quickly disambiguated than positive or negative images. Overall, these findings reveal the significance of previous experience in alleviating our experiential blindness and making sense of the perceptual ambiguities in our world.

The "Aha!" moment: How prior knowledge helps disambiguate ambiguous information

If you were presented with the photo below (Figure 1) and asked to make sense of its contents, could you? If you turn to Appendix A, you will see a clearer image; once you have looked at it, return to this ambiguous one.

Figure 1: Ambiguous Image

Now that you have relevant perceptual experience with the content of this photo, it is easier to make sense of its once entirely ambiguous contents. Moreover, this change in your perception probably occurred effortlessly and automatically. Without the previous experience and knowledge, though, you likely could not disambiguate the image. That is, you were experientially blind. This demonstration illustrates an important feature of perception. We tend to believe that our visual experiences are driven entirely by the external world: we see an object as it really is because we are just passively taking in wavelengths of light. However, emerging neuroscience evidence suggests that internal contexts (like how we feel and our past experience) do not merely influence perception but actually drive it (Barrett, 2017;

Barrett & Bar, 2009). The brain's prediction of what it will see in the next moment shapes perceptual experience before it occurs.

Perception involves the integration of sensory input and prior knowledge (e.g., Summerfield & de Lange, 2014). Contrary to more classical views of perception, predictive coding theories (e.g., Barrett, 2017; Clark, 2013) posit that prior knowledge (predictions) precedes and actively shapes the processing of incoming sensory information in real time. That is, the brain is constantly attempting to match incoming sensory inputs with a priori expectations or predictions. The brain attempts to "explain away" sensory input by making its best guess about what it will see (or hear, or feel, etc.) in the next moment. Components of the sensory signal that coincide with (i.e., are predicted by) the current winning hypothesis are not processed further; the perception of these components becomes what was predicted. Unexplained (i.e., not predicted) components of the sensory signal are transmitted up the predictive hierarchy as prediction error. The better the match, the less prediction error climbs the hierarchy. Any prediction error that does flow up the hierarchy can modify the internal model that the brain uses to generate predictions, leading to better predictions in the future (Barrett, 2017; Clark, 2013; Summerfield & de Lange, 2014). In this way, unpredicted sensory information can inform future predictions, helping the brain improve its sensory predictions for similar stimuli or experiences.

According to these theories, prior knowledge will influence perception most when incoming sensory information is ambiguous or imprecise (Summerfield & de Lange, 2014). Given that predictions are so critical to perception, and that predictions are shaped by prior experiences and knowledge, structural regularities in visual information allow the formation of more accurate expectations about future sensory stimulation.
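This prediction-error loop can be made concrete with a toy example. The sketch below is a minimal delta-rule illustration in Python, an assumption of ours for exposition rather than the cited theories' formal models: a prediction `mu` is revised only by the residual that the input leaves unexplained.

```python
import numpy as np

# Toy delta-rule illustration of the predictive-coding idea sketched
# above (our assumption for illustration; not the authors' model and
# not a full hierarchical implementation). The internal model keeps a
# prediction mu of the sensory input; only the unexplained residual
# (the prediction error) updates it.

def predictive_coding_step(mu, sensory_input, learning_rate=0.1):
    prediction_error = sensory_input - mu        # unexplained component
    mu = mu + learning_rate * prediction_error   # revise the internal model
    return mu, prediction_error

mu = np.zeros(3)                       # naive initial prediction
stimulus = np.array([1.0, 0.5, -0.2])  # the same input, encountered repeatedly

for _ in range(20):
    mu, err = predictive_coding_step(mu, stimulus)

# After repeated exposure the residual is near zero: the input has been
# "explained away," echoing the reduced responses to predictable input
# described in the next paragraph.
print(np.round(err, 3))
```

With each exposure the error shrinks, so a repeated stimulus elicits progressively less unexplained signal.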

Thus, repeated sensory input normally reduces the corresponding neuronal responses because repetition increases predictability (Clark, 2013). Therefore, the more you experience and learn, the better your predictions and the more efficient your perception should become.

The current experiment aims to further investigate how experiential blindness is resolved; that is, how ambiguous information is disambiguated through the use of prior knowledge. We examine this by asking participants to identify ambiguous images before and after exposure to a clearer version of the same image (called the original image). Using this paradigm, we test the hypothesis that prior knowledge aids in disambiguating ambiguous perceptual information. We make two specific predictions. First, we predict that participants will be able to disambiguate ambiguous images more easily and quickly after exposure to the unambiguous, original version of those images (i.e., after they have prior knowledge or perceptual experience on which to draw). Second, we predict that disambiguation ability should increase with age. Because older individuals will likely have been exposed to a larger variety of objects and situations throughout their longer lifetimes, they should have a larger store of prior experience from which to build predictions for disambiguating the world around them. Thus, we predict that older participants will have greater success in disambiguating the images, even before presentation of the unambiguous, original image. Finally, as an exploratory analysis, we examine whether the affective valence of the images (i.e., neutral, positive, or negative) influences how prior knowledge is utilized to disambiguate ambiguous perceptual information.

Method

Participants

The sample consisted of 68 visitors (34 male, 34 female) to the Museum of Science in Boston, MA, who voluntarily chose to engage with the researcher while at the Museum. The final sample ranged in age from 8 to 77 years (M = 28.15 years, SD = 16.27 years) and comprised 77% White, 7% Black, and 7% Asian participants; nine percent of participants identified as more than one race. To be eligible, potential participants needed to be at least six years of age, have normal or corrected-to-normal vision, and speak English. Participants completed the experiment in one experimental session, lasting 10-15 min, with the researcher in the Hall of Human Life at the Museum of Science. Participants were not compensated monetarily, but the experiment was advertised at the museum as a chance to learn about "how scientists investigate a wide array of topics related to human biology and health and help advance these fields through [their] participation."

Materials

This study utilized a set of 68 original (clear) images of varying objects (e.g., animals, plants, foods) and a set of 68 ambiguous versions of these images (one matched to each of the clear images; see Figure 2). To create the ambiguous version of each image, the clear (original) version was imported into GIMP, where the image mode was switched to grayscale, the colors were inverted, and the artistic "oilify" filter (a very high-contrast filter) was applied to degrade the clarity of the image; a rough analogue of this pipeline is sketched below. Images were selected from the internet such that they fit into one of three categories based on their affective valence at face value: positive (e.g., an image of a kitten), neutral (e.g., an image of clothes), or negative (e.g., an image of a snarling cheetah).
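For concreteness, here is a hedged sketch of a comparable degradation pipeline in Python with Pillow. Pillow has no "oilify" filter, so a median filter stands in as a crude analogue of GIMP's painterly smoothing; the function and file names are hypothetical.

```python
from PIL import Image, ImageFilter, ImageOps

# Rough analogue (our assumption) of the GIMP pipeline described above:
# grayscale -> invert -> painterly smoothing. Pillow lacks an "oilify"
# filter, so MedianFilter approximates the smoothing step.

def make_ambiguous(path_in, path_out):
    img = Image.open(path_in).convert("L")              # grayscale mode
    img = ImageOps.invert(img)                          # invert the colors
    img = img.filter(ImageFilter.MedianFilter(size=9))  # stand-in for oilify
    img.save(path_out)

# Hypothetical usage:
# make_ambiguous("original_042.jpg", "ambiguous_042.jpg")
```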

The final set of 68 paired images was selected from a larger set of 262 images that were normed for valence, arousal, and ambiguity on Amazon's Mechanical Turk (N = 100). Ratings of pleasantness (e.g., "this image made me feel intensely unpleasant" to "intensely pleasant") and ratings of activation (e.g., "this image made me feel intensely deactivated" to "intensely activated") after viewing each image confirmed that the images fit the pre-chosen affective categories to which they were assigned (i.e., positive, neutral, negative). The 68 image pairs utilized in the current study were chosen to maximize the combination of (1) very high ratings of ambiguity for the ambiguous version and (2) very low ratings of ambiguity for the original version, (3) within their respective affective categories (neutral, positive, negative).

Figure 2: Ambiguous (left) and Original (right) Image Pair

Procedure

Consent and Assent Process: Depending on the age of the participant, a consent form was signed. If the participant was over 18, verbal consent was given after the participant thoroughly reviewed the form with the researcher. If the participant was under 18, a consent form was reviewed and signed by a parent and/or guardian. If the participant was a child who was unable to read, an assent form was read to the participant by a researcher.

Ambiguous Image Task: This task was conducted on a laptop and run using E-Prime software (version 2.0; Psychology Software Tools, Inc.). On each trial of the Ambiguous Image Task, an ambiguous image was presented for 4 seconds. Participants were then asked, "Did you see anything?", which they rated on a four-point scale: "definitely yes" (1), "a little bit" (2), "not really" (3), or "definitely no" (4). If they responded (1) or (2), they were asked a second question, "What did you see?", which they answered aloud; the researcher recorded the participant's answer(s) in a separate spreadsheet. The participant was then shown the original image for 2 seconds, followed by its matched ambiguous image a second time for 2 seconds, after which the participant answered the same two questions again. Each participant completed 26 trials of this task; the 26 original images and matched ambiguous images they saw were drawn at random from the total set of 68 available pairs (a sketch of this trial logic appears at the end of this section).

Post-Experimental Questionnaire and Demographic Information Survey: Following the Ambiguous Image Task, participants answered a brief questionnaire about their experience(s) during the task (see Appendix B) and completed a demographic survey reporting their gender, age, race, and ethnicity. If participants were unlikely to know their own demographic information and/or were under 18 years of age, the demographic form was given to the parent/guardian to fill out for the participant.

Debriefing: Lastly, participants were guided through a debriefing form by the researcher and were given the opportunity to ask questions about the experiment. They were also given two stickers (one provided by the museum and one provided by the researcher) to mark their participation.
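As referenced above, here is a minimal sketch of the per-participant trial selection and trial flow; the pair IDs and valence labels are hypothetical placeholders, and the comments merely paraphrase the procedure (no stimulus-presentation code is implied).

```python
import random

# Minimal sketch (our illustration) of the trial logic described above:
# each participant sees 26 pairs drawn at random, without replacement,
# from the 68 available ambiguous/original image pairs.

pairs = [{"pair_id": i, "valence": v}
         for i, v in enumerate(["neutral", "positive", "negative"] * 23)][:68]

trials = random.sample(pairs, k=26)  # one participant's random draw

for trial in trials:
    # 1) show the ambiguous image for 4 s; collect the 4-point
    #    "Did you see anything?" rating (1 = definitely yes ... 4 = definitely no)
    # 2) if rated 1 or 2, record the spoken answer to "What did you see?"
    # 3) show the original for 2 s, the ambiguous again for 2 s,
    #    then repeat both questions
    pass
```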

Results

Ratings of Ambiguity. A 3x2 repeated-measures ANOVA, with valence of the image (positive vs. negative vs. neutral) and presentation number (first presentation vs. second presentation) as within-subjects measures, revealed a significant main effect of valence on ratings of ambiguity, F(2, 65) = 23.05, p < .001. A post-hoc Fisher's least significant difference test revealed that neutral images were rated as significantly less ambiguous (M = 2.25, SE = .05) than both positive (M = 2.55, SE = .07) and negative (M = 2.70, SE = .08) images, ps < .001; positive and negative images did not differ in rated ambiguity, p = .06. Consistent with predictions, this analysis also revealed a main effect of presentation number, F(1, 65) = 104.98, p < .001, such that the ambiguous images were rated as more perceptually ambiguous on their first presentation (M = 2.79, SE = .06) than on their second presentation (M = 2.21, SE = .07). Results did not reveal a significant interaction between valence and presentation number, F(2, 65) = 2.44, p = .09, suggesting that the effect of image valence was consistent across both presentations of the ambiguous image. See Figure 3.

Figure 3. Ratings of perceptual ambiguity by image valence and presentation number.
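One way to run this kind of 3x2 repeated-measures ANOVA in Python is sketched below; this is our hedged reconstruction with hypothetical file and column names, not the analysis software the study actually used.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hedged sketch of the 3x2 repeated-measures ANOVA reported above,
# using statsmodels' AnovaRM. The file and column names are hypothetical;
# the data are assumed to be in long format, one rating per participant
# per trial.

df = pd.read_csv("ratings_long.csv")  # columns: participant, valence,
                                      # presentation, rating

model = AnovaRM(
    df,
    depvar="rating",
    subject="participant",
    within=["valence", "presentation"],
    aggregate_func="mean",  # collapse trials to one cell mean per participant
)
print(model.fit())  # F tests for valence, presentation, and the interaction
```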

Reaction Times. A similar 3x2 repeated-measures ANOVA failed to reveal either a significant main effect of valence on reaction times, F(2, 65) = 1.50, p = .23, or a significant interaction between valence and presentation number, F(2, 65) = 1.24, p = .29. However, as predicted, this analysis did reveal a significant main effect of presentation number, F(1, 65) = 20.51, p < .001, such that ratings of the ambiguous images were made more slowly on their first presentation (M = 3635.16 ms, SE = 258.87) than on their second presentation (M = 2848.62 ms, SE = 249.62). See Figure 4.

Figure 4. Reaction times by image valence and presentation number.

Age-related Effects. Contrary to predictions, there were no significant correlations between participants' ages and their ratings of perceptual ambiguity for the ambiguous images on either the first presentation, r(67) = .17, p = .18, or the second presentation, r(67) = .08, p = .53.

Discussion

In support of our hypothesis, participants rated the ambiguity of each image more quickly (i.e., with shorter reaction times) following the second presentation of the ambiguous image than the first. This supports the idea that disambiguation of

ambiguous percepts occurs faster when a person has had prior exposure to a similar (or identical) percept. Prior knowledge might help guide participants' attention to salient aspects of the ambiguous display, leading to its perceptual resolution. Also in line with our hypothesis, the ability to disambiguate an image increased significantly from the first presentation of the ambiguous image to the second presentation, following exposure to the original, unambiguous version of the image. This suggests that participants can more successfully form coherent percepts when they have prior perceptual experience or knowledge from which to draw.

Contrary to our hypothesis, results did not demonstrate that perceptual ability increased with age. We assumed age might be associated with greater prior experience and knowledge, allowing older individuals to resolve perceptual ambiguity more aptly and more quickly. Our findings suggest that perhaps age is not the best indicator of previous perceptual experience. Instead, future research might focus on other variables that better approximate the amount of relevant past experience an individual can draw from, such as expertise in a specific domain (e.g., knowledge of animals or food). For example, if a 6-year-old child happens to be an expert on animal species, he or she might be able to resolve perceptual ambiguity related to animals more quickly and accurately than an elderly individual who has seen few of the same stimuli in his or her lifetime. However, the present study may also have been insufficiently powered to detect this kind of effect, despite recruiting participants with a very wide range of ages, so future studies should recruit larger samples of relevant age groups.

Further, exploratory analyses revealed that image valence impacted ratings of perceptual ambiguity at both presentations of the ambiguous image (both before and after exposure to the unambiguous original image). For example, whether an image was considered neutral (e.g., a

mushroom), positive (e.g., a kitten), or negative (e.g., a bear) influenced an individual's ability to disambiguate the ambiguous images, such that neutral images were rated as the least ambiguous compared with positive or negative images. This could be due to a variety of factors. First, the experiment was conducted at the Museum of Science, where strict constraints were placed on the emotional evocativeness of the stimuli allowed in the study. Because of this limitation, participants saw very few negative images and many more neutral images within the Ambiguous Image Task. In a way, this may have trained participants to expect to encounter certain kinds of images within the task itself (i.e., neutral images) over others (i.e., negative images). That is, participants may have been using their prior experience within the task to shape their perceptual predictions about what they would likely see on the next trial (i.e., another neutral image), and predicted images should be more easily and quickly disambiguated. A second explanation is that participants may have more prior experience with the objects in the neutral images than with those in the more evocative images, because of their familiarity with and constant exposure to the (neutral) everyday objects depicted; things that are predicted should, in fact, appear less ambiguous. Finally, a third possible explanation is that the finding may have been due to uncontrolled low-level visual features of the images from the different valence categories: the neutral images may have been easier to disambiguate not because of their familiarity, predictability, or valence, but because of visual properties inherent to the images themselves.

Future research should further explore the influence of image valence on perceptual ambiguity. In the current study, highly negative images (e.g., a photo of a venomous snake) could not be used. Future research could include more trials of non-neutral images, particularly those with negative valence. By including these images, we might better reveal the impact of

emotional salience on how prior knowledge is deployed to make sense of ambiguous information. Future research could also look at individual differences that might moderate these factors for particular images. For example, is someone who has had a frightening encounter with snakes more likely to disambiguate a negative image of a snake faster and more accurately? It is possible he or she might disambiguate the image faster, but only when a snake is present; he or she should be less accurate when the object has some snake-like features but is not a snake. Similarly, if someone has an affective disorder marked by increased negative affect (e.g., major depressive disorder), will he or she be more likely to see less perceptual ambiguity when viewing negatively-valenced images? Indeed, a recent study by Teufel et al. (2015), which used similar ambiguous image stimuli, showed that psychosis was related to greater reliance on prior knowledge and hence better disambiguation ability.

The current findings demonstrate the importance of previous experience and knowledge in perceiving ambiguous stimuli. Though we often believe that we perceive and experience things as they happen in the immediate environment, the results of this experiment add to the growing body of evidence demonstrating that internal contexts drive perception. Our brain constantly issues predictions that alter what we see in the next moment, and we are able to correct what we fail to predict in one moment because we are constantly accruing new experiences and updating our predictions every moment of every day.

References

Barrett, L. F. (2017). How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt.

Barrett, L. F., & Bar, M. (2009). See it with feeling: Affective predictions during object perception. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1325-1334.

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181-204.

Summerfield, C., & de Lange, F. P. (2014). Expectation in perceptual decision making: Neural and computational mechanisms. Nature Reviews Neuroscience, 15(11), 745-756.

Teufel, C., Subramaniam, N., Dobler, V., Perez, J., Finnemann, J., Mehta, P. R., ... & Fletcher, P. C. (2015). Shift toward prior knowledge confers a perceptual advantage in early psychosis and psychosis-prone healthy individuals. Proceedings of the National Academy of Sciences, 112(43), 13401-13406.

Appendix A. Original Image Version of Figure 1

Appendix B. Experimental Questionnaire