The Effect of Contextual Information and Emotional Clarity on Emotional Evaluation


American International Journal of Social Science, Vol. 6, No. 4, December 2017

The Effect of Contextual Information and Emotional Clarity on Emotional Evaluation

Fada Pan*, Leyuan Li, Yanyan Zhang, Li Zhang & Yuhong Ou
Department of Applied Psychology, School of Education Science, Nantong University, Nantong, P. R. China

Abstract

The present study investigated the effect of contextual information and emotional clarity on valence. A three-factor mixed design was employed in which participants viewed pictures combining an athlete's facial expression with contextual information and then evaluated the emotional intensity of the expression on a 7-point scale. The results showed that matched contextual information enhanced the emotional evaluation of the facial expression, making the valence of winning expressions more positive and that of losing expressions more negative, whereas non-matched contextual information weakened the emotional evaluation. More importantly, the emotional evaluation was even reversed by non-matched contextual information when emotional clarity was low. These results suggest that matched contextual information exaggerates valence and that the influence of a non-matched context on emotional evaluation depends strongly on emotional clarity.

Keywords: Emotional clarity, Contextual information, Valence, Facial expression, Emotional evaluation

1. Introduction

Facial expression plays a significant role in social communication and interaction. However, people judge others' emotions not only through facial cues but also through context, such as body posture, voice, and language. The recognition of emotion therefore usually depends on the combined impact of facial expression and contextual information.
Research on emotion has long tried to determine how observers read emotional information from the face (e.g., Nakamura, Buck & Kenny, 1990; Aviezer, Hassin, Bentin & Trope, 2008; Yamashita, Fujimura, Katahira, Honda, Okada & Okanoya, 2016). The discrete-category perspective defines emotion as a set of discrete categories, each represented by a distinct behavioral and physiological pattern, namely the basic facial expressions (Ekman, 1999). Nakamura et al. (1990) found that facial expression played the leading role in emotional evaluation when the meanings conveyed by the expression and the situation were both clear. The discrete-category perspective holds that people can accurately identify expressions in any circumstances, and that when people fail to judge emotions accurately it is because the emotional meaning of the expression is not clear enough (Ekman & Cordaro, 2011). However, this perspective does not really take situational factors into account (Calder & Young, 2005). Aviezer et al. (2008) combined standard facial expressions with various contexts, such as a waving fist, and asked participants to judge the valence of the facial expressions. Participants classified emotions according to the situations rather than the facial expressions. For example, when a facial expression of anger was presented alone, it was recognized quickly and accurately; but when the same angry face was transplanted onto a hurdling athlete's body, participants recognized emotions related to determination, not just anger. Barrett and Kensinger (2010) found that viewing a facial expression in isolation can be insufficient for perceiving emotion accurately; the contextual effect on emotion is unavoidable. The dimensional perspective proposes that the recognition of facial expression involves two stages. In the first stage, valence and arousal are read quickly and automatically, uninfluenced by contextual information. The second stage involves not only further processing of valence and arousal but also processing of the context (Russell, 1997).

ISSN 2325-4149 (Print), 2325-4165 (Online); Center for Promoting Ideas, USA; www.aijssnet.com

Carroll and Russell (1996) used the Goodenough-Tinker paradigm to pair a fearful face with an anger-inducing context and asked participants to judge the facial emotion. Most participants reported emotions consistent with the context. Russell (1997) proposed that when the valence of the facial emotion was opposite to that of the situation, the face won; when their valences were consistent, however, the situation was more dominant. Despite their many differences, both the discrete-category and the dimensional perspective propose that affective information is read out from the face in an early perceptual stage that is unaffected by context (Aviezer et al., 2008). Aviezer et al. (2008) later proposed the perceptual-similarity hypothesis to reconcile the contradictions in previous studies. They assumed that the body serves as a powerful context and that perceptual similarity, namely the perceived similarity between facial expressions, is a key factor moderating the context effect. Perceptual similarity determines whether the situation influences the recognition of a facial expression: emotional evaluation is driven mostly by the facial expression when the expression is emotionally accurate, but once the facial expression is emotionally ambiguous, the context determines the recognition of emotion (Hassin, Aviezer & Bentin, 2013). Indeed, body posture overrides the face in judgments of valence when the facial expression is emotionally ambiguous (Aviezer et al., 2008). In summary, the more similar facial expressions are, the stronger the influence of context, which has been referred to as the confusability effect (Hassin et al., 2013).
In addition, some studies have documented the role of the external situation in classifying strong positive or negative emotions (Aviezer, Trope & Todorov, 2012; Hietanen & Astikainen, 2013). For example, when asked to judge players' emotions after a match, observers identified winners and losers more accurately when shown only the body posture than when shown only the face (Aviezer et al., 2012). Zhang et al. (2014), however, found that a context effect existed even when the stimulus similarity between context and expression was controlled. Other studies indicated that spontaneous rather than posed faces are a good predictor of what happens in the real world (Kayyal & Russell, 2013). Kayyal, Widen, and Russell (2015) studied emotion judgments of 15 spontaneous facial expressions of athletes, with context (correct context, incorrect context, and no context) as a between-subjects factor. In the correct-context condition, the facial expression matched written information about the competition result; the incorrect-context condition was the opposite; and in the no-context condition, the face was presented without any information about the result. The results revealed that the contextual cues overrode the facial expressions on valence. Emotional clarity (the quality of being clearly perceived as positive or negative in a facial expression), central to the perceptual-similarity hypothesis (Aviezer et al., 2008), was outside the scope of their study. The present study therefore examines how emotional clarity influences emotional evaluation. We hypothesize that non-matched contextual information will override the emotional evaluation of ambiguous expressions.

2. Methods

2.1 Participants

Forty-eight undergraduates from Nantong University (22 males, 26 females; mean age 21.79 ± 1.29 years) volunteered to participate. All were right-handed, reported no mental illness, color blindness, or color weakness, and had normal or corrected-to-normal vision. A small gift was given in return for participation. The study was approved by the Human Ethics Committee of Nantong University, and informed consent was obtained from all participants. Sample size and power were checked with G*Power: with the obtained sample of 48, the achieved power was 0.9996; with power set to the conventional 0.80, the estimated minimum total sample size was 18 (achieved power 0.8350). These results suggest the power and sample size of the present study were sufficient.

2.2 Materials

Thirty pictures of athletes' facial expressions, none containing any sign of winning or losing (such as a gold medal), were collected from the Internet. High-emotional-clarity (emotionally accurate, EAC) and low-emotional-clarity (emotionally ambiguous, EAM) faces were selected prior to the experiment.
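The pretest selection rule (faces correctly classified as winning or losing at a rate above .8 without context were treated as emotionally accurate, and faces between .4 and .6 as emotionally ambiguous) reduces to a simple threshold function. The sketch below is illustrative only; the function and face names are not from the authors' materials:

```python
# Illustrative sketch (not the authors' code) of the EAC/EAM pretest rule:
# a face is EAC if its no-context correct-identification rate exceeds .8,
# EAM if the rate falls between .4 and .6, and excluded otherwise.

def classify_face(correct_rate: float) -> str:
    """Classify a face by its pretest correct-identification rate."""
    if correct_rate > 0.8:
        return "EAC"   # emotionally accurate: clearly a winner or loser
    if 0.4 <= correct_rate <= 0.6:
        return "EAM"   # emotionally ambiguous: near-chance identification
    return "excluded"  # neither clearly accurate nor clearly ambiguous

# Hypothetical pretest rates for four candidate faces
rates = {"face01": 0.92, "face02": 0.55, "face03": 0.71, "face04": 0.45}
print({face: classify_face(r) for face, r in rates.items()})
# → {'face01': 'EAC', 'face02': 'EAM', 'face03': 'excluded', 'face04': 'EAM'}
```

Note that faces with rates between .6 and .8 (recognizable but not clearly so) fall into neither category and were not used.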

EAC faces were those that participants could correctly classify as winners or losers in the absence of context. Ten faces with correct-classification rates above 0.8 were chosen as EAC faces, and another ten faces with rates between 0.4 and 0.6 were chosen as EAM faces. The formal set thus consisted of twenty athletes' facial expressions, ten EAC and ten EAM; within each set, half showed winning expressions and half losing. All pictures were converted to black-and-white 400 × 400 px images. The contextual-information variable had three levels: Matched, Non-matched, and Unrelated, to which participants were randomly and evenly assigned. In the Matched condition, the contextual information was consistent with the athlete's facial expression (winning or losing); the Non-matched condition was the opposite; and in the Unrelated condition, the facial expression was paired with contextual information unrelated to the expression, such as geographically or politically related nouns. Four Chinese characters served as the contextual information, and the number of characters was matched across the three conditions. The contextual information and the picture were presented simultaneously.

Figure 1. An example of the materials. The Chinese characters 输掉比赛 ("lost the game") match this athlete's losing facial expression, so the condition is Matched; 赢得比赛 ("won the game") does not match the expression, so the condition is Non-matched; 民族企业 ("national enterprise") is unrelated to the expression.

2.3 Procedure

A 2 (Facial Expression: Win vs. Lose) × 2 (Emotional Clarity: High vs. Low) × 3 (Contextual Information: Matched vs. Non-matched vs. Unrelated) mixed design was used, with Contextual Information as the between-subjects factor. Participants observed each picture and judged the intensity of seven emotions (happy, sad, angry, proud, afraid, disgusted, and excited) displayed by the person, pressing the corresponding number keys to score each emotion on a 7-point Likert scale ranging from 0 (not at all) to 6 (extremely).

2.4 Measures

The positive score was the average of the ratings for happy, proud, and excited; the negative score was the average of the ratings for sad, angry, afraid, and disgusted. The valence score, the dependent variable, was the positive score minus the negative score: higher valence scores indicated a more positive emotional evaluation, and lower scores a more negative one.

3. Results

A 2 × 2 × 3 mixed-design ANOVA revealed significant main effects of facial expression (F(1, 45) = 159.31, p < .001, ηp² = 0.78), emotional clarity (F(1, 45) = 28.68, p < .001, ηp² = 0.39), and contextual information (F(2, 45) = 5.94, p < .01, ηp² = 0.21).
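The valence-score computation described in Section 2.4 is a simple difference of means over the seven ratings. A minimal sketch (dictionary keys and the example trial are illustrative, not from the authors' materials):

```python
# Sketch of the valence score from Section 2.4: the mean of the three
# positive ratings minus the mean of the four negative ratings, each
# rated on a 0-6 scale.

POSITIVE = ("happy", "proud", "excited")
NEGATIVE = ("sad", "angry", "afraid", "disgusted")

def valence_score(ratings: dict) -> float:
    """Return mean(positive ratings) - mean(negative ratings)."""
    positive = sum(ratings[e] for e in POSITIVE) / len(POSITIVE)
    negative = sum(ratings[e] for e in NEGATIVE) / len(NEGATIVE)
    return positive - negative

# A hypothetical trial: a clearly winning face in a matched context
trial = {"happy": 5, "proud": 4, "excited": 6,
         "sad": 1, "angry": 0, "afraid": 0, "disgusted": 1}
print(valence_score(trial))  # → 4.5  (mean 5.0 positive - mean 0.5 negative)
```

The score is bounded by ±6: a face rated 6 on all positive and 0 on all negative emotions scores +6, and the reverse scores −6.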

Figure 2. Mean valence scores (and standard errors) for winning and losing expressions, shown separately for high and low emotional clarity under Matched, Non-matched, and Unrelated contextual information.

There was a significant three-way interaction among the three independent variables (F(2, 45) = 8.80, p < .01, ηp² = 0.28). Simple-interaction analyses indicated that the valence scores for winning and losing facial expressions differed significantly in all three contextual-information conditions (F(1, 45) = 257.15, p < .001; F(1, 45) = 115.90, p < .001; F(1, 45) = 46.84, p < .001). In the Unrelated condition, the valence score for winning expressions (M = 1.44, SD = 0.24) was significantly higher than that for losing expressions (M = -0.81, SD = 0.26). In the Matched condition, the valence score for winning expressions (M = 3.89, SD = 0.24) was significantly higher than that for losing expressions (M = -1.37, SD = 0.26): compared with the Unrelated condition, winning was rated more positively and losing more negatively. In the Non-matched condition, however, the valence score for winning (M = 0.46, SD = 0.24) was significantly lower than that for losing (M = 1.59, SD = 0.26): winning was rated more negatively and losing more positively. Taking emotional clarity into account, for winning expressions the valence score of EAC expressions was significantly higher than that of EAM expressions in all three contextual-information conditions (F(1, 45) = 22.84, p < .001; F(1, 45) = 16.76, p < .001; F(1, 45) = 87.80, p < .001). For losing expressions in the Non-matched and Unrelated conditions, the valence score of EAC expressions was significantly lower than that of EAM expressions (F(1, 45) = 15.41, p < .001; F(1, 45) = 21.18, p < .001).

From another perspective, for EAC expressions the valence score for winning was significantly higher than that for losing in all three contextual conditions (F(1, 45) = 240.60, p < .001; F(1, 45) = 41.65, p < .001; F(1, 45) = 110.05, p < .001). For EAM expressions, the valence score for winning was significantly higher than that for losing in the Matched condition (F(1, 45) = 147.09, p < .001); in the Non-matched condition, however, the valence score for winning was significantly lower than that for losing (F(1, 45) = 151.38, p < .001); and in the Unrelated condition there was no difference between winning and losing expressions (F(1, 45) = 1.23, p > .05). That is, only when emotions were difficult to decipher (EAM) was the perceived valence of happy faces reduced in the non-matched context relative to the matched context.

4. Discussion

The present study explored the effect of facial expression and contextual information on valence. Matched contextual information enhanced the emotional evaluation, making the valence of winning expressions more positive and that of losing expressions more negative, whereas non-matched contextual information weakened it. More importantly, the emotional evaluation was even reversed by non-matched contextual information when emotional clarity was low. The present study showed for the first time that contextual information overrides only EAM faces, rather than always overriding facial expressions on valence (cf. Kayyal et al., 2015). This finding supports the perceptual-similarity hypothesis (Aviezer et al., 2008): the judgment is driven mostly by the facial expression when the expression is emotionally accurate; otherwise, the context can strongly shape the perception of emotion. Previous studies, such as Kayyal et al. (2015), found that contextual information plays an important part in emotional evaluation as well as emotion recognition.
The present study differs from previous research mainly in its focus on the impact of emotional clarity on emotional evaluation. Our results showed that the emotional evaluation tended to be exaggerated by a matched context, which we call the Exaggeration Effect of the Matched Context. The non-matched context, in contrast, weakened the emotional evaluation and even reversed the valence score under low emotional clarity (EAM) but not high emotional clarity (EAC). This means the emotional evaluation of EAM expressions was completely dominated by the non-matched context, which we call the Reversal Effect of the Non-matched Context.

EAC faces were perceived as emotionally clear, so participants could easily distinguish winning from losing. Under EAM, by contrast, the ambiguous expression made emotion perception difficult, and the contextual information could then disambiguate the face so that it appeared to show features congruent with the context. Although the emotional evaluation of EAC faces was based mainly on the facial expression, it was still influenced by a non-matched context, yielding valence scores significantly different from those in the matched-context condition. When contextual information and facial expression mismatched, participants were more likely to suppress the emotion they perceived even when they were aware of the incongruence (Kalokerinos, Greenaway, & Casey, 2017). Participants also appeared to seek cognitive consistency in order to reduce conflict: for instance, when the expression signaled losing but the context indicated that the athlete had won, the facial expression tended to be reinterpreted in line with the context (e.g., as tears of joy), and the reversal effect then appeared. It follows that emotional evaluation can be either correctly guided or misled by contextual cues. Overall, matched contextual information had an exaggeration effect on valence, while non-matched contextual information weakened the judgment and had a reversal effect on EAM expressions. The present study also has limitations to be addressed in future work: a larger set of athletes' facial expressions and more participants would help verify the present results, and we speculate that different cognitive styles might influence emotional evaluation, a meaningful topic for future research. In addition, Smith (2016) indicated that emotional extremes (very weak and very strong emotions) were associated with EAM, whereas moderate-intensity emotions were associated with EAC; emotional intensity should therefore be carefully considered in further studies.

Funding

This work was supported by the Social Sciences Foundation of Jiangsu Province, China (14SHC006); the Program for the Twelfth Five-Year Plan of Educational Science of Jiangsu Province, China (C-a/2013/01/003); the Humanities and Social Sciences Foundation of the Chinese Ministry of Education (12YJC190026); and the 2015 National Training Program of Innovation and Entrepreneurship for Undergraduates (201510304044).

Declaration of Conflicting Interests

The author(s) declared no conflict of interest.

References

Aviezer, H., Hassin, R., Bentin, S., & Trope, Y. (2008). Putting facial expressions back in context. In N. Ambady & J. Skowronski (Eds.), First impressions (pp. 255-288). New York: Guilford Press.
Aviezer, H., Trope, Y., & Todorov, A. (2012). Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science, 338, 1225-1229. http://dx.doi.org/10.1126/science.1224313
Barrett, L. F., & Kensinger, E. A. (2010). Context is routinely encoded during emotion perception. Psychological Science, 21, 595-599. http://dx.doi.org/10.1177/0956797610363547
Calder, A. J., & Young, A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641-651. http://dx.doi.org/10.1038/nrn1724
Carroll, J. M., & Russell, J. A. (1996). Do facial expressions signal specific emotions? Judging emotion from the face in context. Journal of Personality and Social Psychology, 70, 205-218. http://dx.doi.org/10.1037/0022-3514.70.2.205
Ekman, P. (1999). Basic emotions. In T. Dalgleish & M. Power (Eds.), Handbook of cognition and emotion (pp. 134-157). Sussex, UK: Wiley.
Ekman, P., & Cordaro, D. (2011). What is meant by calling emotions basic. Emotion Review, 3(4), 364-370. http://dx.doi.org/10.1177/1754073911410740
Hassin, R. R., Aviezer, H., & Bentin, S. (2013). Inherently ambiguous: Facial expressions of emotions, in context. Emotion Review, 5(1), 60-65. http://dx.doi.org/10.1177/1754073912451331

Hietanen, J. K., & Astikainen, P. (2013). N170 response to facial expressions is modulated by the affective congruency between the emotional expression and preceding affective picture. Biological Psychology, 92, 114-124. http://dx.doi.org/10.1016/j.biopsycho.2012.10.005
Kalokerinos, E. K., Greenaway, K. H., & Casey, J. P. (2017). Context shapes social judgments of positive emotion suppression and expression. Emotion, 17(1). http://dx.doi.org/10.1037/emo0000222
Kayyal, M. H., & Russell, J. A. (2013). Americans and Palestinians judge spontaneous facial expressions of emotion. Emotion, 13, 891-904. http://dx.doi.org/10.1037/a0033244
Kayyal, M., Widen, S., & Russell, J. A. (2015). Context is more powerful than we think: Contextual cues override facial cues even for valence. Emotion, 15, 287-291. http://dx.doi.org/10.1037/emo0000032
Nakamura, M., Buck, R., & Kenny, D. A. (1990). Relative contributions of expressive behavior and contextual information to the judgment of the emotional state of another. Journal of Personality and Social Psychology, 59, 1032-1039. http://dx.doi.org/10.1037/0022-3514.59.5.1032
Russell, J. A. (1997). Reading emotions from and into faces: Resurrecting a dimensional-contextual perspective. In J. A. Russell & J. M. Fernandez-Dols (Eds.), The psychology of facial expressions (pp. 295-320). New York: Cambridge University Press.
Smith, L. K. (2016). Emotional extremes inhibit emotional clarity (Doctoral dissertation).
Yamashita, Y., Fujimura, T., Katahira, K., Honda, M., Okada, M., & Okanoya, K. (2016). Context sensitivity in the detection of changes in facial emotion. Scientific Reports, 6. http://dx.doi.org/10.1038/srep27798
Zhang, M., Fu, Q., Chen, Y. H., & Fu, X. (2014). Emotional context influences micro-expression recognition. PLoS ONE, 9(4), e95018. https://doi.org/10.1371/journal.pone.0095018