Human Classification for Web Videos

Human Classification for Web Videos

Aaron Manson

Final Report for COMP4650 Advanced Computing Research Project
Bachelor of Advanced Computing (Honours)
Research School of Computer Science, College of Engineering and Computer Science
Australian National University, Canberra, Australian Capital Territory, Australia

October

Acknowledgements

Firstly, I'd like to thank my Mum, Narelle, and my sister, Kehani, both of whom have provided more support and assistance through this year than I could have ever hoped for. I'd like to thank my friend Chris, who has supported me throughout this year and been a venting outlet during the frustrating moments of this project, of which there were many. To all 117 of the kind people who took time out of their busy lives to take part in this experiment, thank you: this experiment only exists because of the kindness of strangers. Finally, to my co-supervisors Sabrina Caldwell and Tom Gedeon, thank you for your assistance throughout this experience.

Abstract

Emotions are central to human life and a critical area for Computer Science. The modelling of emotions in artificial intelligence and the creation of increasingly realistic visual experiences in video games and virtual reality rely on the ability of people to distinguish emotions. There is little research into distinguishing genuine and acted emotions, even in Psychology. This project investigates the accuracy of distinguishing between genuine and acted anger, fear, surprise and happiness, and between fear and surprise, by presenting participants with a series of videos of genuine or acted emotions. Participants categorised each video as genuine or acted (fear or surprise in Part 2). Results indicate people are generally poor at distinguishing acted and genuine emotions, with accuracies mostly in the range of 60-65%. Furthermore, the results suggest acted fear is more readily recognised than genuine fear, and surprise more readily recognised than fear. These results give insight into how humans distinguish acted and genuine emotions, for use in models of emotion in artificial intelligence, and into how to improve the likelihood that emotional content will be correctly perceived in computing, video games and virtual reality applications.

Table of Contents

Acknowledgements
Abstract
Lists of Figures
Lists of Tables
1. Introduction
  1.1 Motivation
  1.2 Objectives
  1.3 Report Outline
2. Literature Review
  2.1 Distinguishing Emotions
  2.2 Distinguishing Acted and Genuine Emotions
  2.3 Related Work
3. Experimental Purpose and Hypotheses
  3.1 Experimental Purpose
  3.2 Hypotheses
4. Design and Implementation
  4.1 Programming Languages
  4.2 Stimuli
  4.3 Experimental Platform
  4.4 Extended Experiment Platform Design
  4.5 SONA Instructional Video
  4.6 Experimental Platform Experience
5. Participants and Procedures
  5.1 Participants
  5.2 Procedures
6. Results
  6.1 General Results
  6.2 Block Level
  6.3 Experiment Part 1 Genuine vs Acted
    6.3.1 Anger
    6.3.2 Surprise
    6.3.3 Happiness
    6.3.4 Fear
  6.4 Experiment Part 2 Fear vs Surprise
    6.4.1 Fear vs Surprise (Genuine and Acted Analysis)
    6.4.2 Fear vs Surprise (Fear vs Surprise Analysis)
7. Discussion
  7.1 Discussion Data Sets
  7.2 Discussion Experiment Part 1 Block Level
  7.3 Discussion Experiment Part 1 Genuine vs Acted
  7.4 Discussion Experiment Part 2 Block Level
  7.5 Discussion Experiment Part 2 Fear vs Surprise
  7.6 General Discussion
8. Challenges and Limitations
  8.1 Challenges
  8.2 Limitations
9. Conclusion
10. Future Work
References
Appendix 1: Final Project Description
Appendix 2: Independent Study Contract
Appendix 3: List of Artefacts
Appendix 4: README File
Appendix 5: Video Stimuli Still Images
Appendix 6: Latin Squares
Appendix 7: Main Complete Graphs and Tables
Appendix 8: YY Removed Complete Graphs and Tables
Appendix 9: YN Removed Complete Graphs and Tables
Appendix 10: Emotion Regulation Questionnaire
Appendix 11: SONA Participant Recruitment Information
Appendix 12: Paper Survey Information and Answer Sheet
Appendix 13: Stimuli Source Videos

Lists of Figures

Figure 4.1 Still frames of genuine and acted stimuli
Figure 4.2 Instruction page for the experiment web platform
Figure 4.3 First demographics page
Figure 4.4 Ethnicity demographics page
Figure 4.5 Example of a video stimulus page
Figure 4.6 Example of a video stimulus question page
Figure 4.7 An example break page
Figure 4.8 The Emotional Regulation Questionnaire (ERQ) page
Figure 6.1 Mean accuracy for the 4 emotion blocks in Part 1 and the fear vs surprise block in Part 2
Figure 6.2 Composition of the total responses for acted and genuine anger
Figure 6.3 Mean accuracy for genuine and acted anger
Figure 6.4 Composition of the total responses for genuine and acted surprise
Figure 6.5 Mean accuracy for genuine and acted surprise
Figure 6.6 Composition of the total responses for genuine and acted happiness
Figure 6.7 Mean accuracy for genuine and acted happiness
Figure 6.8 Composition of the total responses for genuine and acted fear
Figure 6.9 Mean accuracy for genuine and acted fear
Figure 6.10 Composition of the total responses for genuine and acted fear and surprise
Figure 6.11 Mean accuracy for genuine fear and surprise and acted fear and surprise
Figure 6.12 Composition of the total responses for fear and surprise
Figure 6.13 Mean accuracy for fear and surprise
Figure 7.1 Facial Action Units (AU) associated with the 6 universal emotions

Lists of Tables

Table 6.1 Unpaired t-tests on mean accuracy between the main and the YY removed data sets at the block level
Table 6.2 Unpaired t-tests on mean accuracy between the main and the YN removed data sets at the block level
Table 6.3 Unpaired t-tests on mean accuracy between the main and the YY removed data sets for genuine and acted emotion
Table 6.4 Unpaired t-tests on mean accuracy between the main and the YN removed data sets for genuine and acted emotion
Table 6.5 Unpaired t-tests on mean accuracy between emotion blocks
Table 6.6 Unpaired t-tests between emotion blocks and chance level accuracy
Table 6.7 Unpaired t-tests on mean accuracy between genuine and acted anger
Table 6.8 Unpaired t-tests on mean accuracy between genuine and acted anger and chance
Table 6.9 Unpaired t-tests on mean accuracy between genuine and acted surprise
Table 6.10 Unpaired t-tests on mean accuracy between genuine and acted surprise and chance
Table 6.11 Unpaired t-tests on mean accuracy between genuine and acted happiness
Table 6.12 Unpaired t-tests on mean accuracy between genuine and acted happiness and chance
Table 6.13 Unpaired t-tests on mean accuracy between genuine and acted fear
Table 6.14 Unpaired t-tests on mean accuracy between genuine and acted fear and chance
Table 6.15 Unpaired t-tests on mean accuracy between genuine and acted fear and surprise
Table 6.16 Unpaired t-tests on mean accuracy between genuine and acted fear and surprise and chance
Table 6.17 Unpaired t-tests on mean accuracy between fear and surprise
Table 6.18 Unpaired t-tests on mean accuracy between fear and surprise and chance
Table 7.1 Comparison of mean accuracy from this experiment, anger from Chen et al. (2017) and smiles from Hossain and Gedeon (2017)
Table 7.2 Unpaired t-test between anger, happiness and smiles
Table 7.3 Comparison of mean accuracy of genuine and acted emotions for anger, happiness and smiles
Table 7.4 Unpaired t-test on mean accuracy for genuine and acted anger, happiness and smiles

1. Introduction

1.1 Motivation

Emotions are a vitally important aspect of our everyday lives. People express themselves through their emotions, and we surround ourselves with emotionally engaging content, be they books, movies, video games or emerging virtual reality experiences. The developers of these different kinds of content are aware of the importance of emotions and therefore sculpt their experiences to be emotionally engaging to captivate their audiences. This is evident in the recent popularity of converting books which create immersive and emotionally engaging worlds into movies, television shows and video games (e.g. The Lord of the Rings, Harry Potter, The Hunger Games and Game of Thrones). Furthermore, emotions play an important role in the Computer Science field of artificial intelligence (AI). This is particularly true for neural networks (NN), where models that simulate how humans process emotions will be needed before AIs can be created to run robots, which may one day exhibit a level of (seemingly) human intelligence. The more concrete the literature on emotions becomes, the better the models of emotion used in Computer Science will be. To support the creation of this content, research into how people perceive and distinguish emotions is important. This will enable developers to utilise aspects of emotions that people are effective at perceiving and avoid spending resources on aspects of emotions that people are poor at perceiving. It is also important to know how

people perceive and distinguish emotions to support accurate modelling of emotions. The ability of people to distinguish between different universal emotions is well founded in the literature, with studies such as Hassin, Aviezer and Bentin (2013) and Waller, Cray and Burrows (2008). However, there is comparatively little research on the ability of people to distinguish between acted and genuine displays of emotion. As reported by Chen et al. (2017), there is some previous research by Côté, Hideg and van Kleef (2013) which examined surface acted and deep acted anger, but these categories could not be definitively resolved into genuine and acted anger. The research by Chen et al. (2017) examined acted and genuine anger and found similar results to a previous study by Hossain and Gedeon (2017) on accuracy in distinguishing genuine and acted smiles. In both of those studies, with about 20 subjects each, the recognition of genuine versus acted emotion was in the order of 60%. This motivated the current experiment: to examine whether these results for acted and genuine emotions and facial expressions extend to other emotions such as fear and surprise, and whether the results are robust over larger numbers of subjects. Expanding this aspect of emotions could provide useful knowledge for the creation of emotional content in the previously mentioned Computer Science fields of video games and virtual reality, as well as in the more general field of human computer interaction. Additionally, this could allow for better modelling of emotions for artificial intelligence tools.

1.2 Objectives

The primary objective of this experiment is to examine whether the results for distinguishing acted and genuine smiles (Hossain and Gedeon, 2017) and acted and genuine anger (Chen et al., 2017) are reflected when examining the emotions fear, surprise and

happiness. This experiment aims to collect only conscious response data, rather than both conscious and physiological response data as in the previous research. Instead, this project aims to have a substantially larger participant number, to examine whether the effects for consciously distinguishing between acted and genuine displays of an emotion remain consistent with a larger participant pool. This experiment additionally examines the disparity between recognising fear and surprise, using an extended version of the experimental platform and design used by Chen et al. (2017), to determine whether the disparity reported in the literature (Du and Martinez, 2011; Gagnon et al., 2009; Gosselin and Simard, 1999; Roy-Charland et al., 2014) for more typical posed-picture stimuli holds for genuine stimuli. Finally, this experiment will examine the effect of positively identifying the subject of a video stimulus on mean accuracy, to eliminate recognition effects which may have been present in the study by Chen et al. (2017), as the effect of removing these data points was not analysed there. This is to ensure that accuracy in identifying genuine and acted displays of emotion is not based on recognition of the subject in the stimulus but rather on the emotion being presented.

1.3 Report Outline

Section 1: Introduction

The introduction to the report includes the motivations behind this experimental study, the experimental objectives and the report outline.

Section 2: Literature Review

The literature review contains a brief overview of previous research into distinguishing emotion, including the effects of contextual background on performance, and an overview of previous studies which investigated distinguishing acted and genuine emotions.

Section 3: Experimental Purpose and Hypotheses

The experimental purpose and hypotheses section outlines the specific investigative goals of the experiment, the formulation of the hypotheses and a formal statement of the experimental hypotheses.

Section 4: Design and Implementation

The design and implementation section describes the base experimental platform, the extensions made to the platform for this experiment, the production of stimuli, the programming tools and languages used in developing the extended platform and in creating the stimuli, and the experimental platform experience.

Section 5: Participants and Procedures

The participants and procedures section describes the recruitment methods for obtaining participants, the demographics of the participants and the experimental procedure.

Section 6: Results

The results section provides a summary and analysis of the data collected from participants who completed the experiment. This includes sections for data level

analysis, block level analysis, and analyses of distinguishing acted and genuine emotions in experiment Part 1 and distinguishing fear and surprise in experiment Part 2.

Section 7: Discussion

The discussion section provides a discussion and interpretation of the results for each of the results sub-sections, and a general discussion of the broader interpretations and implications of the results and their applicability to Computer Science and Psychology.

Section 8: Challenges and Limitations

The challenges and limitations section discusses the challenges experienced throughout the study, how, where possible, these challenges were overcome, and the limitations of the experiment and of the experimental platform used to conduct it.

Section 9: Conclusion

The conclusion provides a brief summary of the experiment and the potential implications of the experimental results for the fields of Computer Science and Psychology.

Section 10: Future Work

The future work section discusses some possible avenues for further investigation of the results and implications of this study.

2. Literature Review

2.1 Distinguishing Emotions

Previous research in the psychological literature has examined the ability of people to distinguish between different emotions. Of particular interest is the ability to distinguish between the different universal emotions (happiness, anger, fear, surprise, sadness and disgust), which are considered to be recognisable by all humans regardless of culture (Hassin, Aviezer and Bentin, 2013; Waller, Cray and Burrows, 2008). This has resulted in many studies, such as those by Du and Martinez (2011) and Masuda et al. (2008), which, along with other studies (e.g. Schmidt and Cohn, 2001), have investigated the ability of people to distinguish the different universal emotions in a range of contexts, such as across cultures (Masuda et al., 2008) and in children (Gosselin and Simard, 1999; Gagnon et al., 2009). Traditionally these studies have been conducted using clinical stimuli of posed emotions, such as those used by McLellan et al. (2010) and Du and Martinez (2011). However, a recent trend has been emerging in the literature which has found that the contextual background of an emotional display is important for accurately distinguishing between different emotions (Aviezer et al., 2008; Aviezer et al., 2009; Aviezer et al., 2011; Aviezer, Hassin and Bentin, 2012; Hassin, Aviezer and Bentin, 2013; Righart and De Gelder, 2008). This trend has supported the use of stimuli which include background context, adding contextual cues that can be used in distinguishing between emotions (Aviezer et al., 2008; Righart and De Gelder, 2008), and a move away from the more clinical stimuli found in some studies. This raises concerns about the results from studies using the

clinical style stimuli, as they may be underestimating the ability of people to distinguish emotions; this is a concern in fields such as Computer Science, where accurate and effective modelling requires quality research and data as a strong basis. Part of this body of research has focused on the ability of people to distinguish between fear and surprise. While specific values for accuracy vary depending on the study and methodology (Du and Martinez, 2011; Gagnon et al., 2009; Gosselin and Simard, 1999; Roy-Charland et al., 2014), all report the same effect: accuracy for recognising surprise is greater than that for fear. In one case this disparity was as large as 93% accuracy for surprise against 49% accuracy for fear (Du and Martinez, 2011). While this section of the literature is currently undergoing change due to the increased push towards background contexts, it has been more thoroughly examined than acted and genuine emotions, which comparatively have attracted far less research effort so far.

2.2 Distinguishing Acted and Genuine Emotions

The literature on distinguishing acted and genuine emotions is more scattered and varied than that on distinguishing emotions in general. Research by Ekman and Friesen (1982) found evidence of different types of smiles, classified as felt smiles and false smiles; these map to the terms genuine and acted smiles used throughout this report. Research has additionally demonstrated that people who identify as having been socially rejected exhibit higher performance when distinguishing genuine and acted smiles. This has been attributed to a heightened social need for the ability to distinguish acted and genuine smiles, to help prevent further social rejection (Bernstein et al., 2008). Further research found that the accuracy of

distinguishing between genuine and acted smiles in Chinese participants was related to the amount of focus on the eyes while viewing the emotional display (Mai et al., 2011). A study by McLellan et al. (2010) found evidence of a sensitivity towards perceiving genuine and acted emotional stimuli (photographs) for sadness and fear, but not for happiness. A later study by McLellan et al. (2012) used fMRI to find different neural activity in participants when presented with genuine and acted stimuli. Additionally, research by Douglas, Porter and Johnston (2012) found that depression impacts the ability of people to differentiate genuine and acted sadness. While these different studies all add to the literature on genuine and acted emotions, there is an absence of research which applies a consistent experimental methodology across multiple emotions to examine the ability of people to distinguish between acted and genuine displays of emotion.

2.3 Related Work

Previous research by Hossain and Gedeon (2017) examined the ability of people to distinguish acted and genuine smiles, comparing both conscious responses and physiological responses in the form of pupil dilation. This research was built upon by Chen et al. (2017), who used the same methodology to examine the ability of people to distinguish between acted and genuine anger. The mean accuracies of conscious responses for these studies were 59% and 57% respectively, which is effectively the same. While Hossain and Gedeon (2017) used clinical style video stimuli cropped to include only the face, Chen et al. (2017) used close-cropped video stimuli which

included some background context. The stated reason for this was the increased movement in the anger stimuli (Chen et al., 2017); however, it also fits well with the emerging trend of including background context in stimuli. For this reason, and because Chen et al. (2017) examined emotion rather than facial expressions, this study is closely modelled on theirs.

3. Experimental Purpose and Hypotheses

3.1 Experimental Purpose

The purpose of this experiment was to further investigate the conscious discrimination component of the findings by Hossain and Gedeon (2017) and Chen et al. (2017), using consistent stimuli, a larger sample size and a wider range of emotions. This was to investigate: 1) whether those findings remain consistent for larger samples; 2) whether uniform video stimuli would result in different findings; and 3) whether the previous findings for smiles (Hossain and Gedeon, 2017) and anger (Chen et al., 2017) hold true for fear and surprise.

3.2 Hypotheses

Based on the previous research mentioned above, which found the accuracy for detecting genuine and acted smiles and genuine and acted anger to be 57% (Hossain and Gedeon, 2017) and 59% (Chen et al., 2017), it is expected that similar levels of accuracy will be found for anger, happiness and surprise in this experiment. Given that previous research (Du and Martinez, 2011; Gagnon et al., 2009; Gosselin and Simard, 1999; Roy-Charland et al., 2014) demonstrates lower accuracy for distinguishing fear from other emotions, it is expected that the mean accuracy for distinguishing genuine fear from acted fear will be lower than for the other emotions. Furthermore, based on the rates of mistaking fear for surprise in the literature (Du and Martinez, 2011), it is expected that the accuracy of identifying surprise stimuli in Part 2 will be higher than the accuracy of identifying fear stimuli.

Consistent with previous research (Hossain and Gedeon, 2017; Chen et al., 2017), it is expected that there will be no significant difference between genuine and acted accuracy for each emotion. Based on these arguments, the formal hypotheses for this experiment are:

- H1: Accuracy of distinguishing genuine and acted emotions will be the same for the genuine and acted versions of a given emotion.
- H2: Accuracy of distinguishing genuine and acted emotions will be consistent across happiness, anger and surprise.
- H3: Accuracy of distinguishing genuine and acted fear will be significantly lower than for happiness, anger and surprise.
- H4: Accuracy of distinguishing surprise from fear will be greater than that of distinguishing fear from surprise.

4. Design and Implementation

4.1 Programming Languages

One step in creating the video stimuli involved adjusting the videos for contrast and luminance. This was achieved using the MATLAB scripts provided as part of the existing platform. The existing platform was written in HTML, CSS and JavaScript for the website content, style and functionality, and PHP for outputting data to a log file. As the platform was already established using these languages, the extension of the platform for this experiment continued to use them.

4.2 Stimuli

For comparability and consistency of stimuli, the experiment platform continued to use the anger stimuli used by Chen and Gedeon (2017). This dictated how the stimuli for the fear, surprise and happiness conditions needed to look. The inclusion of some cropped background context was additionally desirable given previous research suggesting that background contexts affect the ability of people to distinguish emotions (Aviezer et al., 2008; Aviezer et al., 2009; Aviezer et al., 2011; Aviezer, Hassin and Bentin, 2012; Hassin, Aviezer and Bentin, 2013; Righart and De Gelder, 2008). It is also believed that stimuli of this type are more realistic and less clinical than those used in studies such as Hossain and Gedeon (2017). The raw videos of acted and genuine emotions were sourced from YouTube. Of these videos, a total of 30 were categorised as displaying fear (15 each acted and

genuine), 30 were categorised as displaying surprise (15 each acted and genuine) and 20 were categorised as displaying happiness (10 each acted and genuine). The videos were selected to balance for ethnicity, gender and background context, to keep blocks consistent and comparable. Figure 4.1 provides examples of the acted and genuine emotions displayed in the videos for anger (4.1A 1 & 2), fear (4.1B 1 & 2), happiness (4.1C 1 & 2) and surprise (4.1D 1 & 2). A comprehensive list of all raw videos sourced for this project, including still images of each emotion portrayed in each video, is included in appendix 13.

Figure 4.1 Still frames of genuine and acted stimuli.

Acted and genuine emotions were categorised based on the context of the original videos in which they were found. Reality television shows were used as the source of the raw genuine emotion videos, and fictional television shows and movies were used as the source of the raw acted

emotion videos. All raw videos were obtained from television shows or movies to match cinematic quality in terms of lighting, camera style, and camera angles and movement. Most raw videos displaying genuine emotion came from episodes of Fear Factor (Fear Factor, n.d.), which exhibited a good mix of fear and surprise at the tasks required of contestants, and happiness upon successful completion. This also provided a good source for balancing gender and ethnicity in the videos. The displays of emotion of interest were then isolated, loosely cropped and cut to form 80 (448 x 448 pixel) video stimuli of close-up displays of emotion using Camtasia. These stimuli were then passed through the MATLAB scripts for normalising contrast and luminance to create the final 80 stimuli, used in conjunction with the 20 anger stimuli from Chen and Gedeon (2017). The stimuli were categorised into three main categories corresponding to fear, surprise and happiness. Happiness was straightforward, as the videos all display someone smiling, a facial expression linked with happiness (Ekman and Friesen, 1982; Mai et al., 2011), and this was further obvious from the context of the source videos. The stimuli for fear and surprise were categorised based on established facial indicators (Du and Martinez, 2011) as well as the context of the source videos. However, not all displays of fear or surprise conform to the established indicators while clearly being fear or surprise based on the context of the source videos. Due to this disparity, videos for fear and surprise were categorised primarily on the context of the source videos when that context was obvious. This decision was made to include more realistic and varied displays of emotion rather than clinically restrictive displays.
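The MATLAB normalisation scripts belonged to the pre-existing platform and are not reproduced here. As an illustration only, a common approach to this kind of normalisation is to shift and rescale each frame's pixel intensities to a shared target mean (luminance) and standard deviation (contrast). Below is a minimal JavaScript sketch of that idea, assuming greyscale intensities in a typed array; the function name and target values are illustrative assumptions, not the platform's actual code.

```javascript
// Illustrative luminance/contrast normalisation: shift and rescale a
// frame's pixel intensities to a target mean and standard deviation.
// (Assumption: greyscale intensities in 0-255; this is not the
// platform's actual MATLAB implementation.)
function normaliseFrame(pixels, targetMean = 128, targetStd = 32) {
  const n = pixels.length;
  let mean = 0;
  for (let i = 0; i < n; i++) mean += pixels[i];
  mean /= n;

  let variance = 0;
  for (let i = 0; i < n; i++) variance += (pixels[i] - mean) ** 2;
  const std = Math.sqrt(variance / n) || 1; // guard against flat frames

  // Zero-centre, rescale contrast, restore target luminance, clamp to range.
  return Float32Array.from(pixels, (p) =>
    Math.min(255, Math.max(0, ((p - mean) / std) * targetStd + targetMean))
  );
}
```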

The stimuli were categorised into the four different emotion categories (fear, surprise, happiness and anger). Ten stimuli (five acted and five genuine) from each of the fear and surprise categories were then taken to produce the block for Part 2 of the experiment, in which participants make a two-alternative forced choice between fear and surprise. The stimuli selected for this block were those in their respective blocks that were least obviously their respective emotion. For example, the surprise stimuli included both positive and negative surprise contexts; as fear contexts are more negative, the surprise stimuli from more negative contexts were used, to avoid creating a disparity between the contexts of the fear stimuli and those of the surprise stimuli in this condition. This process resulted in five blocks of 20 stimuli: Anger, Surprise, Fear, Happiness and Fear vs Surprise. All blocks consist of ten genuine and ten acted stimuli. The Fear vs Surprise block additionally satisfies the requirement of ten fear stimuli and ten surprise stimuli. Still images of all 100 stimuli can be found in appendix 5.

4.3 Experimental Platform

The base experimental platform used in this experiment is that of Chen and Gedeon (2017). However, that platform was designed to run on a local machine and to be used for only a single emotion block of 20 stimuli. As this was not sufficient for this experiment, the platform was extended to have the following features:

- Ability to handle multiple blocks of stimuli
- Be accessible online
- Produce separate log files for each participant

- Automatic adjustment of stimuli sequences for participants
  - Both block order and stimuli order

4.4 Extended Experiment Platform Design

The experiment platform originally consisted of 20 stimuli pages, each with a matching question page. This was extended to 100 stimuli pages, each with a matching question page. Additional break pages were also created to signal the change from one block to another. The JavaScript handling stimuli ordering was adapted to read block and stimuli sequences from text files. These sequences were created using a 4x4 Latin square design for blocks and a 20x20 Latin square design for stimuli. These Latin squares can be found in appendix 6. The adapted JavaScript used the text files to determine the starting sequence combination, and from there could select the stimuli and block ordering for the experiment. It was decided to use a new stimuli sequence for each block the participant viewed. This resulted in each participant having one Latin square sequence which dictated the block order for anger, surprise, fear and happiness, and five Latin square sequences which dictated stimuli order: one each for anger, surprise, fear, happiness, and fear vs surprise. This allowed better coverage of the different block and stimuli sequence combinations with fewer participants, in the event that participation in the experiment was low. This proved unnecessary, as full coverage of all 80 combinations (4x20) was achieved; however, it could be an important design feature for future experiments using this platform. The base platform logged all participant data into a single log file. While convenient for a locally run, face-to-face experiment design, this was not possible given the crowd

sourced, online nature of this experiment. Therefore, the PHP logging functionality of the platform was adapted to create individual participant logs, named by date and experimental sequence identifier, in a designated log folder. This left the possibility that two participants completing the same sequence on the same day would be logged into one file. This was considered acceptable because, given the automatic sequence rotation described below, a minimum of 81 participants would need to complete the experiment in a single calendar day for the issue to occur. That level of participation was considered unrealistic, and thus this caveat was deemed acceptable. The base platform was administered face-to-face, so stimuli sequences were changed manually using a variable in the code. This was not possible for this experiment platform and thus had to be automated. The JavaScript of the platform was modified to update the sequence contained in the text files used to establish the block and stimuli sequences as soon as a participant started the experiment proper, that is, as soon as they progressed to the first stimulus page. This ensured that new participants starting the experiment while other participants were already completing it received their own unique sequence. The experiment platform was made accessible online using the HCC Workshop server hosted at the Australian National University, which is used primarily to engage the interest of potential students in research; the people who run the server also allow student experiments to be hosted there. An issue of sporadic logging arose and resulted in the loss of data early in the experiment. This issue is discussed fully in the challenges and limitations section.
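To illustrate the ordering scheme, each row of a cyclic Latin square is the base ordering rotated by one position, so across rows every block (or stimulus) appears in every position exactly once. The sketch below is a minimal JavaScript rendition of generating such rows and advancing a shared sequence counter; the names are illustrative assumptions, and the actual platform read pre-generated sequences from text files rather than computing them.

```javascript
// Row k of a cyclic n x n Latin square: the ordering 0..n-1 rotated
// left by k positions. Across the n rows, every item appears in every
// position exactly once.
function latinSquareRow(n, k) {
  return Array.from({ length: n }, (_, i) => (i + k) % n);
}

// Illustrative sequence assignment covering all 4 x 20 = 80
// block/stimulus sequence combinations. The real platform persisted the
// counter in text files, updated as soon as a participant reached the
// first stimulus page, and drew five separate 20x20 rows (one per
// block) rather than the single row shown here.
let sequenceCounter = 0;

function assignSequences() {
  const blockRow = Math.floor(sequenceCounter / 20) % 4; // 4x4 square row
  const stimulusRow = sequenceCounter % 20;              // 20x20 square row
  sequenceCounter = (sequenceCounter + 1) % 80;
  return {
    blockOrder: latinSquareRow(4, blockRow),
    stimulusOrder: latinSquareRow(20, stimulusRow),
  };
}
```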

4.5 SONA Instructional Video

An instructional video explaining the requirements and methods for obtaining SONA research participation credit was created and placed at the beginning of the experiment, prior to the instruction page for Part 1. SONA is a research participation platform used by the Research School of Psychology at the Australian National University. Recruitment of participants was intended to be through SONA, so this video was created to ensure participants were well instructed in how to efficiently ensure they received credit for participating in the experiment. The video, included in the artefact files, consists of a short explanation given by the experimenter. It was scripted, filmed and edited (using Camtasia) specifically for this experiment.

4.6 Experimental Platform Experience

The experiment platform was accessible online and locally on the experimenter's machine (laptop), and both were used to deliver the experiment to participants. The experimental platform experience consisted of seven categories of web pages:

- SONA video introduction page
- Instructional pages
- Demographic pages
- Stimuli pages
- Question pages
- The Emotional Regulation Questionnaire page
- Break pages

The experiment platform includes the SONA instructional video described above. This video was only shown to participants who indicated they wished to obtain SONA research participation credit; for all other participants, the URL provided started at the instruction page for Part 1 of the experiment. The experiment platform contains two almost identical instruction pages, one for each part of the experiment (Figure 4.2). These provide instructions for viewing the stimuli and deciding whether the stimuli are acted or genuine (Part 1) or demonstrating fear or surprise (Part 2). The experiment platform contains two demographic pages. The first requires the participant to supply age and gender information, and whether they wear glasses or contacts, prior to starting the experiment (Figure 4.3). Although the glasses/contacts question was not pertinent to this experiment, it was asked in the experiment conducted by Chen and Gedeon (2017) and is asked in another experiment currently utilising the extended platform designed in this experiment; therefore, it was decided to keep this question.

Figure 4.2 Instruction page for the experiment web platform.

The second demographics page is presented at the end of the experiment, after the Emotional Regulation Questionnaire (ERQ) page, and elicits information about the ethnicity and primary language of the participant (Figure 4.4).

Figure 4.3 First demographics page.

Figure 4.4 Ethnicity demographics page.

The experiment platform contains 100 stimuli pages, each of which displays a black screen with a 400x400 pixel stimulus video in the middle (Figure 4.5). After the stimulus video has finished, two buttons appear below the video allowing selection of genuine or acted emotion in Part 1 (fear or surprise in Part 2).

Figure 4.5 Example of a video stimulus page.

Each stimulus page is followed by a corresponding question page which asks the

participant to report their confidence in their decision, whether they have seen the video before and, if so, whether that influenced their decision (Figure 4.6).

Figure 4.6 Example of a video stimulus question page.

The experiment platform also contains break pages (Figure 4.7) every ten videos, mid-block and end-of-block. These serve to break up the experiment into more manageable components.

Figure 4.7 An example break page.

After the completion of Part 2 of the experiment, the platform presents participants with a webpage (Figure 4.8) containing the Emotional Regulation Questionnaire (ERQ).

Figure 4.8 The Emotional Regulation Questionnaire (ERQ) page.

5. Participants and Procedures

5.1 Participants

The initial intention was to recruit participants via SONA, a research recruitment platform used by the Research School of Psychology at the Australian National University. Research participation credit for courses was offered to students who participated through this medium. However, due to issues with the platform, which are discussed fully in the challenges and limitations section of this report, the number of participants recruited via this method was zero. The experiment listing used on SONA is in appendix 11. As an alternative, social media, primarily Facebook, was used to crowd source participants. Posts were put on various public Facebook groups such as community pages, research participation groups and honours student groups. An example post can be found in appendix 12. Additional information and instructions were contained in the survey answer sheet provided with all posts (appendix 13). This included information such as how best to perform the experiment, its duration and the ethics approval number. Several recipients of the posts asked to share the request for participants with other groups they were members of; consent was universally given. Through this method, a web of posts across social media was used to crowd source the participants for the experiment. Ultimately, the experiment had 117 participants (81 females, 35 males, 1 not disclosed) with a minimum age of 16 and a maximum age of 80 (mean age = ). Participants were mostly of Caucasian ethnicity, with a scattering of people of Asian, Aboriginal,

Indian and other ethnicities. The participants were mostly based in Australia, although people as far away as the UK and Canada also participated in the experiment. One participant recruited through a social media post was provided credit through SONA for 1 hour of participation, as they were an ANU student who saw the post and inquired about the possibility of experiment participation credit in SONA. Further information, including additional ethics approval information, was given upon request from participants. As the experiment was conducted in an opt-in manner and required returning an answer sheet by email (or an equivalent method), consent was considered to be given when the participant returned the answer sheet. In this way, no personally identifying information beyond age, gender, ethnicity, primary language and whether they wear glasses/contacts was collected, in order to ensure maximum privacy for the participants.

5.2 Procedures

Participants could complete the experiment in one of two ways: 1) in person on the experimenter's local machine (laptop), or 2) online. The procedures for the two methods differed slightly. The local machine recorded all answers without issue, so no further effort by the participants was required. The online version, however, presented issues with correctly recording the volume of data (see Challenges and Limitations). To ensure accurate recording of results for online participants, a survey answer sheet was provided for participants to fill out while completing the experiment (appendix 13). All participants started the experiment proper on the introduction page containing instructions for completing the experiment (Figure 4.2). Participants then viewed a

demo video to demonstrate the style of videos contained in the experiment. Once ready, the participant could begin Part 1 of the experiment. In Part 1, participants were presented with four blocks of 20 videos, each block corresponding to a different emotion (anger, surprise, fear and happiness). Within each block, participants viewed ten videos displaying genuine emotion and ten videos displaying acted emotion, with a duration of seconds each. After each video, participants were asked to make a two-alternative forced choice between genuine and acted emotion, clicking one of two buttons to indicate their decision. A question page followed each video, asking participants how confident they were in their decision (Likert scale 1-5), whether they had seen the video before and, if yes, whether it influenced their decision. Following every ten videos, a break page was presented to the participant, allowing a short break if required. At the completion of Part 1, the instruction page for Part 2 was presented to the participant. In Part 2 of the experiment, participants were presented with a single block of 20 videos, consisting of ten videos (five each genuine and acted) displaying fear and ten videos (five each genuine and acted) displaying surprise. Participants completed Part 2 in the same way as Part 1, but the two-alternative forced choice they were asked to make was instead whether the videos displayed fear or surprise.
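Each trial therefore yields a small record: the forced-choice answer plus the question-page responses. As an illustration of how such a record might be assembled and sent to the platform's PHP logger, here is a minimal JavaScript sketch; the field names, endpoint name and structure are assumptions for illustration, not the platform's actual log format.

```javascript
// Illustrative per-trial record combining the forced-choice answer with
// the question-page responses. (Assumed field names, not the platform's
// actual log format.)
function makeTrialRecord(stimulusId, choice, confidence, seenBefore, influenced) {
  return {
    stimulusId,   // e.g. "fear_07" (hypothetical ID scheme)
    choice,       // "genuine" | "acted" ("fear" | "surprise" in Part 2)
    confidence,   // Likert rating, 1-5
    seenBefore,   // had the participant seen the video before?
    influenced,   // if seen before, did it influence the decision?
    timestamp: new Date().toISOString(),
  };
}

// Post the record to a server-side logging script. "log.php" is a
// placeholder name; the real platform wrote per-participant log files
// named by date and sequence identifier.
function logTrial(record) {
  return fetch("log.php", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
}
```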

6. Results

6.1 General Results

The results for all 117 participants were analysed using three levels of restriction on which answers were included in the analysis. The least restrictive level of analysis did not remove any responses from the data set; this data set has been labelled the Main data set. The middle restriction level removed answers where the participant indicated that they had seen the stimulus video before and that their recognition of the video influenced their decision; this data set has been labelled the YY removed data set. The most restrictive level of analysis started from the YY removed data set and additionally removed all answers where the participant indicated that they had seen the stimulus video before but stated that it did not influence their decision. This was done in case participants were influenced but not consciously aware of it; this data set has been labelled the YN removed data set. For both the YY removed and YN removed data sets, removed answers also had to satisfy an additional criterion: only correct answers were removed. Where an answer was incorrect, the participant was deemed to be mistaken and the response treated as equivalent to a normal response. The YY removed and YN removed data sets were compared to the Main data set using unpaired t-tests to check for any statistically significant differences. No statistically significant differences were found between the Main data set and either

the YY removed or YN removed data sets for the block level analysis (Tables 6.1 and 6.2 respectively) or the genuine and acted level analysis (Tables 6.3 and 6.4 respectively). Although the three data sets showed no statistically significant differences from each other, there were some differences in t-test results within the data sets. Therefore, it was decided to use the most conservative data set, YN removed, for the remainder of this results section. The figures and tables in their entirety for all three data sets can be found in appendices 7, 8 and 9 for comparison.

Table 6.1 Unpaired t-tests on mean accuracy between the main, raw data set and the YY removed data set at the block level. [Numeric values not preserved; Significant? No for all five blocks (Anger, Surprise, Fear, Happiness, Fear vs Surprise).]

Table 6.2 Unpaired t-tests on mean accuracy between the main, raw data set and the YN removed data set at the block level. [Numeric values not preserved; Significant? No for all five blocks (Anger, Surprise, Fear, Happiness, Fear vs Surprise).]

Table 6.3 Unpaired t-tests on mean accuracy between the main data set and the YY removed data set for genuine and acted emotion. [Numeric values not preserved; Significant? No for all 12 comparisons: genuine and acted for each of Anger, Surprise, Fear, Happiness and Fear vs Surprise, plus fear and surprise in the Fear vs Surprise block.]

Table 6.4 Unpaired t-tests on mean accuracy between the main, raw data set and the YN removed data set for genuine and acted emotion. [Numeric values not preserved; Significant? No for all 12 comparisons: genuine and acted for each of Anger, Surprise, Fear, Happiness and Fear vs Surprise, plus fear and surprise in the Fear vs Surprise block.]
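To make the exclusion rule from Section 6.1 concrete, the sketch below shows a minimal JavaScript filter over trial records of the kind logged during the experiment; the field names match the earlier illustrative record and are assumptions, not the platform's actual format.

```javascript
// Filter responses under the three restriction levels from Section 6.1.
// "main": keep everything. "yy": remove correct answers where the video
// was seen before AND recognition influenced the decision. "yn": as
// "yy", but also remove correct answers where the video was seen before
// even if the participant said it did not influence them. Incorrect
// answers are always kept: a mistaken recognition is treated as a
// normal response.
function filterResponses(trials, level) {
  return trials.filter((t) => {
    if (level === "main" || !t.correct || !t.seenBefore) return true;
    if (t.influenced) return false; // the YY case, removed at both levels
    return level !== "yn";          // the YN case, removed only at "yn"
  });
}

// Example: the most conservative data set used in this results section.
// const ynRemoved = filterResponses(allTrials, "yn");
```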

6.2 Block Level

The mean accuracies for the five blocks of emotions examined in Parts 1 and 2 of the experiment can be seen in Figure 6.1. No statistically significant differences were found between anger, surprise and happiness (mean accuracy = 65.3%, 62.7% and 61.9% respectively) using unpaired two-tailed t-tests (Table 6.5) for anger against surprise, anger against happiness, and happiness against surprise.

Figure 6.1 Mean accuracy for the 4 emotion blocks in Part 1 and the fear vs surprise block in Part 2 (block-level performance at distinguishing acted and genuine, or fear vs surprise, emotions; YN Removed). Anger [Mean Accuracy = 65.3%, Min = 20%, Max = 94.7%, SD = 17.2%, S.E.M = 1.6%], Surprise [Mean Accuracy = 62.7%, Min = 30%, Max = 95%, SD = 14.8%, S.E.M = 1.4%], Fear [Mean Accuracy = 55.3%, Min = 15%, Max = 90%, SD = 19.9%, S.E.M = 1.8%], Happiness [Mean Accuracy = 61.9%, Min = 21.1%, Max = 94.7%, SD = 14.8%, S.E.M = 1.4%] and Fear vs Surprise [Mean Accuracy = 58.0%, Min = 30%, Max = 73.5%, SD = 8.6%, S.E.M = 0.8%].
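The summary statistics in these captions are computed per participant and then aggregated. For reference, here is a minimal JavaScript sketch of the quantities reported, assuming per-participant accuracies in percent and a sample standard deviation; this is an illustration, not the analysis code actually used.

```javascript
// Summary statistics over per-participant block accuracies (in %),
// matching the quantities in the figure captions. Assumes the sample
// (n - 1) standard deviation; S.E.M = SD / sqrt(n).
function summarise(accuracies) {
  const n = accuracies.length;
  const mean = accuracies.reduce((s, v) => s + v, 0) / n;
  const sd = Math.sqrt(
    accuracies.reduce((s, v) => s + (v - mean) ** 2, 0) / (n - 1)
  );
  return {
    mean,
    min: Math.min(...accuracies),
    max: Math.max(...accuracies),
    sd,
    sem: sd / Math.sqrt(n),
  };
}
```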

Table 6.5 Unpaired t-tests on mean accuracy between emotion blocks using the YN Removed data set. [Numeric values not preserved; Significant? Anger/Surprise: No; Anger/Happiness: No; Surprise/Happiness: No; Surprise/Fear: Very Stat Sig; Anger/Fear: Extremely Stat Sig; Happiness/Fear: Very Stat Sig.]

Comparing the mean accuracies for surprise, anger and happiness with fear (mean accuracy = 55.3%) using unpaired two-tailed t-tests resulted in statistically significant differences for all three comparisons (Table 6.5). The fear vs surprise block had a mean accuracy of 58.0% and a standard deviation of 8.6%. This placed performance on the fear vs surprise judgement in the middle, between the lower accuracy for fear and the higher accuracies for anger, surprise and happiness. When compared to chance level, 50% for a two-alternative forced choice design, performance was statistically higher than chance for all blocks (Table 6.6).

6.3 Experiment Part 1 Genuine vs Acted

6.3.1 Anger

For the anger emotion block, 33% of all answers were correctly identified genuine anger, 32% were correctly identified acted anger, 19% were incorrectly identified genuine anger and 16% were incorrectly identified acted anger (Figure 6.2). The mean accuracy for genuine anger was 63.3% with a standard deviation of 19.3%, and the mean accuracy for acted anger was 67.7% with a standard deviation of 24.7% (Figure 6.3).

Table 6.6 Unpaired t-tests between emotion blocks and chance level accuracy using the YN Removed data set. [Numeric values not preserved; Significant? Anger: Extremely Stat Sig; Surprise: Extremely Stat Sig; Fear: Stat Sig; Happiness: Extremely Stat Sig; Fear vs Surprise: Extremely Stat Sig.]
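The comparisons in Tables 6.5 to 6.18 are standard unpaired two-tailed t-tests; for the chance comparisons, a synthetic sample with a mean of 50% and the observed sample's standard deviation stands in for chance performance. Below is a minimal JavaScript sketch of the pooled-variance (Student's) form, assuming per-participant accuracies as input; the two-tailed p-value would be read from a t-distribution (e.g. via a statistics library) and is omitted here.

```javascript
// Pooled-variance (Student's) unpaired t-test between two samples of
// per-participant accuracies. Returns the t statistic, degrees of
// freedom and standard error of the difference; the two-tailed p-value
// is then read from a t-distribution table or library (omitted here).
function unpairedTTest(a, b) {
  const mean = (x) => x.reduce((s, v) => s + v, 0) / x.length;
  const variance = (x, m) =>
    x.reduce((s, v) => s + (v - m) ** 2, 0) / (x.length - 1);
  const ma = mean(a);
  const mb = mean(b);
  const df = a.length + b.length - 2; // 117 + 117 - 2 = 232 in these tables
  const pooled =
    ((a.length - 1) * variance(a, ma) + (b.length - 1) * variance(b, mb)) / df;
  const seOfDiff = Math.sqrt(pooled * (1 / a.length + 1 / b.length));
  return { t: (ma - mb) / seOfDiff, df, seOfDiff };
}

// Chance comparison as described in the text: shift the observed sample
// so its mean is 50% while preserving its standard deviation exactly.
function chanceSample(observed) {
  const m = observed.reduce((s, v) => s + v, 0) / observed.length;
  return observed.map((v) => v - m + 50);
}

// Example: unpairedTTest(fearAccuracies, chanceSample(fearAccuracies))
// mirrors the structure of the Table 6.6 block-vs-chance comparisons.
```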

Figure 6.2 Composition of the total responses for acted and genuine anger (YN Removed): genuine correct 33%, acted correct 32%, genuine incorrect 19%, acted incorrect 16%.

Figure 6.3 Mean Accuracy for genuine anger [Mean Accuracy = 63.3%, Min = 10%, Max = 100%, SD = 19.3%, S.E.M = 1.8%] and acted anger [Mean Accuracy = 67.7%, Min = 10%, Max = 100%, SD = 24.7%, S.E.M = 2.3%] (YN Removed).

An unpaired two-tailed t-test was conducted between the mean accuracies of correctly identifying genuine and acted anger, and there was no statistically significant difference (Table 6.7). Unpaired two-tailed t-tests were conducted between the mean accuracies of genuine and acted anger and chance level, using a mean accuracy of 50.0% to represent chance and a standard deviation equal to the standard deviation of genuine and acted anger respectively. Both comparisons, acted anger against chance and genuine anger against chance, were extremely statistically significant (Table 6.8).

Table 6.7 Unpaired t-tests on mean accuracy between genuine and acted anger for the YN removed data set. [Numeric values not preserved; df = 232; Significant? No.]

Table 6.8 Unpaired t-tests on mean accuracy between genuine anger and chance, and acted anger and chance, for the YN removed data set. [Numeric values not preserved; Significant? Extremely Stat Sig for both comparisons.]

6.3.2 Surprise

For the surprise emotion block, 33% of all answers were correctly identified genuine surprise, 29% were correctly identified acted surprise, 20% were incorrectly identified genuine surprise and 18% were incorrectly identified acted surprise (Figure 6.4). The mean accuracy for genuine surprise was 62.7% with a standard deviation of 19.6%, and the mean accuracy for acted surprise was 62.4% with a standard deviation of 18.4% (Figure 6.5).

Figure 6.4 Composition of the total responses for genuine and acted surprise (YN Removed): genuine correct 33%, acted correct 29%, genuine incorrect 20%, acted incorrect 18%.

An unpaired two-tailed t-test was conducted between the mean accuracies of correctly identifying genuine and acted surprise, and there was no statistically significant difference (Table 6.9). Unpaired two-tailed t-tests were conducted between the mean accuracies of genuine and acted surprise and chance level, using a mean accuracy of 50.0% to represent chance and a standard deviation equal to the standard deviation of genuine and acted surprise respectively. These

t-tests showed extremely statistically significant differences for both comparisons: acted surprise against chance, and genuine surprise against chance (Table 6.10).

Figure 6.5 Mean Accuracy for genuine surprise [Mean Accuracy = 62.7%, Min = 20%, Max = 100%, SD = 19.6%, S.E.M = 1.8%] and acted surprise [Mean Accuracy = 62.4%, Min = 12.5%, Max = 100%, SD = 18.4%, S.E.M = 1.7%] (YN Removed).

Table 6.9 Unpaired t-tests on mean accuracy between genuine and acted surprise for the YN removed data set. [Numeric values not preserved; df = 232; Significant? No.]

Table 6.10 Unpaired t-tests on mean accuracy between genuine surprise and chance, and acted surprise and chance, for the YN removed data set. [Numeric values not preserved; Significant? Extremely Stat Sig for both comparisons.]

6.3.3 Happiness

For the happiness emotion block, 33% of all answers were correctly identified genuine happiness, 29% were correctly identified acted happiness, 19% were incorrectly identified genuine happiness and 19% were incorrectly identified acted happiness (Figure 6.6). The mean accuracy for genuine happiness was 63.3% with a standard deviation of 18.4%, and the mean accuracy for acted happiness was 60.4% with a standard deviation of 19.6% (Figure 6.7).

Figure 6.6 Composition of the total responses for genuine and acted happiness (YN Removed): genuine correct 33%, acted correct 29%, genuine incorrect 19%, acted incorrect 19%.

An unpaired two-tailed t-test was conducted between the mean accuracies of correctly identifying genuine and acted happiness, and there was no statistically significant difference (Table 6.11). Unpaired two-tailed t-tests were conducted between the mean accuracies of genuine and acted happiness and chance level, using a mean accuracy of 50.0% to represent chance and a standard deviation equal to the standard deviation of genuine and acted happiness respectively. These

t-tests showed extremely statistically significant differences for both comparisons: acted happiness against chance, and genuine happiness against chance (Table 6.12).

Figure 6.7 Mean Accuracy for genuine happiness [Mean Accuracy = 63.3%, Min = 20%, Max = 100%, SD = 18.4%, S.E.M = 1.7%] and acted happiness [Mean Accuracy = 60.4%, Min = 10%, Max = 100%, SD = 19.6%, S.E.M = 1.8%] (YN Removed).

Table 6.11 Unpaired t-tests on mean accuracy between genuine and acted happiness for the YN removed data set. [Numeric values not preserved; df = 232; Significant? No.]

Table 6.12 Unpaired t-tests on mean accuracy between genuine happiness and chance, and acted happiness and chance, for the YN removed data set. [Numeric values not preserved; Significant? Extremely Stat Sig for both comparisons.]

6.3.4 Fear

For the fear emotion block, 23% of all answers were correctly identified genuine fear, 32% were correctly identified acted fear, 28% were incorrectly identified genuine fear and 17% were incorrectly identified acted fear (Figure 6.8). The mean accuracy for genuine fear was 45.3% with a standard deviation of 19.7%, and the mean accuracy for acted fear was 66.2% with a standard deviation of 29.1% (Figure 6.9).

Figure 6.8 Composition of the total responses for genuine and acted fear (YN Removed): genuine correct 23%, acted correct 32%, genuine incorrect 28%, acted incorrect 17%.

An unpaired two-tailed t-test was conducted between the mean accuracies of correctly identifying genuine and acted fear, and an extremely statistically significant difference was found (Table 6.13). Unpaired two-tailed t-tests were conducted between the mean accuracies of genuine and acted fear and chance level, using a mean accuracy of 50.0% to represent chance and a standard deviation equal to the standard deviation of genuine and acted fear respectively. These t-tests

showed an extremely statistically significant difference for the comparison between acted fear and chance (Table 6.14). However, the t-test between genuine fear and chance showed no statistically significant difference between the mean accuracy for identifying genuine fear and correctly identifying genuine fear by chance (Table 6.14).

Figure 6.9 Mean Accuracy for genuine fear [Mean Accuracy = 45.3%, Min = 0%, Max = 100%, SD = 19.7%, S.E.M = 1.8%] and acted fear [Mean Accuracy = 66.2%, Min = 0%, Max = 100%, SD = 29.0%, S.E.M = 2.7%] (YN Removed).

Table 6.13 Unpaired t-tests on mean accuracy between genuine and acted fear for the YN removed data set. [Numeric values not preserved; df = 232; Significant? Extremely Stat Sig.]

Table 6.14 Unpaired t-tests on mean accuracy between genuine fear and chance, and between acted fear and chance, for the YN removed data set. [Numeric values not preserved; Significant? Acted fear vs chance: Extremely Stat Sig; genuine fear vs chance: No.]

6.4 Experiment Part 2 Fear vs Surprise

6.4.1 Fear vs Surprise (Genuine and Acted Analysis)

For the fear vs surprise block, 29% of all answers were correctly identified genuine emotion, 29% were correctly identified acted emotion, 22% were incorrectly identified genuine emotion and 20% were incorrectly identified acted emotion (Figure 6.10). The mean accuracy for genuine emotion was 56.3% with a standard deviation of 12.0%, and the mean accuracy for acted emotion was 59.7% with a standard deviation of 12.4% (Figure 6.11).

Figure 6.10 Composition of the total responses for genuine and acted fear and surprise (YN Removed): genuine correct 29%, acted correct 29%, genuine incorrect 22%, acted incorrect 20%.

An unpaired two-tailed t-test was conducted between the mean accuracies of correctly identifying genuine and acted emotion, and there was a statistically significant difference (Table 6.15). Unpaired two-tailed t-tests were conducted between the mean accuracies of genuine and acted emotion and chance

level, using a mean accuracy of 50.0% to represent chance and a standard deviation equal to the standard deviation of genuine and acted emotion respectively. These t-tests showed extremely statistically significant differences for both comparisons: acted emotion against chance, and genuine emotion against chance (Table 6.16).

Figure 6.11 Mean Accuracy for genuine fear and surprise [Mean Accuracy = 56.3%, Min = 30%, Max = 90%, SD = 12.0%, S.E.M = 1.1%] and acted fear and surprise [Mean Accuracy = 59.7%, Min = 30%, Max = 90%, SD = 12.4%, S.E.M = 1.1%] (YN Removed).

Table 6.15 Unpaired t-tests on mean accuracy between genuine fear and surprise, and acted fear and surprise, for the YN removed data set. [Numeric values not preserved; df = 232; Significant? Stat Sig.]

Table 6.16 Unpaired t-tests on mean accuracy between genuine fear and surprise against chance, and acted fear and surprise against chance, for the YN removed data set. [Numeric values not preserved; Significant? Extremely Stat Sig for both comparisons.]

6.4.2 Fear vs Surprise (Fear vs Surprise Analysis)

For the fear vs surprise block, 27% of all answers were correctly identified fear, 31% were correctly identified surprise, 23% were incorrectly identified fear and 19% were incorrectly identified surprise (Figure 6.12). The mean accuracy for fear was 53.5% with a standard deviation of 16.0%, and the mean accuracy for surprise was 62.5% with a standard deviation of 12.2% (Figure 6.13).

Figure 6.12 Composition of the total responses for fear and surprise (YN removed): fear correct 27%, surprise correct 31%, fear incorrect 23%, surprise incorrect 19%.

An unpaired two-tailed t-test was conducted between the mean accuracies of correctly identifying fear and surprise, and there was an extremely statistically significant difference with a P-value of P < 0.0001 (Table 6.17). Unpaired two-tailed t-tests were conducted between the mean accuracies of fear and surprise and chance level, using a mean accuracy of 50.0% to represent chance and a standard deviation equal to the standard deviation of fear and surprise respectively. These t-tests returned a P-value of P < 0.0001 for the comparison between surprise and chance (Table 6.18). However, the t-test between fear and chance returned a P-value above the 0.05 threshold, indicating that there is no statistically significant difference between the mean accuracy for identifying fear and the accuracy expected by chance (Table 6.18).

Figure 6.13 Performance at distinguishing fear and surprise (YN removed): mean accuracy for fear [Mean = 53.5%, Min = 10%, Max = 90%, SD = 16.0%, S.E.M. = 1.5%] and surprise [Mean = 62.5%, Min = 22.2%, Max = 90%, SD = 12.2%, S.E.M. = 1.1%].

Table 6.17 Unpaired t-test on mean accuracy between fear and surprise for the YN removed data set (two-tailed, df = 232; the difference is extremely statistically significant).

Table 6.18 Unpaired t-tests on mean accuracy between fear and chance, and between surprise and chance, for the YN removed data set (fear vs chance: not significant; surprise vs chance: extremely statistically significant).

7. Discussion

7.1 Discussion Data Sets

The results showed that there was no statistically significant difference between the 3 data sets: Main, YY removed and YN removed. Despite this lack of difference, the most restrictive YN removed data set was used. This decision was made to be conservative with the results and to ensure that recognising the person in the stimuli did not inflate performance. The importance of this decision can be seen by comparing the tables and figures in appendices 7, 8 and 9, which show a significant difference between acted and genuine anger in the Main data set, with an inflated acted accuracy, but no statistical difference once the stimuli recognised from outside the experiment were excluded. Similar differences for other conditions can be seen by comparing the tables in appendices 7, 8 and 9.

7.2 Discussion Experiment Part 1 Block Level

The block level results from Part 1 of the experiment indicate that there is no difference in the accuracy of distinguishing between acted and genuine emotions across anger, surprise and happiness (Table 6.5). This was expected given previous work by Chen et al. (2017) on genuine and acted anger, which found the accuracy for anger was consistent with the previously reported accuracy for genuine and acted smiles by Hossain and Gedeon (2017).

The results of this experiment further suggest that performance for acted and genuine surprise continues this trend of consistency across the different emotions.

The result that accuracy for identifying genuine and acted fear was significantly lower than for happiness, anger and surprise was expected (Figure 6.1 and Table 6.5). It is well documented in the literature (Du and Martinez, 2011; Gagnon et al., 2009; Gosselin and Simard, 1999; Roy-Charland et al., 2014) that people have difficulty distinguishing fear from other emotions. Based on this knowledge, it was reasoned that if people are poor at distinguishing fear in general, they are also likely to be poor at distinguishing between different types of fear. However, it does raise the question of why anger is consistent with happiness and surprise, as previous research has shown that people do not distinguish anger from other emotions as well as they do happiness and surprise (Du and Martinez, 2011).

One possible reason for the similarity between happiness, surprise and anger is that acted and genuine displays of these 3 emotions are more common occurrences in society than displays of fear. People often pretend to be happy to avoid being socially impolite by unloading their worries on others, fake surprise over a present they knew they were receiving, or pretend to be angry to stop children behaving poorly. In contrast, fear is not an emotion that is frequently seen in society outside of television, and it is generally coupled with more obvious contextual cues, such as the person running away while possibly screaming. It is plausible that the reduced performance in distinguishing acted and genuine displays of fear arises because people are not frequently exposed to fearful expressions and, when they are, they are responding to other, less subtle contextual cues.

The results indicate that accuracy for all four emotions was greater than chance level performance (Table 6.6). The accuracy for fear was only 55.3% (Figure 6.1; appendix 9), which is only marginally better than chance but still statistically different (Table 6.6). Together, the accuracy range of the four emotions was 55.3% (fear) to 65.3% (anger) which, while an increase in accuracy over chance, is not a large improvement when the potential for improvement over chance is considered. As chance level for a two-alternative forced choice scenario is 50%, an accuracy range of 55.3%-65.3% is only an increase of 5.3%-15.3% out of a possible 50% improvement over chance. In the worst case (fear), this means people correctly distinguished only 1 in 10 (10%) of the stimuli that they would not be expected to get correct purely by chance. In the best case (anger), it was only ~30%, which is less than 1 in 3. This suggests that the ability of people to distinguish acted and genuine displays of fear, anger, surprise and happiness is poor.

Despite the generally poor performance at distinguishing between acted and genuine emotions across all four emotions, this is not the case for everyone. Some people performed extremely well, with maximum accuracies for anger, surprise and happiness of 94.7%, 95% and 94.7% respectively (appendix 9). While fear was slightly lower, the maximum accuracy was still 90%, which is much higher than the mean accuracy. At the other end of the spectrum, others performed equally poorly, with minimum accuracies of 20%, 30%, 21.1% and 15% for anger, surprise, happiness and fear respectively (appendix 9). Consistent performance well below chance implies that the person can recognise the difference but assigns the wrong label. Future analysis of the data to attempt to discover any contributing factors to this large variance in performance would be interesting and valuable but is, unfortunately, beyond the scope of this experiment.
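To make the improvement-over-chance calculation above explicit, the observed accuracy can be rescaled by the maximum possible improvement over a 50% chance level:

\[
\text{relative improvement} = \frac{\text{accuracy} - 0.5}{1 - 0.5}, \qquad
\frac{0.553 - 0.5}{0.5} \approx 10.6\%, \qquad
\frac{0.653 - 0.5}{0.5} \approx 30.6\%.
\]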

7.3 Discussion Experiment Part 1 Genuine vs Acted

The results for experiment Part 1 indicate that there is no difference in the ability of people to distinguish between acted and genuine displays of happiness, anger or surprise (Tables 6.7, 6.9 and 6.11). This is consistent with previous results for smiles and anger by Hossain and Gedeon (2017) and Chen et al. (2017) respectively. However, the results indicate there is a difference in the ability of people to distinguish acted and genuine fear (Table 6.13). Accuracy for identifying genuine fear was only 45.3%, which was statistically no different from chance (Table 6.14), while the accuracy for identifying acted fear was 66.2%, which was higher than chance accuracy. Surprisingly, the accuracy for acted fear was higher than for all emotions except acted anger (67.7%; Tables 6.7, 6.9, 6.11 and 6.13; appendix 9). When combined with the block level results, the results for fear indicate that the lower accuracy for distinguishing acted and genuine fear compared to happiness, surprise and anger is solely due to the inability of people to correctly identify genuine fear. This result was unexpected given the consistency for smiles and anger in previous studies (Hossain and Gedeon, 2017; Chen et al., 2017).

One explanation for the lower accuracy in identifying genuine fear compared to acted fear is that acted fear is more prevalent in society, so we have greater exposure to, and thus experience with, emotional displays of acted fear. People are more likely to see fear in the context of a scary movie or television show than in their everyday lives.

This difference in exposure could have led to a greater ability to identify acted fear through experiential learning, which does not occur for genuine fear due to a lack of genuinely fearful events from which to learn.

Similar to the block level results, the minimum and maximum accuracies for distinguishing between acted and genuine displays of the four emotions show a large variance (figure 8; appendix 9), with most conditions having a maximum of 100% accuracy and a minimum of 10%-20%. Of particular interest are the minimums and maximums for genuine and acted fear, which are both 0% and 100% respectively. Despite the accuracy of identifying genuine fear being no different from chance and the accuracy of identifying acted fear being 66.2%, the minimum and maximum accuracies are identical. Analysis of potential contributors to this effect is beyond the scope of this experiment, and is potentially impossible with the data collected due to the minimal demographic information gathered from participants. However, this could be an interesting effect to examine in a future experiment.

7.4 Discussion Experiment Part 2 Block Level

The results of experiment Part 2 for fear vs surprise at the block level indicate that the ability of people to distinguish between fear and surprise is greater than chance (mean accuracy = 58.0%) (Figure 6.1 and Table 6.6). The accuracy for the fear vs surprise block falls within the range of the mean accuracies for fear (55.3%) and surprise (62.7%). This demonstrates the consistency of the experimental design, as this is exactly where a combination of fear and surprise stimuli would be expected to fall given their respective accuracies.

The minimum and maximum accuracies for the fear vs surprise block were 30% and 80% respectively (Figure 6.1; appendix 9). This indicates that the ability to distinguish fear from surprise is more uniform across people than the ability to distinguish genuine and acted emotions. One explanation is that distinguishing between different emotions requires less finesse than distinguishing acted from genuine emotions. The differences between emotions are likely to be larger than the subtler nuances which distinguish genuine from acted displays of a particular emotion. Therefore, it is possible that people can in general distinguish between emotions more readily because the task is less nuanced and therefore easier to learn.

7.5 Discussion Experiment Part 2 Fear vs Surprise

As a check for consistency, the fear vs surprise block was analysed using the acted and genuine categorisation, as were the emotion blocks in experiment Part 1. This was to check whether the accuracies for acted and genuine displays of fear and surprise remained consistent even when the judgement asked of participants changed from acted or genuine to fear or surprise. The results indicate that the trends in accuracy from experiment Part 1 were preserved in experiment Part 2 (Tables 6.7, 6.9, 6.11, 6.13 and 6.15). The accuracy for genuine fear and surprise was 56.3%, while the accuracy for acted fear and surprise was 59.7% (Table 6.15). These accuracies are statistically different from each other and from accuracy due to chance (Tables 6.15 and 6.16). This reflects the results from experiment Part 1, with the accuracy for genuine fear being lower than for acted fear, and both being lower than genuine and acted surprise, which would be expected once fear is added into the mix.

The main purpose of experiment Part 2 was to examine accuracies for fear and surprise to see whether these emotions maintained the trends established in the literature (Du and Martinez, 2011; Gagnon et al., 2009; Gosselin and Simard, 1999; Roy-Charland et al., 2014) using the videos in this experimental platform. The reason for doing this was twofold: firstly, to investigate whether the trend held, and secondly, as a form of validity check for the platform and videos used. The results indicate that people are more accurate at identifying surprise (mean accuracy 62.5%) than fear (53.5%) (Table 6.17). The accuracy for identifying surprise was statistically higher than chance level accuracy, but the accuracy for identifying fear was not (Table 6.18). These results follow the expected pattern of people being better at identifying surprise than fear, and the trend of fear being mistaken for surprise more often than surprise is mistaken for fear.

Similar to genuine fear in experiment Part 1, identifying fear in experiment Part 2 had an accuracy equivalent to chance. This is lower than values reported in previous research (Gosselin and Simard, 1999; Roy-Charland et al., 2014), which suggests that the ability to distinguish fear is lower than for the other universal emotions such as surprise, happiness and anger, but certainly higher than chance level. Some studies have reported accuracies this low (Du and Martinez, 2011). The lower accuracy found in this experiment could be due to the presence of genuine fear. Often the stimuli used in studies of this type are posed, in that the facial expressions are deliberately sculpted by an actor to ensure they match the defined standards for an expression of that emotion. Such stimuli are often very clinical in nature and feel awkward.

This experiment, however, uses video footage with emotions classified based on context, which is more realistic, has a wider variation in the facial expressions for an emotion, and is more indicative of displays of emotion in society. The result is that this experiment has an equal number of acted and genuine displays of fear in the fear vs surprise block, while studies using posed stimuli have fearful displays consisting solely of acted fear. However, the results from experiment Part 1 indicate that the ability of people to recognise acted fear is significantly higher than for genuine fear. This difference provides a possible explanation for the lower accuracy at identifying fear in this experiment than in previous research.

7.6 General Discussion

Examining the hypotheses stated in section 3.2:

H1: This hypothesis was confirmed for happiness, surprise and anger based on the results in experiment Part 1. However, it did not hold for fear, which demonstrated chance level accuracy for distinguishing genuine fear but not for acted fear.

H2: This hypothesis was confirmed. The results from experiment Part 1 indicate no difference in performance at distinguishing acted and genuine emotion at the block level for happiness, surprise and anger.

H3: This hypothesis was confirmed. The results from experiment Part 1 indicate the mean accuracy for distinguishing genuine and acted fear is lower than that for happiness, surprise and anger, which were all found to be the same.

H4: This hypothesis was confirmed. The results from experiment Part 2 indicate that people are significantly better at distinguishing surprise from fear than fear from surprise.

Comparison of the results from this experiment with those from previous research by Hossain and Gedeon (2017), using t-tests (Table 7.2) on the mean accuracy at the block level between happiness and smiles, indicates there is no difference between the accuracy for happiness in this experiment (mean accuracy = 61.9%) and that of smiles (mean accuracy = 59%) in the experiment by Hossain and Gedeon (2017) (Table 7.1). However, when comparing the mean accuracy for anger from this experiment with the results from Chen et al. (2017), the mean accuracy in this experiment is significantly higher (Tables 7.1 and 7.2).

Table 7.1 Comparison of mean accuracy for anger and happiness from this experiment, anger from Chen et al. (2017) and smiles from Hossain and Gedeon (2017).

                      Anger    Happiness    Chen's Anger    Smiles
Mean Accuracy (%)     65.3     61.9         -               59
STD Dev (%)           -        -            -               -
N                     117      117          32              10

Table 7.2 Unpaired t-tests on mean accuracy between anger from this experiment and anger from Chen et al. (2017), and between happiness from this experiment and smiles from Hossain and Gedeon (2017) (anger vs Chen's anger: statistically significant; happiness vs smiles: not significant).

Comparison between the current experiment's results and previous results for genuine and acted anger and happiness/smiles (Tables 7.3 and 7.4) indicates significantly higher accuracy for both genuine anger and genuine happiness in this study, but no difference in accuracy for acted anger and acted happiness. It is believed that the higher accuracies found in this experiment are due to the comparatively larger sample size in this study (N = 117) than in those studies (N = 10 and N = 32 for Hossain and Gedeon (2017) and Chen et al. (2017) respectively), which has resulted in a better estimate of the true mean accuracy for these conditions.

Despite this experiment examining happiness rather than smiles explicitly, it was expected that these results would be consistent. This expectation was due to the presence of a smile being the primary facial expression indicating a happy emotional state. While it is not the case that the presence of a smile implies happiness (acted smiles), it was the case that all the stimuli which contained an emotional display of happiness contained smiles.

Thus, while it made sense to compare emotions to emotions (happiness to anger) rather than a facial expression to an emotion (smiles to anger), it is believed that these results for happiness implicitly include those for smiles. This is further supported by the lack of difference between the mean accuracy for happiness found in this experiment and that found by Hossain and Gedeon (2017) for smiles. There is a difference in detail, however, between Hossain's genuine smile recognition of 50% and 68% for acted smiles, as opposed to happiness here with 63% genuine and 61% acted. This may be due to the removal of context in that study, showing the face only. The pattern in Chen's anger results, with recognition of acted displays being greater than genuine, is repeated for anger in this experiment, and is consistent with this suggestion, as some background context remains in Chen's experiment.

Table 7.3 Comparison of mean accuracy of genuine and acted emotions for anger and happiness from this study with anger from Chen et al. (2017) and smiles from Hossain and Gedeon (2017). Columns: genuine and acted anger (this study), genuine and acted anger (Chen et al., 2017), genuine and acted happiness (this study), genuine and acted smiles (Hossain and Gedeon, 2017); rows: Mean Accuracy (%), STD Dev (%), N.

Table 7.4 Unpaired t-tests on mean accuracy for genuine and acted anger and happiness from this study against anger from Chen et al. (2017) and smiles from Hossain and Gedeon (2017) (genuine anger vs Chen's genuine anger: statistically significant; acted anger vs Chen's acted anger: not significant; genuine happiness vs genuine smiles: statistically significant; acted happiness vs acted smiles: not significant).

A further reason for using emotions rather than facial expressions is the lack of a clear term for the facial expressions equivalent to smiles for emotions such as fear, surprise and anger. The terms fearful expression or an expression of surprise are used to indicate the features (Figure 7.1) which together result in an expression being categorised as a display of fearful or surprised emotion, but there is no term equivalent to smiles (a single word which names the usual emotional expression, and which can be used for other purposes). If this research were extended to include disgust and sadness, frowns would provide an equivalent to smiles for sadness, but disgust would still lack one.

These results contribute to both the fields of Psychology and Computer Science. The reason is that every facet of Computer Science has one central component: us, humans. Most facets of computer science interact with people in some way. People develop the algorithms, the supercomputers and the hardware consisting of billions of transistors; people designed the logic gates and the internet with all the websites it contains. Not only do people develop every facet of computer science, many programs interact with people directly: virtual reality, video games and websites are all designed specifically for human interaction. Even artificial intelligence, including neural networks, attempts to model the workings of the human brain and perhaps one day create robots with human-like intelligence. Therefore, human-centred research is vital for computer science.

Figure 7.1 Core (solid lines) and supplementary (dotted lines) facial Action Units (AU) associated with the 6 universal emotions (happiness, sadness, surprise, fear, disgust and anger). Summarises Table 1 from Waller, Cray and Burrows (2008).
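The diagram itself does not survive extraction, but the kind of emotion-to-AU lookup it summarises can be sketched as a simple data structure. The mapping below follows widely cited FACS/EMFACS-style prototypes rather than the exact core/supplementary split of Waller, Cray and Burrows (2008), so it should be read as an approximation:

// Approximate AUs commonly associated with the six universal emotions
// (FACS/EMFACS-style prototypes; Figure 7.1's exact split may differ).
const emotionActionUnits = {
  happiness: [6, 12],                 // cheek raiser, lip corner puller
  sadness:   [1, 4, 15],              // inner brow raiser, brow lowerer, lip corner depressor
  surprise:  [1, 2, 5, 26],           // brow raisers, upper lid raiser, jaw drop
  fear:      [1, 2, 4, 5, 7, 20, 26], // adds lid tightener and lip stretcher
  anger:     [4, 5, 7, 23],           // brow lowerer, lid and lip tighteners
  disgust:   [9, 15, 16],             // nose wrinkler, lip depressors
};

// Example: the AU overlap between fear and surprise helps explain why the
// two expressions are so readily confused (see Section 7.5).
const overlap = emotionActionUnits.fear.filter(
  (au) => emotionActionUnits.surprise.includes(au)
);
console.log(overlap); // [1, 2, 5, 26]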

Psychology allows us to better understand how people think and behave, and therefore gives insights into how best to develop the facets of computer science that interact with people. The results of this research offer important insights to multiple fields of computer science, such as virtual reality, video games, human-computer interaction applications and artificial intelligence, including neural networks.

Many video games attempt to create realistic experiences for players, which for story-driven games usually includes an emotionally engaging story. This research indicates developers need to be careful when deciding exactly how they are going to present emotional displays to their players, and what the emotions are modelled on. This is especially important for games wishing to create emotional displays of fear as, unlike for the other emotions, using a template based on genuine fear during the development of an emotional expression will detrimentally affect the ability of the player to recognise the expression as fearful. Instead, fearful emotions should be based on templates of acted fear. Furthermore, due to the generally low ability of people to distinguish between genuine and acted displays of emotion, game developers need to provide more obvious cues to displays of deceit, sarcasm and other instances where the facial expression and the intention do not match. In general, players will not be able to distinguish between acted and genuine emotional displays based on facial cues alone.

These implications also affect virtual reality (VR) and human-computer interaction (HCI) applications. The aim of both these areas of computing is to give people the best experience possible. For VR this means the most visually immersive experience; for HCI applications it means an easy and enjoyable experience.

Therefore, if these areas want to include emotional content, they need to ensure the content is tailored to produce the best experience for people. In the context of this research, this means making sure the emotional content used to create the experience can be perceived and distinguished by people. As with video games, there is no use modelling a fearful display of emotion in a virtual reality experience on genuine fear, as this will decrease the likelihood of users having the experience the developers intended.

For fields of computer science such as artificial intelligence (AI), including neural networks (NN), the research from this experiment is equally as important as it is for the more user-oriented fields just discussed. This stems from what some aspects of AI and NN are attempting to achieve. NNs try to model the pathways of the brain to simulate how human brains work. Thus, it is necessary to know what the expected outcome is before a NN can be built to simulate it. Therefore, if a NN is to model how humans experience and interpret displays of emotion, it first needs research on which to base the model. Experiments such as this one, which examine psychological phenomena from a computer science perspective, are of vital importance to building up that research base. This importance extends to artificial intelligence in general, where the creation of an artificial intelligence based on human intelligence would be a crowning achievement. However, before this can be achieved it is necessary to understand human intelligence, and the only way to do that is through research into how humans behave, think and perceive the world, such as that contained within this project. In creating a human-like AI, it will be important to be able to model how humans perceive emotions and what humans can and cannot do.

This research clearly shows that something we cannot readily do is distinguish between acted and genuine emotions, which is an important piece of the massive puzzle that will need to be solved and modelled before such an achievement in AI could occur.

For Psychology, the inability of people to distinguish genuine fear while being better at distinguishing acted fear raises an interesting problem for some previous psychological research, in terms of its validity. Posed stimuli have been used in the past for studies examining the ability of people to recognise different emotions, including fear. However, the results from this experiment indicate that people perform better with acted fear, as they are better able to recognise it than genuine fear. This means previous studies could have overestimated the ability of people to distinguish fear from other emotions.

This experiment also raises interesting questions. As discussed earlier, the ability of people to distinguish acted from genuine displays of emotion ranged from approximately 1 in 3 correct, to 1 in 10 correct, and even to 0 correct above chance level for genuine fear, which had chance level performance. Therefore, despite being regularly exposed to both acted and genuine displays of emotion in society and the media, people are poor at distinguishing between them. This raises the question: why are people so poor at distinguishing between acted and genuine emotions? One possible explanation is that being effective at distinguishing between genuine and acted emotions could lead to socially awkward situations, such as a child realising their parent is not genuinely happy to receive the rock they found, or not genuinely surprised by a drawing the child drew.

This realisation would result in negative experiences for both the parent and child, so it is possible that our experiences discourage developing effective recognition of genuine and acted emotion. One caveat to this possibility is the range in performance across participants. While the idea of experience contributing to poor discrimination of genuine and acted emotions could be plausible in a general sense, some people do develop a near perfect ability to distinguish between acted and genuine emotion, at least for a single emotion. It is expected that this too is developed through experience: if a person has lived through experiences where identifying an emotion was critically important, it follows that they would have developed the ability to recognise it effectively. Whatever the reason, this effect warrants further investigation to examine potential explanations.

8. Challenges and Limitations

8.1 Challenges

Several challenges arose throughout the course of the experiment. The first occurred early, with the difficulty of finding genuine displays of emotion that matched the overall feel and quality of the acted videos. To solve this issue, reality television clips were used to minimise the differences in lighting, camera style, camera angles and quality of camera work between acted and genuine stimuli. The reality television show Fear Factor became a prime source of genuine emotions, as its scary and disgusting tasks regularly elicited fearful and surprised emotional responses from contestants.

A second, related challenge was the difficulty of finding displays of genuine fear by males. The stimuli were balanced across several criteria, one of which was gender. Finding enough genuine fearful emotional displays by males turned out to be very time consuming and held up the progress of the experiment for a couple of weeks. This was solved using Fear Factor as a source of videos, and eventually required sampling enough seasons (a substantial and emotionally challenging task) to finally fill the required allotment of male fearful displays.

Once development and testing of the extended platform began, it quickly became evident that the original code base was designed to be run on a local machine and required manual selection of video sequences for each participant.

Converting the code base into a platform that could log data into separate files and perform automatic sequence rotation required a massive overhaul of the JavaScript behind the original platform, with only the skeleton of the previous system remaining in place. This created a challenge in terms of time constraints. The initial expectation was that the platform was mostly ready and only needed an extension to a multi-block experimental design, which was not the case. In hindsight, it would have been quicker to start from scratch using the existing platform as a guide only; however, the desire for consistency and the project timeframe initially lent themselves to extending the existing platform.

The most difficult challenge presented itself once the experiment went live on the HCC Workshop server. Throughout local and online testing runs no issues were encountered, though testing was certainly not as thorough as it could have been due to time constraints. However, once participants started engaging with the experiment platform, it was discovered that the server had issues with logging participants' results, which resulted in lost data. The cause of this issue was two-fold.

The primary cause of lost data was the incompatibility of the JavaScript with most browsers. The original experiment platform was run locally on Google Chrome, and by chance the testing browsers on both the development and testing environments were also Google Chrome. This resulted in a spectrum of issues, from Firefox refusing to log data at all, to Internet Explorer and Microsoft Edge sitting anywhere on the spectrum depending on version. The secondary cause was that the static design of the website was implemented without a database, so logging occurred as writes to text files. It quickly became apparent that at times the HCC Workshop server was unable to service all logging requests due to its load, which resulted in more lost data.

This issue was solved with a non-technical workaround, as time did not allow for a technical solution once the experiment platform was live. A paper-based instruction and answer sheet, in the form of a Word document, was supplied to participants, who could then fill out their answers on the sheet and return it via email. This proved to be a huge success and resulted in complete data for 117 people.
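Had time permitted a technical fix, a more defensive client-side logger is one option. The sketch below is illustrative only: it is not the platform's actual redirect.js or writetotxt.php client code, and the payload format is assumed. It retries failed posts and falls back to the fire-and-forget navigator.sendBeacon API, which mitigates some of the browser- and load-related losses described above:

// Hypothetical sketch: retrying POST to the logging endpoint with a
// sendBeacon fallback. The endpoint name comes from the artefact list;
// the payload shape is an assumption.
async function logAnswer(payload, retries = 3) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const res = await fetch('writetotxt.php', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
        keepalive: true, // lets the request outlive page navigation
      });
      if (res.ok) return true; // server acknowledged the write
    } catch (e) {
      // network error: fall through and retry after a short backoff
    }
    await new Promise((r) => setTimeout(r, 500 * (attempt + 1)));
  }
  // Last resort: hand the data to the browser to deliver when it can.
  return navigator.sendBeacon('writetotxt.php', JSON.stringify(payload));
}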

The final two challenges were both related to participants. The first was the inability of some people to follow instructions. Several participants completed the experiment using Firefox despite being told to use Chrome, and failed to fill in the survey sheet despite having to read the instructions on the sheet to find the URL for the experiment. This resulted in no data being recorded for those participants. Despite repeated amendments to the instruction sheet and recruitment posts, this was never completely mitigated. In the same vein, coverage of the 80 necessary sequences was made excessively difficult by people who started the experiment online only to stop after a handful of videos. This triggered the automatic rotation of sequences, preventing doubling up on a sequence but leaving no full sequence recorded. It was overcome with careful management of the sequence files to ensure all gaps were eventually filled.

The final issue was the lack of participants who registered through SONA, the platform used by the Research School of Psychology at the Australian National University to link research studies with research participants.

The requirements for use of the system turned out to be excessively restrictive for honours students, requiring all honours research studies to be conducted face-to-face, which defeated the purpose of an online web survey that depended on crowd sourcing. Despite attempts to alleviate this burden, the hurdles proved too restrictive and were not conducive to engaging student participation. This challenge was overcome by pivoting to a different source of participants: Facebook. Posts were placed in public community pages, honours groups and university research groups asking for participants. Additionally, any person who asked to re-post the request somewhere new was given permission, and eventually a web of posts was created which recruited participants from as far away as the UK and Canada.

8.2 Limitations

The experimental platform has a couple of limitations which are linked to some of the challenges above. The platform uses a static web page design, which is a poor choice compared to a dynamic web page design. However, a complete redesign of the system was beyond the scope of this project. In fact, redesigning this experimental system would be a project in and of itself, and would be a beneficial contribution given the ability of this platform to be used for multiple different experiments. A dynamic design using a system such as React and Flux or Ruby on Rails, with a proper database for the data, would alleviate the issues experienced with server logging and allow much better management of the state concerning participant sequences, removing the inconvenience of people starting the experiment and not finishing.

The second limitation of the experimental platform is its reliance on writing to text files to record the data. This could be removed in the dynamic design mentioned above and replaced with proper interactions with a database through a datastore layer following the Flux model; a hypothetical sketch of such a layer follows.
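A minimal Flux-style action/store pair, purely hypothetical (the platform does not implement this, and the action name, store and '/api/answers' route are all illustrative), in which participant answers are dispatched to a store that would persist them, replacing direct text-file writes:

// Hypothetical Flux sketch: dispatcher, action creator and store.
const Dispatcher = {
  listeners: [],
  register(fn) { this.listeners.push(fn); },
  dispatch(action) { this.listeners.forEach((fn) => fn(action)); },
};

// Action creator: fired when a participant answers a stimulus question.
function recordAnswer(participantId, stimulusId, answer) {
  Dispatcher.dispatch({ type: 'RECORD_ANSWER', participantId, stimulusId, answer });
}

// Store: keeps answers in memory and would forward them to a
// database-backed API endpoint (illustrative route, commented out).
const AnswerStore = { answers: [] };
Dispatcher.register((action) => {
  if (action.type === 'RECORD_ANSWER') {
    AnswerStore.answers.push(action);
    // fetch('/api/answers', { method: 'POST', body: JSON.stringify(action) });
  }
});

recordAnswer('P001', 'GF3', 'genuine'); // example usage, stimulus tag per Appendix 5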

The experimental design itself also has a couple of limitations. The first is that the classification of the videos into categories is subjective. Despite all attempts to make sure the emotional context of the videos was obvious, this is still a subjective judgement and is therefore open to interpretation. The second is that, while as many aspects of the stimuli as possible were balanced, it is possible that cues exist that enabled people to make their judgements based on something other than the emotion being displayed. While deliberate care was taken to balance background context, gender, ethnicity, camera effects and other such factors, this possibility remains; however, it is believed that all feasible steps were taken to minimise it.

The final limitation of the study is that it is not culturally diverse. Despite the original intention of having a participant pool diverse enough to look for cultural effects, this was not possible due to the majority of participants being of Caucasian ethnicity. This has been attributed to a side effect of recruiting participants through social media in the predominately Caucasian country of Australia.

9. Conclusion

The study found results consistent with previous work by Hossain and Gedeon (2017) and Chen et al. (2017) for genuine and acted smiles and anger respectively. Some accuracies were found to be significantly higher than in those previous studies, which is considered a better estimate of the true mean accuracy due to the substantially higher participant numbers in this experiment. This research additionally adds surprise to the list of happiness, smiles and anger, all of which exhibit the same overall level of performance, with no differentiation in the ability to distinguish genuine and acted emotions. Fear was also added, and was found to be different, with only chance level performance for genuine fear and performance on acted fear comparable to that of the other emotions. The research also found that the higher accuracy for distinguishing surprise than fear holds for the video stimuli used in this experiment.

These effects are useful to the fields of Computer Science and Psychology as they extend the knowledge base about how people perceive and interpret emotions. For Computer Science, this research helps guide the types of templates which should be used when representing emotions in visual media such as video games and virtual reality, and the effects that need to be considered when modelling emotions in artificial intelligence, particularly in neural networks. For Psychology, it raises questions about why people seem to perform poorly in distinguishing genuine and acted emotions, and why performance for genuine fear is no different from chance while performance for acted fear is higher. Additionally, it questions whether previously reported estimates for identifying fear using posed stimuli may be overestimations.

This project suggests that further research should be conducted into how and why these effects occur, and whether they hold for the remaining universal emotions (sadness and disgust). Research comparing the conscious data from this experiment to physiological data of the style collected by Chen et al. (2017) would be beneficial, to see whether the differences reported between physiological and conscious responses in that study hold for the extra emotions examined here.

10. Future Work

Some future work based on this experiment has already begun, with the experimental platform being used in a study investigating the similarities and differences between conscious responses, as examined here, and physiological responses in the form of pupil dilation, as examined in Chen et al. (2017).

One avenue of future work would be to investigate the ability to distinguish acted and genuine emotion for disgust and sadness, to cover all universal emotions. Additional work could be conducted on the experimental platform to convert it to a dynamic web page design, which would be beneficial if the intention is to continue to use and extend the platform into the future.

There is a large variance in performance at recognising genuine and acted emotions, with minimum accuracies as low as 10% and maximum accuracies as high as 100%. Investigating this would be particularly interesting for fear, as the accuracies for distinguishing genuine and acted fear were different but the ranges were identical. The large variance could also be due to people performing better on some emotions than others, and future investigation of the data could examine whether people specialise in one or more emotions or perform consistently across emotions. It would be interesting and valuable to investigate these effects and their possible contributing factors; however, this is beyond the scope of this experiment.

Finally, the data collected during this study has only been partially investigated due to the timeframe of the project. The data could be further analysed for trends in performance, such as whether people are universally good at making genuine and acted judgements or tend to specialise towards particular emotions, or whether people can perform well at identifying just genuine or just acted emotions but not both. There is potential for a large amount of future analysis of the data that has already been collected.

References

Aviezer, H., Hassin, R., Bentin, S. and Trope, Y. (2008a). Putting facial expressions back in context. First Impressions.

Aviezer, H., Bentin, S., Hassin, R., Meschino, W., Kennedy, J., Grewal, S., Esmail, S., Cohen, S. and Moscovitch, M. (2009). Not on the face alone: perception of contextualized face expressions in Huntington's disease. Brain, 132(6).

Aviezer, H., Bentin, S., Dudarev, V. and Hassin, R. (2011). The automaticity of emotional face-context integration. Emotion, 11(6).

Aviezer, H., Hassin, R. and Bentin, S. (2012). Impaired integration of emotional faces and affective body context in a rare case of developmental visual agnosia. Cortex, 48(6).

Bernstein, M., Young, S., Brown, C., Sacco, D. and Claypool, H. (2008). Adaptive Responses to Social Exclusion: Social Rejection Improves Detection of Real and Fake Smiles. Psychological Science, 19(10).

Chen, L., Gedeon, T., Hossain, M.Z. and Caldwell, S. (2017). Are you really angry? Detecting emotion veracity as a proposed tool for interaction. In Proceedings of the 29th Australian Conference on Human-Computer Interaction (OzCHI '17), Brisbane, QLD, Australia, November 2017, 5 pages.

Côté, S., Hideg, I. and van Kleef, G. (2013). The consequences of faking anger in negotiations. Journal of Experimental Social Psychology, 49(3).

Douglas, K., Porter, R. and Johnston, L. (2012). Sensitivity to posed and genuine facial expressions of emotion in severe depression. Psychiatry Research, 196(1).

Du, S. and Martinez, A. (2011). The resolution of facial expressions of emotion. Journal of Vision, 11(13).

Ekman, P. and Friesen, W. (1982). Felt, false, and miserable smiles. Journal of Nonverbal Behavior, 6(4).

Fear Factor, (n.d.). [TV programme] NBC: NBC Universal Television Distribution.

Gagnon, M., Gosselin, P., Hudon-ven der Buhs, I., Larocque, K. and Milliard, K. (2009). Children's Recognition and Discrimination of Fear and Disgust Facial Expressions. Journal of Nonverbal Behavior, 34(1).

Gosselin, P. and Simard, J. (1999). Children's Knowledge of Facial Expressions of Emotions: Distinguishing Fear and Surprise. The Journal of Genetic Psychology, 160(2).

Hassin, R., Aviezer, H. and Bentin, S. (2013). Inherently Ambiguous: Facial Expressions of Emotions, in Context. Emotion Review, 5(1).

Hess, U. and Kleck, R. (1990). Differentiating emotion elicited and deliberate emotional facial expressions. European Journal of Social Psychology, 20(5).

Hossain, M.Z. and Gedeon, T.D. (2017). Classifying Posed and Real Smiles from Observers' Peripheral Physiology. 11th EAI International Conference on Pervasive Computing Technologies for Healthcare, Barcelona, Spain, May 23-26, 2017.

Mai, X., Ge, Y., Tao, L., Tang, H., Liu, C. and Luo, Y. (2011). Eyes Are Windows to the Chinese Soul: Evidence from the Detection of Real and Fake Smiles. PLoS ONE, 6(5).

Masuda, T., Ellsworth, P., Mesquita, B., Leu, J., Tanida, S. and Van de Veerdonk, E. (2008). Placing the face in context: Cultural differences in the perception of facial emotion. Journal of Personality and Social Psychology, 94(3).

McLellan, T., Johnston, L., Dalrymple-Alford, J. and Porter, R. (2010). Sensitivity to genuine versus posed emotion specified in facial displays. Cognition & Emotion, 24(8).

McLellan, T., Wilcke, J., Johnston, L., Watts, R. and Miles, L. (2012). Sensitivity to posed and genuine displays of happiness and sadness: An fMRI study. Neuroscience Letters, 531(2).

Righart, R. and De Gelder, B. (2008). Recognition of facial expressions is influenced by emotional scene gist. Cognitive, Affective, & Behavioral Neuroscience, 8(3).

Roy-Charland, A., Perron, M., Beaudry, O. and Eady, K. (2014). Confusion of fear and surprise: A test of the perceptual-attentional limitation hypothesis with eye movement monitoring. Cognition and Emotion, 28(7).

Schmidt, K. and Cohn, J. (2001). Human facial expressions as adaptations: Evolutionary questions in facial expression research. American Journal of Physical Anthropology, 116(S33).

Waller, B., Cray, J. and Burrows, A. (2008). Selection for universal facial emotion. Emotion, 8(3).

Appendix 1: Final Project Description

Project Title: Human Classification for Web Videos

Learning Objectives:
Experience with web based data collection
Experience with experiment design
Experience with statistical data analysis

Project Description:
Brief literature survey
Extend a web based collection tool
Design and run a human centred computing experiment
  o Use 4 disparate emotions to test discrimination
  o Use two conditions previously investigated and two new conditions
  o Collect click data for minimum 40 subjects
Statistical analysis of results
Integrate all code developed into HCC Workshop tool
Write report

Appendix 2: Independent Study Contract


Appendix 3: List of Artefacts

The experimental platform used in this experiment was originally used by Hossain and Gedeon (2017) and updated by Chen et al. (2017). This platform was further extended for use in this study. The following list includes all artefacts used for this experiment:

1. Extended experimental web platform
   a. 80 new stimuli web pages based on a modified design of the previous platform
   b. 80 new stimuli question web pages based on a modified design of the previous platform
   c. 5 new inter-block web pages
   d. 2 new PHP files for updating the text files used for participant sequences
   e. 1 new introduction page with a new introductory video
   f. 20 modified stimuli web pages
   g. 20 modified stimuli question web pages
   h. 1 almost completely reworked redirect.js JavaScript file
   i. 1 heavily modified writetotxt.php PHP file
   j. Minor edits to all other pre-existing pages to use terminology consistent with the new experimental design
2. New stimuli videos covering 3 emotions (happiness, fear and surprise)

   a. Evenly split between genuine and acted stimuli
3. 1 new introductory video
4. A template participant information and answer sheet
5. Completed survey sheets (submitted directly to Professors Tom Gedeon and Sabrina Caldwell due to privacy)
6. 1 Excel spreadsheet containing 3 data set variants and accompanying tables and graphs (submitted directly to Professors Tom Gedeon and Sabrina Caldwell due to privacy)
7. A Changelog file containing specifics of the changes made to the experimental platform
8. Sequence_identification.xlsx spreadsheet allowing calculation of participant sequence IDs and conversion from web page numbers to video numbers
9. MatLab script files that accompanied the original platform, which were used to normalise the video stimuli
10. Source list for the original videos from which the stimuli were created

Appendix 4: README File

The README.md file is contained in the artefacts/platform/ directory. It contains the following instructions:

*The website requires a PHP running environment.* This requires some form of webserver to be set up (apache [httpd on CentOS] was used for this experiment, python for its utilities) and PHP installed.

With the correct environment set up, type the path of the folder into the browser, e.g. localhost/index.html. Then you can make use of the experiment platform. The data output will occur as users select their answers. You can find the output data in the logs directory, named by sequence combination and date.

You can change the testing sequence by setting the values in latin_condition.txt and block_condition.txt. The content of each must be a single line. latin_condition.txt must contain the word 'sequence' followed by a letter from a to t, for example "sequencem" without the quotes. The same is true for block_condition.txt, except it should be a number from 1 to 4 instead of a letter.

It is recommended to use Google Chrome for the best experience with this platform. Be aware Chrome caches the .js and .txt files, so the cache should be cleared between runs to force an update to the next sequence.
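For example, to run block 3 of sequence m, the two condition files would each contain a single line (the sequence letter and block number here are sample values within the ranges the README allows):

latin_condition.txt:
sequencem

block_condition.txt:
3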

Appendix 5: Video Stimuli Still Images

This appendix contains a collection of still frames from the videos used in each block of this experiment. Images are tagged to indicate their classification, with the first letter indicating genuine (G) or acted (A) and the second letter indicating the emotion displayed in the video: Anger (A), Happiness (H), Surprise (S) and Fear (F).

(Still frames not reproduced here. The pages cover, in order: the anger block (GA1-GA10, AA1-AA10), the fear block (GF1-GF10, AF1-AF10), the happiness block (GH1-GH10, AH1-AH10), the surprise block (GS1-GS10, AS1-AS10) and the fear vs surprise block (GF11-GF15, AF11-AF15, GS11-GS15, AS11-AS15).)


Lesson 1: Making and Continuing Change: A Personal Investment

Lesson 1: Making and Continuing Change: A Personal Investment Lesson 1: Making and Continuing Change: A Personal Investment Introduction This lesson is a review of the learning that took place in Grade 11 Active Healthy Lifestyles. Students spend some time reviewing

More information

Hierarchically Organized Mirroring Processes in Social Cognition: The Functional Neuroanatomy of Empathy

Hierarchically Organized Mirroring Processes in Social Cognition: The Functional Neuroanatomy of Empathy Hierarchically Organized Mirroring Processes in Social Cognition: The Functional Neuroanatomy of Empathy Jaime A. Pineda, A. Roxanne Moore, Hanie Elfenbeinand, and Roy Cox Motivation Review the complex

More information

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals.

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. Bandara G.M.M.B.O bhanukab@gmail.com Godawita B.M.D.T tharu9363@gmail.com Gunathilaka

More information

Video Captioning Basics

Video Captioning Basics Video Captioning Basics Perhaps the most discussed aspect of accessible video is closed captioning, but misinformation about captioning runs rampant! To ensure you're using and creating accessible video

More information

C/S/E/L :2008. On Analytical Rigor A Study of How Professional Intelligence Analysts Assess Rigor. innovations at the intersection of people,

C/S/E/L :2008. On Analytical Rigor A Study of How Professional Intelligence Analysts Assess Rigor. innovations at the intersection of people, C/S/E/L :2008 innovations at the intersection of people, technology, and work. On Analytical Rigor A Study of How Professional Intelligence Analysts Assess Rigor Daniel J. Zelik The Ohio State University

More information

The Effects of Mental Imagery with Ocean Virtual Reality on Creative Thinking

The Effects of Mental Imagery with Ocean Virtual Reality on Creative Thinking The Effects of Mental Imagery with Ocean Virtual Reality on Creative Thinking Chih-Hsuan Chang, National Taiwan Ocean University, Taiwan Cheng-Chieh Chang, National Taiwan Ocean University, Taiwan Ping-Hsuan

More information

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS Ayako KATOH*, Yasuhiro FUKUI**

More information

EMOTIONAL INTELLIGENCE QUESTIONNAIRE

EMOTIONAL INTELLIGENCE QUESTIONNAIRE EMOTIONAL INTELLIGENCE QUESTIONNAIRE Personal Report JOHN SMITH 2017 MySkillsProfile. All rights reserved. Introduction The EIQ16 measures aspects of your emotional intelligence by asking you questions

More information

Primary Health Networks Greater Choice for At Home Palliative Care

Primary Health Networks Greater Choice for At Home Palliative Care Primary Health Networks Greater Choice for At Home Palliative Care Brisbane South PHN When submitting the Greater Choice for At Home Palliative Care Activity Work Plan 2017-2018 to 2019-2020 to the Department

More information

Psychology 2019 v1.3. IA2 high-level annotated sample response. Student experiment (20%) August Assessment objectives

Psychology 2019 v1.3. IA2 high-level annotated sample response. Student experiment (20%) August Assessment objectives Student experiment (20%) This sample has been compiled by the QCAA to assist and support teachers to match evidence in student responses to the characteristics described in the instrument-specific marking

More information

Research Proposal on Emotion Recognition

Research Proposal on Emotion Recognition Research Proposal on Emotion Recognition Colin Grubb June 3, 2012 Abstract In this paper I will introduce my thesis question: To what extent can emotion recognition be improved by combining audio and visual

More information

Understanding Consumers Processing of Online Review Information

Understanding Consumers Processing of Online Review Information Understanding Consumers Processing of Online Review Information Matthew McNeill mmcneil@clemson.edu Nomula Siddarth-Reddy snomula@clemson.edu Dr. Tom T. Baker Clemson University School of Marketing 234

More information

Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification. and. Evidence for a Face Superiority Effect. Nila K Leigh

Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification. and. Evidence for a Face Superiority Effect. Nila K Leigh 1 Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification and Evidence for a Face Superiority Effect Nila K Leigh 131 Ave B (Apt. 1B) New York, NY 10009 Stuyvesant High School 345 Chambers

More information

SUMMARY OF RESULTS FROM INDIGENOUS SUICIDE PREVENTION PROGRAMS DELIVERED BY INDIGENOUS PSYCHOLOGICAL SERVICES Westerman, 2007

SUMMARY OF RESULTS FROM INDIGENOUS SUICIDE PREVENTION PROGRAMS DELIVERED BY INDIGENOUS PSYCHOLOGICAL SERVICES Westerman, 2007 SUMMARY OF RESULTS FROM INDIGENOUS SUICIDE PREVENTION PROGRAMS DELIVERED BY INDIGENOUS PSYCHOLOGICAL SERVICES Westerman, 2007 Studies continue to point to the escalating number of suicides amongst Aboriginal

More information

Summary Report EU Health Award 2017

Summary Report EU Health Award 2017 EPSA Vaccination Awareness Public Health Campaign 2016 Summary Report EU Health Award 2017 1. Introduction & Case Situation Background With WHO European Vaccine Action Plan 2015-2020 being developed and

More information

CONDUCTING TRAINING SESSIONS HAPTER

CONDUCTING TRAINING SESSIONS HAPTER 7 CONDUCTING TRAINING SESSIONS HAPTER Chapter 7 Conducting Training Sessions Planning and conducting practice sessions. It is important to continually stress to players that through practice sessions

More information

Framework for Comparative Research on Relational Information Displays

Framework for Comparative Research on Relational Information Displays Framework for Comparative Research on Relational Information Displays Sung Park and Richard Catrambone 2 School of Psychology & Graphics, Visualization, and Usability Center (GVU) Georgia Institute of

More information

A Matrix of Material Representation

A Matrix of Material Representation A Matrix of Material Representation Hengfeng Zuo a, Mark Jones b, Tony Hope a, a Design and Advanced Technology Research Centre, Southampton Institute, UK b Product Design Group, Faculty of Technology,

More information

Artificial Emotions to Assist Social Coordination in HRI

Artificial Emotions to Assist Social Coordination in HRI Artificial Emotions to Assist Social Coordination in HRI Jekaterina Novikova, Leon Watts Department of Computer Science University of Bath Bath, BA2 7AY United Kingdom j.novikova@bath.ac.uk Abstract. Human-Robot

More information

TRACOM Sneak Peek. Excerpts from APPLICATIONS GUIDE

TRACOM Sneak Peek. Excerpts from APPLICATIONS GUIDE TRACOM Sneak Peek Excerpts from APPLICATIONS GUIDE applications guide Table of Contents TABLE OF CONTENTS 1 Introduction 2 Strategies for Change 4 Behavioral EQ: A Review 10 What does Behavioral EQ Look

More information

Identifying Signs of Depression on Twitter Eugene Tang, Class of 2016 Dobin Prize Submission

Identifying Signs of Depression on Twitter Eugene Tang, Class of 2016 Dobin Prize Submission Identifying Signs of Depression on Twitter Eugene Tang, Class of 2016 Dobin Prize Submission INTRODUCTION Depression is estimated to affect 350 million people worldwide (WHO, 2015). Characterized by feelings

More information

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China A Vision-based Affective Computing System Jieyu Zhao Ningbo University, China Outline Affective Computing A Dynamic 3D Morphable Model Facial Expression Recognition Probabilistic Graphical Models Some

More information

Assessing the Foundations of Conscious Computing: A Bayesian Exercise

Assessing the Foundations of Conscious Computing: A Bayesian Exercise Assessing the Foundations of Conscious Computing: A Bayesian Exercise Eric Horvitz June 2001 Questions have long been posed with about whether we might one day be able to create systems that experience

More information

9698 PSYCHOLOGY. Mark schemes should be read in conjunction with the question paper and the Principal Examiner Report for Teachers.

9698 PSYCHOLOGY. Mark schemes should be read in conjunction with the question paper and the Principal Examiner Report for Teachers. CAMBRIDGE INTERNATIONAL EXAMINATIONS Cambridge International General Certificate of Secondary Education MARK SCHEME for the May/June 2015 series 9698 PSYCHOLOGY 9698/21 Paper 2 (Core Studies 2), maximum

More information

The Ordinal Nature of Emotions. Georgios N. Yannakakis, Roddy Cowie and Carlos Busso

The Ordinal Nature of Emotions. Georgios N. Yannakakis, Roddy Cowie and Carlos Busso The Ordinal Nature of Emotions Georgios N. Yannakakis, Roddy Cowie and Carlos Busso The story It seems that a rank-based FeelTrace yields higher inter-rater agreement Indeed, FeelTrace should actually

More information

GHABP Scoring/ Administration instructions GHABP Complete questionnaire. Buy full version here - for $7.00

GHABP Scoring/ Administration instructions GHABP Complete questionnaire. Buy full version here - for $7.00 This is a Sample version of the Glasgow Hearing Aid Benefit Profile- KIT (GHABP- KIT). The full version of Disability Assessment For Dementia (DAD) comes without sample watermark.. The full complete KIT

More information

GCE. Psychology. Mark Scheme for January Advanced Subsidiary GCE Unit G541: Psychological Investigations. Oxford Cambridge and RSA Examinations

GCE. Psychology. Mark Scheme for January Advanced Subsidiary GCE Unit G541: Psychological Investigations. Oxford Cambridge and RSA Examinations GCE Psychology Advanced Subsidiary GCE Unit G541: Psychological Investigations Mark Scheme for January 2011 Oxford Cambridge and RSA Examinations OCR (Oxford Cambridge and RSA) is a leading UK awarding

More information

Funnelling Used to describe a process of narrowing down of focus within a literature review. So, the writer begins with a broad discussion providing b

Funnelling Used to describe a process of narrowing down of focus within a literature review. So, the writer begins with a broad discussion providing b Accidental sampling A lesser-used term for convenience sampling. Action research An approach that challenges the traditional conception of the researcher as separate from the real world. It is associated

More information

The social brain. We saw that people are better than chance at predicting a partner s cooperation. But how?

The social brain. We saw that people are better than chance at predicting a partner s cooperation. But how? The social brain We saw that people are better than chance at predicting a partner s cooperation. But how? cognitive adaptations: is there a cheater-detection module? reading minds with a theory of mind

More information

Upon starting the executable, a map with four exits at its corners is loaded.

Upon starting the executable, a map with four exits at its corners is loaded. Pedestrian Simulation Software Version 0.0, Model and Documentation Peter Stephenson August 2007 Simulation Instructions Upon starting the executable, a map with four exits at its corners is loaded. People

More information

Spotting Liars and Deception Detection skills - people reading skills in the risk context. Alan Hudson

Spotting Liars and Deception Detection skills - people reading skills in the risk context. Alan Hudson Spotting Liars and Deception Detection skills - people reading skills in the risk context Alan Hudson < AH Business Psychology 2016> This presentation has been prepared for the Actuaries Institute 2016

More information

Introduction to affect computing and its applications

Introduction to affect computing and its applications Introduction to affect computing and its applications Overview What is emotion? What is affective computing + examples? Why is affective computing useful? How do we do affect computing? Some interesting

More information

A study of association between demographic factor income and emotional intelligence

A study of association between demographic factor income and emotional intelligence EUROPEAN ACADEMIC RESEARCH Vol. V, Issue 1/ April 2017 ISSN 2286-4822 www.euacademic.org Impact Factor: 3.4546 (UIF) DRJI Value: 5.9 (B+) A study of association between demographic factor income and emotional

More information

Gender Differences Associated With Memory Recall. By Lee Morgan Gunn. Oxford May 2014

Gender Differences Associated With Memory Recall. By Lee Morgan Gunn. Oxford May 2014 Gender Differences Associated With Memory Recall By Lee Morgan Gunn A thesis submitted to the faculty of The University of Mississippi in partial fulfillment of the requirements of the Sally McDonnell

More information

Analysis of data in within subjects designs. Analysis of data in between-subjects designs

Analysis of data in within subjects designs. Analysis of data in between-subjects designs Gavin-Ch-06.qxd 11/21/2007 2:30 PM Page 103 CHAPTER 6 SIMPLE EXPERIMENTAL DESIGNS: BEING WATCHED Contents Who is watching you? The analysis of data from experiments with two conditions The test Experiments

More information

Analysis of Confidence Rating Pilot Data: Executive Summary for the UKCAT Board

Analysis of Confidence Rating Pilot Data: Executive Summary for the UKCAT Board Analysis of Confidence Rating Pilot Data: Executive Summary for the UKCAT Board Paul Tiffin & Lewis Paton University of York Background Self-confidence may be the best non-cognitive predictor of future

More information

YOU R SELF. Project Owner: Ricardina Menezes

YOU R SELF. Project Owner: Ricardina Menezes Project Owner: Ricardina Menezes Start and end When you have finished Ricardina Menezes Introduction In life, confronting obstacles and solving problems can be a painful process that most of us try to

More information

Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1. Influence of facial expression and skin color on approachability judgment

Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1. Influence of facial expression and skin color on approachability judgment Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1 Influence of facial expression and skin color on approachability judgment Federico Leguizamo Barroso California State University Northridge

More information

What is Emotion? Emotion is a 4 part process consisting of: physiological arousal cognitive interpretation, subjective feelings behavioral expression.

What is Emotion? Emotion is a 4 part process consisting of: physiological arousal cognitive interpretation, subjective feelings behavioral expression. What is Emotion? Emotion is a 4 part process consisting of: physiological arousal cognitive interpretation, subjective feelings behavioral expression. While our emotions are very different, they all involve

More information

Special guidelines for preparation and quality approval of reviews in the form of reference documents in the field of occupational diseases

Special guidelines for preparation and quality approval of reviews in the form of reference documents in the field of occupational diseases Special guidelines for preparation and quality approval of reviews in the form of reference documents in the field of occupational diseases November 2010 (1 st July 2016: The National Board of Industrial

More information

Social and Community Studies SAS 2014

Social and Community Studies SAS 2014 Sample unit of work Gender and identity The sample unit of work provides teaching strategies and learning experiences that facilitate students demonstration of the dimensions and objectives of Social and

More information

Subject module in Psychology

Subject module in Psychology Page 1 Subject module in Psychology DATE/REFERENCE JOURNAL NO. 30. november 2017 2012-900 Please note that only the Danish version of the Studieordning has legal validity. The Danish version is the official

More information

Outline. Emotion. Emotions According to Darwin. Emotions: Information Processing 10/8/2012

Outline. Emotion. Emotions According to Darwin. Emotions: Information Processing 10/8/2012 Outline Emotion What are emotions? Why do we have emotions? How do we express emotions? Cultural regulation of emotion Eliciting events Cultural display rules Social Emotions Behavioral component Characteristic

More information

Using simulated body language and colours to express emotions with the Nao robot

Using simulated body language and colours to express emotions with the Nao robot Using simulated body language and colours to express emotions with the Nao robot Wouter van der Waal S4120922 Bachelor Thesis Artificial Intelligence Radboud University Nijmegen Supervisor: Khiet Truong

More information

BANKSIA PARK INTERNATIONAL HIGH SCHOOL. Assessment Task Year Investigating a genetic disorder

BANKSIA PARK INTERNATIONAL HIGH SCHOOL. Assessment Task Year Investigating a genetic disorder BANKSIA PARK INTERNATIONAL HIGH SCHOOL Assessment Task Year 8-10 Subject: Science Weighting: 10% Year Level: 10 Due Date: 20 th May Wed W 4 Task Name: Task Type: Teacher: Investigating a genetic disorder

More information

Effects on an Educational Intervention on Ratings of Criminality, Violence, and Trustworthiness based on Facial Expressions By Lauren N.

Effects on an Educational Intervention on Ratings of Criminality, Violence, and Trustworthiness based on Facial Expressions By Lauren N. Effects on an Educational Intervention on Ratings of Criminality, Violence, and Trustworthiness based on Facial Expressions By Lauren N. Johnson Volume 1 2016 Facial expressions are a universal symbol

More information

EMOTIONAL INTELLIGENCE

EMOTIONAL INTELLIGENCE EMOTIONAL INTELLIGENCE Ashley Gold, M.A. University of Missouri St. Louis Colarelli Meyer & Associates TOPICS Why does Emotional Intelligence (EI) matter? What is EI? Industrial-Organizational Perspective

More information

John Smith 20 October 2009

John Smith 20 October 2009 John Smith 20 October 2009 2009 MySkillsProfile.com. All rights reserved. Introduction The Emotional Competencies Questionnaire (ECQ) assesses your current emotional competencies and style by asking you

More information

Why (and how) Superman hides behind glasses: the difficulties of face matching

Why (and how) Superman hides behind glasses: the difficulties of face matching Why (and how) Superman hides behind glasses: the difficulties of face matching Kay L. Ritchie 1,2 & Robin S. S. Kramer 1 1 Department of Psychology, University of York, York, UK. 2 School of Psychology,

More information

How to Manage Seemingly Contradictory Facet Results on the MBTI Step II Assessment

How to Manage Seemingly Contradictory Facet Results on the MBTI Step II Assessment How to Manage Seemingly Contradictory Facet Results on the MBTI Step II Assessment CONTENTS 3 Introduction 5 Extraversion with Intimate and Expressive 8 Introversion with Expressive and Receiving 11 Sensing

More information

Evaluation: Controlled Experiments. Title Text

Evaluation: Controlled Experiments. Title Text Evaluation: Controlled Experiments Title Text 1 Outline Evaluation beyond usability tests Controlled Experiments Other Evaluation Methods 2 Evaluation Beyond Usability Tests 3 Usability Evaluation (last

More information

Emotional Development

Emotional Development Emotional Development How Children Develop Chapter 10 Emotional Intelligence A set of abilities that contribute to competent social functioning: Being able to motivate oneself and persist in the face of

More information

This is the accepted version of this article. To be published as : This is the author version published as:

This is the accepted version of this article. To be published as : This is the author version published as: QUT Digital Repository: http://eprints.qut.edu.au/ This is the author version published as: This is the accepted version of this article. To be published as : This is the author version published as: Chew,

More information

How to guide to the Control Self Assessment (CSA) tool and process

How to guide to the Control Self Assessment (CSA) tool and process How to guide to the Control Self Assessment (CSA) tool and process Contents 1. What is CSA?... 2 2. Why CSA?... 2 3. Training and support... 3 4. Processes... 3 5. Frequently asked questions (FAQ s)...

More information

What is Beautiful is Good Online: Physical Attractiveness, Social Interactions and Perceived Social Desirability on Facebook Abstract

What is Beautiful is Good Online: Physical Attractiveness, Social Interactions and Perceived Social Desirability on Facebook Abstract What is Beautiful is Good Online: Physical Attractiveness, Social Interactions and Perceived Social Desirability on Facebook Abstract The present research examined whether physically attractive Facebook

More information

The Effect of Contextual Information and Emotional Clarity on Emotional Evaluation

The Effect of Contextual Information and Emotional Clarity on Emotional Evaluation American International Journal of Social Science Vol. 6, No. 4, December 2017 The Effect of Contextual Information and Emotional Clarity on Emotional Evaluation Fada Pan*, Leyuan Li, Yanyan Zhang, Li Zhang

More information

The following is a brief summary of the main points of the book.

The following is a brief summary of the main points of the book. In their book The Resilience Factor (Broadway Books 2002), Reivich and Shatte describe the characteristics, assumptions and thinking patterns of resilient people and show how you can develop these characteristics

More information

A Study of Product Interface Design from Lifestyle Angle for the Geriatric A Case of Digital TV Interactive Function Menu Design

A Study of Product Interface Design from Lifestyle Angle for the Geriatric A Case of Digital TV Interactive Function Menu Design A Study of Product Interface Design from Lifestyle Angle for the Geriatric A Case of Digital TV Interactive Function Menu Design Chih-Fu Wu 1, Chun-Ming Lien 1,2*, and Fang-Ting Chao 3 1, 3 Department

More information

Inferences: What inferences about the hypotheses and questions can be made based on the results?

Inferences: What inferences about the hypotheses and questions can be made based on the results? QALMRI INSTRUCTIONS QALMRI is an acronym that stands for: Question: (a) What was the broad question being asked by this research project? (b) What was the specific question being asked by this research

More information

DRAI 10 November 1992 Display Rule Assessment Inventory (DRAI): Norms for Emotion Displays in Four Social Settings

DRAI 10 November 1992 Display Rule Assessment Inventory (DRAI): Norms for Emotion Displays in Four Social Settings DRAI 10 November 1992 Display Rule Assessment Inventory (DRAI): Norms for Emotion Displays in Four Social Settings copyright (c) Intercultural and Emotion Research Laboratory Department of Psychology San

More information

Understanding Emotions. How does this man feel in each of these photos?

Understanding Emotions. How does this man feel in each of these photos? Understanding Emotions How does this man feel in each of these photos? Emotions Lecture Overview What are Emotions? Facial displays of emotion Culture-based and sex-based differences Definitions Spend

More information

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization Daniel McDuff (djmcduff@mit.edu) MIT Media Laboratory Cambridge, MA 02139 USA Abstract This paper demonstrates

More information

Chapter 13: Introduction to Analysis of Variance

Chapter 13: Introduction to Analysis of Variance Chapter 13: Introduction to Analysis of Variance Although the t-test is a useful statistic, it is limited to testing hypotheses about two conditions or levels. The analysis of variance (ANOVA) was developed

More information

Captioning Your Video Using YouTube Online Accessibility Series

Captioning Your Video Using YouTube Online Accessibility Series Captioning Your Video Using YouTube This document will show you how to use YouTube to add captions to a video, making it accessible to individuals who are deaf or hard of hearing. In order to post videos

More information

Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon

Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon Xiaoming Zhang, Qiang Wang and Yan Shi Abstract In the field of designs of interface and icons, as the skeuomorphism style fades

More information

D4.10 Demonstrator 4 EoT Application

D4.10 Demonstrator 4 EoT Application Horizon 2020 PROGRAMME ICT-01-2014: Smart Cyber-Physical Systems This project has received funding from the European Union s Horizon 2020 research and innovation programme under Grant Agreement No 643924

More information

Risky Stuff. Teacher s Guide. Objectives

Risky Stuff. Teacher s Guide. Objectives Risky Stuff Teacher s Guide Objectives To define STDs, including HIV/AIDS To explain how various STDs are and are not spread To explain risk-taking behaviors associated with the spread of sexually transmitted

More information

SPSS Correlation/Regression

SPSS Correlation/Regression SPSS Correlation/Regression Experimental Psychology Lab Session Week 6 10/02/13 (or 10/03/13) Due at the Start of Lab: Lab 3 Rationale for Today s Lab Session This tutorial is designed to ensure that you

More information

Identifying Identity. you is not the equivalence to me. You are different from me and I am different from you,

Identifying Identity. you is not the equivalence to me. You are different from me and I am different from you, Le 1 Dan-Linh Le Professor Suzara Oakes Core 80A, sec 19 22 October 2015 Essay Project 1, Final Draft Identifying Identity The words you and me combined together may constitute an us. However, the word

More information