Affective Storytelling: Automatic Measurement of Story Effectiveness from Emotional Responses Collected over the Internet


Affective Storytelling: Automatic Measurement of Story Effectiveness from Emotional Responses Collected over the Internet

Daniel McDuff
PhD Proposal in Media Arts & Sciences, Affective Computing Group, MIT Media Lab
June 6, 2012

Executive Summary

Emotion is key to the effectiveness of narratives and storytelling, whether in influencing memory, likability or persuasion. Stories, even if fictional, have the ability to induce a genuine emotional response. However, the understanding of the role of emotions in storytelling and advertising effectiveness has been limited by the difficulty of measuring emotions in real-life contexts. Video advertising, a ubiquitous form of short story designed to influence, persuade and engage, frequently uses emotional content and will be one of the focuses of this thesis. The lack of understanding of the effects of emotion in advertising results in large amounts of wasted time, money and other resources. Facial expressions, head gestures, heart rate, respiration rate and heart rate variability can inform us about the emotional valence, arousal and attention of a person. In this thesis I propose to demonstrate how automatically detected naturalistic and spontaneous facial responses and physiological responses can be used to predict the effectiveness of stories. I propose a framework for automatically measuring facial and physiological responses, in addition to self-report and behavioral measures, to content (e.g. video advertisements) over the Internet in order to understand the role of emotions in story effectiveness. Specifically, I will present analysis of the first large-scale dataset of facial, physiological, behavioral and self-report responses to video content collected in-the-wild using the cloud. I will develop models for evaluating the effectiveness of stories (e.g. likability, persuasion and memory) based on the automatically extracted features. This work will be evaluated on its success in predicting measures of story effectiveness that are useful in the creation of content, whether in copy-testing or in content development.

Affective Storytelling: Automatic Measurement of Story Effectiveness from Emotional Responses Collected over the Internet

Daniel McDuff
PhD Proposal in Media Arts & Sciences, Affective Computing Group, MIT Media Lab

Thesis Committee:
Rosalind Picard, Professor of Media Arts and Sciences, MIT Media Lab (Thesis Supervisor)
Jeffrey Cohn, Professor of Psychology, University of Pittsburgh
Ashish Kapoor, Senior Research Scientist, Microsoft Research, Redmond
Thales Teixeira, Assistant Professor of Business Administration, Harvard Business School

Abstract

Emotion is key to the effectiveness of narratives and storytelling, whether in influencing memory, likability or persuasion. Stories, even if fictional, have the ability to induce a genuine emotional response. However, the understanding of the role of emotions in storytelling and advertising effectiveness has been limited by the difficulty of measuring emotions in real-life contexts. Video advertising, a ubiquitous form of short story designed to influence, persuade and engage, frequently uses emotional content and will be one of the focuses of this thesis. Facial expressions, head gestures, heart rate, respiration rate and heart rate variability can inform us about emotional valence, arousal and attention. In this thesis I propose to demonstrate how automatically detected naturalistic and spontaneous facial responses and physiological responses can be used to predict the effectiveness of stories. The results will be used to inform the creation and evaluation of new content. I propose a framework for automatically measuring facial and physiological responses, in addition to self-report and behavioral measures, to content (e.g. video advertisements) over the Internet in order to understand the role of emotions in story effectiveness. Specifically, I will present analysis of the first large-scale dataset of facial, physiological, behavioral and self-report responses to video content collected in-the-wild using the cloud. I will develop models for evaluating the effectiveness of stories (e.g. likability, persuasion and memory) based on the automatically extracted features.

1 Introduction

There remains truth in Ray and Batra's [28] statement: "an inadequate understanding of the role of affect in advertising has probably been the cause of more wasted advertising money than any other single reason." This statement applies beyond advertising to many other forms of media and is due in part to the lack of understanding about how to measure emotion. This thesis proposal deals with evaluating the effectiveness of emotional content in storytelling and advertising beyond the laboratory environment using remotely measured facial and physiological responses. I will analyze challenging ecologically valid data collected over the Internet, in the same contexts in which the media would normally be consumed, and build a framework and set of models for automatic evaluation of effectiveness based on affective responses.

The face is one of the richest channels for communicating affective and cognitive information [11]. In addition, physiological reactions, such as changes in heart rate and other vital signs, are partially controlled by the autonomic nervous system and as such are manifestations of emotional processes [36]. Recent work has demonstrated that both facial behavior and physiological information can be measured directly from videos of the human face, and as such emotional valence and arousal can be measured remotely. Previous work has shown that many people are willing to engage and share visual images from their webcam over the Internet, and these images and videos can be used for training automatic learning algorithms [32, 34, 22]. Moreover, webcams are now ubiquitous and have become a standard component of many media devices, laptops and tablets. In 2010, the number of camera phones in use totaled 1.8 billion, which accounted for a third of all mobile phones.

In addition, about half of the videos shared on Facebook every day are personal videos recorded from a desktop or phone camera.

Traditionally, consumer testing of video advertising, whether by self-report, facial response or physiology, has been conducted in laboratory settings. Lab-based studies, while controlled, are subject to bias from the presence of an experimenter and from other factors (e.g. comfort with the context) unrelated to advertising interest that may impact the participants' emotional experience [35]. Conducting experiments outside a lab-based context can help avoid such problems. Self-report is the current standard measure of affect: people are typically interviewed, asked to rate their feeling on a Likert scale, or asked to turn a dial to quantify their state (affect dial approaches). While convenient and inexpensive, self-report is problematic because it is also subject to biasing from the context, increased cognitive load and other factors of little relevance to the stimulus being tested [30]. Self-report has a number of drawbacks, including the difficulty people have in accessing information about their emotional experiences and their willingness to report feelings even if they didn't have them [8]. For many, the act of introspection is challenging to perform in conjunction with another task and may in itself alter the state being reported [21]. Although affect dial approaches provide a higher-resolution report of a subject's response compared to a post-hoc survey, subjects are often required to view the stimuli twice in order to help them introspect on their emotional state.

Unlike self-report, facial expressions and physiological responses are implicit, non-intrusive and do not interrupt a person's experience. In addition, as with affect dial ratings, facial and physiological responses allow for a continuous and dynamic representation of how affect changes over time. This represents much richer data than can be obtained via a post-hoc survey.

A small number of marketing studies consider the measurement of emotions via physiological [6], facial [18] or brain responses [3]. However, these are invariably performed in laboratory settings and are restricted to a limited demographic. Advertising and online media are global: movie trailers, advertisements and other content can now be viewed the world over via the Internet and not just on selected television networks. It is important that marketers understand the nuances in responses across a diverse demographic and a broad set of geographic locations. For instance, advertising that works in certain cultural contexts may not be effective in others. A majority of the studies of emotion in advertising have only considered a homogeneous subject pool, such as university undergraduates or a group from one location. There is evidence to suggest that emotions can be universally expressed on the face [10], and our framework allows for the evaluation of advertising effectiveness across a large and diverse demographic much more efficiently than is possible via lab-based experiments.

The aim of the proposed research is to utilize a framework for measuring facial, physiological, self-report and behavioral responses to commercials over the Internet in order to understand the role of emotions in advertising effectiveness (e.g. likability, persuasion and sales) and to design an automated system for predicting success based on these signals.
This incorporates first-in-the-world studies measuring these parameters via the cloud and allows the robust exploration of phenomena across a diverse demographic and a broad set of geographic locations.

2 Contributions

The main contributions of this thesis are described below:

1. To use a custom cloud-based framework for collecting a large corpus of response videos to online media content (advertisements, movie trailers, etc.) with ground truth measures of success (sharing, likability, persuasion and sales), and to collect data from a diverse population across a broad range of content.

2. To automatically analyze facial responses, gestures and physiological reactions using computer vision algorithms.

3. To design, train and evaluate a set of models for predicting key measures of story/advertisement effectiveness based on facial responses, gestures and physiological features automatically extracted from the videos.

4. To propose generalizable emotional profiles that describe an effective story/advertisement in order to practically inform the development of new content.

5. To implement a system (demo) that incorporates the findings into a fully automated classification of a response to a story/advertisement. The predicted label will be the effect of the story in changing likability/persuasion.

3 Background and Related Work

3.1 Storytelling, Marketing and Emotion

Emotion is key to the effectiveness of narratives and storytelling [15]. Stories, even if fictional, have the ability to induce a genuine emotional response [14]. However, there are nuances in the emotional response to narrative representations compared to everyday social dialogue [25], and therefore context-specific models need to be designed. Marketing, and more specifically advertising, makes much use of narratives and stories. The role of emotion in marketing and advertising has been considered extensively since early work by Zajonc [37], which argued that emotions function independently of cognition and can indeed override it. It is widely held that emotions play a significant part in the decision-making process of purchasing, and advertising is often seen as an effective source of enhancement of these emotional associations [24]. In advertising, the states of amusement, surprise and confusion are of particular interest, and measurement of valence and arousal should be useful in distinguishing between these states. In a study of TV commercials, Hazlett and Hazlett [18] found that facial responses, measured using facial electromyography (EMG), were a stronger discriminator between commercials and were more strongly related to recall than self-report information. Lang [20] found that phasic changes in heart rate could act as an indication of attention and tonic changes could act as an indication of arousal. The combination of physiology and facial responses is likely to improve recognition of emotions further still.
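To make the tonic/phasic distinction concrete, the sketch below is an illustration only (it is not part of the proposed system, and the moving-average window length and sampling rate are arbitrary choices for the example): it separates a heart-rate trace into a slowly varying tonic baseline and the phasic fluctuations around it.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def tonic_phasic_split(hr_bpm, fs_hz=4.0, window_s=30.0):
    """Split a heart-rate trace (beats per minute) into tonic and phasic parts.

    The tonic component is a slow-moving baseline (here a 30 s moving average),
    often read as an index of arousal; the phasic component is the residual,
    short-term deviation, often read as an index of attention/orienting.
    """
    window = max(1, int(window_s * fs_hz))
    tonic = uniform_filter1d(hr_bpm, size=window, mode="nearest")
    phasic = hr_bpm - tonic
    return tonic, phasic

if __name__ == "__main__":
    fs = 4.0  # assumed resampling rate of the heart-rate trace (samples/s)
    t = np.arange(0, 120, 1.0 / fs)
    # Synthetic trace: slow drift (tonic) plus a brief deceleration (phasic).
    hr = 70 + 3 * np.sin(2 * np.pi * t / 120) - 4 * np.exp(-((t - 45) ** 2) / 8)
    tonic, phasic = tonic_phasic_split(hr, fs)
    print(f"tonic range: {tonic.min():.1f}-{tonic.max():.1f} bpm")
    print(f"largest phasic deceleration: {phasic.min():.1f} bpm")
```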

Sales are arguably the key measure of advertising success, and predicting behavioral measures of success from responses will be our main focus. However, the success of an advertisement varies from person to person and sales figures at this level are often not available; therefore I will also consider other measures of success, in particular liking, memory (recall and recognition) and persuasion. Ad liking was found to be the best predictor of sales success in the Advertising Research Foundation Copy Research Validity Project [17]. Biel [5] and Gordon [13] state that likability is the best predictor of sales effectiveness. Explicit memory of advertising (recall and recognition) is one of the most frequently used metrics for measuring advertising success. Independent studies have demonstrated the sales validity of recall [17, 24]. Indeed, recall was found to be the second best predictor of advertising effectiveness (after ad liking), as measured by increased sales, in the same project [17]. Behavioral measures such as ad zapping or banner click-through rates are also frequently used to measure success. Teixeira et al. [33] show that inducing affect is important in engaging viewers in online video adverts and in reducing the frequency of zapping (skipping the advertisement); they demonstrated that joy was one of the states that stimulated viewer retention during the commercial. With our web-based framework I can test behavioral measures (such as sharing or click-through) outside the laboratory in natural consumption contexts.

3.2 Facial Actions, Physiology, and Emotions

Charles Darwin was one of the first to demonstrate universality in facial expressions in his book, The Expression of the Emotions in Man and Animals [9]. Since then a number of other studies have demonstrated that facial actions communicate underlying emotional information and that some of these expressions are consistent across cultures [10]. There are two main approaches to coding facial displays: sign judgment and message judgment. Sign judgment involves labeling facial muscle movements or actions, such as those defined in the FACS taxonomy [12], whereas message judgments are labels of a human perceptual judgment of the underlying state. In this proposal I focus on sign judgments, specifically action unit intensities, as they are objective and not open to contextual variation. The Facial Action Coding System (FACS) [12] is the most comprehensive labeling system. FACS 2002 defines 27 action units (AUs) - 9 upper face and 18 lower face - as well as 14 head positions and movements, 9 eye positions and movements and 28 other descriptors, behaviors and visibility codes [7]. The action units can be further qualified using five intensity ratings from A (minimum) to E (maximum). More than 7,000 AU combinations have been observed [29].

Physiological changes, such as heart rate (HR), respiration rate (RR) and heart rate variability (HRV), are partially controlled by the autonomic nervous system and are important in describing emotional responses in the real world [16]. Physiological changes can contain information about both the emotional arousal and the valence of a person. By measuring facial responses, gestures, HR, RR and HRV we are able to capture elements of both the valence and arousal dimensions of emotion. In addition, we can capture levels of viewer attention. These three dimensions are likely to be important in predicting effectiveness from responses.
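As a concrete illustration of the coding scheme only (the class names and the handful of AUs shown are examples for this proposal, not a complete FACS implementation), action units and their A-E intensity levels can be represented as simple data structures:

```python
from dataclasses import dataclass
from enum import IntEnum

class AUIntensity(IntEnum):
    """FACS intensity ratings, A (minimum) through E (maximum)."""
    A = 1
    B = 2
    C = 3
    D = 4
    E = 5

@dataclass(frozen=True)
class ActionUnit:
    number: int
    name: str

# A few of the action units referred to in this proposal.
AU1 = ActionUnit(1, "Inner brow raiser")
AU2 = ActionUnit(2, "Outer brow raiser")
AU4 = ActionUnit(4, "Brow lowerer")
AU12 = ActionUnit(12, "Lip corner puller (smile)")

# A coded frame is then a mapping from action unit to intensity, e.g. a
# strong smile with a slight brow raise:
frame_coding = {AU12: AUIntensity.D, AU1: AUIntensity.A, AU2: AUIntensity.A}
print({au.name: intensity.name for au, intensity in frame_coding.items()})
```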

3.3 Remote Measurement of Facial Actions and Physiology

The first example of automated facial expression recognition was presented by Suwa et al. [31]. Over the past 20 years there have been significant advances in the state of the art in action unit recognition [38]. Our preliminary work has shown that certain actions, such as smiles, can be accurately detected in low-resolution, unconstrained videos collected via the Internet [23]. We have shown that heart rate (HR), respiration rate (RR) and heart rate variability (HRV) can be measured remotely using camera-based technology [26, 27]. This method has been validated on webcam videos with a resolution of 640x480 pixels and a frame rate of 15 fps (correlation with contact sensor measurements for HR: r=1.00; for RR: r=0.94; for HRV HF and LF: r=0.94; all correlations p<0.001). Video of this quality should be obtainable over the Internet using our framework.

3.4 Machine Learning for Affective Computing

The interpretation of facial and physiological responses is a challenging pattern recognition problem. The data are ecologically valid but noisy, and require state-of-the-art techniques in order to achieve strong performance in predicting measures of likability, persuasion or sales. The aim is to take advantage of the huge quantities of data (thousands of video responses) that can be collected using our web-based framework to design models that generalize across a range of content, gender, age and cultural demographics and a broad set of locations. In hierarchical Bayesian models, prior information can be used in a tiered approach to make context-specific predictions. I plan to implement state-of-the-art models, the first examples to be trained on ecologically valid data collected via the Internet. Increasingly, the importance of considering temporal information and the dynamics of facial expressions has been highlighted. Dynamics can be important in distinguishing between the underlying meanings behind an expression [2, 19]. I will implement a method that considers temporal responses to commercials, taking advantage of the rich moment-to-moment data that can be collected using automated facial and physiological analysis. Hidden Markov Models and Conditional Random Fields have been shown to be effective at modeling affective information. With multimodal information, the coupling of multiple models may improve the predictions. Hierarchical Bayesian models have been used to model the interplay of emotions and attention on behavior in advertising [33]. These techniques provide the ability to describe the data temporally and in terms of multiple modalities.

4 Proposed Research

4.1 Aim

I propose to analyze story effectiveness based on the emotional responses of viewers, using facial and physiological responses measured over the Internet. The technology allows for the remote measurement of affect via a webcam, and I will design a custom framework and set of models for automatic evaluation of advertising effectiveness based on this research. The dependent measures will be based on established metrics for story and advertising success, including sales, persuasion, sharing and likability.

Achieving this aim will involve the identification of generalizable facial action and physiological features and models that are adaptable to different contexts. This work is the first large-scale study to consider physiological and facial responses measured in-the-wild via the cloud to understand the impact of emotional content in storytelling and advertising and how to use it to maximum effect. Figure 4 shows a summary of the proposed framework, which is based on Barrett et al.'s dual-process model of emotion [4]. The valence, arousal and attention of the viewer may be represented by latent variables within the models that are trained, rather than being predicted explicitly.

4.2 Methodology

I will use a web-based framework for collecting responses over the Internet. The first iteration of this framework was presented in [22] and is shown in Figure 1. This framework allows the efficient collection of thousands of naturalistic and spontaneous responses to online videos. Figure 2(a) shows example frames from data collected via this framework. Recruitment of participants has initially been performed by creating a social interface that allows people to share a graph of their automatically analyzed smile response with others, but recruitment can also be performed via Mechanical Turk, or another crowd marketplace, with financial incentives. The latter will be used for more in-depth studies in which voluntary participation is difficult to obtain.

The facial response videos, an example of which is shown in Figure 2(b), will be analyzed using automated facial action unit detection algorithms developed by Affectiva or MIT. As an example, Affectiva's AU12 algorithm is based on Local Binary Pattern (LBP) features, with the resulting features classified using decision tree classifiers. This outputs a frame-by-frame measurement of smile probability; an example of the smile probability output is also shown in Figure 2(b). Although the algorithms will be trained with binary examples (e.g. AU12 vs. non-AU12), the probability outputs tend to be positively correlated with the intensity of the action, as shown in Figure 2(b). However, we must acknowledge that this interpretation may not always be accurate. Classifiers for AU1+2 (frontalis/eyebrow raise), AU4 (corrugator/brow furrow) and AU12 (zygomaticus major/smile) will be used, in addition to any others that are available by the time the analysis is performed. AU1+2, AU4 and AU12 should capture the main components of surprise, confusion and amusement responses. Head turning, tilting and general motion will be calculated through the use of a head pose detector and facial feature tracker; the intention is to capture information about the attention of the viewers.
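The sketch below is a minimal stand-in for an LBP-plus-classifier pipeline of the kind just described; it is not Affectiva's implementation, and the neighbourhood parameters, classifier choice (a random forest, i.e. an ensemble of decision trees) and training data are assumptions for illustration. Its predict_proba output plays the role of the frame-by-frame smile probability.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

N_POINTS, RADIUS = 8, 1   # assumed LBP neighbourhood settings
N_BINS = N_POINTS + 2     # number of codes for the "uniform" LBP variant

def lbp_histogram(face_gray):
    """Compute a normalized uniform-LBP histogram for a grayscale face crop."""
    lbp = local_binary_pattern(face_gray, N_POINTS, RADIUS, method="uniform")
    hist, _ = np.histogram(lbp, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def train_smile_classifier(face_crops, labels):
    """Train a tree-ensemble classifier on AU12 vs. non-AU12 example crops."""
    features = np.stack([lbp_histogram(f) for f in face_crops])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features, labels)
    return clf

def smile_probability(clf, face_crops):
    """Frame-by-frame probability of AU12 for a sequence of face crops."""
    features = np.stack([lbp_histogram(f) for f in face_crops])
    return clf.predict_proba(features)[:, 1]
```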
Heart rate, respiration rate and heart rate variability features will be calculated using the non-contact method described in [26, 27]. Figure 3 shows graphically how our algorithm can be used to extract the blood volume pulse (BVP), and subsequently HR, RR and HRV information, from the RGB channels of a video containing a face. Specifically, the facial region within each video frame is segmented automatically and a spatial average of the RGB color values is calculated for the region of interest (ROI). For a given time window (typically 20-30 s) the raw RGB signals are normalized and detrended. A blind source separation technique (Independent Component Analysis, ICA) is then used to calculate a set of source signals. The source signal with the strongest BVP component is filtered and used to calculate the HR, RR and HRV. This method has been validated against contact sensors and shown to be accurate.
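For illustration only, a simplified sketch of this pipeline is given below. It follows the published description at a high level but is not the validated implementation from [26, 27]: the window length, cardiac band limits, filter order and source-selection heuristic are assumptions, and the RR/HRV spectral analysis is omitted.

```python
import numpy as np
from scipy.signal import butter, detrend, filtfilt, find_peaks
from sklearn.decomposition import FastICA

def pulse_from_rgb(rgb_means, fs=15.0, band=(0.75, 4.0)):
    """Estimate heart rate from spatially averaged facial R, G, B traces.

    rgb_means: array of shape (n_frames, 3) holding the mean R, G, B values of
    the facial region of interest in each video frame.
    """
    # Normalize and detrend each colour channel over the analysis window.
    x = detrend(rgb_means, axis=0)
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

    # Blind source separation: recover three candidate source signals.
    sources = FastICA(n_components=3, random_state=0).fit_transform(x)

    # Pick the source with the strongest spectral peak in the cardiac band.
    freqs = np.fft.rfftfreq(len(sources), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    power = np.abs(np.fft.rfft(sources, axis=0)) ** 2
    best = int(np.argmax(power[in_band].max(axis=0)))
    bvp = sources[:, best]

    # Band-pass filter the blood volume pulse and locate the beats.
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    bvp_filt = filtfilt(b, a, bvp)
    peaks, _ = find_peaks(bvp_filt, distance=int(fs / band[1]))
    ibi = np.diff(peaks) / fs  # inter-beat intervals in seconds
    hr_bpm = 60.0 / ibi.mean() if len(ibi) else float("nan")
    return hr_bpm, bvp_filt, ibi
```

The inter-beat intervals returned here are the raw material from which HRV measures such as the HF/LF ratio would subsequently be computed.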

Figure 1: Overview of the user experience and the web-based framework used to crowdsource the facial videos (client stages: homepage/introduction, consent, media playback with Flash capture of the webcam stream, and self-report; server side: storage of the webcam footage and processing to calculate the facial and physiological responses, with behavioral measures such as sharing and click-through also recorded). The video from the webcam is streamed in real time to a server where automated facial expression analysis is performed; all of the video processing can be performed on the server side.

There will be limitations involved in collecting data over the Internet; the uncontrolled nature of this research presents several challenges. Firstly, clean data are not always available: the motion and context of the users will vary considerably and will result in greater noise within our measurements than if the data were collected in a laboratory. In addition, the video recordings are likely to have a lower frame rate and resolution than those that could be collected in a laboratory, in which case some of the more subtle and faster micro-expressions may be missed and the physiological measurements will be noisier. Secondly, detailed and reliable profiles of the participants may be difficult to ensure in all cases. In order to address these weaknesses we will compare the results obtained against those from analyses of datasets collected within controlled laboratory settings. The computer vision methods for extracting facial and physiological response features will be validated in controlled studies with ground truth measures and against videos of differing qualities in order to ensure reliability on data collected over the Internet. Specifically, I intend to recruit a number of subjects (10-20) and record video that matches the quality of that collected over the Internet, together with ground truth measures of physiology; the accuracy of the system can be characterized under these conditions. The AU detection algorithms will be tested against hand-labeled examples of frames collected over the Internet, as shown in [23].

By performing analysis online we can collect data from large populations with considerable representation from diverse subgroups (gender/age/cultural background). We will recruit 150 participants for the second study proposed below and a similar number for the subsequent studies. In these cases recruitment will be possible through existing market research participant pools; however, recruitment can also occur through a variety of other mechanisms (such as voluntary means and paid crowd marketplaces), with self-report measures of age, gender and cultural background collected in all cases. The extracted features will be collected alongside self-report responses, as these are the current standard, and behavioral metrics. In order to minimize effects due to primacy and recency, the order in which advertisements are presented will be randomized. I plan to collaborate with MIT Media Lab member companies in order to obtain sales data related to the advertisements.

Figure 2: (a) Example frames of data collected using a web-based framework similar to that described in Figure 1. (b) A series of frames from one particular video, showing an AU12 (smile/amusement) response, together with the corresponding smile-probability track over time; the smile track demonstrates how greater smile intensity is positively correlated with the probability output from the classifier.

Figure 3: Graphical illustration of our algorithm for extracting heart rate, respiration rate and heart rate variability from video images of a human face, as described in [27]. Panels: (a) automated face tracking, (b) channel separation, (c) raw traces, (d) signal components (separated sources), (e) analysis of the BVP to obtain heart rate, respiration rate and heart rate variability (HF/LF).

Figure 4: Schematic of the proposed research model, inspired by Barrett et al.'s dual-process view of emotion [4]. The measured responses (physiology: HR, RR and HRV; facial behavior; head gestures) will capture information about the valence, arousal and attention of the viewer and will be used to predict the effects of the story/narrative (likability, memory, persuasion, purchase and sharing).

4.3 Studies

I propose to carry out a series of studies in this research. A preliminary study has already been performed and was the first-in-the-world attempt to collect facial responses to videos on a large scale over the Internet. This involved testing three commercials which appeared during the 2011 Super Bowl. The website was live for over a year and can be found at [1]. Visitors to the website were asked to opt in to watch short videos and have their facial expressions recorded and analyzed. Immediately following each video, visitors completed a short self-report questionnaire. The videos from the webcam were streamed in real time, at 15 frames per second and a resolution of 320x240, to a server where automated facial expression analysis was performed. Approximately 7,000 videos were collected in this study. These data will be used to build models for predicting ad liking purely from automatically measured behavior. In addition, I will investigate whether ad liking can be predicted effectively from only a subset of the response (e.g. the first 25% or 50%).

The second study will extend the framework and methodology used in the first study to a much greater number of commercials, and I will extend the self-report questioning to cover more in-depth questions. Specifically, I will be collecting and analyzing data for 150 viewers and 16 commercials (with each viewer watching a subset of the commercials). Video recordings of the participants' responses to the content will be collected and analyzed as described in the Methodology section. Self-report measures of persuasion, likability and familiarity will be recorded (post-viewing Likert-scale reports). Pre- and post-launch sales data for the products will be available. The videos collected in this study will be of a similar quality to the above (resolution: 320x240, frame rate: 15 fps). This dataset will allow me to extend the modeling carried out in the preliminary study to build and evaluate models for predicting likability, persuasion and sales.
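As a placeholder for the kind of model that will be built (the feature set and classifier here are illustrative choices only, not the final models, which are expected to use the temporal approaches discussed in Section 3.4), the sketch below summarizes each viewer's smile-probability track with a few aggregate statistics and fits a logistic regression to post-viewing liking labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def summarize_track(smile_prob):
    """Aggregate features from one frame-by-frame smile-probability track."""
    p = np.asarray(smile_prob, dtype=float)
    third = max(1, len(p) // 3)
    return np.array([
        p.mean(),                              # overall positive-valence level
        p.max(),                               # peak response
        p.std(),                               # variability of the response
        p[-third:].mean() - p[:third].mean(),  # build-up toward the end
    ])

def evaluate_liking_model(tracks, liked_labels):
    """Cross-validated accuracy of predicting 'liked the ad' from responses.

    tracks: list of 1-D arrays (one smile-probability track per viewer/ad pair)
    liked_labels: binary labels from the post-viewing self-report
    """
    X = np.stack([summarize_track(t) for t in tracks])
    y = np.asarray(liked_labels)
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5).mean()
```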

In the third study I propose to collect and analyze data for a set of advertisement concepts across different product ranges. This will involve approximately 100 viewers watching multiple (2 or 3) advertisement concepts. Self-report measures of persuasion, likability and familiarity will be recorded. This study will compare similar but distinct advertising concepts for the same product, and I will investigate the ability of measured emotional responses to distinguish between the efficacy of subtly different concepts for the same product.

The structure of the latter two studies will allow for richer data to be collected and a more controlled experimental design, whilst still allowing us to collect naturalistic and spontaneous data in-the-wild. I will investigate the role of facial behavior and head gestures, HR, RR and HRV in predicting the variables of persuasion, likability and sales. The dimensions of valence, arousal and attention will be modeled as latent variables within the model.

As described above, I will be carrying out small-scale lab-based studies to evaluate the accuracy of the physiological measurement under a greater range of conditions. This will involve a smaller number of participants (10-20) viewing content on a computer or laptop whilst a video of their face is recorded. The method will be evaluated by its correlation with, and accuracy when compared to, measurements from contact sensors. Data for 16 participants have already been collected; if necessary, further data collection can be performed. For these experiments recruitment can be from the local community.

4.4 Plan for Completion of the Research

Table 1 shows my tentative plan for completion of the research described in this proposal.

Timeline                 | Work                                    | Progress
January-March 2011       | Analysis of data from preliminary study | completed
April-June 2012          | Design of studies                       | ongoing
September-November 2012  | Implementation of studies               | planned
November-March 2013      | Analysis of data collected              | planned
March 2013               | First thesis outline                    | planned
April-June 2013          | Complete analysis of study data         | planned
July 2013                | Second thesis outline                   | planned
August-December 2013     | Thesis writing                          | planned
January-February 2014    | Thesis defense                          | planned

Table 1: Plan for completion of my doctoral thesis research.

4.5 Human Subjects Approval

The protocol for all studies will be approved by the Massachusetts Institute of Technology Committee On the Use of Humans as Experimental Subjects (COUHES).

4.6 Collaborations

I will be collaborating with Thales Teixeira at Harvard Business School on the modeling of effectiveness based on emotional responses. I will be working at Affectiva for one semester in order to complete parts of the data collection described; I will be building on the data collection framework and using the facial action unit detection algorithms.

5 Biography

Daniel McDuff is a PhD candidate in the Affective Computing group at the MIT Media Lab. McDuff received his bachelor's degree, with first-class honors, and master's degree in engineering from Cambridge University. Prior to joining the Media Lab, he worked for the Defence Science and Technology Laboratory (DSTL) in the UK. He is interested in using computer vision and machine learning to enable the automated recognition of affect, particularly in the domain of storytelling and advertising.
Email: djmcduff@mit.edu  Web: media.mit.edu/djmcduff

References

[1] Web address of data collection site:
[2] Z. Ambadar, J.F. Cohn, and L.I. Reed. All smiles are not created equal: Morphology and timing of smiles perceived as amused, polite, and embarrassed/nervous. Journal of Nonverbal Behavior, 33(1):17-34.
[3] T. Ambler, A. Ioannides, and S. Rose. Brands on the brain: Neuro-images of advertising. Business Strategy Review, 11(3):17-30.
[4] L.F. Barrett, K.N. Ochsner, and J.J. Gross. On the automaticity of emotion. Social Psychology and the Unconscious: The Automaticity of Higher Mental Processes.
[5] A.L. Biel. Love the ad. Buy the product? Admap, September.
[6] P.D. Bolls, A. Lang, and R.F. Potter. The effects of message valence and listener arousal on attention, memory, and facial muscular responses to radio advertisements. Communication Research, 28(5).
[7] J.F. Cohn, Z. Ambadar, and P. Ekman. Observer-based measurement of facial expression with the Facial Action Coding System. Oxford: NY.
[8] R.R. Cornelius. The Science of Emotion: Research and Tradition in the Psychology of Emotions. Prentice-Hall, Inc.
[9] C. Darwin, P. Ekman, and P. Prodger. The Expression of the Emotions in Man and Animals. Oxford University Press, USA.
[10] P. Ekman. Facial expression and emotion. American Psychologist, 48(4):384.
[11] P. Ekman, W.V. Friesen, and S. Ancoli. Facial signs of emotional experience. Journal of Personality and Social Psychology, 39(6):1125.

[12] P. Ekman and W.V. Friesen. Facial Action Coding System.
[13] W. Gordon. What do consumers do emotionally with advertising? Journal of Advertising Research, 46(1).
[14] M.C. Green. Transportation into narrative worlds: The role of prior knowledge and perceived realism. Discourse Processes, 38(2).
[15] M.C. Green, J.J. Strange, and T.C. Brock. Narrative Impact: Social and Cognitive Foundations. Lawrence Erlbaum.
[16] H. Gunes, M. Piccardi, and M. Pantic. From the lab to the real world: Affect recognition using multiple cues and modalities. Affective Computing: Focus on Emotion Expression, Synthesis, and Recognition.
[17] R.I. Haley. The ARF copy research validity project: Final report. In Transcript Proceedings of the Seventh Annual ARF Copy Research Workshop.
[18] R.L. Hazlett and S.Y. Hazlett. Emotional response to television commercials: Facial EMG vs. self-report. Journal of Advertising Research, 39:7-24.
[19] M.E. Hoque and R.W. Picard. Acted vs. natural frustration and delight: Many people smile in natural frustration. In Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on. IEEE.
[20] A. Lang. Involuntary attention and physiological arousal evoked by structural features and emotional content in TV commercials. Communication Research, 17(3).
[21] M.D. Lieberman, N.I. Eisenberger, M.J. Crockett, S.M. Tom, J.H. Pfeifer, and B.M. Way. Putting feelings into words. Psychological Science, 18(5):421.
[22] D. McDuff, R. El Kaliouby, and R. Picard. Crowdsourced data collection of facial responses. In Proceedings of the 13th International Conference on Multimodal Interaction. ACM.
[23] D.J. McDuff, R. El Kaliouby, and R.W. Picard. Crowdsourcing facial responses to online videos. IEEE Transactions on Affective Computing.
[24] A. Mehta and S.C. Purvis. Reconsidering recall and emotion in advertising. Journal of Advertising Research, 46(1):49.
[25] B. Parkinson and A.S.R. Manstead. Making sense of emotion in stories and social life. Cognition & Emotion, 7(3-4).
[26] M.Z. Poh, D.J. McDuff, and R.W. Picard. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Optics Express, 18(10).
[27] M.Z. Poh, D.J. McDuff, and R.W. Picard. Advancements in noncontact, multiparameter physiological measurements using a webcam. Biomedical Engineering, IEEE Transactions on, 58(1):7-11.
[28] M.L. Ray and R. Batra. Emotion and persuasion in advertising: What we do and don't know about affect. Graduate School of Business, Stanford University.
[29] K.R. Scherer and P. Ekman. Methodological issues in studying nonverbal behavior. Handbook of Methods in Nonverbal Behavior Research, pages 1-44.
[30] N. Schwarz and F. Strack. Reports of subjective well-being: Judgmental processes and their methodological implications. Well-Being: The Foundations of Hedonic Psychology, pages 61-84.
[31] M. Suwa, N. Sugie, and K. Fujimora. A preliminary note on pattern recognition of human emotional expression. In International Joint Conference on Pattern Recognition.
[32] G.W. Taylor, I. Spiro, C. Bregler, and R. Fergus. Learning invariance through imitation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition.
[33] T. Teixeira, M. Wedel, and R. Pieters. Emotion-induced engagement in internet video ads. Journal of Marketing Research, (ja):1-51.

[34] J. Whitehill, G. Littlewort, I. Fasel, M. Bartlett, and J. Movellan. Toward practical smile detection. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(11).
[35] F.H. Wilhelm and P. Grossman. Emotions beyond the laboratory: Theoretical fundaments, study design, and analytic strategies for advanced ambulatory assessment. Biological Psychology, 84(3).
[36] P. Winkielman, G.G. Berntson, and J.T. Cacioppo. The psychophysiological perspective on the social mind. Blackwell Handbook of Social Psychology: Intraindividual Processes.
[37] R.B. Zajonc. Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2):151.
[38] Z. Zeng, M. Pantic, G.I. Roisman, and T.S. Huang. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(1):39-58.

Committee Biographies

Jeffrey Cohn, Professor of Psychology, University of Pittsburgh

Jeffrey Cohn is Professor of Psychology at the University of Pittsburgh and Adjunct Faculty at the Robotics Institute at Carnegie Mellon University. He has led interdisciplinary and inter-institutional efforts to develop advanced methods of automatic analysis of facial expression and prosody, and has applied those tools to research in human emotion, social development, non-verbal communication, psychopathology, and biomedicine. He co-chaired the 2008 IEEE International Conference on Automatic Face and Gesture Recognition (FG2008) and the 2009 International Conference on Affective Computing and Intelligent Interaction (ACII2009). He has co-edited two recent special issues of the journal Image and Vision Computing. His research has been supported by grants from the National Institutes of Health, National Science Foundation, Autism Foundation, Office of Naval Research, Defense Advanced Research Projects Agency, and the Technical Support Working Group.

Ashish Kapoor, Senior Research Scientist, Microsoft Research, Redmond

Ashish Kapoor is a researcher in the Adaptive Systems and Interaction Group at Microsoft Research, Redmond. He focuses on machine learning and computer vision with applications in user modelling, affective computing and human-computer interaction scenarios. Ashish did his PhD at the MIT Media Lab, where his doctoral thesis looked at building discriminative models for pattern recognition with incomplete information (semi-supervised learning, imputation, noisy data, etc.). Most of his earlier work focused on building new machine learning models for affect recognition; a significant part of that work involved automatic analysis of non-verbal behavior and physiological responses.

Thales Teixeira, Assistant Professor of Business Administration, Harvard Business School

Thales Teixeira is Assistant Professor in the Marketing Department of the Harvard Business School. His research focuses on the economics of attention. He explores the rules of (implicit) transaction of attention in a marketplace in which consumer attention is a scarce resource, arguably even scarcer than money or time. His work has also appeared in Marketing Science. He received his PhD in Business from the University of Michigan and holds a Master of Arts in Statistics (University of Sao Paulo, Brazil) and a Bachelor of Arts in Administration (University of Sao Paulo, Brazil). Before entering academia, he consulted for companies such as Microsoft and Hewlett-Packard. At Harvard, he teaches an MBA course in Marketing.


More information

Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH)

Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Accessible Computing Research for Users who are Deaf and Hard of Hearing (DHH) Matt Huenerfauth Raja Kushalnagar Rochester Institute of Technology DHH Auditory Issues Links Accents/Intonation Listening

More information

EXPLORING THE EMOTIONAL USER EXPERIENCE. Bill Albert, PhD Executive Director User Experience Center Bentley University

EXPLORING THE EMOTIONAL USER EXPERIENCE. Bill Albert, PhD Executive Director User Experience Center Bentley University EXPLORING THE EMOTIONAL USER EXPERIENCE Bill Albert, PhD Executive Director User Experience Center Bentley University Motivation 2 5 challenges Defining the emotional UX Measuring the emotional UX Getting

More information

Signal Processing in the Workplace. Daniel Gatica-Perez

Signal Processing in the Workplace. Daniel Gatica-Perez Signal Processing in the Workplace Daniel Gatica-Perez According to the U.S. Bureau of Labor Statistics, during 2013 employed Americans worked an average of 7.6 hours on the days they worked, and 83 percent

More information

Externalization of Cognition: from local brains to the Global Brain. Clément Vidal, Global Brain Institute

Externalization of Cognition: from local brains to the Global Brain. Clément Vidal, Global Brain Institute Externalization of Cognition: from local brains to the Global Brain Clément Vidal, Global Brain Institute clement.vidal@philosophons.com 1 Introduction Humans use tools. create, use and refine tools. extends

More information

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization Daniel McDuff (djmcduff@mit.edu) MIT Media Laboratory Cambridge, MA 02139 USA Abstract This paper demonstrates

More information

ANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES

ANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES ANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES P.V.Rohini 1, Dr.M.Pushparani 2 1 M.Phil Scholar, Department of Computer Science, Mother Teresa women s university, (India) 2 Professor

More information

Predicting About-to-Eat Moments for Just-in-Time Eating Intervention

Predicting About-to-Eat Moments for Just-in-Time Eating Intervention Predicting About-to-Eat Moments for Just-in-Time Eating Intervention CORNELL UNIVERSITY AND VIBE GROUP AT MICROSOFT RESEARCH Motivation Obesity is a leading cause of preventable death second only to smoking,

More information

DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation Biyi Fang Michigan State University ACM SenSys 17 Nov 6 th, 2017 Biyi Fang (MSU) Jillian Co (MSU) Mi Zhang

More information

Tao Gao. January Present Assistant Professor Department of Communication UCLA

Tao Gao. January Present Assistant Professor Department of Communication UCLA Contact Information Tao Gao January 2018 Department of Statistics, UCLA Email : tao.gao@stat.ucla.edu 8117 Math Sciences Bldg. Web : www.stat.ucla.edu/~taogao Los Angeles, CA 90095-1554 Phone : 310-983-3998

More information

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation International Telecommunication Union ITU-T TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU FG AVA TR Version 1.0 (10/2013) Focus Group on Audiovisual Media Accessibility Technical Report Part 3: Using

More information

Advertisement effectiveness estimation based on crowdsourced multimodal affective responses

Advertisement effectiveness estimation based on crowdsourced multimodal affective responses Advertisement effectiveness estimation based on crowdsourced multimodal affective responses Genki Okada Chiba University Chiba, Japan g-okada@chiba-u.jp Kenta Masui Chiba University Chiba, Japan k_masui@chiba-u.jp

More information

Classification of valence using facial expressions of TV-viewers

Classification of valence using facial expressions of TV-viewers Classification of valence using facial expressions of TV-viewers Master s Thesis Yorick H. Holkamp Classification of valence using facial expressions of TV-viewers THESIS submitted in partial fulfillment

More information

Understanding Facial Expressions and Microexpressions

Understanding Facial Expressions and Microexpressions Understanding Facial Expressions and Microexpressions 1 You can go to a book store and find many books on bodylanguage, communication and persuasion. Many of them seem to cover the same material though:

More information

Research Proposal on Emotion Recognition

Research Proposal on Emotion Recognition Research Proposal on Emotion Recognition Colin Grubb June 3, 2012 Abstract In this paper I will introduce my thesis question: To what extent can emotion recognition be improved by combining audio and visual

More information

2017 Media Guide and Rate Card

2017 Media Guide and Rate Card 2017 Media Guide and Rate Card WHO IS REAL AGRICULTURE? RealAgriculture was started in 2008 by Shaun Haney, a farmer and seedsman who wanted to discuss the issues impacting his customer base. In a very

More information

Prediction of Negative Symptoms of Schizophrenia from Facial Expressions and Speech Signals

Prediction of Negative Symptoms of Schizophrenia from Facial Expressions and Speech Signals Prediction of Negative Symptoms of Schizophrenia from Facial Expressions and Speech Signals Debsubhra CHAKRABORTY PhD student Institute for Media Innovation/ Interdisciplinary Graduate School Supervisor:

More information

Anxiety Detection during Human-Robot Interaction *

Anxiety Detection during Human-Robot Interaction * Anxiety Detection during Human-Robot Interaction * Dana Kulić and Elizabeth Croft Department of Mechanical Engineering University of British Columbia Vancouver, Canada {dana,ecroft}@mech.ubc.ca Abstract

More information

Situation Reaction Detection Using Eye Gaze And Pulse Analysis

Situation Reaction Detection Using Eye Gaze And Pulse Analysis Situation Reaction Detection Using Eye Gaze And Pulse Analysis 1 M. Indumathy, 2 Dipankar Dey, 2 S Sambath Kumar, 2 A P Pranav 1 Assistant Professor, 2 UG Scholars Dept. Of Computer science and Engineering

More information

Face Emotions and Short Surveys during Automotive Tasks. April 2016

Face Emotions and Short Surveys during Automotive Tasks. April 2016 Face Emotions and Short Surveys during Automotive Tasks April 2016 Presented at the 2016 Council of American Survey Research Organizations (CASRO) Digital Conference, March 2016 A Global Marketing Information

More information

Technology Design 1. Masters of Arts in Learning and Technology. Technology Design Portfolio. Assessment Code: TDT1 Task 3. Mentor: Dr.

Technology Design 1. Masters of Arts in Learning and Technology. Technology Design Portfolio. Assessment Code: TDT1 Task 3. Mentor: Dr. Technology Design 1 Masters of Arts in Learning and Technology Technology Design Portfolio Assessment Code: TDT1 Task 3 Mentor: Dr. Teresa Dove Mary Mulford Student ID: 000163172 July 11, 2014 A Written

More information

The challenge of representing emotional colouring. Roddy Cowie

The challenge of representing emotional colouring. Roddy Cowie The challenge of representing emotional colouring Roddy Cowie My aim: A. To outline the way I see research in an area that I have been involved with for ~15 years - in a way lets us compare notes C. To

More information

Ambient Sensing Chairs for Audience Emotion Recognition by Finding Synchrony of Body Sway

Ambient Sensing Chairs for Audience Emotion Recognition by Finding Synchrony of Body Sway Ambient Sensing Chairs for Audience Emotion Recognition by Finding Synchrony of Body Sway Ryo Wataya, Daisuke Iwai, Kosuke Sato Graduate School of Engineering Science, Osaka University Machikaneyama1-3,

More information

SPEECH EMOTION RECOGNITION: ARE WE THERE YET?

SPEECH EMOTION RECOGNITION: ARE WE THERE YET? SPEECH EMOTION RECOGNITION: ARE WE THERE YET? CARLOS BUSSO Multimodal Signal Processing (MSP) lab The University of Texas at Dallas Erik Jonsson School of Engineering and Computer Science Why study emotion

More information

Motivational Affordances: Fundamental Reasons for ICT Design and Use

Motivational Affordances: Fundamental Reasons for ICT Design and Use ACM, forthcoming. This is the author s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version will be published soon. Citation:

More information

Real Time Sign Language Processing System

Real Time Sign Language Processing System Real Time Sign Language Processing System Dibyabiva Seth (&), Anindita Ghosh, Ariruna Dasgupta, and Asoke Nath Department of Computer Science, St. Xavier s College (Autonomous), Kolkata, India meetdseth@gmail.com,

More information

A MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015

A MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015 VI International Conference on Computational Bioengineering ICCB 2015 M. Cerrolaza and S.Oller (Eds) A MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015 SALAH SALEH *, MANISH SAHU, ZUHAIR

More information

Comparison of Deliberate and Spontaneous Facial Movement in Smiles and Eyebrow Raises

Comparison of Deliberate and Spontaneous Facial Movement in Smiles and Eyebrow Raises J Nonverbal Behav (2009) 33:35 45 DOI 10.1007/s10919-008-0058-6 ORIGINAL PAPER Comparison of Deliberate and Spontaneous Facial Movement in Smiles and Eyebrow Raises Karen L. Schmidt Æ Sharika Bhattacharya

More information

Human Visual Behaviour for Collaborative Human-Machine Interaction

Human Visual Behaviour for Collaborative Human-Machine Interaction Human Visual Behaviour for Collaborative Human-Machine Interaction Andreas Bulling Perceptual User Interfaces Group Max Planck Institute for Informatics Saarbrücken, Germany bulling@mpi-inf.mpg.de Abstract

More information

School of Health Sciences. School of Health Sciences Psychology.

School of Health Sciences. School of Health Sciences Psychology. School of Health Sciences School of Health Sciences Psychology www.nup.ac.cy UNDERGRADUATE PROGRAMME BSc in Psychology Programme Description The Bachelor of Science in Psychology Programme aims to provide

More information

Manual Annotation and Automatic Image Processing of Multimodal Emotional Behaviours: Validating the Annotation of TV Interviews

Manual Annotation and Automatic Image Processing of Multimodal Emotional Behaviours: Validating the Annotation of TV Interviews Manual Annotation and Automatic Image Processing of Multimodal Emotional Behaviours: Validating the Annotation of TV Interviews J.-C. Martin 1, G. Caridakis 2, L. Devillers 1, K. Karpouzis 2, S. Abrilian

More information

Human Emotion Recognition from Body Language of the Head using Soft Computing Techniques

Human Emotion Recognition from Body Language of the Head using Soft Computing Techniques Human Emotion Recognition from Body Language of the Head using Soft Computing Techniques Yisu Zhao Thesis submitted to the Faculty of Graduate and Postdoctoral Studies In partial fulfillment of the requirements

More information

A Comparison of Collaborative Filtering Methods for Medication Reconciliation

A Comparison of Collaborative Filtering Methods for Medication Reconciliation A Comparison of Collaborative Filtering Methods for Medication Reconciliation Huanian Zheng, Rema Padman, Daniel B. Neill The H. John Heinz III College, Carnegie Mellon University, Pittsburgh, PA, 15213,

More information

Implementation of Automatic Retina Exudates Segmentation Algorithm for Early Detection with Low Computational Time

Implementation of Automatic Retina Exudates Segmentation Algorithm for Early Detection with Low Computational Time www.ijecs.in International Journal Of Engineering And Computer Science ISSN: 2319-7242 Volume 5 Issue 10 Oct. 2016, Page No. 18584-18588 Implementation of Automatic Retina Exudates Segmentation Algorithm

More information

Determining Emotions via Biometric Software

Determining Emotions via Biometric Software Proceedings of Student-Faculty Research Day, CSIS, Pace University, May 5 th, 2017 Determining Emotions via Biometric Software Thomas Croteau, Akshay Dikshit, Pranav Narvankar, and Bhakti Sawarkar Seidenberg

More information

Estimating Multiple Evoked Emotions from Videos

Estimating Multiple Evoked Emotions from Videos Estimating Multiple Evoked Emotions from Videos Wonhee Choe (wonheechoe@gmail.com) Cognitive Science Program, Seoul National University, Seoul 151-744, Republic of Korea Digital Media & Communication (DMC)

More information

Multimodal interactions: visual-auditory

Multimodal interactions: visual-auditory 1 Multimodal interactions: visual-auditory Imagine that you are watching a game of tennis on television and someone accidentally mutes the sound. You will probably notice that following the game becomes

More information

Recognizing Scenes by Simulating Implied Social Interaction Networks

Recognizing Scenes by Simulating Implied Social Interaction Networks Recognizing Scenes by Simulating Implied Social Interaction Networks MaryAnne Fields and Craig Lennon Army Research Laboratory, Aberdeen, MD, USA Christian Lebiere and Michael Martin Carnegie Mellon University,

More information

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis Emotion Detection Using Physiological Signals M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis May 10 th, 2011 Outline Emotion Detection Overview EEG for Emotion Detection Previous

More information

THE EFFECTS OF SOUNDS IN ADVERTISING TOWARD CONSUMERS EMOTIONAL RESPONSE

THE EFFECTS OF SOUNDS IN ADVERTISING TOWARD CONSUMERS EMOTIONAL RESPONSE THE EFFECTS OF SOUNDS IN ADVERTISING TOWARD CONSUMERS EMOTIONAL RESPONSE Maria Stefany Gunawan Jiwanto International Business Management Program Faculty of Economics University of Atma Jaya Yogyakarta

More information

Viewpoint dependent recognition of familiar faces

Viewpoint dependent recognition of familiar faces Viewpoint dependent recognition of familiar faces N. F. Troje* and D. Kersten *Max-Planck Institut für biologische Kybernetik, Spemannstr. 38, 72076 Tübingen, Germany Department of Psychology, University

More information

Intro to HCI evaluation. Measurement & Evaluation of HCC Systems

Intro to HCI evaluation. Measurement & Evaluation of HCC Systems Intro to HCI evaluation Measurement & Evaluation of HCC Systems Intro Today s goal: Give an overview of the mechanics of how (and why) to evaluate HCC systems Outline: - Basics of user evaluation - Selecting

More information

High-level Vision. Bernd Neumann Slides for the course in WS 2004/05. Faculty of Informatics Hamburg University Germany

High-level Vision. Bernd Neumann Slides for the course in WS 2004/05. Faculty of Informatics Hamburg University Germany High-level Vision Bernd Neumann Slides for the course in WS 2004/05 Faculty of Informatics Hamburg University Germany neumann@informatik.uni-hamburg.de http://kogs-www.informatik.uni-hamburg.de 1 Contents

More information

Garbay Catherine CNRS, LIG, Grenoble

Garbay Catherine CNRS, LIG, Grenoble Garbay Catherine CNRS, LIG, Grenoble Sensors capturing world-wide information in the physical, cyber or social realms Open data initiatives, Web 4.0, crowdsourcing Multi-sensory immersion: skinput, novel

More information

mirroru: Scaffolding Emotional Reflection via In-Situ Assessment and Interactive Feedback

mirroru: Scaffolding Emotional Reflection via In-Situ Assessment and Interactive Feedback mirroru: Scaffolding Emotional Reflection via In-Situ Assessment and Interactive Feedback Liuping Wang 1, 3 wangliuping17@mails.ucas.ac.cn Xiangmin Fan 1 xiangmin@iscas.ac.cn Feng Tian 1 tianfeng@iscas.ac.cn

More information

A Multimodal Interface for Robot-Children Interaction in Autism Treatment

A Multimodal Interface for Robot-Children Interaction in Autism Treatment A Multimodal Interface for Robot-Children Interaction in Autism Treatment Giuseppe Palestra giuseppe.palestra@uniba.it Floriana Esposito floriana.esposito@uniba.it Berardina De Carolis berardina.decarolis.@uniba.it

More information

Dutch Multimodal Corpus for Speech Recognition

Dutch Multimodal Corpus for Speech Recognition Dutch Multimodal Corpus for Speech Recognition A.G. ChiŃu and L.J.M. Rothkrantz E-mails: {A.G.Chitu,L.J.M.Rothkrantz}@tudelft.nl Website: http://mmi.tudelft.nl Outline: Background and goal of the paper.

More information

Using simulated body language and colours to express emotions with the Nao robot

Using simulated body language and colours to express emotions with the Nao robot Using simulated body language and colours to express emotions with the Nao robot Wouter van der Waal S4120922 Bachelor Thesis Artificial Intelligence Radboud University Nijmegen Supervisor: Khiet Truong

More information

Advanced FACS Methodological Issues

Advanced FACS Methodological Issues 7/9/8 Advanced FACS Methodological Issues Erika Rosenberg, University of California, Davis Daniel Messinger, University of Miami Jeffrey Cohn, University of Pittsburgh The th European Conference on Facial

More information