Real-time classification of evoked emotions using facial feature tracking and physiological responses


Int. J. Human-Computer Studies 66 (2008)

Real-time classification of evoked emotions using facial feature tracking and physiological responses

Jeremy N. Bailenson a, Emmanuel D. Pontikakis b, Iris B. Mauss c, James J. Gross d, Maria E. Jabon e, Cendri A.C. Hutcherson d, Clifford Nass a, Oliver John f

a Department of Communication, Stanford University, Stanford, CA 94305, USA
b Department of Computer Science, Stanford University, Stanford, CA 94305, USA
c Department of Psychology, 2155 South Race Street, University of Denver, Denver, CO 80208, USA
d Department of Psychology, Stanford University, Stanford, CA 94305, USA
e Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
f Department of Psychology, University of California, Berkeley, CA 94720, USA

Received 15 February 2007; received in revised form 28 October 2007; accepted 29 October 2007. Communicated by S. Brave. Available online 1 November 2007.

Corresponding author: bailenson@stanford.edu (J.N. Bailenson). Other addresses: manos@cs.stanford.edu (E.D. Pontikakis), imauss@psy.du.edu (I.B. Mauss), james@psych.stanford.edu (J.J. Gross), mjabon@stanford.edu (M.E. Jabon), hutcherson@psych.stanford.edu (C.A.C. Hutcherson), nass@stanford.edu (C. Nass), oliver.john@berkeley.edu (O. John).

Abstract

We present automated, real-time models built with machine learning algorithms which use videotapes of subjects' faces in conjunction with physiological measurements to predict rated emotion (trained coders' second-by-second assessments of sadness or amusement). Input consisted of videotapes of 41 subjects watching emotionally evocative films along with measures of their cardiovascular activity, somatic activity, and electrodermal responding. We built algorithms based on extracted points from the subjects' faces as well as their physiological responses. Strengths of the current approach are that (1) we assess real behavior of subjects watching emotional videos instead of actors making facial poses, (2) the training data allow us to predict both emotion type (amusement versus sadness) as well as the intensity level of each emotion, and (3) we provide a direct comparison between person-specific, gender-specific, and general models. Results demonstrated good fits for the models overall, with better performance for emotion categories than for emotion intensity, for amusement ratings than sadness ratings, for a full model using both physiological measures and facial tracking than for either cue alone, and for person-specific models than for gender-specific or general models. © 2007 Elsevier Ltd. All rights reserved.

Keywords: Affective computing; Facial tracking; Emotion; Computer vision

1. Introduction

The number of applications in which a user's face is tracked by a video camera is growing exponentially. Cameras are constantly capturing images of a person's face on cell phones, webcams, even in automobiles, often with the goal of using that facial information as a clue to understand more about the current state of mind of the user. For example, many car companies (currently in Japan and soon in the US and Europe) are installing cameras in the dashboard with the goal of detecting angry, drowsy, or drunk drivers. Similarly, advertisers on web portals are seeking to use facial information to determine the effect of specific billboards and logos, with the intention of dynamically changing the appearance of a website in response to users' emotions regarding the advertisements.
Moreover, video game companies are interested in assessing the player's emotions during game play to help gauge the success of their products.

There are at least two goals in developing real-time algorithms to detect facial emotion using recordings of individuals' facial behavior. The first is to assist in the types of human-computer interaction (HCI) applications described above. The second is to advance our theoretical understanding of emotions and facial expression. By using learning algorithms to link rich sets of facial anchor points and physiological responses to emotional responses rated by trained judges, we can develop accurate models of how emotions expressed in response to evocative stimuli are captured via facial expressions and physiological responses. By examining these algorithms, social scientists who study emotion will have a powerful tool to advance their knowledge of human emotion.

2. Related work

2.1. Psychological research on emotion assessment

In the psychological literature, emotion has been defined as an individual's response to goal-relevant stimuli that includes behavioral, physiological, and experiential components (Gross and Thompson, 2007). In the present paper, we focus on the assessment of the first two of these components. There are at least three main ways in which psychologists assess facial expressions of emotions (see Rosenberg and Ekman, 2000, for additional details). The first approach is to have naïve coders view images or videotapes, and then make holistic judgments concerning the degree to which they see emotions on target faces in those images. While relatively simple and quick to perform, this technique is limited in that the coders may miss subtle facial movements, and in that the coding may be biased by idiosyncratic morphological features of various faces. Furthermore, this technique does not allow for isolating exactly which features in the face are responsible for driving particular emotional expressions. The second approach is to use componential coding schemes in which trained coders use a highly regulated procedural technique to detect facial actions. For example, the Facial Action Coding System (Ekman and Friesen, 1978) is a comprehensive measurement system that uses frame-by-frame ratings of anatomically based facial features ("action units"). Advantages of this technique include the richness of the dataset as well as the ability to uncover novel facial movements and configurations from data mining the anchor points. The disadvantage of this system is that the frame-by-frame coding of the points is extremely laborious. The third approach is to obtain more direct measures of muscle movement via facial electromyography (EMG) with electrodes attached to the skin of the face. While this allows for sensitive measurement of features, the placement of the electrodes is difficult and also relatively constraining for subjects who wear them. This approach is also not helpful for coding archival footage. The use of computer vision algorithms promises to be a solution that maximizes the benefits of the above-stated techniques while reducing many of the costs. In the next section, we discuss some of the previous models of detecting facial emotions through computer algorithms.

2.2. Computer vision work

Automatic facial expression recognition and emotion recognition have been researched extensively. One approach has been to evaluate the intensity of facial action units (Kimura and Yachida, 1997; Lien et al., 1998; Sayette et al., 2001). Other experiments, such as Essa and Pentland (1997), have represented intensity variation in smiling using optical flow.
They measured the intensity of face muscles for discriminating between different types of facial actions. Similarly, Ehrlich et al. (2000) emphasized the importance of facial motion instead of the actual face snapshots to recognize emotion in a network environment. While much of the work analyzes the front view of the face, Pantic and Patras (2006) developed a system for automatic recognition of facial action units and analyzed those units using temporal models from profile-view face image sequences. Many types of algorithms have been employed in this endeavor. For example, Sebe et al. (2002) used video sequences of faces to show that the Cauchy distribution performs better than the Gaussian distribution on recognizing emotions. Similarly, Tian et al. (2000) discriminated intensity variation in eye closure as reliably as did human coders by using Gabor features and an artificial neural network. Zhang et al. (1998) showed that a combination of facial point geometry and texture features, such as Gabor wavelets, led to more accurate estimation of the current facial gesture. Moreover, recent work by Bartlett et al. (2005) has continued to make use of representations based on a combination of feature geometry and texture features. A system developed by Lyons (2004) automatically translated facial gestures to actions using vision techniques. For a more detailed review of the state of the art of current systems, see Li and Jain (2005) or Lyons and Bartneck (2006). In terms of using facial tracking data to predict affective states, the pioneering work of Picard et al. (see Picard, 1997, for an early example, and Picard and Daily, 2005, for a recent review of this work) has demonstrated across a number of types of systems that it is possible to track various aspects of the face, and that by doing so one can gain insight into the mental state of the person whose face is being tracked. More recently, el Kaliouby et al. (el Kaliouby et al., 2003; Michel and el Kaliouby, 2003; el Kaliouby and Robinson, 2005) have developed a general computational model for facial affect inference and have implemented it as a real-time system. This approach used dynamic Bayesian networks for recognizing six classes of complex emotions. Their experimental results demonstrated that it is more efficient to assess a human's emotion by looking at the person's face historically over a two-second window instead of just the current frame. Their system was designed to classify discrete emotional classes as opposed to the intensity of each emotion.

More generally, there has been much work in human-computer interaction using learning algorithms to predict human behavior. For example, work by Curhan and Pentland (2007) utilized automatic feature extraction from spoken voice to predict quite reliably the outcome of very complex behavior in terms of performance in negotiations. The models presented in the current paper will aid researchers who seek to use real-time computer vision to predict various types of human behavior by providing accurate, real-time methods for extracting emotional information to use as input for those more elaborate psychological processes.

3. Our approach

There are a number of factors that distinguish the current approach from previous ones. First, the stimuli used as input are videotapes of people who were watching film clips designed to elicit intense emotions. The probability that we accessed actual emotional behavior is higher than in studies that used deliberately posed faces (see Nass and Brave, 2005, for further discussion of this distinction). One example of the importance of the distinction between automatically expressed and deliberately posed emotions is given by Paul Ekman and colleagues. They demonstrated that only Duchenne smiles (automatic smiles involving crinkling of the eye corners), but not deliberately posed smiles, correlate with other behavioral and physiological indicators of enjoyment (Ekman et al., 1990). Indeed, there is a large amount of research attempting to detect deception through facial and vocal cues by distinguishing incidental from deliberate behaviors (see Ekman, 2001, for a review). In sum, some emotional facial expressions are deliberate, while others are automatic, and the automatic facial expressions appear to be more informative about underlying mental states than posed ones.

Second, because in our approach the emotions were coded second-by-second by trained coders using a linear scale for two oppositely valenced emotions (amusement and sadness), we are able to train our learning algorithms using not just a binary set of data (e.g., sad versus not-sad), but also a linear set of data spanning a full scale of emotional intensity. Most psychological models of emotion allow for the expression of mixed emotional states (e.g., Bradley, 2000). Our approach allows us to compare approaches that only look at binary values (in our case, the two most extreme values on the ends of the linear scale) to approaches that linearly predict the amount of amusement and sadness.

Third, given that we collected large amounts of data from each person (i.e., hundreds of video frames rated individually for amusement and sadness), we are able to create three types of models. The first is a universal model, which predicts how amused any face is by using one set of subjects' faces as training data and another independent set of subjects' faces as testing data. This model would be useful for HCI applications in which many people use the same interface, such as bank automated teller machines, traffic light cameras, and public computers with webcams. The second is an idiosyncratic model, which predicts how amused or sad a given face is by using training and testing data from the same subject for each model. This model is useful for HCI applications in which the same person repeatedly uses the same interface, for example driving in an owned car, using the same computer with a webcam, or any application with a camera in a private home.
The third is a gender-specific model, trained and tested using only data from subjects of the same gender. This model is useful for HCI applications targeting a specific gender, for example make-up advertisements directed at female consumers, or home repair advertisements targeted at males. It is also theoretically interesting to compare the idiosyncratic, gender-specific, and universal models, as such a comparison provides valuable information to social scientists studying how personal differences such as gender affect the expression of emotion. Furthermore, although it has previously been shown that the effectiveness of facial expression recognition systems is usually affected by the subject's skin color, facial and scalp hair, sex, race, and age (Zlochower et al., 1998), the comparison of the various individual models enables us to quantitatively evaluate these differences, and better predict the differences in performance of emotion recognition systems across personal differences.

Fourth, since our data include physiological responses (cardiovascular activity, electrodermal responding, and somatic activity), we are able to quantify the improvement in the fit of our models from the addition of such features. One could easily imagine practical contexts in which physiological data could be added, such as an automobile in which the interface could capture facial features from a camera in the dashboard and measure heart rate from the hands gripping the steering wheel. Comparing the fit of the models with and without physiological data offers new information regarding the effectiveness of emotion-detection systems with both facial and physiological inputs. This enables application designers to assess the rewards of building physiological measures into their emotion-detection systems.

Finally, all of the processing (e.g., computer vision algorithms detecting facial features, physiological measures, formulas based on the learning algorithms) used in our study can be utilized in real-time. This is essential for applications that seek to respond to a user's emotion in ways that improve the interaction, for example cars which seek to avoid accidents for drowsy drivers or advertisements which seek to match their content to the mood of a person walking by a billboard.

We targeted amusement and sadness in order to sample positive and negative emotions that recruit behavioral as well as physiological responses. Amusement rather than happiness was chosen because amusement more clearly allows predictions about which facial behaviors to expect (Bonanno and Keltner, 2004).

Sadness was then chosen as the emotion opposite to amusement on the valence continuum (cf. Watson and Tellegen, 1985). We chose only these two emotions since increasing the number of emotions would come at the cost of sacrificing the reliability of the emotions we induced. Amusement and sadness (in contrast to anger, fear, or surprise) can be ethically and reliably induced using films (Philippot, 1993; Gross and Levenson, 1995), a feature crucial to the present design, as films allow for standardization of moment-by-moment emotional context across participants over long enough time periods. The selected films induced dynamic changes in emotional states over the 9-min period, ranging from neutral to more intense emotional states. Because different individuals responded to the films with different degrees of intensity, we were able to assess varying levels of emotional intensity across participants.

4. Data collection

The training data were taken from a study in which 151 Stanford undergraduates watched movies pretested to elicit amusement and sadness while their faces were videotaped and their physiological responses were assessed. In the laboratory session, participants watched a 9-min film clip that was composed of an amusing, a neutral, a sad, and another neutral segment (each segment was approximately 2 min long). From the larger dataset of 151, we randomly chose 41 to train and test the learning algorithms. We did not use all 151 due to the time involved in running the models with such rich datasets. In incremental tests during dataset construction, we determined that the current sample size was large enough that adding additional subjects did not change the fits of the models.

4.1. Expert ratings of emotions

A total of five trained coders rated facial expressions of amusement and sadness from the video recordings of participants' faces such that each participant's tape was rated by two coders (cf. Mauss et al., 2005). Coders used laboratory software to rate the amount of amusement and sadness displayed in each second of video. The coding system was informed by microanalytic analyses of expressive behavior (Ekman and Friesen, 1978). It was anchored at 0 with neutral (no sign of emotion) and at 8 with strong laughter for amusement and strong sadness expression/sobbing for sadness. Coders were unaware of other coders' ratings, of the experimental hypotheses, and of which stimuli participants were watching. Average inter-rater reliabilities were satisfactory, with Cronbach's alphas = 0.89 (S.D. = 0.13) for amusement behavior and 0.79 (S.D. = 0.11) for sadness behavior. We thus averaged the coders' ratings to create one second-by-second amusement rating and one second-by-second sadness rating for each participant. These average ratings of amusement and sadness were used as the criterion in our model.
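The rating aggregation and reliability check are straightforward to reproduce. Below is a minimal sketch, assuming the two coders' second-by-second ratings are available as NumPy arrays; the standard Cronbach's alpha formula is used here as a generic stand-in for whatever the laboratory software computed, and the example ratings are invented.

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_seconds x n_coders) array of ratings."""
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[1]                      # here: 2 coders
    item_vars = ratings.var(axis=0, ddof=1).sum()   # sum of per-coder variances
    total_var = ratings.sum(axis=1).var(ddof=1)     # variance of the summed score
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

# Hypothetical example: two coders rating 540 s (9 min) of video on the 0-8 scale.
rng = np.random.default_rng(0)
coder_a = rng.integers(0, 9, size=540).astype(float)
coder_b = np.clip(coder_a + rng.normal(0, 1, size=540), 0, 8)
ratings = np.column_stack([coder_a, coder_b])

alpha = cronbach_alpha(ratings)       # inter-rater reliability
criterion = ratings.mean(axis=1)      # averaged second-by-second criterion rating
print(f"alpha = {alpha:.2f}, first seconds of criterion: {criterion[:5]}")
```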
4.2. Physiological measures

During the experimental session, 15 physiological measures were monitored at 400 Hz using a 12-channel Grass Model 7 polygraph. Fig. 1 depicts a participant wearing the measurement sensors.

Fig. 1. System for recording physiological data.
The features included: heart rate (derived from inter-beat intervals assessed by placing Beckman miniature electrodes in a bipolar configuration on the participant's chest and calculating the interval in ms between successive R-waves), systolic blood pressure (obtained from the third finger of the non-dominant hand), diastolic blood pressure (obtained from the third finger of the non-dominant hand), mean arterial blood pressure (obtained from the third finger of the non-dominant hand), pre-ejection period (identified as the time in ms elapsed between the Q point on the ECG wave of the left ventricle contracting and the B inflection on the ZCG wave), skin conductance level (derived from a signal using a constant-voltage device to pass 0.5 V between Beckman electrodes attached to the palmar surface of the middle phalanges of the first and second fingers of the non-dominant hand), finger temperature (measured with a thermistor attached to the palmar surface of the tip of the fourth finger), finger pulse amplitude (assessed using a UFI plethysmograph transducer attached to the tip of the participant's second finger), finger pulse transit time (indexed by the time in ms elapsed between the closest previous R-wave and the upstroke of the peripheral pulse at the finger), ear pulse transit time (indexed by the time in ms elapsed between the closest previous R-wave and the upstroke of the peripheral pulse at the ear), ear pulse amplitude (measured with a UFI plethysmograph transducer attached to the participant's right ear lobe), a composite of peripheral sympathetic activation (as indexed by a composite of finger pulse transit time, finger pulse amplitude, ear pulse transit time, and finger temperature), composite cardiac activation (as indexed by a composite of heart rate, finger pulse transit time reversed, finger pulse amplitude reversed, and ear pulse transit time reversed, standardized within individuals and then averaged), and somatic activity (assessed through the use of a piezo-electric device attached to the participant's chair, which generates an electrical signal proportional to the participant's overall body movement in any direction). For more detailed descriptions of these measures, see Gross and Levenson (1995) and Mauss et al. (2006).

5. System architecture

The videos of the 41 participants were analyzed at a resolution of 20 frames per second. The level of amusement/sadness of every person for every second in the video was measured via the continuous ratings from 0 (less amused/sad) to 8 (more amused/sad). The goal was to predict, at every individual second, the level of amusement or sadness for every person based on measurements from facial tracking output and physiological responses (Fig. 2).

Fig. 2. Emotion recognition system architecture: face tracking, cardiovascular activity, skin conductance, and somatic activity feed a feature extraction stage, which drives emotion intensity recognition and emotion classification.

For measuring the facial expression of the person at every frame, we used the NEVEN Vision Facial Feature Tracker, a real-time face-tracking solution. This software uses patented technology to track 22 points on a face at the rate of 30 frames per second with verification rates of over 95% (Fig. 3).

Fig. 3. The points tracked by NEVEN Vision real-time face tracking.

By plugging our videos into the NEVEN Vision software using Vizard 2.53, a Python-based virtual environment development platform, we extracted 53 measurements of head-centered coordinates of the face at every frame as well as the confidence rating of the face-tracking algorithm. All the points were measured in a two-dimensional head-centered coordinate system normalized to the apparent size of the head on the screen; the coordinates were not affected by rigid head movements, and scaled well to different heads. These 53 points included eight points around the contour of the mouth (three on each lip, and one at each corner), three points on each eye (including the pupil), two points on each eyebrow, and four points around the nose.
Pitch, yaw, and roll of the face, as well as the aspect ratio of the mouth and each eye, the coordinates of the face in the image (a loose proxy for posture), and the scale of the face (which is inversely proportional to the distance from the face to the camera, another indication of posture) were also included. Our real-time face-tracking solution required no training, face-markers, or calibration for individual faces, and collected data at 30 Hz. When the confidence rating of the face-tracking algorithm fell below 40%, the data were discarded and the software was told to re-acquire the face from scratch. We used the software on the pre-recorded videos because the experiment in which the subjects had their faces recorded occurred months before the current study. However, given that the NEVEN Vision software locates the coordinates at 30 Hz, the models we developed would currently work in real-time (Bailenson et al., 2006).
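The confidence-gating step can be expressed compactly. NEVEN Vision's API is proprietary and not documented here, so the sketch below models the tracker as any stream of (points, confidence) pairs and uses a placeholder re-acquisition callback; only the 40% threshold and the discard behavior come from the text above.

```python
from typing import Iterable, Iterator, Sequence, Tuple

Frame = Tuple[Sequence[float], float]  # (53 head-centered coordinates, confidence in [0, 1])

def gate_by_confidence(frames: Iterable[Frame],
                       threshold: float = 0.40,
                       reacquire=lambda: None) -> Iterator[Sequence[float]]:
    """Yield only frames whose tracking confidence meets the threshold.

    `reacquire` stands in for the tracker's "re-acquire the face from scratch"
    call; the real NEVEN Vision interface is not public, so this is a placeholder.
    """
    for points, confidence in frames:
        if confidence < threshold:
            reacquire()   # ask the tracker to re-initialize on the next frame
            continue      # discard the low-confidence measurement
        yield points

# Usage with a fake 30 Hz stream of already-normalized coordinates:
fake_stream = [([0.0] * 53, 0.95), ([0.0] * 53, 0.30), ([0.0] * 53, 0.80)]
kept = list(gate_by_confidence(fake_stream))
print(len(kept))  # 2 frames survive the 40% confidence gate
```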

In our final datasets, we included the 53 NEVEN Vision library facial data points. We excluded the confidence rating, as it is not a priori a meaningful predictor of emotion. We also included six new features which we created heuristically from linear and non-linear combinations of the NEVEN Vision coordinates: the difference between the right mouth corner and right eye corner, the difference between the left mouth corner and left eye corner, the mouth height, the mouth width, the upper lip curviness (defined as the difference between the right upper lip Y and the upper lip center Y plus the difference between the left upper lip Y and the upper lip center Y), and the mouth aspect ratio divided by the product of the left and right eye aspect ratios. We analyzed the 2 s history for those 59 features, computing the averages, the velocities, and the variances for each one of them. This totaled 236 facial features (59 instantaneous values, 59 averaged positions, 59 averaged velocities, and 59 averaged variances) used as inputs to the models. Finally, we added the 15 physiological and somatic measures utilized by Mauss et al. (2006). So in total, there were 251 features used (236 facial features and 15 physiological features). A complete list of these features can be found in Appendix A.
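A hedged sketch of this windowed feature construction follows, assuming the per-frame tracker output has already been loaded into a pandas DataFrame; the column names and the synthetic data are illustrative, not NEVEN Vision's actual output. At 20 frames per second, the 2 s history corresponds to a 40-frame window.

```python
import numpy as np
import pandas as pd

FPS = 20
WINDOW = 2 * FPS  # 2 s history = 40 frames

def add_heuristic_features(df: pd.DataFrame) -> pd.DataFrame:
    """Add a few of the hand-crafted features; column names are assumptions."""
    out = df.copy()
    out["mouth_width"] = df["mouth_right_x"] - df["mouth_left_x"]
    out["mouth_height"] = df["lower_lip_center_y"] - df["upper_lip_center_y"]
    out["upper_lip_curviness"] = ((df["upper_lip_right_y"] - df["upper_lip_center_y"])
                                  + (df["upper_lip_left_y"] - df["upper_lip_center_y"]))
    return out

def windowed_features(df: pd.DataFrame) -> pd.DataFrame:
    """Instantaneous values plus 2 s rolling mean, velocity, and variance."""
    mean = df.rolling(WINDOW).mean().add_suffix("_avg")
    velocity = df.diff().rolling(WINDOW).mean().add_suffix("_vel")
    variance = df.rolling(WINDOW).var().add_suffix("_var")
    return pd.concat([df, mean, velocity, variance], axis=1)

# Example on synthetic tracker output (a few illustrative columns only):
cols = ["mouth_right_x", "mouth_left_x", "upper_lip_center_y",
        "lower_lip_center_y", "upper_lip_right_y", "upper_lip_left_y"]
frames = pd.DataFrame(np.random.default_rng(1).normal(size=(200, len(cols))), columns=cols)
features = windowed_features(add_heuristic_features(frames)).dropna()
print(features.shape)
```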
6. Relevant feature extraction

We applied Chi-square feature selection (which evaluates the contribution of each feature by computing the value of the Chi-squared statistic with respect to the emotion ratings) using the freely distributed machine learning software package Waikato Environment for Knowledge Analysis (WEKA; Witten and Frank, 2005) to find the most relevant features for the amusement dataset. For this experiment, we processed 19,625 instances and discretized the experts' ratings into two classes (amused and neutral), where each rating above 3 was considered amused and each rating below 0.5 was considered neutral. We repeated the same methodology to find the most relevant features for the sadness dataset. The top 20 results are shown in Tables 1 and 2.

Table 1. Chi-square values for top 20 features in the amusement analysis. Features, in rank order: average difference right mouth corner-eye corner; average difference left mouth corner-eye corner; difference right mouth corner-eye corner; average face left mouth corner Y; difference left mouth corner-eye corner; face left mouth corner Y; somatic activity; average mouth aspect ratio divided by eyes aspect ratio; average face right mouth corner Y; face right mouth corner Y; average upper lip curviness; finger temperature; mouth aspect ratio divided by eyes aspect ratio; upper lip curviness; average face left upper lip Y; average mouth aspect ratio; average face right upper lip Y; left upper lip Y; average face left nostril Y; mouth aspect ratio.

Table 2. Chi-square values for top 20 features in the sadness analysis. Features, in rank order: finger temperature; skin conductance level; average face Y; average face X; face X; face Y; average face scale; average face Euler Y; average upper lip curviness; face scale; face Euler Y; heart rate; average face left nostril Y; ear pulse transit time; average face left mouth corner Y; average difference left mouth corner-eye corner; face left nostril Y; average face nose tip Y; finger pulse transit time; ear pulse amplitude.

For amusement, the facial characteristics were the most informative (compared to the physiological measures) according to Chi-square, with only two of the physiological features appearing in the top 20. In contrast, for predicting sadness the physiological features seemed to play much more of a role, with six of the top 20 features being physiological. This indicates that the facial features by themselves are not as strong an indicator of sadness as the physiological characteristics. It is important to note that while the Chi-square analysis is useful for understanding the features which contribute most to the model fit, we used all facial and physiological features when building our models.

7. Predicting emotion intensity

We began with the more challenging task of assessing emotion intensity before turning to the more commonly reported task of classifying emotion. Research by Schiano et al. (2004) has demonstrated that people perceive the emotions of others in a continuous fashion, and that merely representing emotions in a binary (on/off) manner is problematic. Consequently, we used linear regression and neural networks to predict the experts' ratings in a continuous manner for every second of the face video. We used the WEKA linear regression function with the Akaike criterion for model selection and no attribute selection. The neural nets were Multilayer Perceptrons configured with two hidden layers. Two-fold cross-validation was performed on each dataset using two non-overlapping sets of subjects.
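The original models were built in WEKA; the fragment below is only a rough scikit-learn analogue showing the structure of the intensity regression, with the two folds formed from non-overlapping sets of subjects rather than shuffled frames. The feature matrix, ratings, and subject labels are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 251))           # 236 facial + 15 physiological features
y = rng.uniform(0, 8, size=2000)           # second-by-second coder ratings (placeholder)
subjects = rng.integers(0, 41, size=2000)  # subject id for each instance

cv = GroupKFold(n_splits=2)                # two folds of non-overlapping subjects
scores = []
for train_idx, test_idx in cv.split(X, y, groups=subjects):
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    scores.append(np.corrcoef(pred, y[test_idx])[0, 1])  # correlation coefficient

print("mean correlation across folds:", np.mean(scores))
```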

We performed separate tests for both sadness and amusement, using face video alone, physiological features alone, and face video in conjunction with the physiological measures to predict the expert ratings. All classifiers were trained and tested on the entire nine minutes of face video data. Our intention in doing so was to demonstrate how effective a system predicting emotion intensity from a camera alone could be, and to allow application designers to assess the rewards of building in physiological measures. The results are shown in Table 3.

Table 3. Linear classification results for all-subject datasets, reporting the correlation coefficient, mean absolute error, and mean squared error for each combination of input category (face only, face and physio, physio only), regression type (linear regression, neural network), and emotion (amusement, sadness).

As can be seen, the classifiers using only the facial features performed substantially better than the classifiers using only the physiological features, having correlation coefficients on average nearly 20% higher. Yet combining the two sets of data yielded the best results; with both facial and physiological data included, the correlation coefficients of the linear regressions increased by 5% over the next best model in the amusement dataset and by 7% in the sadness dataset, and the neural networks performed slightly better as well. Table 3 also demonstrates that predicting the intensity of sadness is not as easy as predicting the intensity of amusement. The correlation coefficients of the sadness neural nets were consistently 20-40% lower than those for the amusement classifiers. One possible explanation for the discrepancy between the models' performance on amusement and sadness, however, is that the amusement dataset had a higher mean rating (S.D. = 1.50) than the sadness dataset (S.D. = 0.73). This difference was significant in a paired t-test, t(41) = 1.23, p < 0.05, and could partly account for the lower performance of the sadness classifiers; given the lower frequency and intensity of the rated sadness in our subject pool, the models may have had more difficulty in detecting sadness.

8. Emotion classification

The previous section presented models predicting linear amounts of amusement and sadness. This is unique because most work predicting facial expressions of emotion has not utilized a training set rich enough to allow such a fine-grained analysis. However, in order to compare the current work to previous models, which often presented much higher statistical fits than those we presented above with the linear intensity levels of emotion, we processed our dataset to discretize the expert ratings for amusement and sadness. In the amusement datasets, all the expert ratings less than or equal to 0.5 were set to neutral, and ratings of 3 or higher were discretized to become amused. In the sadness datasets, all the expert ratings less than or equal to 0.5 were discretized to become neutral, and ratings of 1.5 or higher were discretized to become sad. All the other instances (intermediate ratings) were discarded in these new datasets. Other threshold values (e.g., everything below 1.0 being neutral) were experimented with, but the thresholds of 0.5 and 3 for amusement and 0.5 and 1.5 for sadness yielded the best fits in our models. The percentage of amused instances in the final amused dataset was 15.2% and the percentage of sad instances in the final sad dataset was 23.9%.
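The discretization rule itself is only a few lines of code. The helper below follows the thresholds stated above (0.5/3 for amusement, 0.5/1.5 for sadness) and drops intermediate ratings; the example array of ratings is invented.

```python
import numpy as np

def discretize(ratings: np.ndarray, neutral_max: float, emotion_min: float):
    """Map continuous 0-8 ratings to binary labels, discarding intermediate values.

    Returns (labels, keep_mask): labels are 0 = neutral, 1 = emotion present.
    """
    ratings = np.asarray(ratings, dtype=float)
    keep = (ratings <= neutral_max) | (ratings >= emotion_min)
    labels = (ratings[keep] >= emotion_min).astype(int)
    return labels, keep

ratings = np.array([0.0, 0.4, 1.2, 2.7, 3.5, 6.0])
amused, kept = discretize(ratings, neutral_max=0.5, emotion_min=3.0)
print(amused, kept)  # the intermediate ratings (1.2, 2.7) are discarded
```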
We applied a Support Vector Machine classifier with a linear kernel and a LogitBoost classifier with a decision stump weak learner using 40 iterations (Freund and Schapire, 1996; Friedman et al., 2000) to each dataset using the WEKA machine learning software package (Witten and Frank, 2005). As in the linear analyses, we split the data into two non-overlapping datasets and performed two-fold cross-validation on all our classifiers. In all the experiments we conducted, we calculated the precision, the recall, and the F1 measure, which is defined as the harmonic mean of precision and recall. For a multi-class classification problem with classes A_i, i = 1, ..., M, each class A_i having a total of N_i instances in the dataset, if the classifier correctly predicts C_i instances for A_i and predicts C'_i instances to be in A_i that in fact belong to other classes (misclassifies them), then these measures are defined as follows (Fig. 4):

Precision = C_i / (C_i + C'_i), Recall = C_i / N_i, F1 = (2 · Precision · Recall) / (Precision + Recall).

Fig. 4. The formulas for precision, recall, and F1.

Maximizing precision or recall individually does not result in a perfect classifier; F1 gives the optimal accuracy score by relating precision to recall.
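These per-class measures can be computed directly from counts; the short sketch below does so and cross-checks the result against scikit-learn's implementation. The labels and predictions are invented placeholders.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def per_class_metrics(y_true, y_pred, label):
    """Precision, recall, and F1 for one class, following the definitions above."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    c_i = np.sum((y_pred == label) & (y_true == label))      # correctly predicted instances
    c_prime = np.sum((y_pred == label) & (y_true != label))  # instances misclassified into the class
    n_i = np.sum(y_true == label)                            # instances that truly belong to the class
    precision = c_i / (c_i + c_prime)
    recall = c_i / n_i
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

y_true = [0, 0, 1, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0, 1]
print(per_class_metrics(y_true, y_pred, label=1))
print(precision_recall_fscore_support(y_true, y_pred, labels=[1]))
```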

The results of our analyses are shown in Table 4.

Table 4. Discrete classification results for all-subject datasets, reporting neutral precision, emotion precision, neutral F1 measure, and emotion F1 measure for each combination of input category (face only, face and physio, physio only), classifier (SVM, LogitBoost), and emotion (amusement, sadness).

In these analyses both classifiers performed equally well, with precisions nearing 70% for amusement, 50% for sadness, and 94% for neutral in the face and physiological datasets, a substantial improvement over the precisions of the linear classifiers. We noted, too, that the addition of the physiological features offered much greater improvement in the discrete classifiers than in the linear classifiers. The addition of physiological features increased the SVM sadness precision by over 15% and the LogitBoost amusement precision by 9%. Also, just as in the linear analyses, the precisions of the sadness classifiers were consistently over 15% worse than the precisions of the amusement classifiers.

9. Experimental results within subjects

In addition to creating general models applicable to any subject, we ran experiments in which we trained and tested individual models specifically for each one of the 41 subjects. We expected the linear prediction and the classification accuracy to be better within the same subject, since the models are optimized for the facial characteristics of each specific subject as well as his or her level of expressivity.

9.1. Predicting continuous ratings within subjects

We built 41 different Multilayer Perceptron neural nets with two hidden layers and individualized them by training and testing them only within the same subject. We chose Multilayer Perceptron neural nets over regression formulas as the previous analyses indicated better fits with the neural nets. For each subject, we used two-fold cross-validation for training and testing. In Table 5, we present the average results of the 41 neural nets.

Table 5. Linear classification results for individual subjects (average results), reporting the correlation coefficient, mean absolute error, and mean squared error for face only, face and physio, and physio only models on amusement and sadness.

Using these idiosyncratic methods of building specialized models for particular subjects, we noted a number of important trends. First, building specialized models for each subject significantly increased the prediction accuracy. With sadness in particular, we saw an improvement in the correlation coefficient of more than 50%.
This is especially remarkable given that the input set was reduced roughly 20-fold: the all-subject training sets had on average 12,184 instances, while the individual training sets had on average only 595 instances.

So even though the within-subject models only had about 300 trials to train on, the fits remained quite high. Second, like the universal models, using physiological measures improved the fit for all models. Interestingly, the classifier for sadness using only physiological data slightly out-performed the classifier using only facial features. This supports our earlier finding that physiological features seem to be more important in the detection of sadness than of amusement. The mean absolute error and the mean squared error were not comparable between the amusement and sadness cases, however, since the mean ratings of the two datasets were unequal; the majority of expert ratings of sadness did not go beyond 3.5 on the scale, while the amusement ratings fluctuated across the full scale from 0 to 8.

9.2. Classification results within subjects

We performed a similar analysis by building an individual Support Vector Machine classifier with a linear kernel for each one of the 41 subjects. In Table 6 we present those results.

Table 6. Discrete classification results for individual subjects (average results), reporting neutral accuracy, emotion accuracy, neutral F1 measure, and emotion F1 measure for face only, face and physio, and physio only models on amusement and sadness.

As can be seen by comparing the prediction success in Table 6 to all other tables in the paper, the discrete classifiers designed to predict emotion within subjects performed by far the best, with average accuracies nearing 95%.

10. Experimental results by gender

Given that previous research has identified systematic gender differences in facial expressions of emotion, with women appearing somewhat more accurate in expressing some emotions than men (see Hall, 1984, for a review; Timmers et al., 1998; Kring, 2000), we separated our dataset into two parts, with one part containing only male subjects (n = 17) and the other part only female subjects (n = 24). We created individual classifiers for each of the datasets in order to compare their performance. We expected the linear prediction and the classification accuracy to be better for the female classifiers given the greater facial expressiveness of women. We also performed relevant feature extraction on each of the datasets to examine any differences in the most informative features between the two genders.

10.1. Relevant feature extraction within gender

We applied Chi-square feature selection using the WEKA machine learning software package (Witten and Frank, 2005) to find the most relevant features for both the male and female amusement datasets. For this experiment, we discretized the experts' ratings into two classes (amused and neutral), where each rating above 3 was considered amused and each rating below 0.5 was considered neutral. We repeated the same methodology to find the most relevant features for the male and female sadness datasets. The top 20 results for each gender are shown in Tables 7 and 8.

Table 7. Chi-square values for top 20 features in the male and female amusement analyses. Male features, in rank order: average difference left mouth corner-eye corner; difference left mouth corner-eye corner; average left mouth corner Y; average difference right mouth corner-eye corner; somatic activity; left mouth corner Y; skin conductance level; average upper lip curviness; difference right mouth corner-eye corner; finger temperature; upper lip curviness; average right mouth corner Y; average left nostril Y; average left upper lip Y; composite of cardiac activation; right mouth corner Y; left upper lip Y; mouth aspect ratio divided by eyes aspect ratio; average right upper lip Y; left nostril Y. Female features, in rank order: average left mouth corner Y; average mouth aspect ratio divided by eyes aspect ratio; finger temperature; average difference left mouth corner-eye corner; left mouth corner Y; somatic activity; average right mouth corner Y; right mouth corner Y; difference left mouth corner-eye corner; average left upper lip Y; average mouth aspect ratio; average right upper lip Y; average mouth height; mouth aspect ratio divided by eyes aspect ratio; left upper lip Y; mouth aspect ratio; average upper lip curviness; mouth height; right upper lip Y; skin conductance level.

Table 8. Chi-square values for top 20 features in the male and female sadness analyses. Male features, in rank order: skin conductance level; finger temperature; average X position; X position; average Euler Y; Y position; average scale; average Y position; Euler Y; scale; heart rate; pre-ejection period; average left pupil X; ear pulse transit time; ear pulse amplitude; diastolic blood pressure; average upper lip curviness; average left nostril Y; finger pulse transit time; average left eye aspect ratio. Female features, in rank order: average left mouth corner Y; average mouth aspect ratio divided by eyes aspect ratio; finger temperature; average difference left mouth corner-eye corner; left mouth corner Y; somatic activity; average right mouth corner Y; right mouth corner Y; difference left mouth corner-eye corner; average left upper lip Y; average mouth aspect ratio; average right upper lip Y; average mouth height; mouth aspect ratio divided by eyes aspect ratio; left upper lip Y; mouth aspect ratio; average upper lip curviness; mouth height; right upper lip Y; skin conductance level.

We observed that in the male dataset the physiological measures were more informative according to Chi-square than in the female dataset.
This is especially noticeable in the sadness analysis, where 8 of the top 20 male features are physiological whereas only 3 of the top 20 female features are physiological.

10.2. Predicting continuous ratings within gender

We created separate Multilayer Perceptron neural network models with two hidden layers for each gender and measured the correlation coefficient, mean absolute error, and root mean squared error. As in previous analyses, the subjects were split into two non-overlapping datasets in order to perform two-fold cross-validation on all classifiers. The results are shown in Table 9.

Table 9. Linear classification results for gender-specific datasets, reporting the correlation coefficient, mean absolute error, and mean squared error for face only, face and physio, and physio only models, for each emotion (amusement, sadness) and gender (male, female).

As can be seen, the female classifiers generally yielded a greater correlation coefficient, suggesting that our models more accurately predict emotions in women than in men. Also, adding the physiological features increased the correlation coefficient in males by almost 20%, whereas it increased the correlation coefficient in females by only 10%. This indicates that physiological features may be more important in detecting male emotional responses than female responses.

10.3. Classification results by gender

We performed a similar analysis by building an individual Support Vector Machine classifier with a linear kernel for both males and females. As in the other classifications, two-fold cross-validation was used. We present those results in Table 10.

Table 10. Discrete classification results for gender-specific datasets, reporting neutral accuracy, emotion accuracy, neutral F1 measure, and emotion F1 measure for face only, face and physio, and physio only models, for each emotion (amusement, sadness) and gender (male, female).

Again we see a significantly higher accuracy in our female models than in our male models. Interestingly, the only classifier that performed better in the male dataset than in the female dataset was the sadness classifier using only physiological data. Also, when adding the physiological data to the facial data, we saw improvements in performance only in the male classifiers. These results support the findings of the Chi-square analysis, suggesting physiological data are more important for males than for females.

11. Conclusion and future work

We have presented a real-time system for emotion recognition and showed that this system is accurate and easy to implement. The present study is unique for a number of reasons, perhaps most notably because of the unusually rich dataset. A relatively large number of subjects watched videos designed to make them feel amused or sad while having their facial and physiological responses recorded, and we then produced second-by-second ratings of the intensity with which they expressed amusement and sadness using trained coders. By having this level of detail in both input and output, we were able to make a number of important advances in our learning algorithms.

11.1. Summary of findings

First, we demonstrated the ability to find good statistical fits on algorithms to predict emotion from the natural facial expressions of everyday people, rather than from discrete and deliberately created facial expressions of trained actors, as in many previous studies. This is important because people in their day-to-day lives may not produce extreme facial configurations such as those displayed by actors used in typical experimental stimuli. Consequently, previous work may be overestimating the utility of emotion prediction based on the novelty of the stimulus set. Second, in the current study, we demonstrated that amusement is more easily detected than sadness, perhaps due to the difficulty in eliciting true sadness. In our dataset, facial expressions of people watching sad movies and receiving high sadness ratings tended not to have the stereotypical long face, but were predominantly characterized by a lack of movement or of any expressivity at all.

Consequently, we are demonstrating the importance of examining people experiencing emotions in a naturalistic setting. Previous work has also demonstrated that sadness is a difficult emotion to capture using analysis of facial feature points (Deng et al., 2006). Third, we provided evidence that for applications in which a single user occupies an interface over time, models tailored to that user show significant advances over more general models. While in many ways this is intuitive, quantifying the exact level of improvement is an important first step in designing these systems. Specifically for categorizing emotions, the tailored individual models performed extremely well compared to the other models. Fourth, we have shown that both amusement and sadness are more easily detected in female subjects than in male subjects. This finding is consistent with the research by Hall (1984) suggesting that women are more facially expressive than men, and provides new quantitative data for social scientists studying the differences in emotional response between individuals of opposite gender. Fifth, we have demonstrated that by incorporating measures of physiological responding into our model, we get more accurate predictions than when using the face alone. Indeed, when we analyze physiological features as the sole inputs to the model, the fit is often extremely high at predicting the coded emotion ratings of the face. Such measurements can be used in real systems with relatively easy installation of sensors (e.g., on a person's chair or on the steering wheel of a car). In fact, the Chi-square analysis indicates that some of the physiological measures outperformed facial tracking in the detection of sadness, especially for males. Given that real-time computer vision algorithms are not yet as reliable as physiological measurement techniques in terms of consistent performance, augmenting facial tracking with physiological data may be crucial.

11.2. Limitations and future work

Of course, there are a number of limitations to the current work. First, our models' accuracy is closely related to the quality of the vision library that we are using as well as the accuracy of our physiological measures. As these tools improve, our system will become much more useful. Moreover, while the psychologists trained to code amusement and sadness demonstrated high inter-coder reliability, it could be the case that their ratings were not actually picking up the true emotion but were picking up on other types of behavioral artifacts. Our model is only as good as the input and output used to train it, and while we are confident that this dataset is more robust than most that have been used previously, there are many ways to improve our measurements. Second, we only examined two emotions, while most models posit there are many more than two emotions (Ekman and Friesen, 1978). In pilot testing, we examined the videos of subjects and determined that there were very few instances of the other basic emotions, such as fear, disgust, and surprise. Consequently, we decided to focus on creating robust models which were able to capture the two oppositely valenced emotions which occurred most frequently in our dataset. We also decided to begin with the most conservative models, which were binary comparisons between amused and neutral and between sad and neutral, rather than a general comparison of all emotions.
We acknowledge, however, that these decisions limit the scope of our experiment. In future work, we can expand the models to include other emotions and to compare emotions directly. Third, our study was based upon coders' labels of subjects' emotions. Thus, although we are confident in the validity of our coders' ratings based upon their high inter-coder reliability, we cannot claim to be detecting the actual emotions of sadness and amusement; rather, we can only claim to be detecting the expressions of sadness and amusement as evaluated by coders. A possibility for future work would be to repeat the study using reports of emotion from the subjects themselves rather than coders' ratings. Fourth, some of the algorithms in our study depend upon physiological features collected through the use of electrodes and transducers, which may be too intrusive for some applications. In future work, alternate ways of obtaining physiological data could be explored. Finally, all of our results are tied to the specific learning algorithms we utilized as well as to the ways in which we divided the data. The fact that we discarded any data points rated between 0.5 and 3 in our discrete amusement datasets and between 0.5 and 1.5 in our discrete sadness datasets makes our models more applicable to subjects with greater facial motion, since subjects whose expressions tend to fall in the intermediate range are less represented in the data. It may be the case that different techniques of modeling would produce different patterns of results. In the future, we plan to use our emotion-recognizer model for analyzing data from other studies; for example, assessing how emotion is related to driving safety and how emotions can affect social interaction in a negotiation setting.

Acknowledgments

We thank Rosalind Picard, Jonathan Gratch, and Alex Pentland for helpful suggestions in early stages of this work. Moreover, we thank Dan Merget for software development, Amanda Luther and Ben Trombley-Shapiro for assistance in data analysis, and Keith Avila, Bryan Kelly, Alexia Nielsen, and Alice Kim for their help in processing video. This work was sponsored in part by an NSF grant, as well as a grant from OMRON Corporation.


More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training. Supplementary Figure 1 Behavioral training. a, Mazes used for behavioral training. Asterisks indicate reward location. Only some example mazes are shown (for example, right choice and not left choice maze

More information

This is the accepted version of this article. To be published as : This is the author version published as:

This is the accepted version of this article. To be published as : This is the author version published as: QUT Digital Repository: http://eprints.qut.edu.au/ This is the author version published as: This is the accepted version of this article. To be published as : This is the author version published as: Chew,

More information

Predicting Breast Cancer Survival Using Treatment and Patient Factors

Predicting Breast Cancer Survival Using Treatment and Patient Factors Predicting Breast Cancer Survival Using Treatment and Patient Factors William Chen wchen808@stanford.edu Henry Wang hwang9@stanford.edu 1. Introduction Breast cancer is the leading type of cancer in women

More information

Development of 2-Channel Eeg Device And Analysis Of Brain Wave For Depressed Persons

Development of 2-Channel Eeg Device And Analysis Of Brain Wave For Depressed Persons Development of 2-Channel Eeg Device And Analysis Of Brain Wave For Depressed Persons P.Amsaleka*, Dr.S.Mythili ** * PG Scholar, Applied Electronics, Department of Electronics and Communication, PSNA College

More information

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals.

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. Bandara G.M.M.B.O bhanukab@gmail.com Godawita B.M.D.T tharu9363@gmail.com Gunathilaka

More information

Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification. and. Evidence for a Face Superiority Effect. Nila K Leigh

Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification. and. Evidence for a Face Superiority Effect. Nila K Leigh 1 Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification and Evidence for a Face Superiority Effect Nila K Leigh 131 Ave B (Apt. 1B) New York, NY 10009 Stuyvesant High School 345 Chambers

More information

Introduction to affect computing and its applications

Introduction to affect computing and its applications Introduction to affect computing and its applications Overview What is emotion? What is affective computing + examples? Why is affective computing useful? How do we do affect computing? Some interesting

More information

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS Ayako KATOH*, Yasuhiro FUKUI**

More information

Assessment of Reliability of Hamilton-Tompkins Algorithm to ECG Parameter Detection

Assessment of Reliability of Hamilton-Tompkins Algorithm to ECG Parameter Detection Proceedings of the 2012 International Conference on Industrial Engineering and Operations Management Istanbul, Turkey, July 3 6, 2012 Assessment of Reliability of Hamilton-Tompkins Algorithm to ECG Parameter

More information

Statistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender

Statistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender Proc. IEEE Int. Conf. on Systems, Man and Cybernetics (SMC 2004), Den Haag, pp. 2203-2208, IEEE omnipress 2004 Statistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender

More information

Recognition of facial expressions using Gabor wavelets and learning vector quantization

Recognition of facial expressions using Gabor wavelets and learning vector quantization Engineering Applications of Artificial Intelligence 21 (2008) 1056 1064 www.elsevier.com/locate/engappai Recognition of facial expressions using Gabor wavelets and learning vector quantization Shishir

More information

PHYSIOLOGICAL RESEARCH

PHYSIOLOGICAL RESEARCH DOMAIN STUDIES PHYSIOLOGICAL RESEARCH In order to understand the current landscape of psychophysiological evaluation methods, we conducted a survey of academic literature. We explored several different

More information

Classification of EEG signals in an Object Recognition task

Classification of EEG signals in an Object Recognition task Classification of EEG signals in an Object Recognition task Iacob D. Rus, Paul Marc, Mihaela Dinsoreanu, Rodica Potolea Technical University of Cluj-Napoca Cluj-Napoca, Romania 1 rus_iacob23@yahoo.com,

More information

General Psych Thinking & Feeling

General Psych Thinking & Feeling General Psych Thinking & Feeling Piaget s Theory Challenged Infants have more than reactive sensing Have some form of discrimination (reasoning) 1-month-old babies given a pacifier; never see it Babies

More information

Medical Electronics Dr. Neil Townsend Michaelmas Term 2001 ( The story so far.

Medical Electronics Dr. Neil Townsend Michaelmas Term 2001 (  The story so far. Medical Electronics Dr Neil Townsend Michaelmas Term 2001 (wwwrobotsoxacuk/~neil/teaching/lectures/med_elec) Blood Pressure has been measured (in some way) for centuries However, clinical interpretation

More information

Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners

Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Hatice Gunes and Maja Pantic Department of Computing, Imperial College London 180 Queen

More information

MODULE 41: THEORIES AND PHYSIOLOGY OF EMOTION

MODULE 41: THEORIES AND PHYSIOLOGY OF EMOTION MODULE 41: THEORIES AND PHYSIOLOGY OF EMOTION EMOTION: a response of the whole organism, involving 1. physiological arousal 2. expressive behaviors, and 3. conscious experience A mix of bodily arousal

More information

1. INTRODUCTION. Vision based Multi-feature HGR Algorithms for HCI using ISL Page 1

1. INTRODUCTION. Vision based Multi-feature HGR Algorithms for HCI using ISL Page 1 1. INTRODUCTION Sign language interpretation is one of the HCI applications where hand gesture plays important role for communication. This chapter discusses sign language interpretation system with present

More information

Recognising Emotions from Keyboard Stroke Pattern

Recognising Emotions from Keyboard Stroke Pattern Recognising Emotions from Keyboard Stroke Pattern Preeti Khanna Faculty SBM, SVKM s NMIMS Vile Parle, Mumbai M.Sasikumar Associate Director CDAC, Kharghar Navi Mumbai ABSTRACT In day to day life, emotions

More information

EBCC Data Analysis Tool (EBCC DAT) Introduction

EBCC Data Analysis Tool (EBCC DAT) Introduction Instructor: Paul Wolfgang Faculty sponsor: Yuan Shi, Ph.D. Andrey Mavrichev CIS 4339 Project in Computer Science May 7, 2009 Research work was completed in collaboration with Michael Tobia, Kevin L. Brown,

More information

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation The Computer Assistance Hand Gesture Recognition system For Physically Impairment Peoples V.Veeramanikandan(manikandan.veera97@gmail.com) UG student,department of ECE,Gnanamani College of Technology. R.Anandharaj(anandhrak1@gmail.com)

More information

THE USE OF MULTIVARIATE ANALYSIS IN DEVELOPMENT THEORY: A CRITIQUE OF THE APPROACH ADOPTED BY ADELMAN AND MORRIS A. C. RAYNER

THE USE OF MULTIVARIATE ANALYSIS IN DEVELOPMENT THEORY: A CRITIQUE OF THE APPROACH ADOPTED BY ADELMAN AND MORRIS A. C. RAYNER THE USE OF MULTIVARIATE ANALYSIS IN DEVELOPMENT THEORY: A CRITIQUE OF THE APPROACH ADOPTED BY ADELMAN AND MORRIS A. C. RAYNER Introduction, 639. Factor analysis, 639. Discriminant analysis, 644. INTRODUCTION

More information

Drive-reducing behaviors (eating, drinking) Drive (hunger, thirst) Need (food, water)

Drive-reducing behaviors (eating, drinking) Drive (hunger, thirst) Need (food, water) Instinct Theory: we are motivated by our inborn automated behaviors that generally lead to survival. But instincts only explain why we do a small fraction of our behaviors. Does this behavior adequately

More information

VIDEO SURVEILLANCE AND BIOMEDICAL IMAGING Research Activities and Technology Transfer at PAVIS

VIDEO SURVEILLANCE AND BIOMEDICAL IMAGING Research Activities and Technology Transfer at PAVIS VIDEO SURVEILLANCE AND BIOMEDICAL IMAGING Research Activities and Technology Transfer at PAVIS Samuele Martelli, Alessio Del Bue, Diego Sona, Vittorio Murino Istituto Italiano di Tecnologia (IIT), Genova

More information

Study on Aging Effect on Facial Expression Recognition

Study on Aging Effect on Facial Expression Recognition Study on Aging Effect on Facial Expression Recognition Nora Algaraawi, Tim Morris Abstract Automatic facial expression recognition (AFER) is an active research area in computer vision. However, aging causes

More information

Assigning B cell Maturity in Pediatric Leukemia Gabi Fragiadakis 1, Jamie Irvine 2 1 Microbiology and Immunology, 2 Computer Science

Assigning B cell Maturity in Pediatric Leukemia Gabi Fragiadakis 1, Jamie Irvine 2 1 Microbiology and Immunology, 2 Computer Science Assigning B cell Maturity in Pediatric Leukemia Gabi Fragiadakis 1, Jamie Irvine 2 1 Microbiology and Immunology, 2 Computer Science Abstract One method for analyzing pediatric B cell leukemia is to categorize

More information

Facial Behavior as a Soft Biometric

Facial Behavior as a Soft Biometric Facial Behavior as a Soft Biometric Abhay L. Kashyap University of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore, MD 21250 abhay1@umbc.edu Sergey Tulyakov, Venu Govindaraju University at Buffalo

More information

Situation Reaction Detection Using Eye Gaze And Pulse Analysis

Situation Reaction Detection Using Eye Gaze And Pulse Analysis Situation Reaction Detection Using Eye Gaze And Pulse Analysis 1 M. Indumathy, 2 Dipankar Dey, 2 S Sambath Kumar, 2 A P Pranav 1 Assistant Professor, 2 UG Scholars Dept. Of Computer science and Engineering

More information

Using simulated body language and colours to express emotions with the Nao robot

Using simulated body language and colours to express emotions with the Nao robot Using simulated body language and colours to express emotions with the Nao robot Wouter van der Waal S4120922 Bachelor Thesis Artificial Intelligence Radboud University Nijmegen Supervisor: Khiet Truong

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 11: Attention & Decision making Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis

More information

ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network

ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network International Journal of Electronics Engineering, 3 (1), 2011, pp. 55 58 ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network Amitabh Sharma 1, and Tanushree Sharma 2

More information

Thought Technology Ltd.

Thought Technology Ltd. Thought Technology Ltd. 8205 Montreal/ Toronto Blvd. Suite 223, Montreal West, QC H4X 1N1 Canada Tel: (800) 361-3651 ۰ (514) 489-8251 Fax: (514) 489-8255 E-mail: mail@thoughttechnology.com Webpage: http://www.thoughttechnology.com

More information

Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition

Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition , pp.131-135 http://dx.doi.org/10.14257/astl.2013.39.24 Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition SeungTaek Ryoo and Jae-Khun Chang School of Computer Engineering

More information

Moralization Through Moral Shock: Exploring Emotional Antecedents to Moral Conviction. Table of Contents

Moralization Through Moral Shock: Exploring Emotional Antecedents to Moral Conviction. Table of Contents Supplemental Materials 1 Supplemental Materials for Wisneski and Skitka Moralization Through Moral Shock: Exploring Emotional Antecedents to Moral Conviction Table of Contents 2 Pilot Studies 2 High Awareness

More information

Facial Event Classification with Task Oriented Dynamic Bayesian Network

Facial Event Classification with Task Oriented Dynamic Bayesian Network Facial Event Classification with Task Oriented Dynamic Bayesian Network Haisong Gu Dept. of Computer Science University of Nevada Reno haisonggu@ieee.org Qiang Ji Dept. of ECSE Rensselaer Polytechnic Institute

More information

Biceps Activity EMG Pattern Recognition Using Neural Networks

Biceps Activity EMG Pattern Recognition Using Neural Networks Biceps Activity EMG Pattern Recognition Using eural etworks K. Sundaraj University Malaysia Perlis (UniMAP) School of Mechatronic Engineering 0600 Jejawi - Perlis MALAYSIA kenneth@unimap.edu.my Abstract:

More information

Skin color detection for face localization in humanmachine

Skin color detection for face localization in humanmachine Research Online ECU Publications Pre. 2011 2001 Skin color detection for face localization in humanmachine communications Douglas Chai Son Lam Phung Abdesselam Bouzerdoum 10.1109/ISSPA.2001.949848 This

More information

Using Automated Facial Expression Analysis for Emotion and Behavior Prediction. Sun Joo Ahn. Jeremy Bailenson. Jesse Fox. Maria Jabon.

Using Automated Facial Expression Analysis for Emotion and Behavior Prediction. Sun Joo Ahn. Jeremy Bailenson. Jesse Fox. Maria Jabon. Paper to be presented at the National Communication Association s 95 th Annual Conference Using Automated Facial Expression Analysis for Emotion and Behavior Prediction Sun Joo Ahn Jeremy Bailenson Jesse

More information

Quantification of facial expressions using high-dimensional shape transformations

Quantification of facial expressions using high-dimensional shape transformations Journal of Neuroscience Methods xxx (2004) xxx xxx Quantification of facial expressions using high-dimensional shape transformations Ragini Verma a,, Christos Davatzikos a,1, James Loughead b,2, Tim Indersmitten

More information

1/12/2012. How can you tell if someone is experiencing an emotion? Emotion. Dr.

1/12/2012. How can you tell if someone is experiencing an emotion?   Emotion. Dr. http://www.bitrebels.com/design/76-unbelievable-street-and-wall-art-illusions/ 1/12/2012 Psychology 456 Emotion Dr. Jamie Nekich A Little About Me Ph.D. Counseling Psychology Stanford University Dissertation:

More information

Bio-sensing for Emotional Characterization without Word Labels

Bio-sensing for Emotional Characterization without Word Labels Bio-sensing for Emotional Characterization without Word Labels Tessa Verhoef 1, Christine Lisetti 2, Armando Barreto 3, Francisco Ortega 2, Tijn van der Zant 4, and Fokie Cnossen 4 1 University of Amsterdam

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information

Sociable Robots Peeping into the Human World

Sociable Robots Peeping into the Human World Sociable Robots Peeping into the Human World An Infant s Advantages Non-hostile environment Actively benevolent, empathic caregiver Co-exists with mature version of self Baby Scheme Physical form can evoke

More information

Electromyography II Laboratory (Hand Dynamometer Transducer)

Electromyography II Laboratory (Hand Dynamometer Transducer) (Hand Dynamometer Transducer) Introduction As described in the Electromyography I laboratory session, electromyography (EMG) is an electrical signal that can be recorded with electrodes placed on the surface

More information

IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 5, JULY

IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 5, JULY IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 5, JULY 2011 1057 A Framework for Automatic Human Emotion Classification Using Emotion Profiles Emily Mower, Student Member, IEEE,

More information

Person Perception. Forming Impressions of Others. Mar 5, 2012, Banu Cingöz Ulu

Person Perception. Forming Impressions of Others. Mar 5, 2012, Banu Cingöz Ulu Person Perception Forming Impressions of Others Mar 5, 2012, Banu Cingöz Ulu Person Perception person perception: how we come to know about others temporary states, emotions, intentions and desires impression

More information

Generalization of a Vision-Based Computational Model of Mind-Reading

Generalization of a Vision-Based Computational Model of Mind-Reading Generalization of a Vision-Based Computational Model of Mind-Reading Rana el Kaliouby and Peter Robinson Computer Laboratory, University of Cambridge, 5 JJ Thomson Avenue, Cambridge UK CB3 FD Abstract.

More information

The Regulation of Emotion

The Regulation of Emotion The Regulation of Emotion LP 8D Emotional Display 1 Emotions can be disruptive and troublesome. Negative feelings can prevent us from behaving as we would like to, but can so can positive feelings (page

More information

Classification and attractiveness evaluation of facial emotions for purposes of plastic surgery using machine-learning methods and R

Classification and attractiveness evaluation of facial emotions for purposes of plastic surgery using machine-learning methods and R Classification and attractiveness evaluation of facial emotions for purposes of plastic surgery using machine-learning methods and R erum 2018 Lubomír Štěpánek 1, 2 Pavel Kasal 2 Jan Měšťák 3 1 Institute

More information

Estimating Intent for Human-Robot Interaction

Estimating Intent for Human-Robot Interaction Estimating Intent for Human-Robot Interaction D. Kulić E. A. Croft Department of Mechanical Engineering University of British Columbia 2324 Main Mall Vancouver, BC, V6T 1Z4, Canada Abstract This work proposes

More information

Edge Based Grid Super-Imposition for Crowd Emotion Recognition

Edge Based Grid Super-Imposition for Crowd Emotion Recognition Edge Based Grid Super-Imposition for Crowd Emotion Recognition Amol S Patwardhan 1 1Senior Researcher, VIT, University of Mumbai, 400037, India ---------------------------------------------------------------------***---------------------------------------------------------------------

More information

Effect of Sensor Fusion for Recognition of Emotional States Using Voice, Face Image and Thermal Image of Face

Effect of Sensor Fusion for Recognition of Emotional States Using Voice, Face Image and Thermal Image of Face Effect of Sensor Fusion for Recognition of Emotional States Using Voice, Face Image and Thermal Image of Face Yasunari Yoshitomi 1, Sung-Ill Kim 2, Takako Kawano 3 and Tetsuro Kitazoe 1 1:Department of

More information

Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired

Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired Daniel McDuff Microsoft Research, Redmond, WA, USA This work was performed while at Affectiva damcduff@microsoftcom

More information

Information Processing During Transient Responses in the Crayfish Visual System

Information Processing During Transient Responses in the Crayfish Visual System Information Processing During Transient Responses in the Crayfish Visual System Christopher J. Rozell, Don. H. Johnson and Raymon M. Glantz Department of Electrical & Computer Engineering Department of

More information

MPEG-4 Facial Expression Synthesis based on Appraisal Theory

MPEG-4 Facial Expression Synthesis based on Appraisal Theory MPEG-4 Facial Expression Synthesis based on Appraisal Theory L. Malatesta, A. Raouzaiou, K. Karpouzis and S. Kollias Image, Video and Multimedia Systems Laboratory, National Technical University of Athens,

More information

Identification of Tissue Independent Cancer Driver Genes

Identification of Tissue Independent Cancer Driver Genes Identification of Tissue Independent Cancer Driver Genes Alexandros Manolakos, Idoia Ochoa, Kartik Venkat Supervisor: Olivier Gevaert Abstract Identification of genomic patterns in tumors is an important

More information

Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon

Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon Xiaoming Zhang, Qiang Wang and Yan Shi Abstract In the field of designs of interface and icons, as the skeuomorphism style fades

More information

Social Context Based Emotion Expression

Social Context Based Emotion Expression Social Context Based Emotion Expression Radosław Niewiadomski (1), Catherine Pelachaud (2) (1) University of Perugia, Italy (2) University Paris VIII, France radek@dipmat.unipg.it Social Context Based

More information

Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011

Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011 Feasibility Evaluation of a Novel Ultrasonic Method for Prosthetic Control ECE-492/3 Senior Design Project Fall 2011 Electrical and Computer Engineering Department Volgenau School of Engineering George

More information

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization Daniel McDuff (djmcduff@mit.edu) MIT Media Laboratory Cambridge, MA 02139 USA Abstract This paper demonstrates

More information

A framework for the Recognition of Human Emotion using Soft Computing models

A framework for the Recognition of Human Emotion using Soft Computing models A framework for the Recognition of Human Emotion using Soft Computing models Md. Iqbal Quraishi Dept. of Information Technology Kalyani Govt Engg. College J Pal Choudhury Dept. of Information Technology

More information

Running head: CULTURES 1. Difference in Nonverbal Communication: Cultures Edition ALI OMIDY. University of Kentucky

Running head: CULTURES 1. Difference in Nonverbal Communication: Cultures Edition ALI OMIDY. University of Kentucky Running head: CULTURES 1 Difference in Nonverbal Communication: Cultures Edition ALI OMIDY University of Kentucky CULTURES 2 Abstract The following paper is focused on the similarities and differences

More information

Emotions and Motivation

Emotions and Motivation Emotions and Motivation LP 8A emotions, theories of emotions 1 10.1 What Are Emotions? Emotions Vary in Valence and Arousal Emotions Have a Physiological Component What to Believe? Using Psychological

More information

Error Detection based on neural signals

Error Detection based on neural signals Error Detection based on neural signals Nir Even- Chen and Igor Berman, Electrical Engineering, Stanford Introduction Brain computer interface (BCI) is a direct communication pathway between the brain

More information

Noise-Robust Speech Recognition Technologies in Mobile Environments

Noise-Robust Speech Recognition Technologies in Mobile Environments Noise-Robust Speech Recognition echnologies in Mobile Environments Mobile environments are highly influenced by ambient noise, which may cause a significant deterioration of speech recognition performance.

More information

This is a repository copy of Facial Expression Classification Using EEG and Gyroscope Signals.

This is a repository copy of Facial Expression Classification Using EEG and Gyroscope Signals. This is a repository copy of Facial Expression Classification Using EEG and Gyroscope Signals. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/116449/ Version: Accepted Version

More information

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing Categorical Speech Representation in the Human Superior Temporal Gyrus Edward F. Chang, Jochem W. Rieger, Keith D. Johnson, Mitchel S. Berger, Nicholas M. Barbaro, Robert T. Knight SUPPLEMENTARY INFORMATION

More information

Learning What Others Like: Preference Learning as a Mixed Multinomial Logit Model

Learning What Others Like: Preference Learning as a Mixed Multinomial Logit Model Learning What Others Like: Preference Learning as a Mixed Multinomial Logit Model Natalia Vélez (nvelez@stanford.edu) 450 Serra Mall, Building 01-420 Stanford, CA 94305 USA Abstract People flexibly draw

More information

Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1. Influence of facial expression and skin color on approachability judgment

Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1. Influence of facial expression and skin color on approachability judgment Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1 Influence of facial expression and skin color on approachability judgment Federico Leguizamo Barroso California State University Northridge

More information

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1

Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Running head: HEARING-AIDS INDUCE PLASTICITY IN THE AUDITORY SYSTEM 1 Hearing-aids Induce Plasticity in the Auditory System: Perspectives From Three Research Designs and Personal Speculations About the

More information

Agents and Environments

Agents and Environments Agents and Environments Berlin Chen 2004 Reference: 1. S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Chapter 2 AI 2004 Berlin Chen 1 What is an Agent An agent interacts with its

More information

Emotions. These aspects are generally stronger in emotional responses than with moods. The duration of emotions tend to be shorter than moods.

Emotions. These aspects are generally stronger in emotional responses than with moods. The duration of emotions tend to be shorter than moods. LP 8D emotions & James/Lange 1 Emotions An emotion is a complex psychological state that involves subjective experience, physiological response, and behavioral or expressive responses. These aspects are

More information

Affective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results

Affective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results Affective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results Seppo J. Laukka 1, Antti Rantanen 1, Guoying Zhao 2, Matti Taini 2, Janne Heikkilä

More information

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS 1 KRISHNA MOHAN KUDIRI, 2 ABAS MD SAID AND 3 M YUNUS NAYAN 1 Computer and Information Sciences, Universiti Teknologi PETRONAS, Malaysia 2 Assoc.

More information

Emote to Win: Affective Interactions with a Computer Game Agent

Emote to Win: Affective Interactions with a Computer Game Agent Emote to Win: Affective Interactions with a Computer Game Agent Jonghwa Kim, Nikolaus Bee, Johannes Wagner and Elisabeth André Multimedia Concepts and Application, Faculty for Applied Computer Science

More information

Vital Responder: Real-time Health Monitoring of First- Responders

Vital Responder: Real-time Health Monitoring of First- Responders Vital Responder: Real-time Health Monitoring of First- Responders Ye Can 1,2 Advisors: Miguel Tavares Coimbra 2, Vijayakumar Bhagavatula 1 1 Department of Electrical & Computer Engineering, Carnegie Mellon

More information

Application of ecological interface design to driver support systems

Application of ecological interface design to driver support systems Application of ecological interface design to driver support systems J.D. Lee, J.D. Hoffman, H.A. Stoner, B.D. Seppelt, and M.D. Brown Department of Mechanical and Industrial Engineering, University of

More information

Applied Machine Learning, Lecture 11: Ethical and legal considerations; domain effects and domain adaptation

Applied Machine Learning, Lecture 11: Ethical and legal considerations; domain effects and domain adaptation Applied Machine Learning, Lecture 11: Ethical and legal considerations; domain effects and domain adaptation Richard Johansson including some slides borrowed from Barbara Plank overview introduction bias

More information

REAL-TIME SMILE SONIFICATION USING SURFACE EMG SIGNAL AND THE EVALUATION OF ITS USABILITY.

REAL-TIME SMILE SONIFICATION USING SURFACE EMG SIGNAL AND THE EVALUATION OF ITS USABILITY. REAL-TIME SMILE SONIFICATION USING SURFACE EMG SIGNAL AND THE EVALUATION OF ITS USABILITY Yuki Nakayama 1 Yuji Takano 2 Masaki Matsubara 3 Kenji Suzuki 4 Hiroko Terasawa 3,5 1 Graduate School of Library,

More information

Do you have to look where you go? Gaze behaviour during spatial decision making

Do you have to look where you go? Gaze behaviour during spatial decision making Do you have to look where you go? Gaze behaviour during spatial decision making Jan M. Wiener (jwiener@bournemouth.ac.uk) Department of Psychology, Bournemouth University Poole, BH12 5BB, UK Olivier De

More information

How to Spot a Liar. Reading Practice

How to Spot a Liar. Reading Practice Reading Practice How to Spot a Liar However much we may abhor it, deception comes naturally to all living things. Birds do it by feigning injury to lead hungry predators away from nesting young. Spider

More information