Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions

Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions

Grzegorz Brodny, Agata Kołakowska, Agnieszka Landowska, Mariusz Szwoch, Wioleta Szwoch, Michał R. Wróbel
Gdańsk University of Technology, Gdańsk, Poland

This is a preprint of the paper presented at the 9th International Conference on Human System Interaction, Portsmouth, UK, July 6-8, 2016, and published in the IEEE Xplore Digital Library (DOI: 10.1109/HSI).

Abstract: The paper concerns the accuracy of emotion recognition from facial expressions. As several ready off-the-shelf solutions are available on the market today, this study aims at a practical evaluation of selected solutions in order to give potential buyers some insight into what they might expect. Two solutions were compared: FaceReader by Noldus and Xpress Engine by QuantumLab. The performed evaluation revealed that the recognition accuracies differ for photo and video input data; solutions should therefore be matched to the specificity of the application domain.

Keywords: emotion recognition; affective computing; facial expressions; FACS

I. INTRODUCTION

Affective computing, a research area actively investigated in recent years, focuses on user emotions during interaction with computers and applications [1]. It addresses the problems of recognizing emotions, reacting to them, and influencing emotional states. The affective computing approach is increasingly visible in applications used in various areas of our life, e.g. education [2][3], entertainment [4][5], software engineering [6], healthcare [7] and other domains [8]. Affective computing tools vary in many aspects. The most apparent one is the source of information used to recognize user emotions. It may be visual [9], audio [10], textual [11], physiological [12], or behavioral [13]. It is rather difficult to indicate the best approach in the general case. Some of these input channels allow for more accurate emotion recognition than others; some are unobtrusive, whereas others are rather inconvenient for users. Facial expression analysis seems to be a compromise between high recognition accuracy and convenient data acquisition. Moreover, it is the most natural method, as the face is usually the first, and often the only, way humans show their emotional states.

The aim of this article is to compare two applications dedicated to emotion recognition based on facial expressions: FaceReader v. 6 by Noldus, Netherlands, and Xpress Engine by QuantumLab, Poland. Both tools are available at Gdansk University of Technology as a part of the Emotion Monitor stand [14][15]. Apart from the evaluation of the two solutions, the comparison highlights problems typically encountered while performing such evaluations.

The paper is organized as follows: Section II presents known solutions for emotion recognition based on facial expressions and a few projects aiming at benchmarking the recognition methods; Section III describes the two tools selected for the study; Section IV presents the research study plan; Section V contains the recognition results obtained by both tools; Section VI is the analysis of the obtained results; Section VII summarizes the study and gives some ideas for future work. The study might be interesting both for potential customers of emotion recognition solutions and for researchers who work on improving the accuracy of existing emotion recognition methods.

II. RELATED WORK

A. Emotion recognition based on facial expressions

A number of research algorithms and methods have been developed to recognize emotions on the basis of image and video input.
Most of them are research-based solutions [16]; however, several commercial off-the-shelf solutions are also available. Most of them target the areas of marketing and usability testing. The most advanced solutions include (in alphabetical order): Affdex, Emotient, EmoVu, FaceReader, InSight, nviso and Xpress Engine. Algorithms implemented in these off-the-shelf solutions are mainly based on the Facial Action Coding System (FACS). FACS is a taxonomy of human facial movements (called action units, AUs) classified by their appearance on the face. Emotions are recognized as combinations of action units [17]. The American psychologist Ekman noticed that some facial expressions corresponding to certain emotions are common to all people independently of their gender, race, education, ethnicity, etc. He proposed a discrete emotional model using six universal emotions: happiness, surprise, anger, disgust, sadness and fear [18]. The facial expressions reflecting Ekman's six basic emotions are shown in Fig. 1.

Fig. 1. Sample facial expressions characteristic of Ekman's six basic emotions.
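To illustrate how emotions follow from action units, the sketch below looks up commonly cited EMFACS-style AU prototypes for the six basic emotions. The exact AU sets and the matching rule are illustrative assumptions and do not reproduce the coding used by any of the tools discussed here.

```python
# Prototypical action-unit combinations often cited for Ekman's basic
# emotions (EMFACS-style; illustrative only, not the coding used by
# FaceReader or Xpress Engine).
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},             # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},          # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},       # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 7, 20, 26},
    "anger":     {4, 5, 7, 23},       # brow lowerer + lid tightener + lip tightener
    "disgust":   {9, 15, 16},         # nose wrinkler + lip depressors
}

def match_emotion(detected_aus: set[int]) -> str:
    """Return the prototype whose AU set is best covered by the detection."""
    scores = {emo: len(aus & detected_aus) / len(aus)
              for emo, aus in EMOTION_PROTOTYPES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(match_emotion({6, 12}))  # -> happiness
```

A real recognizer scores AU intensities rather than binary occurrences, but the coverage-scoring idea above conveys why expressions sharing AUs (e.g. anger and disgust, discussed later) are easily confused.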

The off-the-shelf solutions usually provide a subset or a superset of Ekman's six basic emotions as an output. They also usually provide a value for the neutral state. Affdex emotion recognition software works both as a stand-alone application and as an online service, and provides frame-by-frame emotion metrics as an output. The emotional state is described in a dimensional model (valence, attention, expressiveness) or as discrete measures (smile, concentration, surprise and dislike) [19]. Emotient detects the six basic emotions (joy, surprise, anger, disgust, sadness and fear) as well as contempt, frustration, and confusion; it also allows the detection of a number of Facial Action Units [20]. The EmoVu API, along with the basic emotional expressions, outputs complex features calculated from the basic ones [21]. InSight claims over 90% accuracy for the neutral state and the six basic emotions [22]. nviso provides a cloud service to measure emotional reactions of users; it uses the FACS system, tracking the movement of facial muscles in real time []. Xpress Engine and FaceReader are described in more detail in Section III.

To our knowledge, the off-the-shelf solutions are rarely compared. Companies that offer such solutions are interested in sharing only best-scenario results, while researchers usually compare their own algorithms to whichever commercial one they have access to.

B. Benchmarking emotion recognition methods

This study is also based on and related to research that aims at benchmarking emotion recognition algorithms. One of the most common approaches is to organize challenges co-located with the main conferences and workshops in the field. Several such events, aiming at the comparison of emotion recognition algorithms, have already been organized. During the FERA challenges, specific evaluation procedures for recognition algorithms were proposed [23]. The Audio/Visual Emotion Challenge (AVEC) [24] and Emotion Recognition in the Wild (EmotiW) [25] events went even further, being organized as competitions between research teams from around the world.

The first AVEC event was organized in 2011. It aimed at the comparison of methods for automatic audio and visual emotion recognition. The competitors were challenged to measure accuracy over four dimensions: activity, expectation, power and valence. The recognition was performed on videos from the SEMAINE database; all videos were annotated by up to 8 raters [26]. During the third AVEC challenge the number of dimensions was reduced to two, arousal and valence, and the challenge used the Audio-Visual Depressive Language video database, with all videos annotated by multiple raters [24].

The goal of the Emotion Recognition in the Wild (EmotiW) challenge was to provide a platform for researchers to benchmark the performance and accuracy of emotion recognition methods. The task was to classify sample audio-video clips from the AFEW database into one of seven categories: anger, disgust, fear, happiness, neutral, sadness and surprise. Each clip was annotated by multiple raters [25].

The Facial Expression Recognition and Analysis (FERA) challenge targets facial expression recognition of universal emotion classes on recordings of actors taken in lab conditions. Unlike the other challenges, the FERA goal was to estimate FACS Action Unit occurrence and intensity. Action Units were annotated frame-by-frame by a team of experts. The harmonic mean of recall and precision was selected as the measure of AU occurrence; for the comparison of AU intensity, the Intraclass Correlation Coefficient was chosen [23].
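For reference, the AU occurrence measure used in FERA (the harmonic mean of recall and precision, i.e. the F1 score) can be computed from binary per-frame annotations as in this generic sketch; it is not the challenge's official evaluation code.

```python
def f1_score(predicted: list[int], reference: list[int]) -> float:
    """F1 (harmonic mean of precision and recall) for binary per-frame
    AU occurrence labels, as used for AU occurrence in FERA."""
    tp = sum(p and r for p, r in zip(predicted, reference))
    fp = sum(p and not r for p, r in zip(predicted, reference))
    fn = sum(not p and r for p, r in zip(predicted, reference))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f1_score([1, 1, 0, 1, 0], [1, 0, 0, 1, 1]))  # -> 0.666...
```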
To sum up, most challenges used the following method for benchmarking emotion recognition algorithms: first a set of photos or videos was chosen, then it was manually annotated by a number of raters, and finally the algorithms were evaluated using some measure of consistency with the manual annotations. In this study, we follow the same procedure. It is important to emphasize that, to our best knowledge, no commercial off-the-shelf solutions took part in the challenges.

III. DESCRIPTIVE ANALYSIS OF EVALUATED EMOTION RECOGNITION TOOLS

Xpress Engine from QuantumLab, Poland, is an application which recognizes and analyzes emotions elicited by different stimuli [27]. The analysis can be performed both in real time and in off-line batch mode, and is based on the FACS model. Xpress Engine can detect five out of the six Ekman emotions: joy, disgust, surprise, sadness and anger, and additionally the neutral state. Xpress Engine also recognizes a subset of Action Units from FACS. As output, Xpress Engine provides a vector of recognized emotions and AUs for each frame. A sample screenshot from Xpress Engine is shown in Fig. 2. Face and eye areas are marked on the live view (left). On the right, a normalized face in grey scale is provided, as well as an estimate of the face rotation (the small two-shaded window in the bottom right corner). Recognized emotions are visualized next to the live view window with a set of bars, one for each recognized emotional state; the recognized expressions are marked on the bars and the shaded area represents the expression intensity.

Fig. 2. QuantumLab Xpress Engine screenshot.

The analysis of the Xpress Engine interface highlights the most important issues in emotion recognition from facial expressions: recognition of the face in the whole available scene, recognition of the eyes within the face, recognition of expressions on the normalized face image, and elimination of the influence of the face angle towards the camera on the affect recognition result.
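The per-frame output of such tools can be pictured as a stream of records like the one below. The field names are invented for illustration; neither vendor's actual output schema is reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class FrameResult:
    """One frame of recognizer output (hypothetical field names)."""
    frame_no: int
    emotions: dict[str, float]                    # emotion -> intensity in [0, 1]
    action_units: dict[int, float] = field(default_factory=dict)  # AU id -> intensity

    def dominant_emotion(self) -> str:
        return max(self.emotions, key=self.emotions.get)

fr = FrameResult(0, {"anger": 0.55, "disgust": 0.30, "neutral": 0.15})
print(fr.dominant_emotion())  # -> anger
```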

FaceReader from Noldus, Netherlands, similarly to Xpress Engine, recognizes emotions by analyzing the user's facial expressions [28]. The application automatically analyzes the six Ekman emotions (joy, disgust, surprise, sadness, anger and fear) as well as the neutral state and, additionally, contempt. FaceReader analyzes a subset of the commonly used Action Units. Facial expressions recognized by FaceReader can be represented as valence and arousal values in the circumplex model of emotions, as well as values of the recognized emotions. A sample screenshot from FaceReader is shown in Fig. 3.

Fig. 3. Noldus FaceReader screenshot (part of the main window).

The FaceReader software provides a live analysis view in which the recognized face is marked with a rectangle. It is also possible to display additional masks on the face to get some insight into the recognition process. In the upper right corner, emotions are represented with bars as well as a pie chart. The bottom chart visualizes the valence dimension of the recognized emotional state. The interface is customizable and additional sub-windows may be displayed interchangeably.

Both solutions can perform real-time or batch analysis; in the latter mode, the results are provided as a set of emotion vectors, one recognized for each video frame. The data may be processed and then analyzed independently. Sample results from both solutions for the same video file are presented in Fig. 4. For visibility, only three emotion series are visualized: anger, disgust and neutral; the values for the other emotions were close to zero. The analyzed video clip was originally annotated with the anger label. The sample confirms the observation that anger is usually confused with disgust (both by recognition algorithms and by manual annotators), as some action units are common to the two expressions. The Noldus FaceReader solution seems to smooth the results over consecutive frames and to calculate the neutral emotion as a complement of the recognized intensities of the other emotions. Both algorithms agree in general (both recognize symptoms of anger and disgust); however, they differ in the intensities of the emotions, as well as in the dominant emotion for particular frames. As a clip (like most clips in the annotated datasets) is usually assigned one label only, some discretization procedure must be proposed to provide a single label out of the emotion vector series provided by the solutions. The procedure used in this study is described in Section IV.

Fig. 4. An example output generated by the two compared solutions.
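The post-processing hypothesized above for FaceReader (smoothing over neighboring frames and deriving neutral as a complement) can be mimicked as below. This is purely an illustration of our hypothesis from Fig. 4, not Noldus's actual implementation.

```python
def moving_average(series: list[float], window: int = 5) -> list[float]:
    """Simple centered moving average over a per-frame intensity series."""
    half = window // 2
    return [sum(series[max(0, i - half):i + half + 1])
            / len(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]

def add_neutral(frame_vec: dict[str, float]) -> dict[str, float]:
    """Hypothesized rule: neutral complements the strongest emotion."""
    out = dict(frame_vec)
    out["neutral"] = max(0.0, 1.0 - max(frame_vec.values()))
    return out

anger = [0.1, 0.8, 0.2, 0.9, 0.1]                   # noisy per-frame anger intensity
print(moving_average(anger, 3))                     # smoothed series
print(add_neutral({"anger": 0.7, "disgust": 0.2}))  # neutral = 0.3
```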

IV. STUDY DESIGN AND EXECUTION

The aim of the presented study is to compare the effectiveness of two selected tools designed to recognize emotional states from facial expressions. A black-box approach has been adopted for the comparison, i.e. the final output information on the emotions generated by both tools is the only aspect taken into account during the evaluation. Moreover, due to the type of information provided as annotations in the databases, a discrete representation model has been chosen to describe emotional states.

A. Selection of databases

Facial expression and emotion databases are widely used for training and evaluation of emotion recognition methods. Many such databases are available, some of them free of charge provided they are used for non-commercial purposes such as research or education [29]. However, they differ in terms of size, content and quality. Based on the criteria provided by Kolakowska et al. [30], two databases were chosen: Extended Cohn-Kanade [31] and MMI [32]. These two datasets are among the most popular, which means that they are frequently used in benchmarking emotion recognition algorithms.

One of the first attempts to develop a comprehensive database for facial expression analysis was the Cohn-Kanade AU-Coded Facial Expression Database, in which each sequence was labeled both by the emotion intended to be expressed and by the observed facial movements coded using FACS [17]. Unfortunately, the sequences were not verified against the facial expressions they actually contained, and the emotion labels referred to what expressions were requested rather than what may actually have been shown. The extended version of this database, CK+, contains 593 sequences with more frames per sequence [31]. CK+ contains both posed and spontaneous facial expressions. In this version, validated emotion labels are provided together with FACS coding. Additional features are recognition results for facial feature tracking, action units and emotions. Although the CK+ database is very well prepared for use in many application fields, it contains only video sequences, which can be limiting in some research.

The MMI Facial Expression Database contains over 2900 video clips and still images, annotated using FACS. Additionally, a subset of the clips is labeled to indicate which of the six basic Ekman emotions occurred. The MMI database covers 69 different subjects, both male and female, ranging in age from 19 to 62 and having European, African, Asian or South American ethnic backgrounds. The dataset is freely shared through a web-based application which allows filtering and downloading selected recordings [32], [33]. MMI is considered a challenging database because the subjects express emotions in very different ways; furthermore, some of them wear glasses or hats, have beards, etc. [34]. All MMI recordings are stored as AVI files with standard codecs. All video clips were recorded in a controlled laboratory environment with a homogeneous background, without other people or objects visible. The lighting conditions are very good and the recorded faces are evenly illuminated.

B. Testing procedure for the Cohn-Kanade database

The first facial expression database used during the tests was Cohn-Kanade, a de-facto standard for testing emotion recognition algorithms. The methodology of photo selection for the study is described below.

1) Selecting the Cohn-Kanade database subset. The Cohn-Kanade database contains sequences of photos (selected frames) of certain emotion expressions, where the first photo in a sequence represents the neutral state and the final one the most intense emotional expression (fear, joy, disgust, surprise, sadness or anger). Intermediate photos in the sequence represent all stages of emotion intensity from neutral up to the maximum state. For the purpose of the experiment a subset of photos was selected. In order to ensure that the photos are unambiguously labelled, intermediate states were excluded from the analysis. From each sequence, the first photo was selected with the label of the neutral state, and the last three to five photos in the sequence were selected with the label of the represented emotion. Photos representing fear were not included in the testing set.

2) Performing automated emotion recognition on the selected Cohn-Kanade database subset. Both Noldus FaceReader and QuantumLab Xpress Engine performed analysis on the photos, providing emotion vectors as output. The assigned labels were then confronted with the original Cohn-Kanade labels of the pictures.
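For concreteness, the selection rule of step 1 above can be sketched as follows. The file names and the helper are hypothetical; the three-to-five frame window follows the description above.

```python
def select_photos(sequence: list[str], emotion: str, n_last: int = 3):
    """Step 1 of the Cohn-Kanade procedure: the first frame is taken as
    'neutral', the last n_last (3 to 5) frames as the target emotion.
    `sequence` is the ordered list of frame file names of one recording."""
    if emotion == "fear":          # fear sequences were excluded
        return []
    labelled = [(sequence[0], "neutral")]
    labelled += [(photo, emotion) for photo in sequence[-n_last:]]
    return labelled

seq = [f"subject01_seq01_{i:03d}.png" for i in range(1, 12)]
print(select_photos(seq, "anger", n_last=4))
```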
During the execution, all selected photos were successfully processed by Xpress Engine, while FaceReader returned "unknown" for a number of them (99 photos, 7% of the dataset).

3) Visualising the results as a confusion matrix. The results are visualized as typical confusion matrices. Although no fear-labeled photos were analysed, the row and column for the fear class were not excluded, as the FaceReader software sometimes returns this label. Apart from the recall and precision metrics for the classes, overall accuracy is provided as a weighted average recall.

C. Testing procedure for the MMI database

The second database used in the tests was the MMI Facial Expression Database. It was processed according to the following procedure:

1) Selecting the MMI database subset. From the MMI database, a subset of video clips annotated with emotions was selected. Clips showing the emotions anger, joy, surprise, disgust and sadness, plus the neutral state, were selected for manual annotation. Recordings with fear labels were excluded from the analysis, as the Xpress Engine algorithm does not recognize this expression.

2) Manual annotation of the selected MMI subset. Clips in the MMI database are labeled with the expressions that the subjects were asked to show, so the accuracy of these labels might be low. The selected subset of video clips was therefore manually annotated with the six basic emotions (including fear) plus the neutral state by 6 independent experts. Fear was included in the annotation, as some subjects perform a fear expression instead of surprise; in the end, fear was assigned in only a few percent of all annotations.

3) Setting a consistency threshold and selecting the final MMI subset for the comparison procedure. The consistency of the manual annotations varied among clips: only part of the clips were annotated with 100% consistency, while others were assigned different labels. Consistency varied for different emotional expressions as well. In this study the kappa coefficient was chosen as the consistency measure, as it is frequently used in annotation consistency calculations [35]. Table I shows the number of clips per class for a given consistency threshold, as well as the total kappa coefficient for the whole set after elimination of the clips below the threshold. The threshold for the extent of agreement for one clip was set to 0.6 (at least 4 of the 6 annotating experts had to be consistent for a clip to be included in further analysis). As a result, a smaller subset of clips representing anger, joy, surprise, disgust, sadness and neutral was selected for the comparison of the algorithms.
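A minimal sketch of the two agreement measures used in step 3 (the per-clip extent of agreement and Fleiss' kappa [35]) is given below; it is illustrative code, not the exact computation performed in the study.

```python
from collections import Counter

def extent_of_agreement(labels: list[str]) -> float:
    """Fraction of raters who chose the most frequent label for one clip
    (the per-clip threshold in the study was 0.6, i.e. 4 of 6 raters)."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

def fleiss_kappa(annotations: list[list[str]]) -> float:
    """Fleiss' kappa for a list of clips, each annotated by the same
    number of raters."""
    n = len(annotations[0])                      # raters per clip
    categories = {c for clip in annotations for c in clip}
    # mean per-clip agreement P-bar
    p_bar = sum(
        (sum(v * v for v in Counter(clip).values()) - n) / (n * (n - 1))
        for clip in annotations) / len(annotations)
    # chance agreement P_e from overall category proportions
    total = n * len(annotations)
    p_e = sum((sum(clip.count(c) for clip in annotations) / total) ** 2
              for c in categories)
    return (p_bar - p_e) / (1 - p_e)

clips = [["joy"] * 6, ["anger"] * 4 + ["neutral"] * 2]
print(extent_of_agreement(clips[1]))   # 0.667 -> kept (>= 0.6)
print(round(fleiss_kappa(clips), 3))
```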

4) Performing automated emotion recognition on the selected MMI clips. Both Noldus FaceReader and QuantumLab Xpress Engine perform the analysis on a frame-by-frame basis and provide a stream of emotion vectors, one for each frame, as output. The output must therefore be aggregated in order to obtain one dominant label for each clip. As MMI clips are recorded as a sequence of neutral expression, emotion expression and neutral expression again, the following discretization was applied: only the middle one-third of the frames was selected; each frame was assigned the vector of emotions from the tested recognition algorithm; each frame was assigned the label that had the maximum value within the vector for that frame; finally, each clip was assigned the label that was dominant over the middle one-third of frames (the mode, i.e. the label with the maximum number of occurrences).

5) Visualising the results as a confusion matrix. The results are presented in the same way as in the case of the Cohn-Kanade database.

TABLE I. CONSISTENCY OF MANUAL ANNOTATION OF THE MMI SUBSET

V. RESULTS

Tables II and III present the confusion matrices obtained after recognizing the emotions presented on photos from the Cohn-Kanade database using Xpress Engine and FaceReader, respectively. It can be seen that Xpress Engine achieved the higher accuracy of 87.6%, whereas FaceReader reached 77.9%. However, there is an issue which should be taken into account while comparing these results: Xpress Engine does not recognize fear, whereas FaceReader does. Although there were no fear examples in the test data, some examples were recognized as fear by FaceReader. If these examples were excluded from the calculations, the result for FaceReader would be 78.76%, which is slightly better, but still much lower than the one obtained by Xpress Engine.

TABLE II. CONFUSION MATRIX OBTAINED FOR PHOTOS FROM THE COHN-KANADE DATASET RECOGNIZED BY XPRESS ENGINE (correctly recognized photos: 87.6%)

TABLE III. CONFUSION MATRIX OBTAINED FOR PHOTOS FROM THE COHN-KANADE DATASET RECOGNIZED BY FACEREADER (correctly recognized photos: 77.9%)

The particular accuracies vary among emotions. Both tools recognize joy and surprise best. The worst results are obtained for anger and disgust in the case of FaceReader, and for anger and sadness in the case of Xpress Engine. However, in the case of Xpress Engine the worst result is still over 70%, which is a high rate; apart from anger and sadness, all other emotions are recognized by Xpress Engine with accuracies higher than 80%. Only one emotional state is recognized better by FaceReader than by Xpress Engine. The confusion matrices also let us analyze the errors made by the applications: it can be noticed that anger is often recognized as neutral, and in the case of Xpress Engine sadness is also incorrectly recognized as neutral in many cases.

Tables IV and V present the confusion matrices obtained after recognizing the emotions presented on video clips from the MMI database using Xpress Engine and FaceReader, respectively. In contrast to the photos, this time FaceReader turned out to be better (around 60% accuracy), while Xpress Engine scored lower; the results of both tools are far from satisfying.

TABLE IV. CONFUSION MATRIX OBTAINED FOR VIDEO CLIPS FROM THE MMI DATASET RECOGNIZED BY XPRESS ENGINE
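As a cross-reference, the clip-level labels behind Tables IV and V follow the discretization of step 4 in Section IV.C; a minimal sketch, with dictionaries standing in for the tools' per-frame output vectors:

```python
from statistics import mode

def clip_label(frame_vectors: list[dict[str, float]]) -> str:
    """Step 4 of the MMI procedure: take the middle third of the frames,
    label each frame with its maximum-intensity emotion, then take the
    mode (most frequent label) as the clip label."""
    n = len(frame_vectors)
    middle = frame_vectors[n // 3: 2 * n // 3]
    per_frame = [max(vec, key=vec.get) for vec in middle]
    return mode(per_frame)

frames = ([{"neutral": 0.9, "anger": 0.1}] * 10      # onset: neutral
          + [{"anger": 0.7, "disgust": 0.3}] * 10    # apex: emotion shown
          + [{"neutral": 0.8, "anger": 0.2}] * 10)   # offset: neutral
print(clip_label(frames))  # -> anger
```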

TABLE V. CONFUSION MATRIX OBTAINED FOR VIDEO CLIPS FROM THE MMI DATASET RECOGNIZED BY FACEREADER

In the case of Xpress Engine, joy and surprise are recognized with the highest accuracies (over 70%), disgust reaches above 60%, and the other states show much lower rates. In the case of FaceReader, the best results were achieved for disgust (around 70%); accuracies for surprise and the neutral state reached above 60%, those for anger and joy were only slightly worse, and the result for sadness was the worst. Another issue worth noting is the low precision in some cases, e.g. for neutral in both applications. It means that when an application returns neutral, the probability that the decision is correct is low; in other words, other emotional states are often confused with neutral.

VI. SUMMARY OF RESULTS AND DISCUSSION

In this comparison of affect recognition solutions based on facial expressions, two off-the-shelf applications were explored using a black-box approach. Table VI presents the summary results obtained for both databases using FaceReader and Xpress Engine.

TABLE VI. SUMMARY RESULTS OBTAINED FOR THE PHOTO AND VIDEO DATABASES USING FACEREADER AND XPRESS ENGINE

To sum up, the results obtained by the two applications on the photo database are rather high (87.6% and 77.9%). Those obtained for videos are much lower, but as six emotion classes were distinguished, even for videos the recognition accuracy was much better than random guessing (about 17% for six classes).

The study results may be formulated as follows: (1) a key difference between Noldus FaceReader and QuantumLab Xpress Engine is the fear category, which is provided by the former and omitted by the latter; (2) the QuantumLab Xpress Engine solution performs slightly better on a picture database with good recording conditions (illumination, angle); (3) the Noldus FaceReader solution performs better on a video database; (4) neither solution is clearly better, as the accuracies and the correctly recognized or confused emotional states differ. Finding the reasons behind the discrepancy in accuracies requires further research and is beyond the scope of this study, but seems an interesting direction for future work.

We acknowledge that our approach to this study and analysis has some limitations, the main ones being the limited number and arbitrary choice of databases for testing, the limitations of the chosen discretization method, and the quantitative black-box approach to the analysis of the results. The choice of the databases is justified in Section IV; we acknowledge, however, that more datasets would provide more valuable results. The databases we have chosen cover diversity in only one important characteristic: photo vs. film input. Another issue is how the algorithms deal with conditions that are more natural; the presented research will be continued to address this issue.

There are many different methods of discretization of emotional states, from vectors of Ekman's six basic emotion values to single labels. The chosen discretization method is the simplest, most straightforward approach and is adjusted to the characteristics of the tested datasets.

The main limitation of the study is the quantitative black-box approach chosen for the analysis of the results. The aim of the paper, however, was to quantify the accuracies of the algorithms on diverse databases, and for that purpose this approach was the most effective.
However, some questions remain open, including the reasons for the discrepancy in accuracy between the photo and video data sets. Analysis of the intermediate results (for example, those visualized in Fig. 4) suggests that FaceReader uses smoothing based on neighboring frames and calculates the neutral state as one minus the dominant emotion; exploring this thesis, however, would require a more qualitative, white-box approach.

Although the results seem satisfying from the point of view of the emotion recognition problem, it has to be highlighted that these levels of accuracy would not be good enough in some practical applications.

VII. CONCLUSIONS AND FUTURE WORK

In this paper, two off-the-shelf solutions for emotion recognition based on facial expressions were validated. Both FaceReader and Xpress Engine work well with video as well as still-image databases. FaceReader gives slightly better results for video recordings, while Xpress Engine is better for still images. Although this difference does not let us clearly indicate the better solution, it may be an indication for choosing one of the programs according to the application field.

An advantage of FaceReader is that it recognizes the fear emotion, which is not possible with Xpress Engine.

Both selected databases, MMI and CK+, contain recordings obtained in very good, controlled laboratory conditions: the participants are correctly positioned in front of the monitor, and the background and lighting conditions are proper and homogeneous. However, to be useful in real-life applications, emotion recognition algorithms should also work with good accuracy in less ideal, natural environments. One direction of our future work is to extend the validation tests to other video databases that meet these requirements. One such database is FEEDB, which contains video clips of participants recorded in the standard scenery of an IT laboratory with uncontrolled, mixed natural and fluorescent illumination [36]. Preliminary tests with FEEDB recordings showed that both applications achieve far lower recognition efficiency, but more tests and research are needed to find out the main factors that influence the results.

Another direction of the work is to investigate more sophisticated methods for determining the actual user emotion, taking into account the highly dynamic changes of the indicated emotions for particular frames. For this purpose, we plan to use low-pass time filtering and an expert voting system based on decision trees or neural networks.
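As a rough illustration of the low-pass time filtering mentioned above, a first-order exponential filter over a per-frame intensity series could look as follows; this is one possible filter choice, not a description of the planned system.

```python
def exponential_smooth(series: list[float], alpha: float = 0.2) -> list[float]:
    """First-order low-pass (exponential) filter: each output mixes the
    current measurement with the previous filtered value."""
    out = []
    prev = series[0]
    for x in series:
        prev = alpha * x + (1 - alpha) * prev
        out.append(prev)
    return out

noisy = [0.1, 0.9, 0.15, 0.85, 0.2]   # rapidly fluctuating emotion intensity
print([round(v, 2) for v in exponential_smooth(noisy)])
```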
ACKNOWLEDGMENT

The research leading to these results has received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project Contract No Pol-Nor/96/8/, as well as from the DS Funds of the Faculty of Electronics, Telecommunications and Informatics, Gdansk University of Technology. The authors thank Michał Witkowicz for data preprocessing.

REFERENCES

[1] R. Picard, "Affective computing: from laughter to IEEE," IEEE Transactions on Affective Computing, vol. 1(1), pp. 11-17, 2010.
[2] G. Tsoulouhas, D. Georgiou, and A. Karakos, "Detection of learner's affective state based on mouse movements," Journal of Computing, 2011.
[3] A. Landowska, "Affective learning manifesto 10 years later," 13th European Conference on e-Learning (ECEL), Aalborg University Copenhagen, Denmark, October 2014.
[4] M. Szwoch and W. Szwoch, "Emotion recognition for affect aware video games," 6th International Conference on Image Processing & Communications, Springer-Verlag, 2014.
[5] A. Landowska and M.R. Wrobel, "Affective reactions to playing digital games," 8th International Conference on Human System Interaction, Warsaw, Poland, 2015.
[6] M.R. Wróbel, "Emotions in the software development process," 6th International Conference on Human System Interaction, Gdańsk, Poland, 2013.
[7] S. Tivatansakul, G. Chalumporn, S. Puangpontip, Y. Kankanokkul, T. Achalaku, and M. Ohkura, "Healthcare system focusing on emotional aspect using augmented reality: emotion detection by facial expression," Advances in Human Aspects of Healthcare.
[8] A. Kolakowska, A. Landowska, M. Szwoch, W. Szwoch, and M.R. Wróbel, "Emotion recognition and its applications," Human-Computer Systems Interaction: Backgrounds and Applications, Advances in Intelligent Systems and Computing, vol. 300, pp. 51-62, 2014.
[9] H. Gunes and M. Piccardi, "Affect recognition from face and body: early fusion vs. late fusion," IEEE International Conference on Systems, Man and Cybernetics, 2005.
[10] B. Schuller, M. Lang, and G. Rigoll, "Multimodal emotion recognition in audiovisual communication," IEEE International Conference on Multimedia and Expo (ICME), Lausanne, 2002.
[11] A.J. Gill, R.M. French, D. Gergle, and J. Oberlander, "Identifying emotional characteristics from short blog texts," 30th Annual Conference of the Cognitive Science Society, pp. 2237-2242, 2008.
[12] W. Szwoch, "Using physiological signals for emotion recognition," 6th International Conference on Human System Interaction, Gdańsk, Poland, 2013.
[13] A. Kołakowska, "A review of emotion recognition methods based on keystroke dynamics and mouse movements," 6th International Conference on Human System Interaction, Gdańsk, Poland, 2013.
[14] A. Landowska, "Emotion monitoring - verification of physiological characteristics measurement procedures," Metrology and Measurement Systems, vol. 21(4), pp. 719-732, 2014.
[15] A. Landowska, "Emotion monitor - concept, construction and lessons learned," Federated Conference on Computer Science and Information Systems, IEEE, 2015.
[16] A. Dhall, O.V. Ramana Murthy, R. Goecke, J. Joshi, and T. Gedeon, "Video and image based emotion recognition challenges in the wild: EmotiW 2015," ACM International Conference on Multimodal Interaction, pp. 423-426, 2015.
[17] P. Ekman, W.V. Friesen, and J.C. Hager, "Facial Action Coding System," A Human Face, 2002.
[18] P. Ekman and W.V. Friesen, "Constants across cultures in the face and emotion," Journal of Personality and Social Psychology, vol. 17(2), pp. 124-129, 1971.
[19] Affectiva, White paper: "Exploring the emotion classifiers behind Affdex facial coding," Exploring_Affdex_Classifiers.pdf, accessed 2016.
[20] K.-I. Bența and M.-F. Vaida, "Towards real-life facial expression recognition systems," Advances in Electrical and Computer Engineering, 2015.
[21] L. Liu, D. Preoţiuc-Pietro, and L. Ungar, "Analyzing personality through social media profile picture choice," The AAAI Digital Library, 2016.
[22] A. Dinculescu, C. Vizitiu, A. Nistorescu, M. Marin, and A. Vizitiu, "Novel approach to face expression analysis in determining emotional valence and intensity with benefit for human space flight studies," E-Health and Bioengineering Conference (EHB), 2015.
[23] M.F. Valstar, T. Almaev, J.M. Girard, G. McKeown, M. Mehu, L. Yin, M. Pantic, and J.F. Cohn, "FERA 2015 - second Facial Expression Recognition and Analysis challenge," 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 2015.
[24] M. Valstar, B. Schuller, K. Smith, F. Eyben, B. Jiang, S. Bilakhia, S. Schnieder, R. Cowie, and M. Pantic, "AVEC 2013 - the continuous audio/visual emotion and depression recognition challenge," 3rd ACM International Workshop on Audio/Visual Emotion Challenge (AVEC), 2013.
[25] A. Dhall, R. Goecke, J. Joshi, and T. Gedeon, "Video and image based emotion recognition challenges in the wild: EmotiW," ACM International Conference on Multimodal Interaction (ICMI), 2013.
[26] M. Valstar, F. Eyben, G. McKeown, R. Cowie, and M. Pantic, "AVEC 2011 - the first international audio/visual emotion challenge," 4th International Conference on Affective Computing and Intelligent Interaction, Springer-Verlag, 2011.
[27] Xpress Engine solution description, accessed 2016.
[28] FaceReader solution description, accessed 2016.

[29] G. Castaneda and T.M. Khoshgoftaar, "A survey of 2D face databases," IEEE International Conference on Information Reuse and Integration, 2015.
[30] A. Kolakowska, A. Landowska, M. Szwoch, W. Szwoch, and M.R. Wrobel, "Evaluation criteria for affect-annotated databases," Beyond Databases, Architectures and Structures, Springer International Publishing, pp. 581-597, 2015.
[31] P. Lucey, J.F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression," IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops (CVPRW), pp. 94-101, 2010.
[32] M.F. Valstar and M. Pantic, "Induced disgust, happiness and surprise: an addition to the MMI facial expression database," Int. Conf. on Language Resources and Evaluation, Workshop on Emotion, pp. 65-70, 2010.
[33] M. Pantic, M. Valstar, R. Rademaker, and L. Maat, "Web-based database for facial expression analysis," IEEE International Conference on Multimedia and Expo, 2005.
[34] L. Zhong, Q. Liu, P. Yang, J. Huang, and D.N. Metaxas, "Learning multiscale active facial patches for expression analysis," IEEE Transactions on Cybernetics, vol. 45(8), pp. 1499-1510, 2015.
[35] J.L. Fleiss, "Measuring nominal scale agreement among many raters," Psychological Bulletin, vol. 76(5), pp. 378-382, 1971.
[36] M. Szwoch, "On facial expressions and emotions RGB-D database," 10th International Conference Beyond Databases, Architectures and Structures (BDAS), Springer-Verlag, 2014.


More information

A Multilevel Fusion Approach for Audiovisual Emotion Recognition

A Multilevel Fusion Approach for Audiovisual Emotion Recognition A Multilevel Fusion Approach for Audiovisual Emotion Recognition Girija Chetty & Michael Wagner National Centre for Biometric Studies Faculty of Information Sciences and Engineering University of Canberra,

More information

Open Research Online The Open University s repository of research publications and other research outputs

Open Research Online The Open University s repository of research publications and other research outputs Open Research Online The Open University s repository of research publications and other research outputs Toward Emotionally Accessible Massive Open Online Courses (MOOCs) Conference or Workshop Item How

More information

Edge Based Grid Super-Imposition for Crowd Emotion Recognition

Edge Based Grid Super-Imposition for Crowd Emotion Recognition Edge Based Grid Super-Imposition for Crowd Emotion Recognition Amol S Patwardhan 1 1Senior Researcher, VIT, University of Mumbai, 400037, India ---------------------------------------------------------------------***---------------------------------------------------------------------

More information

Temporal Context and the Recognition of Emotion from Facial Expression

Temporal Context and the Recognition of Emotion from Facial Expression Temporal Context and the Recognition of Emotion from Facial Expression Rana El Kaliouby 1, Peter Robinson 1, Simeon Keates 2 1 Computer Laboratory University of Cambridge Cambridge CB3 0FD, U.K. {rana.el-kaliouby,

More information

AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild

AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild Ali Mollahosseini, Student Member, IEEE, Behzad Hasani, Student Member, IEEE, and Mohammad H. Mahoor, Senior Member,

More information

SPEECH EMOTION RECOGNITION: ARE WE THERE YET?

SPEECH EMOTION RECOGNITION: ARE WE THERE YET? SPEECH EMOTION RECOGNITION: ARE WE THERE YET? CARLOS BUSSO Multimodal Signal Processing (MSP) lab The University of Texas at Dallas Erik Jonsson School of Engineering and Computer Science Why study emotion

More information

FERA Second Facial Expression Recognition and Analysis Challenge

FERA Second Facial Expression Recognition and Analysis Challenge FERA 2015 - Second Facial Expression Recognition and Analysis Challenge Michel F. Valstar 1, Timur Almaev 1, Jeffrey M. Girard 2, Gary McKeown 3, Marc Mehu 4, Lijun Yin 5, Maja Pantic 6,7 and Jeffrey F.

More information

The Ordinal Nature of Emotions. Georgios N. Yannakakis, Roddy Cowie and Carlos Busso

The Ordinal Nature of Emotions. Georgios N. Yannakakis, Roddy Cowie and Carlos Busso The Ordinal Nature of Emotions Georgios N. Yannakakis, Roddy Cowie and Carlos Busso The story It seems that a rank-based FeelTrace yields higher inter-rater agreement Indeed, FeelTrace should actually

More information

Emotion based E-learning System using Physiological Signals. Dr. Jerritta S, Dr. Arun S School of Engineering, Vels University, Chennai

Emotion based E-learning System using Physiological Signals. Dr. Jerritta S, Dr. Arun S School of Engineering, Vels University, Chennai CHENNAI - INDIA Emotion based E-learning System using Physiological Signals School of Engineering, Vels University, Chennai Outline Introduction Existing Research works on Emotion Recognition Research

More information

Fuzzy Model on Human Emotions Recognition

Fuzzy Model on Human Emotions Recognition Fuzzy Model on Human Emotions Recognition KAVEH BAKHTIYARI &HAFIZAH HUSAIN Department of Electrical, Electronics and Systems Engineering Faculty of Engineering and Built Environment, Universiti Kebangsaan

More information

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation

ITU-T. FG AVA TR Version 1.0 (10/2013) Part 3: Using audiovisual media A taxonomy of participation International Telecommunication Union ITU-T TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU FG AVA TR Version 1.0 (10/2013) Focus Group on Audiovisual Media Accessibility Technical Report Part 3: Using

More information

On the Performance Analysis of APIs Recognizing Emotions from Video Images of Facial Expressions

On the Performance Analysis of APIs Recognizing Emotions from Video Images of Facial Expressions On the Performance Analysis of APIs Recognizing Emotions from Video Images of Facial Expressions Ananya Bhattacharjee 1, Tanmoy Pias 2, Mahathir Ahmad 3, and Ashikur Rahman 4 Department of CSE, Bangladesh

More information

Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database

Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database Jeffrey M. Girard 1, Wen-Sheng Chu 2, László A. Jeni 2, Jeffrey F. Cohn 1,2, Fernando De la Torre 2, and Michael A. Sayette 1 1

More information

Generalization of a Vision-Based Computational Model of Mind-Reading

Generalization of a Vision-Based Computational Model of Mind-Reading Generalization of a Vision-Based Computational Model of Mind-Reading Rana el Kaliouby and Peter Robinson Computer Laboratory, University of Cambridge, 5 JJ Thomson Avenue, Cambridge UK CB3 FD Abstract.

More information

Exploiting Privileged Information for Facial Expression Recognition

Exploiting Privileged Information for Facial Expression Recognition Exploiting Privileged Information for Facial Expression Recognition Michalis Vrigkas 1, Christophoros Nikou 1,2, Ioannis A. Kakadiaris 2 1 Department of Computer Science & Engineering, University of Ioannina,

More information

An Affect Prediction Approach through Depression Severity Parameter Incorporation in Neural Networks

An Affect Prediction Approach through Depression Severity Parameter Incorporation in Neural Networks INTERSPEECH 2017 August 20 24, 2017, Stockholm, Sweden An Affect Prediction Approach through Depression Severity Parameter Incorporation in Neural Networks Rahul Gupta, Saurabh Sahu +, Carol Espy-Wilson

More information

On the Performance Analysis of APIs Recognizing Emotions from Video Images of Facial Expressions

On the Performance Analysis of APIs Recognizing Emotions from Video Images of Facial Expressions On the Performance Analysis of APIs Recognizing Emotions from Video Images of Facial Expressions Ananya Bhattacharjee 1, Tanmoy Pias 2, Mahathir Ahmad 3, and Ashikur Rahman 4 Department of CSE, Bangladesh

More information

Enhanced Autocorrelation in Real World Emotion Recognition

Enhanced Autocorrelation in Real World Emotion Recognition Enhanced Autocorrelation in Real World Emotion Recognition Sascha Meudt Institute of Neural Information Processing University of Ulm sascha.meudt@uni-ulm.de Friedhelm Schwenker Institute of Neural Information

More information

Audiovisual to Sign Language Translator

Audiovisual to Sign Language Translator Technical Disclosure Commons Defensive Publications Series July 17, 2018 Audiovisual to Sign Language Translator Manikandan Gopalakrishnan Follow this and additional works at: https://www.tdcommons.org/dpubs_series

More information

Group-level Arousal and Valence Recognition in Static Images: Face, Body and Context

Group-level Arousal and Valence Recognition in Static Images: Face, Body and Context Group-level Arousal and Valence Recognition in Static Images: Face, Body and Context Wenxuan Mou, Oya Celiktutan, Hatice Gunes School of Electronic Engineering and Computer Science, Queen Mary University

More information

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis Emotion Detection Using Physiological Signals M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis May 10 th, 2011 Outline Emotion Detection Overview EEG for Emotion Detection Previous

More information

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS 1 KRISHNA MOHAN KUDIRI, 2 ABAS MD SAID AND 3 M YUNUS NAYAN 1 Computer and Information Sciences, Universiti Teknologi PETRONAS, Malaysia 2 Assoc.

More information

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation

N RISCE 2K18 ISSN International Journal of Advance Research and Innovation The Computer Assistance Hand Gesture Recognition system For Physically Impairment Peoples V.Veeramanikandan(manikandan.veera97@gmail.com) UG student,department of ECE,Gnanamani College of Technology. R.Anandharaj(anandhrak1@gmail.com)

More information

Formulating Emotion Perception as a Probabilistic Model with Application to Categorical Emotion Classification

Formulating Emotion Perception as a Probabilistic Model with Application to Categorical Emotion Classification Formulating Emotion Perception as a Probabilistic Model with Application to Categorical Emotion Classification Reza Lotfian and Carlos Busso Multimodal Signal Processing (MSP) lab The University of Texas

More information

arxiv: v1 [cs.lg] 4 Feb 2019

arxiv: v1 [cs.lg] 4 Feb 2019 Machine Learning for Seizure Type Classification: Setting the benchmark Subhrajit Roy [000 0002 6072 5500], Umar Asif [0000 0001 5209 7084], Jianbin Tang [0000 0001 5440 0796], and Stefan Harrer [0000

More information

EBCC Data Analysis Tool (EBCC DAT) Introduction

EBCC Data Analysis Tool (EBCC DAT) Introduction Instructor: Paul Wolfgang Faculty sponsor: Yuan Shi, Ph.D. Andrey Mavrichev CIS 4339 Project in Computer Science May 7, 2009 Research work was completed in collaboration with Michael Tobia, Kevin L. Brown,

More information

Deep learning and non-negative matrix factorization in recognition of mammograms

Deep learning and non-negative matrix factorization in recognition of mammograms Deep learning and non-negative matrix factorization in recognition of mammograms Bartosz Swiderski Faculty of Applied Informatics and Mathematics Warsaw University of Life Sciences, Warsaw, Poland bartosz_swiderski@sggw.pl

More information

Bio-Feedback Based Simulator for Mission Critical Training

Bio-Feedback Based Simulator for Mission Critical Training Bio-Feedback Based Simulator for Mission Critical Training Igor Balk Polhemus, 40 Hercules drive, Colchester, VT 05446 +1 802 655 31 59 x301 balk@alum.mit.edu Abstract. The paper address needs for training

More information

PHYSIOLOGICAL RESEARCH

PHYSIOLOGICAL RESEARCH DOMAIN STUDIES PHYSIOLOGICAL RESEARCH In order to understand the current landscape of psychophysiological evaluation methods, we conducted a survey of academic literature. We explored several different

More information

MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION

MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION Matei Mancas University of Mons - UMONS, Belgium NumediArt Institute, 31, Bd. Dolez, Mons matei.mancas@umons.ac.be Olivier Le Meur University of Rennes

More information

Natural Affect Data - Collection & Annotation in a Learning Context

Natural Affect Data - Collection & Annotation in a Learning Context Natural Affect Data - Collection & Annotation in a Learning Context Shazia Afzal Univ. of Cambridge Computer Laboratory Shazia.Afzal@cl.cam.ac.uk Peter Robinson Univ. of Cambridge Computer Laboratory Peter.Robinson@cl.cam.ac.uk

More information

Affect Intensity Estimation using Multiple Modalities

Affect Intensity Estimation using Multiple Modalities Affect Intensity Estimation using Multiple Modalities Amol S. Patwardhan, and Gerald M. Knapp Department of Mechanical and Industrial Engineering Louisiana State University apatwa3@lsu.edu Abstract One

More information

Get The FACS Fast: Automated FACS face analysis benefits from the addition of velocity

Get The FACS Fast: Automated FACS face analysis benefits from the addition of velocity Get The FACS Fast: Automated FACS face analysis benefits from the addition of velocity Timothy R. Brick University of Virginia Charlottesville, VA 22904 tbrick@virginia.edu Michael D. Hunter University

More information

Identity Verification Using Iris Images: Performance of Human Examiners

Identity Verification Using Iris Images: Performance of Human Examiners Identity Verification Using Iris Images: Performance of Human Examiners Kevin McGinn, Samuel Tarin and Kevin W. Bowyer Department of Computer Science and Engineering University of Notre Dame kmcginn3,

More information