Face Analysis for Emotional Interfaces
1 Face Analysis for Emotional Interfaces. Matti Pietikäinen, Guoying Zhao. Center for Machine Vision and Signal Analysis, University of Oulu, Finland. Human faces contain lots of information:
- Identity
- Demographic information (e.g., gender, age, race/ethnicity)
- Emotions (happy, sad, angry, surprise, etc.)
- Direction of attention (head pose, gaze direction)
- Visual speech (+ audio speech)
- Health (e.g., pain, autism, mental disorders)
- Even invisible information (e.g., heart rate, micro-expressions)
Face information is very important for developing emotionally intelligent systems for various applications.
2 Applications
- Education: online learning
- Health and medicine: autism, emotional well-being
- HCI: emotional chatbots, human-robot interaction
- Security/safety: interrogation, border control, safe driving
- Job interviews, video conferences
- User/customer experience analysis
- AI with emotional intelligence
Jiaojiao: the bank lobby manager robot deployed in Chinese banks to introduce financial products and help customers.
3 Affective human-robot interaction Röning J, Holappa J, Kellokumpu V, Tikanmäki A & Pietikäinen M (2014) Minotaurus: A system for affective human-robot interaction in smart environments. Cognitive Computation 6(4):
4 A perceptual interface for face analysis
5 Examples of key results from our research:
- Image and video description with local binary patterns (LBP)
- Benchmark of LBP variants and deep texture descriptors
- Face description with LBPs
- Facial expression recognition
- Pain intensity analysis from facial expressions
- Group-level happiness intensity analysis
- Recognition and spotting of spontaneous facial micro-expressions
- Remote heart rate measurement from video data
- Face anti-spoofing by detecting pulse from videos
- Visual speech analysis and animation
- Multi-modal emotion analysis

Local Binary Pattern and Contrast operators. Ojala T, Pietikäinen M & Harwood D (1996) A comparative study of texture measures with classification based on feature distributions. Pattern Recognition 29.
- ,283 citations in Google Scholar
- Most cited paper of the Pattern Recognition journal!

An example of computing LBP and C in a 3x3 neighborhood (example values, thresholded bits, binomial weights): Pattern = 11110001, LBP = 1+16+32+64+128 = 241, C = (6+7+9+8+7)/5 - (5+2+1)/3 ≈ 4.7.

Important properties: LBP is invariant to any monotonic gray-level change; computational simplicity.
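The 3x3 example above can be sketched in code. This is a minimal illustration, not the authors' implementation; the function name `lbp_and_contrast` and the clockwise neighbor ordering are assumptions of this sketch (any fixed circular ordering yields an equivalent code).

```python
def lbp_and_contrast(patch):
    """Basic 3x3 LBP code and local contrast C for the center pixel.
    Assumes the patch is not completely flat (some neighbor < center)."""
    center = patch[1][1]
    # Eight neighbors clockwise from the top-left; binomial weights 1..128.
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    values = [patch[r][c] for r, c in coords]
    # Threshold each neighbor against the center to get one bit per neighbor.
    code = sum(1 << i for i, v in enumerate(values) if v >= center)
    # Contrast C: mean of neighbors at/above the center minus mean of those below.
    above = [v for v in values if v >= center]
    below = [v for v in values if v < center]
    contrast = sum(above) / len(above) - sum(below) / len(below)
    return code, contrast
```

Running it on the example patch from the slide reproduces LBP = 241 and C ≈ 4.7, and illustrates the invariance property: adding any constant to all nine values leaves the code unchanged.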
6 Multiscale LBP. Ojala T, Pietikäinen M & Mäenpää T (2002) Multiresolution gray-scale and rotation invariant texture classification with Local Binary Patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(7) (an early version at ECCV 2000).
- 10,645 citations in Google Scholar
- Most cited paper of the top-ranking IEEE PAMI journal since, and the most cited Finnish paper in the ICT area published since
Key ideas: arbitrary circular neighborhoods; uniform patterns; multiple scales; rotation invariance; gray-scale variance as a contrast measure.

Median Robust LBP. Liu L, Lao S, Fieguth P, Guo Y, Wang X & Pietikäinen M (2016) Median robust extended local binary pattern for texture classification. IEEE Transactions on Image Processing 25(3).
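The "uniform patterns" and "rotation invariance" ideas combine into the riu2 mapping of the 2002 paper: a circular pattern with at most two 0/1 transitions is labeled by its number of one-bits, everything else collapses into a single non-uniform label. A short sketch (the function name is mine):

```python
def riu2_code(bits):
    """Rotation-invariant uniform (riu2) label of a circular binary
    pattern: the number of 1-bits if the pattern has at most two 0/1
    transitions around the circle, else the non-uniform label P + 1."""
    p = len(bits)
    # Count transitions between circularly adjacent bits.
    transitions = sum(bits[i] != bits[(i + 1) % p] for i in range(p))
    return sum(bits) if transitions <= 2 else p + 1
```

For P = 8 neighbors this maps the 256 raw codes into only 10 labels (0..8 plus the non-uniform bin), which is what makes multiscale, rotation-invariant histograms compact.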
7 Liu L, Fieguth P, Guo Y, Wang X & Pietikäinen M (2017) Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognition 62. Table from: Liu L, Fieguth P, Wang X, Pietikäinen M & Hu D (2016) Evaluation of LBP and deep texture descriptors with a new robustness benchmark. In: Computer Vision, ECCV 2016 Proceedings, Lecture Notes in Computer Science 9907.
8 LBP vs. CNN: handcrafted features vs. handcrafted architectures

LBP:
+ excellent performance of some recent variants (e.g., L. Liu et al., IEEE TIP 2016)
+ computational efficiency -> no GPU computations needed
+ robustness to some image degradations, e.g., rotations and noise (e.g., MRELBP)
+ used in various applications (e.g., biometrics, medical image analysis, motion analysis)
- most variants cannot be trained
- not so effective for macro-textures and large appearance changes

CNN:
+ effective also for macro-textures and large appearance changes
+ able to learn texture descriptors at multiple scales from data
+ end-to-end training of filters and of classification of filter responses
- needs massive amounts of training data (not available in many applications)
- computationally very expensive -> needs GPU processing
- not so robust to image rotations and noise
- currently limited use in real-world applications

What next? Combining the strengths of LBP and CNN descriptors:
- moderate amounts of training data (as required by many applications)
- more insensitivity to image transformations and degradations
- compactness and efficiency -> can be used e.g. in mobile/wearable devices with strict low-power constraints
How to achieve these goals?

A Survey of Recent Advances in Texture Representation. Li Liu, Jie Chen, Paul Fieguth, Guoying Zhao, Rama Chellappa, Matti Pietikäinen (submitted 31 Jan 2018). Texture is a fundamental characteristic of many types of images, and texture representation is one of the essential and challenging problems in computer vision and pattern recognition which has attracted extensive research attention. Since 2000, texture representations based on Bag of Words (BoW) and on Convolutional Neural Networks (CNNs) have been extensively studied with impressive performance. Given this period of remarkable evolution, this paper aims to present a comprehensive survey of advances in texture representation over the last two decades.
More than 200 major publications are cited in this survey covering different aspects of the research, including (i) problem description; (ii) recent advances in the broad categories of BoW-based, CNN-based and attribute-based methods; and (iii) evaluation issues, specifically benchmark datasets and state-of-the-art results. In retrospect of what has been achieved so far, the survey discusses open challenges and directions for future research. Comments: 27 pages and 29 figures, submitted to IJCV. Subjects: Computer Vision and Pattern Recognition (cs.CV); Learning (cs.LG). MSC classes: 68T10. Cite as: arXiv: [cs.CV].
9 Spatiotemporal LBP. Zhao G & Pietikäinen M (2007) Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(6).
- ,741 citations in Google Scholar

Face description with LBP. Ahonen T, Hadid A & Pietikäinen M (2006) Face description with local binary patterns: application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 28(12) (an early version published at ECCV 2004).
- 4,474 citations in Google Scholar (ECCV paper: 2,377 citations)
- 3rd most cited paper of the PAMI journal since
- The ECCV paper was awarded the Koenderink Prize 2014 for fundamental contributions in computer vision
10 Adapted from a slide by Anil K. Jain, 2014: Taigman et al., DeepFace. Facial expression recognition
11 Zhao G & Pietikäinen M (2007) Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 29(6). Facial expression recognition determines the emotional state of the face, regardless of the identity of the face. (a) Non-overlapping blocks (9 x 8); (b) overlapping blocks (4 x 3, overlap size = 10). (a) Block volumes; (b) LBP features; (c) concatenated features for one block volume from three orthogonal planes, combining appearance and motion.
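The three-orthogonal-planes idea (LBP-TOP) can be sketched roughly as: compute LBP histograms on XY (appearance), XT and YT (motion) slices of a video volume and concatenate them. The real method aggregates all slices within each block volume; this toy version, with helper names `lbp_image` and `lbp_top` introduced here, uses only one central slice per plane for brevity.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP codes for every interior pixel of a 2D array."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # Eight neighbor offsets, clockwise; each contributes one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int64)
    for k, (dr, dc) in enumerate(shifts):
        neighbor = img[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        codes += (neighbor >= center).astype(np.int64) << k
    return codes

def lbp_top(volume):
    """Concatenate 256-bin LBP histograms from one XY, one XT and one YT
    slice of a T x H x W video volume (a sketch of the LBP-TOP idea)."""
    t, h, w = volume.shape
    planes = [volume[t // 2],        # XY slice: appearance
              volume[:, h // 2, :],  # XT slice: horizontal motion
              volume[:, :, w // 2]]  # YT slice: vertical motion
    hists = [np.bincount(lbp_image(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists)     # 3 x 256 = 768-dimensional descriptor
```

In the full method each face video is divided into block volumes (as in the slide) and one such concatenated histogram is computed per block, giving a descriptor that encodes appearance and motion jointly.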
12 Demos for acted facial expression recognition. Guo Y, Zhao G & Pietikäinen M (2016) Dynamic facial expression recognition with atlas construction and sparse representation. IEEE Transactions on Image Processing 25(5). Towards spontaneous expressions
13 Dealing with facial occlusion: a real-world challenge. Existing occlusion detection methods work on static images. Huang X, Zhao G, Zheng W & Pietikäinen M (2012) Towards a dynamic expression recognition under facial occlusion. Pattern Recognition Letters 33(16).

Dealing with multiple views: Huang X, Zhao G & Pietikainen M (2013) Emotion recognition from facial images with arbitrary views. BMVC
14 3D action unit detection: Lip Stretcher, Lip Funneler, Lip Corner Puller, Lip Corner Depressor, Chin Raiser, Lip Tightener. Bayramoglu N, Zhao G & Pietikainen M (2013) CS-3DLBP and geometry based person independent 3D facial action unit detection. ICB. Pain Intensity Estimation
15 A pain scale measures a patient's pain intensity or other features. Automatic pain intensity estimation has received increased attention recently, with wide applications in health care ranging from monitoring patients in intensive care units to assessment of chronic lower back pain.

Figure. An example of sequences from the UNBC-McMaster Shoulder Pain Expression Archive dataset, with four video-level and two frame-level pain ratings.

Overview of the proposed second-order pooling framework, taking frame-level pain intensity estimation as an example. We propose a 2nd-order standardized moment average pooling technique to capture both the low-level information from raw features and the mid-level information from the local descriptors, and form the image representations. Relevance Vector Regression (RVR) is used to learn the mapping from image representations to the frame-level intensity (PSPI scale).

Table. Pain intensity estimation on the UNBC-McMaster Shoulder Pain Expression Archive dataset; the Mean Squared Error (MSE) and the Pearson Product-moment Correlation Coefficient (PCC) are used as quantitative measures for the methods PTS, DCT, LBP, PTS/DCT/LBP_average, PTS/DCT/LBP_RVR, and ours.

Hong X, Zhao G, Zafeiriou S, Pantic M & Pietikäinen M (2016) Capturing Correlations of Local Features for Image Representation. Neurocomputing 184.
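The pooling step can be sketched as concatenating first- and second-moment statistics of the local descriptors. This is a simplified stand-in for the paper's standardized-moment pooling, not its exact formulation, and the learned RVR mapping to PSPI intensity is omitted.

```python
import numpy as np

def second_order_pooling(descriptors):
    """Pool a set of local descriptors (N x D array) into one image
    representation by concatenating the first moment (per-dimension mean)
    with the second standardized moment (per-dimension standard
    deviation). A sketch of moment-based pooling."""
    mu = descriptors.mean(axis=0)      # low-level average of raw features
    sigma = descriptors.std(axis=0)    # mid-level spread of the descriptors
    return np.concatenate([mu, sigma])  # 2D-dimensional image vector
```

A regressor (RVR in the paper) would then be trained to map these pooled vectors to frame-level pain intensities.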
16 Pain intensity estimation with RCNN. Zhou J, Hong X, Su F & Zhao G (2016) Recurrent convolutional neural network regression for continuous pain intensity estimation in video. Proc. CVPR Workshops. Traditional static methods extract features from each frame of a video separately, which causes unstable changes and peaks among adjacent frames. We propose a real-time regression framework based on the recurrent convolutional neural network (RCNN) for automatic frame-level pain intensity estimation.

Group-level happiness intensity analysis. Huang X, Dhall A, Zhao G, Goecke R & Pietikäinen M (2015) Riesz-based volume local binary pattern and a novel group expression model for group happiness intensity analysis. Proc. the British Machine Vision Conference (BMVC 2015), Swansea, UK, 13 p.
17 Group-level happiness intensity analysis.
Background: large pools of data containing multiple people; crowd analysis and family relationships.
Purpose: infer the emotional intensity of a group of people.
Issues: feature extraction; the group expression model (GEM).
Table. Comparison of the GEM approaches GEM_avg, GEM_w, GEM_LDA and GEM_CCRF with HOG, LBP, LPQ and RVLBP features (mean absolute error).
18 Micro-expression (ME) analysis. The first paper on spontaneous micro-expressions: Pfister T, Li X, Zhao G & Pietikäinen M (2011) Recognising spontaneous facial micro-expressions. Proc. International Conference on Computer Vision (ICCV 2011), Barcelona, Spain.

Micro-expressions (Paul Ekman): rapid involuntary facial expressions that reveal suppressed affect, reveal contradictions between facial expressions and the emotional state, and enable recognition of suppressed emotions. Example: a contempt micro-expression. Micro-expression spotting and recognition
19
Attribute          Micro-expressions   Macro-expressions
Duration           < 0.5 s             0.5-4 s
Facial movements   Subtle              Clearly visible
Emotion            Repressed           Expressed
Yan WJ, Wu Q, Liang J, Chen YH & Fu X (2013) How fast are the leaked facial expressions: The duration of micro-expressions. Journal of Nonverbal Behavior 37(4).
Idea: read hidden emotions in faces
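The duration thresholds in the table translate directly into a labeling rule. This is illustrative only, not a spotting method; in practice the onset and offset of the movement must first be detected in the video.

```python
def expression_type(duration_s):
    """Label a detected facial movement by its duration, following the
    table: micro-expressions last under 0.5 s, macro-expressions 0.5-4 s."""
    if duration_s < 0.5:
        return "micro-expression"
    if duration_s <= 4.0:
        return "macro-expression"
    return "other facial movement"
```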
20 Potential applications of our method: revealing lies (interrogation); price negotiation (finding the right price). SMIC: Micro-expression Database
21 Recording setup: researcher, high-speed camera, emotional movie clips, participant. A movie clip is used to induce negative emotion (shown at 1/10 speed).
22 SMIC database. Other databases. Micro-expression recognition
23 Our research was reported in MIT Technology Review, Nov. 2015, based on our article: Machine Vision Algorithm Learns to Recognize Hidden Facial Expressions. Microexpressions reveal your deepest emotions, even when you are trying to hide them. Now a machine vision algorithm has learned to spot them, with wide-ranging applications from law enforcement to psychological analysis. Final version of the arXiv report: Li X, Hong X, Moilanen A, Huang X, Pfister T, Zhao G & Pietikäinen M (2017) Towards reading hidden emotions: A comparative study of spontaneous micro-expression spotting and recognition methods. IEEE Transactions on Affective Computing, in press (available online).

First framework: an example of a facial micro-expression (top-left) being interpolated through graph embedding (top-right); the result from which spatiotemporal local texture descriptors are extracted (bottom-right), enabling recognition using multiple kernel learning.
24 New framework. Micro-expression spotting
25 Framework. Demo for spotting facial movements.
26 Heart-rate measurement from videos. Color-based method on a face video under ambient light. Challenges: tiny signal; many noise factors. See also "Eulerian Video Magnification for Revealing Subtle Changes in the World", Wu et al.
27 Heart rate measurement from videos (remotely!). Li X, Chen J, Zhao G & Pietikäinen M (2014) Remote heart rate measurement from face videos under realistic situations. Proc. International Conference on Computer Vision and Pattern Recognition (CVPR 2014). Demo with the MAHNOB database. Heart rate variability contains information about emotions and identity
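The color-based idea can be sketched as: average the green channel over the face region in each frame, detrend the resulting trace, and take the strongest spectral peak in the plausible pulse band. This is a simplified remote-photoplethysmography sketch, not the paper's pipeline, which additionally handles face tracking, illumination rectification and motion noise.

```python
import numpy as np

def estimate_heart_rate(green_trace, fps):
    """Estimate heart rate (bpm) from the per-frame mean green value of
    the face region: remove the mean, then pick the strongest spectral
    peak in the plausible pulse band (0.7-4 Hz, i.e. 42-240 bpm)."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()                          # crude detrending
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)    # restrict to pulse band
    peak_hz = freqs[band][np.argmax(power[band])]
    return 60.0 * peak_hz
```

With a 10-second window at 30 fps the frequency resolution is 0.1 Hz, i.e. 6 bpm; longer windows sharpen the estimate but react more slowly to heart-rate changes.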
28 Generalized face anti-spoofing by detecting pulse from face videos. Xiaobai Li, Jukka Komulainen, Guoying Zhao, Pong-chi Yuen, and Matti Pietikäinen, CMVS, University of Oulu, Finland. Li X, Komulainen J, Zhao G, Yuen PC & Pietikäinen M (2016) Generalized face anti-spoofing by detecting pulse from face videos. Proc. International Conference on Pattern Recognition (ICPR 2016).

Application: face anti-spoofing for biometrics. Without countermeasures, biometric systems are vulnerable to spoofing attacks: recognition algorithms try to identify/verify instead of checking whether the biometric input is genuine. This is especially true for face biometrics, because falsifying biometric data is easy: faces are available on social media (Facebook, etc.) and are hard to hide in public. Typical attack media: paper prints, video displays, masks.
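The anti-spoofing idea can be sketched along the same lines as the heart-rate estimation above: a genuine face should concentrate its pulse-band spectral power in one dominant peak, while a paper print should show no such peak. The spectral-peak-share criterion and its threshold below are illustrative assumptions, not the paper's exact cascade (which also handles video-replay attacks differently).

```python
import numpy as np

def is_live_face(green_trace, fps, peak_share_threshold=0.5):
    """Pulse-based liveness check: return True if one dominant frequency
    carries most of the spectral power in the pulse band (0.7-4 Hz)."""
    x = np.asarray(green_trace, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = power[(freqs >= 0.7) & (freqs <= 4.0)]
    # Share of pulse-band power held by its strongest bin; the small
    # epsilon avoids 0/0 on a perfectly flat trace.
    peak_share = band.max() / (band.sum() + 1e-6)
    return bool(peak_share >= peak_share_threshold)
```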
29 The problem. 3) Mask attack: 3D printing; countermeasure cues: depth, eye movements. (REAL-F, 3DMAD)

Visual speech recognition. Zhao G, Barnard M & Pietikäinen M (2009) Lipreading with local spatiotemporal descriptors. IEEE Transactions on Multimedia 11(7). Visual speech information plays an important role in speech recognition under noisy conditions or for listeners with hearing impairment. A human listener can use visual cues, such as lip and tongue movements, to enhance the level of speech understanding. The process of using the visual modality is often referred to as lipreading, which is making sense of what someone is saying by watching the movement of their lips. The McGurk effect [McGurk and MacDonald 1976] demonstrates that inconsistency between audio and visual information can result in perceptual confusion.
30 Features in each block volume. Mouth movement representation. Demo for visual speech recognition
31 Zhou Z, Hong X, Zhao G & Pietikäinen M (2014) A compact representation of visual speech data using latent variables. IEEE Transactions on Pattern Analysis and Machine Intelligence 36(1). A generative latent variable model (GLVM) is used to model the inter-speaker variations of visual appearance and those caused by uttering.

Speech emotion analysis deals with methods to analyze vocal behavior as a marker of affect (e.g., emotions, moods, and stress). Traditionally only audio speech has been used, but according to psychological studies lip patterns also carry information about emotions. Our understanding of visual speech falls short of our understanding of the acoustic aspects of speech, so integrating audio and visual speech would be highly desirable. Our past research has focused on visual speech recognition, but it could form a basis for emotion analysis.
32 Visual speech animation. Zhou Z, Zhao G & Pietikäinen M (2012) An image-based visual speech animation system. IEEE Trans. Circuits and Systems for Video Technology 22(10). Example application area: interaction with a social robot
33 3D visual speech from video sequences. Musti U, Ouni S, Zhou Z & Pietikäinen M (2014) 3D visual speech animation from image sequences. Proc. The Ninth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2014), Bangalore, India. 3D facial shape animation
34 Teeth and tongue information. Examples. Emotions are missing!
35 Multimodal emotion analysis. Emotion is a central part of human communication and a multimodal procedure: speech, facial expressions and gestures. These characteristics have a key role in human-computer interaction (HCI).

Multi-modal emotion recognition. Huang X, Kortelainen J, Zhao G, Li X, Moilanen A, Seppänen T & Pietikäinen M (2016) Multi-modal emotion analysis from facial expressions and electroencephalogram. Computer Vision and Image Understanding 147.
36 Multimodal emotion recognition combines the complementary information from facial expressions and EEG: 1) emotion recognition from expressions in long evoked videos; 2) extraction and selection of spectral power and spectral power difference features for EEG analysis; 3) fusion of facial expressions and EEG for valence and arousal recognition on the challenging MAHNOB-HCI database; 4) a human test for perceiving the emotions and an analysis of the effects of gender on expressing and perceiving emotions. While the subjects watched the videos, facial expressions and EEG were recorded.
37 Results. Some future challenges
38 Facial behaviour analysis in-the-wild. Facial Action Unit (AU) coding in-the-wild.
39 Continuous dimensional facial behavior analysis in-the-wild. Multi-modal recognition of emotions: expressions, speech, physiological signals, gestures.
- Most of the current methods work well for near-frontal faces in proper illumination conditions -> robust descriptors for changing conditions are still needed
- Recognition of subtle spontaneous macro/micro-expressions from continuous video data is a great challenge
- Facial expressions are not culturally universal; gender and facial age also affect emotion decoding
- Joint use of different modalities: vision, speech, physiological signals (e.g., heart rate, temperature, respiration, galvanic skin response, blood pressure, EEG)
- Collaboration with experts from different disciplines, e.g., cognitive sciences, psychology, medicine
40 Role of emotions in computer-mediated interaction. A new project (with Univ. of Helsinki and Aalto Univ.): Quantifying Human Experience for Increased Intelligence Within Work Teams and in the Customer Interface. Research challenges: machine-centric -> human-centric; perception -> cognition; rational -> emotional.

Dr. Harry Shum, executive vice president of Microsoft's Artificial Intelligence (AI) and Research group (in an interview by Business Management Review, China), aims to make 10 billion USD profit with his new team by bringing emotions to AI in the next 3-5 years. Other companies: Apple bought Emotient (2016) - Prof. Marian Bartlett (UC San Diego); Affectiva - Prof. Rosalind Picard, co-founder; Facebook; FaceReader (Netherlands); Google; Realeyes (UK); Affecto (Finland).
41 Summary
- Machines with some emotional intelligence are emerging
- Various methods for emotion analysis were introduced
- Recognition of identity, emotions, and speech is needed for affective interaction
- Talking face avatars are used for the computer's (robot's) response, but should include emotions
- Micro-expressions and heart rate provide useful invisible information
- There are still many challenges before major breakthroughs in real-world applications in-the-wild
- Multimodal data is needed
Thanks!
Facial Expression Biometrics Using Tracker Displacement Features Sergey Tulyakov 1, Thomas Slowe 2,ZhiZhang 1, and Venu Govindaraju 1 1 Center for Unified Biometrics and Sensors University at Buffalo,
More informationAn assistive application identifying emotional state and executing a methodical healing process for depressive individuals.
An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. Bandara G.M.M.B.O bhanukab@gmail.com Godawita B.M.D.T tharu9363@gmail.com Gunathilaka
More information1. INTRODUCTION. Vision based Multi-feature HGR Algorithms for HCI using ISL Page 1
1. INTRODUCTION Sign language interpretation is one of the HCI applications where hand gesture plays important role for communication. This chapter discusses sign language interpretation system with present
More informationPotential applications of affective computing in the surveillance work of CCTV operators
Loughborough University Institutional Repository Potential applications of affective computing in the surveillance work of CCTV operators This item was submitted to Loughborough University's Institutional
More informationCPSC81 Final Paper: Facial Expression Recognition Using CNNs
CPSC81 Final Paper: Facial Expression Recognition Using CNNs Luis Ceballos Swarthmore College, 500 College Ave., Swarthmore, PA 19081 USA Sarah Wallace Swarthmore College, 500 College Ave., Swarthmore,
More informationPHYSIOLOGICAL RESEARCH
DOMAIN STUDIES PHYSIOLOGICAL RESEARCH In order to understand the current landscape of psychophysiological evaluation methods, we conducted a survey of academic literature. We explored several different
More informationShu Kong. Department of Computer Science, UC Irvine
Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge and philosophy 4. Fine-grained classification with
More informationSUPPRESSION OF MUSICAL NOISE IN ENHANCED SPEECH USING PRE-IMAGE ITERATIONS. Christina Leitner and Franz Pernkopf
2th European Signal Processing Conference (EUSIPCO 212) Bucharest, Romania, August 27-31, 212 SUPPRESSION OF MUSICAL NOISE IN ENHANCED SPEECH USING PRE-IMAGE ITERATIONS Christina Leitner and Franz Pernkopf
More informationShu Kong. Department of Computer Science, UC Irvine
Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge 4. Fine-grained classification with holistic representation
More informationFacial Expression Classification Using Convolutional Neural Network and Support Vector Machine
Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine Valfredo Pilla Jr, André Zanellato, Cristian Bortolini, Humberto R. Gamba and Gustavo Benvenutti Borba Graduate
More informationDetection of Facial Landmarks from Neutral, Happy, and Disgust Facial Images
Detection of Facial Landmarks from Neutral, Happy, and Disgust Facial Images Ioulia Guizatdinova and Veikko Surakka Research Group for Emotions, Sociality, and Computing Tampere Unit for Computer-Human
More informationA Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions
A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions VICTOR-EMIL NEAGOE *, ANDREI-PETRU BĂRAR *, NICU SEBE **, PAUL ROBITU * * Faculty of Electronics, Telecommunications
More informationSign Language Recognition System Using SIFT Based Approach
Sign Language Recognition System Using SIFT Based Approach Ashwin S. Pol, S. L. Nalbalwar & N. S. Jadhav Dept. of E&TC, Dr. BATU Lonere, MH, India E-mail : ashwin.pol9@gmail.com, nalbalwar_sanjayan@yahoo.com,
More informationPrince Willem Alexander and Princess Maxima. Analyzed according to the Twelve Goodfield Personality Types. by Prof. Barry A. Goodfield, Ph.D.
Prince Willem Alexander and Princess Maxima Analyzed according to the Twelve Goodfield Personality Types by Prof. Barry A. Goodfield, Ph.D. Lecture in The Netherlands Eindhoven February 25, 2013 A lot
More informationAn Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns
An Artificial Neural Network Architecture Based on Context Transformations in Cortical Minicolumns 1. Introduction Vasily Morzhakov, Alexey Redozubov morzhakovva@gmail.com, galdrd@gmail.com Abstract Cortical
More informationLearning to Rank Authenticity from Facial Activity Descriptors Otto von Guericke University, Magdeburg - Germany
Learning to Rank Authenticity from Facial s Otto von Guericke University, Magdeburg - Germany Frerk Saxen, Philipp Werner, Ayoub Al-Hamadi The Task Real or Fake? Dataset statistics Training set 40 Subjects
More informationFace Analysis : Identity vs. Expressions
Hugo Mercier, 1,2 Patrice Dalle 1 Face Analysis : Identity vs. Expressions 1 IRIT - Université Paul Sabatier 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France 2 Websourd Bâtiment A 99, route d'espagne
More informationBeyond AI: Bringing Emotional Intelligence to the Digital
Beyond AI: Bringing Emotional Intelligence to the Digital World Emotions influence every aspect of our lives how we live, work and play to the decisions we make We are surrounded by hyper-connected devices,
More informationHUMAN age estimation from faces is an important research. Attended End-to-end Architecture for Age Estimation from Facial Expression Videos
Attended End-to-end Architecture for Age Estimation from Facial Expression Videos Wenjie Pei, Hamdi Dibeklioğlu, Member, IEEE, Tadas Baltrušaitis and David M.J. Tax arxiv:7.090v [cs.cv] 3 Nov 7 Abstract
More informationHuman Visual Behaviour for Collaborative Human-Machine Interaction
Human Visual Behaviour for Collaborative Human-Machine Interaction Andreas Bulling Perceptual User Interfaces Group Max Planck Institute for Informatics Saarbrücken, Germany bulling@mpi-inf.mpg.de Abstract
More informationTemporal Context and the Recognition of Emotion from Facial Expression
Temporal Context and the Recognition of Emotion from Facial Expression Rana El Kaliouby 1, Peter Robinson 1, Simeon Keates 2 1 Computer Laboratory University of Cambridge Cambridge CB3 0FD, U.K. {rana.el-kaliouby,
More informationarxiv: v5 [cs.cv] 1 Feb 2019
International Journal of Computer Vision - Special Issue on Deep Learning for Face Analysis manuscript No. (will be inserted by the editor) Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge,
More informationarxiv: v4 [cs.cv] 1 Sep 2018
manuscript No. (will be inserted by the editor) Deep Affect Prediction in-the-wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond Dimitrios Kollias Panagiotis Tzirakis Mihalis A. Nicolaou
More informationCASME II: An Improved Spontaneous Micro-Expression Database and the Baseline Evaluation
CASME II: An Improved Spontaneous Micro-Expression Database and the Baseline Evaluation Wen-Jing Yan 1,2, Xiaobai Li 3, Su-Jing Wang 1, Guoying Zhao 3, Yong-Jin Liu 4, Yu-Hsin Chen 1,2, Xiaolan Fu 1 *
More informationEffect of Sensor Fusion for Recognition of Emotional States Using Voice, Face Image and Thermal Image of Face
Effect of Sensor Fusion for Recognition of Emotional States Using Voice, Face Image and Thermal Image of Face Yasunari Yoshitomi 1, Sung-Ill Kim 2, Takako Kawano 3 and Tetsuro Kitazoe 1 1:Department of
More informationA Common Framework for Real-Time Emotion Recognition and Facial Action Unit Detection
A Common Framework for Real-Time Emotion Recognition and Facial Action Unit Detection Tobias Gehrig and Hazım Kemal Ekenel Facial Image Processing and Analysis Group, Institute for Anthropomatics Karlsruhe
More informationAFFECTIVE COMPUTING. Affective Computing. Introduction. Guoying Zhao 1 / 67
Affective Computing Introduction Guoying Zhao 1 / 67 Your Staff Assoc. Prof. Guoying Zhao - email: guoying.zhao@oulu.fi - office: TS302 - phone: 0294 487564 - Wed. 3-4pm Dr. Xiaohua Huang (Assistant Lecturer)
More informationMultimodal Coordination of Facial Action, Head Rotation, and Eye Motion during Spontaneous Smiles
Multimodal Coordination of Facial Action, Head Rotation, and Eye Motion during Spontaneous Smiles Jeffrey F. Cohn jeffcohn@cs.cmu.edu Lawrence Ian Reed lirst6@pitt.edu suyoshi Moriyama Carnegie Mellon
More informationAutomated Tessellated Fundus Detection in Color Fundus Images
University of Iowa Iowa Research Online Proceedings of the Ophthalmic Medical Image Analysis International Workshop 2016 Proceedings Oct 21st, 2016 Automated Tessellated Fundus Detection in Color Fundus
More informationAudiovisual to Sign Language Translator
Technical Disclosure Commons Defensive Publications Series July 17, 2018 Audiovisual to Sign Language Translator Manikandan Gopalakrishnan Follow this and additional works at: https://www.tdcommons.org/dpubs_series
More informationEnhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC
Enhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC Sushma Choudhar 1, Sachin Puntambekar 2 1 Research Scholar-Digital Communication Medicaps Institute of Technology &
More informationAudio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space
2010 International Conference on Pattern Recognition Audio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space Mihalis A. Nicolaou, Hatice Gunes and Maja Pantic, Department
More informationEmotion Affective Color Transfer Using Feature Based Facial Expression Recognition
, pp.131-135 http://dx.doi.org/10.14257/astl.2013.39.24 Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition SeungTaek Ryoo and Jae-Khun Chang School of Computer Engineering
More informationFacial Expression Recognition Using Principal Component Analysis
Facial Expression Recognition Using Principal Component Analysis Ajit P. Gosavi, S. R. Khot Abstract Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However,
More informationFudan University, China
Cyber Psychosocial and Physical (CPP) Computation Based on Social Neuromechanism -Joint research work by Fudan University and University of Novi Sad By Professor Weihui Dai Fudan University, China 1 Agenda
More informationComparison of Lip Image Feature Extraction Methods for Improvement of Isolated Word Recognition Rate
, pp.57-61 http://dx.doi.org/10.14257/astl.2015.107.14 Comparison of Lip Image Feature Extraction Methods for Improvement of Isolated Word Recognition Rate Yong-Ki Kim 1, Jong Gwan Lim 2, Mi-Hye Kim *3
More informationManual Annotation and Automatic Image Processing of Multimodal Emotional Behaviours: Validating the Annotation of TV Interviews
Manual Annotation and Automatic Image Processing of Multimodal Emotional Behaviours: Validating the Annotation of TV Interviews J.-C. Martin 1, G. Caridakis 2, L. Devillers 1, K. Karpouzis 2, S. Abrilian
More informationFacial Expression Analysis for Estimating Pain in Clinical Settings
Facial Expression Analysis for Estimating Pain in Clinical Settings Karan Sikka University of California San Diego 9450 Gilman Drive, La Jolla, California, USA ksikka@ucsd.edu ABSTRACT Pain assessment
More informationFace Gender Classification on Consumer Images in a Multiethnic Environment
Face Gender Classification on Consumer Images in a Multiethnic Environment Wei Gao and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn
More informationINTELLIGENT LIP READING SYSTEM FOR HEARING AND VOCAL IMPAIRMENT
INTELLIGENT LIP READING SYSTEM FOR HEARING AND VOCAL IMPAIRMENT R.Nishitha 1, Dr K.Srinivasan 2, Dr V.Rukkumani 3 1 Student, 2 Professor and Head, 3 Associate Professor, Electronics and Instrumentation
More informationClassroom Data Collection and Analysis using Computer Vision
Classroom Data Collection and Analysis using Computer Vision Jiang Han Department of Electrical Engineering Stanford University Abstract This project aims to extract different information like faces, gender
More informationNaveen Kumar H N 1, Dr. Jagadeesha S 2 1 Assistant Professor, Dept. of ECE, SDMIT, Ujire, Karnataka, India 1. IJRASET: All Rights are Reserved 417
Physiological Measure of Drowsiness Using Image Processing Technique Naveen Kumar H N 1, Dr. Jagadeesha S 2 1 Assistant Professor, Dept. of ECE, SDMIT, Ujire, Karnataka, India 1 2 Professor, Dept. of ECE,
More informationGfK Verein. Detecting Emotions from Voice
GfK Verein Detecting Emotions from Voice Respondents willingness to complete questionnaires declines But it doesn t necessarily mean that consumers have nothing to say about products or brands: GfK Verein
More informationANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES
ANALYSIS AND DETECTION OF BRAIN TUMOUR USING IMAGE PROCESSING TECHNIQUES P.V.Rohini 1, Dr.M.Pushparani 2 1 M.Phil Scholar, Department of Computer Science, Mother Teresa women s university, (India) 2 Professor
More informationOnline Speaker Adaptation of an Acoustic Model using Face Recognition
Online Speaker Adaptation of an Acoustic Model using Face Recognition Pavel Campr 1, Aleš Pražák 2, Josef V. Psutka 2, and Josef Psutka 2 1 Center for Machine Perception, Department of Cybernetics, Faculty
More informationCAN A SMILE REVEAL YOUR GENDER?
CAN A SMILE REVEAL YOUR GENDER? Antitza Dantcheva, Piotr Bilinski, Francois Bremond INRIA Sophia Antipolis, France JOURNEE de la BIOMETRIE 2017, Caen, 07/07/17 OUTLINE 1. Why Gender Estimation? 2. Related
More informationGender Based Emotion Recognition using Speech Signals: A Review
50 Gender Based Emotion Recognition using Speech Signals: A Review Parvinder Kaur 1, Mandeep Kaur 2 1 Department of Electronics and Communication Engineering, Punjabi University, Patiala, India 2 Department
More informationFACIAL expression research has a long history and accelerated
PREPRINT SUBMITTED TO IEEE JOURNAL. 1 A Review on Facial Micro-Expressions Analysis: Datasets, Features and Metrics Walied Merghani, Adrian K. Davison, Member, IEEE, Moi Hoon Yap, Member, IEEE (This work
More informationOverview of the visual cortex. Ventral pathway. Overview of the visual cortex
Overview of the visual cortex Two streams: Ventral What : V1,V2, V4, IT, form recognition and object representation Dorsal Where : V1,V2, MT, MST, LIP, VIP, 7a: motion, location, control of eyes and arms
More informationHierarchical Local Binary Pattern for Branch Retinal Vein Occlusion Recognition
Hierarchical Local Binary Pattern for Branch Retinal Vein Occlusion Recognition Zenghai Chen 1, Hui Zhang 2, Zheru Chi 1, and Hong Fu 1,3 1 Department of Electronic and Information Engineering, The Hong
More informationComputational modeling of visual attention and saliency in the Smart Playroom
Computational modeling of visual attention and saliency in the Smart Playroom Andrew Jones Department of Computer Science, Brown University Abstract The two canonical modes of human visual attention bottomup
More informationStatistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender
Proc. IEEE Int. Conf. on Systems, Man and Cybernetics (SMC 2004), Den Haag, pp. 2203-2208, IEEE omnipress 2004 Statistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender
More informationUnderstanding Facial Expressions and Microexpressions
Understanding Facial Expressions and Microexpressions 1 You can go to a book store and find many books on bodylanguage, communication and persuasion. Many of them seem to cover the same material though:
More information