Emotion Recognition from Facial Action Points by Principal Component Analysis


Emotion Recognition from Facial Action Points by Principal Component Analysis

Anisha Halder, Garima Singh, Arindam Jati, Amit Konar, ETCE Department, Jadavpur University, Kolkata-32, India
Aruna Chakraborty, Department of Computer Science and Engineering, St. Thomas College of Engineering and Technology, Kolkata, India
Atulya K. Nagar, Department of Math and Computer Science, Liverpool Hope University, Liverpool, UK

Abstract: This paper proposes a novel approach to emotion recognition that employs 36 selected facial action points marked at specific locations on a subject's face. Facial expressions enacted by the subjects are recorded, and the corresponding changes in the marked action points are measured. The measurements reveal that the action points vary widely across facial expressions containing diverse and intensified emotions. With 10 instances of each facial expression, representative of the same emotion, collected from each of 10 subjects, we obtain a set of 100 distance matrices per emotion, where each matrix element represents the distance between two selected action points. The 100 matrices for each emotion are averaged, and the first principal component, representing the most prominent features of the average distance matrix, is evaluated. In the recognition phase, the first principal component of the distance matrix of an unknown facial expression is evaluated, and its Euclidean distance to the first principal component of each emotion is determined. The unknown expression is assigned to emotion class j if its Euclidean distance to the principal component of the j-th class is minimum. Classification of 120 facial images, with an equal number of samples for each of six emotion classes, reveals an average classification accuracy of 92.5%, the highest being recorded for relax and disgust and the lowest for fear and anger.

Keywords: Action points, Emotion Recognition, Principal Component Analysis (PCA).

I.
INTRODUCTION

Owing to its increasing scope of application in human-computer interfaces, emotion recognition forms an integral part of artificial intelligence. Several modalities of emotion recognition, including facial expression, voice, gesture and posture, have been studied in the literature. Irrespective of the modality, however, emotion recognition comprises two fundamental steps: feature extraction and classification [1]. Feature extraction refers to determining a set of features/attributes, preferably independent, which together represent a given emotional expression. Classification aims at mapping emotional features into one of several emotion classes. Among the well-known methods for determining human emotions, Fourier descriptors [2], template matching [3], and neural network techniques [4], [5] deserve special mention. Other important works on recognizing emotions from facial expressions address the selection of suitable features [2], [6], [7], [8], [9], [10], [11] and the choice of the right classifier [3], [4], [5], [10], [12], [13], [14], [15], [16], [17].

Human facial expressions, as well as facial movements, change with different emotions. The significance of facial movements can be understood from a biological point of view. The human face comprises several muscles that are responsible for the movement of every part of the face. For instance, facial attributes such as mouth- and eye-opening, commonly used for emotion recognition in the literature, correlate well with specific muscles in the face. Naturally, detecting the motion of the muscles carries more fundamental information than their aggregate effects, such as smiling or the raising of eyebrows.
This paper aims at detecting the motion of the muscles involved in smiling, raising of the eyebrows, partial and total opening of the eyes, constriction of the eyebrows, and the temporal wrinkles formed by muscle movements in the face while experiencing a specific emotion. The first research on emotion recognition from facial action points/units is by Ekman (1975) [11], followed by Ekman and Friesen [21]. In [22], the authors considered 46 basic facial action points to represent the movements of the cheek, chin and wrinkles. Unfortunately, since its introduction, the action-point representation did not receive much attention, as most researchers emphasized high-level features, comprising the joint movement of several muscles, rather than the primitive action points. Recently, researchers have come to the view that high-level features are more person-specific, vary widely because of cultural traits, and are not always free from gender and regional bias. The action points, being more fundamental and localized, offer culture-, race- and gender-independent features that contain sufficient information for the automatic recognition of emotions.

The only difference between action-unit based emotion recognition and the other schemes is perhaps that action-point based recognition is easier for machines, while high-level features are the usual basis of recognition by humans. Recently, Pantic and her research team [17], [18], [19], [20] employed action units in emotion recognition, and noted that their methods outperformed other emotion recognition schemes based on high-level features. Among other interesting works on action-point based emotion recognition, those undertaken in [21], [22], [24], [25], [26], [27] deserve special mention.

This paper provides an alternative approach to emotion recognition from 36 selected facial action points marked at specific locations on a subject's face (Fig. 1). Facial expressions enacted by 10 subjects are recorded, and the changes in the marked action points are measured. The measurements indicate that the action points vary widely across facial expressions containing diverse emotions. With 10 instances of each facial expression, representative of the same emotion, collected from each of the 10 subjects, we obtain a set of 100 distance matrices per emotion, where each matrix element represents the distance between two selected action points. The 100 matrices for each emotion are averaged, and the first principal component, representing the most prominent features of the average distance matrix, is evaluated. During the recognition phase, the first principal component of the distance matrix of a given facial expression is evaluated, and its Euclidean distance to the first principal component of each emotion is determined. The unknown facial expression is classified into emotion class j if the Euclidean distance between its principal component and that of the j-th emotion class is minimum.
Classification with 120 facial images, containing an equal number of samples for each of six emotion classes, reveals an average classification accuracy of 92.5%, the highest being for relax and disgust and the lowest for fear and anger.

The paper is divided into six sections. Section II provides the methodology of the proposed facial action-point based model. Section III gives a brief overview of Principal Component Analysis (PCA). Section IV describes the experiments in two subsections: the former deals with the feature extraction methods, while the results are presented in the latter. Section V analyses the performance of the proposed scheme, and conclusions are listed in Section VI.

II. METHODOLOGY

We now briefly discuss the main steps involved in emotion recognition from facial action points, based on measurements of the distance matrices over 36 facial feature points for 10 subjects, each providing 10 instances of facial expression for a particular emotion. In this paper, we consider 6 basic emotions: relaxation, happiness, disgust, anger, fear and surprise. We need to classify the facial expression of an unknown person into one of the 6 emotion classes. The methodology of our proposed algorithm has two main parts: the first is the construction of the facial-point based model; the second is the evaluation of the emotion of an unknown person.

A. Facial-point based model construction

To construct the facial action-point based model, we selected 10 subjects and marked their faces with 36 facial action points at specific locations, as in Fig. 1. The different facial expressions they enact are recorded, and the changes in the coordinates of the marked action points are measured for different emotions. The distance between any two selected action points is taken as the feature in our approach. Thus, for each facial expression marked with 36 points, we obtain a distance matrix.

Fig.
1  Facial action points in different emotions: (a) Relax, (b) Anger, (c) Disgust, (d) Fear, (e) Happy, (f) Surprise

Considering 10 instances of each facial expression, representative of the same emotion, over 10 subjects, we obtain a set of 100 such distance matrices. The 100 matrices for each emotion are averaged, and the first principal component, representing the most prominent features of the average distance matrix, is evaluated. Thus, for 6 emotions, 6 such principal components are obtained. The emotion of an unknown facial expression can then be evaluated using these 6 principal components obtained from the 10 known subjects. The main steps of constructing the action-point based model are as follows:

1. Let there be n subjects, each marked with 36 points on the face. Each subject exhibits k instances of each facial expression, representative of the same emotion. So, for m emotions, each subject provides m × k facial expressions.

2. The Euclidean distance between each pair of points on a particular face is determined first. Thus, for each facial expression, a distance matrix is obtained. Let A_m be the distance matrix of the m-th emotion:

   A_m = [d_ij], where i, j ∈ [1, 36].

   For each emotion, we have to evaluate n × k such matrices A_m.

3. The n × k matrices for each emotion are then averaged. Let us denote this averaged matrix by M_m, which has the same dimension (36 × 36) as A_m. For m emotions, m such matrices M_m are obtained.

4. The first principal component P_m, representing the most prominent features of the average distance matrix M_m, is evaluated for each emotion m = 1 to 6. By the principle of PCA, the dimension of the first principal component P_m is 36 × 1. These m principal components are kept for the recognition of unknown emotions.

B. Evaluating the emotion of an unknown facial expression

During the recognition phase, the distance matrix is evaluated from the 36 points marked on the unknown facial expression. The first principal component of this distance matrix is then evaluated, and its Euclidean distance to the first principal component of each of the 6 emotions obtained earlier is determined. The unknown facial expression is classified into emotion class j if the Euclidean distance between its principal component and that of the j-th emotion class is minimum. The main steps for recognizing the emotion of an unknown facial expression are as follows:

1. Evaluate the distance matrix for the unknown facial expression over the 36 facial action points:

   A_unknown = [d_ij], where i, j ∈ [1, 36].

2. The first principal component P_unknown of A_unknown is then evaluated.
By the principle of PCA, the dimension of P_unknown is 36 × 1.

3. The Euclidean distance between the obtained principal component P_unknown and each P_m stored earlier is evaluated individually.

4. The unknown facial expression is classified into emotion class j if the Euclidean distance between P_unknown and the principal component of the j-th emotion class is minimum.

III. AN OUTLINE OF PRINCIPAL COMPONENT ANALYSIS

Principal Component Analysis (PCA) [30] is a method of representing data points in compact form by reducing their dimension. Let us, for instance, consider a data set D = {x : x ∈ R^N}, where each data point x lies in N-dimensional space. Suppose we are interested in finding the first two principal components of D. This is illustrated in Figure 2.

Fig. 2  Directions of the first two principal components (PC_1, PC_2) of 2-dimensional data points (x_1, x_2)

The first principal component PC_1 is the direction along which the points in D have maximum variance. The second principal component PC_2 is the direction orthogonal to PC_1 along which the variance is maximum. The principle stated above extends in this manner to determine the third, fourth and higher principal components. It is possible to transform the data points x to y by an effective transformation x → y, where x ∈ R^N and y ∈ R^P, P < N. This is accomplished by projecting the data points onto the principal subspace formed by the first p principal components. This is the basis of dimensionality reduction, and this way of representing data is referred to as subspace decomposition. Let X̂ be the reconstructed data point obtained from the projections y_i onto the p largest principal component directions q_i:

   X̂ = x̄ + Σ_{i=1}^{p} y_i q_i                          (1)

The error vector e = X − X̂ is orthogonal to the approximate data vector X̂. This is called the principle of orthogonality.
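Eq. (1) and the principle of orthogonality can be checked numerically. The following is a small self-contained demonstration on synthetic data (the variable names are ours, not the paper's); it projects one point onto the first two principal directions, reconstructs it, and verifies that the error is orthogonal to the principal subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))            # 200 data points in R^5
xbar = X.mean(axis=0)

# Principal directions = eigenvectors of the covariance matrix
cov = np.cov(X - xbar, rowvar=False)
eigvals, Q = np.linalg.eigh(cov)         # eigh returns ascending eigenvalues
Q = Q[:, ::-1]                           # reorder to descending variance

p = 2                                    # keep the first two principal components
Qp = Q[:, :p]
x = X[0]
y = Qp.T @ (x - xbar)                    # projection onto the principal subspace
x_hat = xbar + Qp @ y                    # reconstruction, as in eq. (1)
e = x - x_hat                            # error vector

# The error is orthogonal to the reconstructed (centred) vector
assert abs(e @ (x_hat - xbar)) < 1e-9
```

The assertion holds exactly up to floating-point rounding, since the residual e = (I − Q_p Q_pᵀ)(x − x̄) lies in the orthogonal complement of the span of q_1, ..., q_p.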

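Putting the methodology of Section II together with the PCA machinery of Section III, the model construction and recognition steps can be sketched as follows. This is a hypothetical NumPy illustration: the function names are ours, and taking the first principal component of the 36 × 36 averaged matrix via an eigen-decomposition of the covariance of its rows is one plausible reading of the paper's procedure, not necessarily the authors' exact implementation.

```python
import numpy as np

def distance_matrix(points):
    """36 x 36 matrix of pairwise Euclidean distances between action points.
    points: (36, 2) array of (x, y) coordinates of the marked action points."""
    diff = points[:, None, :] - points[None, :, :]   # (36, 36, 2)
    return np.sqrt((diff ** 2).sum(axis=-1))

def first_principal_component(M):
    """First principal component (a 36-vector) of a distance matrix M:
    the eigenvector of the covariance of M's rows with the largest eigenvalue."""
    centred = M - M.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)                 # ascending eigenvalue order
    return eigvecs[:, -1]

def build_model(matrices_per_emotion):
    """matrices_per_emotion: dict emotion -> list of n*k (36, 36) matrices.
    Returns dict emotion -> first PC of the averaged distance matrix (steps 2-4)."""
    return {emo: first_principal_component(np.mean(mats, axis=0))
            for emo, mats in matrices_per_emotion.items()}

def classify(unknown_matrix, model):
    """Assign the emotion whose stored PC is nearest (Euclidean) to the PC of
    the unknown face.  Eigenvectors are sign-ambiguous, so both orientations
    of the unknown PC are compared."""
    p = first_principal_component(unknown_matrix)
    dist = lambda q: min(np.linalg.norm(p - q), np.linalg.norm(p + q))
    return min(model, key=lambda emo: dist(model[emo]))
```

The sign check in classify is our addition: an eigen-solver may return either orientation of a principal direction, so a plain Euclidean comparison of raw eigenvectors would be unstable without it.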
IV. EXPERIMENTS AND RESULTS

In this section, we present the experimental details of emotion recognition using the principles introduced in Section II. We consider 6 emotion classes (i.e., m = 6): anger, fear, disgust, surprise, happiness and relaxation. The experiment is conducted with two sets of facial expressions: a) a first set of 10 × 10 × 6 = 600 facial expressions from n (= 10) subjects, used for constructing the facial action-point based model, and b) a second set of 120 facial expressions, used to validate the proposed emotion classification scheme. The 36 facial points are used as features. We now briefly overview the main steps of feature extraction, followed by the experimental results.

A. Feature Extraction

Feature extraction is the fundamental step in emotion recognition. This paper considers the extraction of features from emotionally rich facial expressions enacted by the subjects. Existing research results [15], [23] reveal that, apart from the eyes, lips and eyebrow regions, which are the most important facial regions for the manifestation of emotion, some further regions, namely the forehead and part of the cheek region, are also important indicators of emotional changes in the face. This motivated us to mark 36 points at specific locations on the face. Fig. 3 shows the locations of these feature points on a selected facial image.

Fig. 3  Locations of facial action points

These selected points, and hence their coordinates, shift with the change of facial actions under different emotions. To keep the measurements of an emotional expression normalized and free from variation of the distance to the camera focal plane, we consider the distances between each pair of points as features. So, for a particular emotional face, we have a distance matrix over the 36 feature points. The emotion recognition problem addressed here attempts to determine the emotion of an unknown person from his/her facial expression. For an unknown face, too, we first evaluate the coordinates of the marked points and then the Euclidean distance between each pair of points to construct the distance matrix, as indicated below:

   d_1,1   d_1,2   d_1,3   ...  d_1,36
   d_2,1   d_2,2   d_2,3   ...  d_2,36
   ...
   d_36,1  d_36,2  d_36,3  ...  d_36,36

B. Results

As mentioned earlier, the action-point based model is validated with a set of 20 facial expressions for each of the 6 emotions, so a total of 120 unknown emotional faces are presented to the model for recognition. The experiment shows that, out of the 20 facial expressions of each emotion, our model correctly classifies all 20 faces for relax and disgust, 19 faces for surprise, 18 faces for happiness, and 17 faces for anger and fear. The results of emotion classification by the proposed approach are summarized in Table I, which provides the percentage of classification (and misclassification) for each emotion. It is apparent from the Table that there is a disparity among the diagonal entries of the confusion matrix, indicating that the percentage classification is not uniform across emotions. The emotions relax and disgust have a classification accuracy of 100%, while anger and fear have a poorer classification accuracy of barely 85%. The disparity in classification is due to individual differences in expressing an emotion. Naturally, when the principal component (PC) obtained from an unknown facial instance is compared with the principal components of the different emotion classes, another class with a similar PC sometimes matches the test PC better.

TABLE I. CONFUSION MATRIX OF EMOTION CLASSIFICATION USING PCA

   In/Out      Relax   Anger   Disgust   Happiness   Fear   Surprise
   Relax
   Anger
   Disgust
   Happiness
   Fear
   Surprise

The experiments on emotional facial expressions show that the classification accuracies for Relax, Anger, Disgust, Happiness, Fear and Surprise are respectively 100%, 85%, 100%, 90%, 85% and 95%. So, the overall accuracy of recognizing emotion from facial action points with our proposed model is 92.5%.
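The per-class and overall accuracies quoted above follow directly from the counts of correctly classified faces; a quick arithmetic check:

```python
# Correctly classified faces out of 20 test expressions per emotion,
# as reported in the Results text
correct = {"Relax": 20, "Anger": 17, "Disgust": 20,
           "Happiness": 18, "Fear": 17, "Surprise": 19}

per_class = {emotion: 100.0 * n / 20 for emotion, n in correct.items()}
overall = sum(per_class.values()) / len(per_class)

print(per_class["Relax"], per_class["Anger"], per_class["Surprise"])  # 100.0 85.0 95.0
print(overall)                                                        # 92.5
```

The average of the six per-class accuracies (100, 85, 100, 90, 85, 95) is exactly 92.5%, matching the figure in the abstract.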

V. PERFORMANCE ANALYSIS

Two statistical tests, namely McNemar's test and the Friedman test, have been performed to compare the performance of our algorithm with 4 standard techniques for emotion recognition. The study was performed with our own face database, as other face databases do not include action points embedded in the facial expressions.

A. The McNemar's test

Let f_A and f_B be the classifiers obtained by algorithms A and B when both algorithms use a common training set R. We define the null hypothesis:

   Pr_{R,x}[f_A(x) = f(x)] = Pr_{R,x}[f_B(x) = f(x)],        (2)

where f(x) is the experimentally induced target function mapping any data point x onto one of K emotion classes. Let n_01 be the number of examples misclassified by f_A but not by f_B, and n_10 the number of examples misclassified by f_B but not by f_A. Then, following (2), we define the statistic

   Z = (|n_01 − n_10| − 1)² / (n_01 + n_10).                 (3)

Let A be our PCA-based algorithm and B one of 4 standard algorithms: Support Vector Machine (SVM), Back-Propagation, Multi-Layer Perceptron (MLP), and Radial Basis Function (RBF) network. We thus evaluate Z_1 through Z_4, where Z_j denotes the comparator statistic of misclassification between PCA (algorithm A) and the j-th of the 4 algorithms (algorithm B).

TABLE II. STATISTICAL COMPARISON 1: MCNEMAR'S TEST (REFERENCE ALGORITHM = PCA)

   Classifier algorithm      n_01    n_10    Z_j    Hypothesis accepted/rejected
   SVM                                              accepted
   Back-Propagation                                 rejected
   MLP                                              rejected
   RBF                                              rejected

Table II is evaluated to obtain Z_1 through Z_4, and the hypothesis is rejected if Z_j > χ²_{1, 0.95} = 3.841, which indicates that the probability that the null hypothesis is correct is below the 5% level, and so we reject it. If the hypothesis is not rejected, we consider it accepted. The acceptance or rejection decision is also included in Table II.
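The two test statistics used in this section, eq. (3) above and the Friedman statistic of eq. (4) below, are simple enough to compute directly. The following sketch is ours: the helper names and the illustrative n_01, n_10 counts are hypothetical, not the (unreported) values of Table II; only the rank vector comes from Table III.

```python
def mcnemar_z(n01, n10):
    """McNemar statistic, eq. (3): approximately chi-squared with 1 dof
    under the null hypothesis that the two classifiers err alike."""
    return (abs(n01 - n10) - 1) ** 2 / (n01 + n10)

def friedman_stat(ranks, n_datasets=1):
    """Friedman statistic, eq. (4), for k algorithms ranked over N datasets."""
    k = len(ranks)
    return (12.0 * n_datasets / (k * (k + 1))
            * (sum(r * r for r in ranks) - k * (k + 1) ** 2 / 4.0))

CHI2_1_095 = 3.841   # chi-squared critical value, 1 dof, 95% level

# Illustrative (made-up) disagreement counts: 12 vs 3 -> Z = 64/15 > 3.841
print(mcnemar_z(12, 3) > CHI2_1_095)   # True

# Ranks of the five classifiers on the single database (Table III)
print(friedman_stat([1, 2, 3, 4, 5]))  # 4.0
```

With ranks 1 through 5 over one database, the statistic evaluates to (12/30)(55 − 45) = 4, reproducing the value of 4 quoted in the Friedman-test discussion below.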
B. Modified Friedman Test

In the Friedman test, N, the number of databases, is usually large. Because of the lack of databases containing action points, we tested our results only with the Indian (Jadavpur University) database, so the Friedman test cannot be applied directly to the present problem. We therefore propose a modified Friedman test where, instead of taking the average rank of each algorithm over many databases, we consider the rank obtained from the classification accuracy on our database alone. The null hypothesis here states that all the algorithms are equivalent, so their ranks r_j should be equal. The Friedman statistic

   χ²_F = [12N / (k(k + 1))] [Σ_j R_j² − k(k + 1)²/4]        (4)

is distributed according to χ²_F with k − 1 degrees of freedom. Here k = 5 and N = 1, and we consider the percentage accuracy of classification as the basis of the rank. Table III provides the percentage accuracy of classification with respect to the single database.

TABLE III. STATISTICAL COMPARISON 2: FRIEDMAN TEST

   Classifier algorithm j   Classification accuracy   Rank obtained through experiments
   PCA                      92.5%                     1
   SVM                      88.3%                     2
   Back-Propagation                                   3
   MLP                      82.5%                     4
   RBF                      78.3%                     5

From Table III we obtain the ranks R_j. Evaluating χ²_F using (4), with N = 1 and k = 5, gives 4. Since 4 > χ²_{1, 0.95} = 3.841, the null hypothesis, claiming that all the algorithms are equivalent, is rejected, and therefore the performances of the algorithms are determined by their ranks alone. It is clear from the table that the rank of PCA is 1, so PCA outperforms all the other algorithms by the Friedman test.

VI. CONCLUSION

The distance matrix between each pair of action points is the central feature in our research. The distance matrix of a facial expression is real and symmetric, and provides a general framework for our methodology using Principal Component Analysis. The results are appealing. The diagonal entries of the confusion matrix reveal that for the emotions relax and disgust the classification accuracy is very high, at 100%,

while for a few emotions, which are often found mixed with other simple emotions, the results are not so convincing. Statistical tests have been used to validate that the null hypothesis, claiming that the performance of our algorithm is equivalent to that of the other algorithms, is false and is rejected within a confidence limit of 95%. The results obtained from McNemar's test confirm that our proposed algorithm outperforms the three competitive algorithms for which the null hypothesis was rejected. The results obtained by the Friedman test also indicate that our algorithm performs better than the other standard algorithms popularly used in the literature of emotional intelligence.

REFERENCES

[1] A. Konar and A. Chakraborty (Eds.), Advances in Emotion Recognition, Wiley-Blackwell, 2011 (to appear).
[2] O. A. Uwechue and S. A. Pandya, Human Face Recognition Using Third-Order Synthetic Neural Networks, Boston, MA: Kluwer.
[3] B. Biswas, A. K. Mukherjee, and A. Konar, "Matching of digital images using fuzzy logic," AMSE Publication, vol. 35, no. 2, pp. 7-11.
[4] A. Bhavsar and H. M. Patel, "Facial Expression Recognition Using Neural Classifier and Fuzzy Mapping," IEEE Indicon 2005 Conference, Chennai, India.
[5] Y. Guo and H. Gao, "Emotion Recognition System in Images Based On Fuzzy Neural Network and HMM," Proc. 5th IEEE Int. Conf. on Cognitive Informatics (ICCI'06), IEEE 2006.
[6] M. Rizon, M. Karthigayan, S. Yaacob and R. Nagarajan, "Japanese face emotions classification using lip features," Geometric Modelling and Imaging (GMAI'07), IEEE. Universiti Malaysia Perlis, Jejawi, Perlis, Malaysia.
[7] H. Kobayashi and F. Hara, "Measurement of the strength of six basic facial expressions by neural network," Trans. Jpn. Soc. Mech. Eng. (C), vol. 59, no. 567.
[8] H. Kobayashi and F. Hara, "Recognition of mixed facial expressions by neural network," Trans. Jpn. Soc. Mech. Eng. (C), vol. 59, no. 567.
[9] H. Kobayashi and F. Hara, "The recognition of basic facial expressions by neural network," Trans. Soc. Instrum. Contr. Eng., vol. 29, no. 1.
[10] H. Tsai, Y. Lai and Y. Zhang, "Using SVM to design facial expression recognition for shape and texture features," Proc. Ninth International Conference on Machine Learning and Cybernetics, Qingdao, July 2010, IEEE 2010.
[11] P. Ekman and W. V. Friesen, Unmasking the Face: A Guide to Recognizing Emotions From Facial Clues, Englewood Cliffs, NJ: Prentice-Hall.
[12] H. Zhao, Z. Wang and J. Men, "Facial Complex Expression Recognition Based on Fuzzy Kernel Clustering and Support Vector Machines," Third International Conference on Natural Computation (ICNC 2007), IEEE.
[13] Y. Lee, C. W. Han and J. Shim, "Fuzzy Neural Networks and Fuzzy Integral Approach to Curvature-based Component Range Facial Recognition," 2007 International Conference on Convergence Information Technology, IEEE 2007.
[14] J. M. Sun, X. S. Pei and S. S. Zhou, "Facial Emotion Recognition in modern distant system using SVM," Proc. Seventh International Conference on Machine Learning and Cybernetics, Kunming, July 2008, IEEE 2008.
[15] S. Das, A. Halder, P. Bhowmik, A. Chakraborty, A. Konar, and A. K. Nagar, "Voice and Facial Expression Based Classification of Emotion Using Linear Support Vector," 2009 Second International Conference on Developments in eSystems Engineering, IEEE 2009.
[16] R. W. Picard, E. Vyzas, and J. Healey, "Toward Machine Emotional Intelligence: Analysis of Affective Physiological State," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 10.
[17] M. Pantic and I. Patras, "Dynamics of Facial Expression: Recognition of Facial Actions and Their Temporal Segments From Face Profile Image Sequences," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 36, no. 2, 2006.
[18] M. Pantic, I. Patras and L. J. M. Rothkrantz, "Facial action recognition in face profile image sequences," Proc. IEEE Int. Conf. ICME.
[19] M. F. Valstar and M. Pantic, "Combined Support Vector Machines and Hidden Markov Models for modelling Facial Action Temporal Dynamics," conference proceedings article.
[20] M. Valstar and M. Pantic, "Fully Automatic Facial Action Unit Detection and Temporal Analysis," Proc. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06).
[21] P. Ekman and W. Friesen, Facial Action Coding System, Palo Alto: Consulting Psychologists Press.
[22] P. Ekman, "Methods For Measuring Facial Action," in K. R. Scherer and P. Ekman (Eds.), Handbook of Methods in Nonverbal Behavior Research, New York: Cambridge University Press.
[23] A. Chakraborty, A. Konar, U. K. Chakraborty and A. Chatterjee, "Emotion Recognition From Facial Expressions and Its Control Using Fuzzy Logic," IEEE Transactions on Systems, Man and Cybernetics, IEEE 2009.
[24] M. Khademi, M. T. Manzuri-Shalmani, M. H. Kiapour, and A. A. Kiaei, "Recognizing Combinations of Facial Action Units with Different Intensity Using a Mixture of Hidden Markov Models and Neural Network," in N. El Gayar, J. Kittler, and F. Roli (Eds.), MCS 2010, LNCS 5997.
[25] Y. Tong, W. Liao, and Q. Ji, "Inferring Facial Action Units with Causal Relations," Proc. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06).
[26] Y. Tian, T. Kanade and J. F. Cohn, "Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity," journal article.
[27] J. F. Cohn, A. J. Zlochower, J. Lien and T. Kanade, "Automated face analysis by feature point tracking has high concurrent validity with manual FACS coding," Psychophysiology, vol. 36, 1999, Cambridge University Press.
[28] M. S. Bartlett, P. A. Viola, T. J. Sejnowski, B. A. Golomb, J. Larsen, J. C. Hager, and P. Ekman, "Classifying Facial Action," Advances in Neural Information Processing Systems 8 (NIPS'96), D. Touretzky, M. Mozer and M. Hasselmo (Eds.), MIT Press, 1996.
[29] C.-L. Huang and Y.-M. Huang, "Facial Expression Recognition using Model-Based Feature Extraction and Action Parameters Classification," Journal of Visual Communication and Image Representation, vol. 8, no. 3, September 1997.
[30] A. Konar, Computational Intelligence: Principles, Techniques and Applications, Springer, Berlin Heidelberg, New York.
[31] J. Demsar, "Statistical Comparisons of Classifiers over Multiple Data Sets," Journal of Machine Learning Research 7, pp. 1-30.
[32] T. G. Dietterich, "Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms."


More information

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China

A Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China A Vision-based Affective Computing System Jieyu Zhao Ningbo University, China Outline Affective Computing A Dynamic 3D Morphable Model Facial Expression Recognition Probabilistic Graphical Models Some

More information

Facial Event Classification with Task Oriented Dynamic Bayesian Network

Facial Event Classification with Task Oriented Dynamic Bayesian Network Facial Event Classification with Task Oriented Dynamic Bayesian Network Haisong Gu Dept. of Computer Science University of Nevada Reno haisonggu@ieee.org Qiang Ji Dept. of ECSE Rensselaer Polytechnic Institute

More information

Gender Based Emotion Recognition using Speech Signals: A Review

Gender Based Emotion Recognition using Speech Signals: A Review 50 Gender Based Emotion Recognition using Speech Signals: A Review Parvinder Kaur 1, Mandeep Kaur 2 1 Department of Electronics and Communication Engineering, Punjabi University, Patiala, India 2 Department

More information

A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions

A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions VICTOR-EMIL NEAGOE *, ANDREI-PETRU BĂRAR *, NICU SEBE **, PAUL ROBITU * * Faculty of Electronics, Telecommunications

More information

Recognition of Facial Expressions for Images using Neural Network

Recognition of Facial Expressions for Images using Neural Network Recognition of Facial Expressions for Images using Neural Network Shubhangi Giripunje Research Scholar, Dept.of Electronics Engg., GHRCE, Nagpur, India Preeti Bajaj Senior IEEE Member, Professor, Dept.of

More information

Recognition of facial expressions using Gabor wavelets and learning vector quantization

Recognition of facial expressions using Gabor wavelets and learning vector quantization Engineering Applications of Artificial Intelligence 21 (2008) 1056 1064 www.elsevier.com/locate/engappai Recognition of facial expressions using Gabor wavelets and learning vector quantization Shishir

More information

Study on Aging Effect on Facial Expression Recognition

Study on Aging Effect on Facial Expression Recognition Study on Aging Effect on Facial Expression Recognition Nora Algaraawi, Tim Morris Abstract Automatic facial expression recognition (AFER) is an active research area in computer vision. However, aging causes

More information

Classification and attractiveness evaluation of facial emotions for purposes of plastic surgery using machine-learning methods and R

Classification and attractiveness evaluation of facial emotions for purposes of plastic surgery using machine-learning methods and R Classification and attractiveness evaluation of facial emotions for purposes of plastic surgery using machine-learning methods and R erum 2018 Lubomír Štěpánek 1, 2 Pavel Kasal 2 Jan Měšťák 3 1 Institute

More information

Generalization of a Vision-Based Computational Model of Mind-Reading

Generalization of a Vision-Based Computational Model of Mind-Reading Generalization of a Vision-Based Computational Model of Mind-Reading Rana el Kaliouby and Peter Robinson Computer Laboratory, University of Cambridge, 5 JJ Thomson Avenue, Cambridge UK CB3 FD Abstract.

More information

Get The FACS Fast: Automated FACS face analysis benefits from the addition of velocity

Get The FACS Fast: Automated FACS face analysis benefits from the addition of velocity Get The FACS Fast: Automated FACS face analysis benefits from the addition of velocity Timothy R. Brick University of Virginia Charlottesville, VA 22904 tbrick@virginia.edu Michael D. Hunter University

More information

1. INTRODUCTION. Vision based Multi-feature HGR Algorithms for HCI using ISL Page 1

1. INTRODUCTION. Vision based Multi-feature HGR Algorithms for HCI using ISL Page 1 1. INTRODUCTION Sign language interpretation is one of the HCI applications where hand gesture plays important role for communication. This chapter discusses sign language interpretation system with present

More information

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS

FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 FACIAL EXPRESSION RECOGNITION FROM IMAGE SEQUENCES USING SELF-ORGANIZING MAPS Ayako KATOH*, Yasuhiro FUKUI**

More information

Automatic Facial Expression Recognition Using Boosted Discriminatory Classifiers

Automatic Facial Expression Recognition Using Boosted Discriminatory Classifiers Automatic Facial Expression Recognition Using Boosted Discriminatory Classifiers Stephen Moore and Richard Bowden Centre for Vision Speech and Signal Processing University of Surrey, Guildford, GU2 7JW,

More information

Analysis of Speech Recognition Techniques for use in a Non-Speech Sound Recognition System

Analysis of Speech Recognition Techniques for use in a Non-Speech Sound Recognition System Analysis of Recognition Techniques for use in a Sound Recognition System Michael Cowling, Member, IEEE and Renate Sitte, Member, IEEE Griffith University Faculty of Engineering & Information Technology

More information

A Common Framework for Real-Time Emotion Recognition and Facial Action Unit Detection

A Common Framework for Real-Time Emotion Recognition and Facial Action Unit Detection A Common Framework for Real-Time Emotion Recognition and Facial Action Unit Detection Tobias Gehrig and Hazım Kemal Ekenel Facial Image Processing and Analysis Group, Institute for Anthropomatics Karlsruhe

More information

Temporal Context and the Recognition of Emotion from Facial Expression

Temporal Context and the Recognition of Emotion from Facial Expression Temporal Context and the Recognition of Emotion from Facial Expression Rana El Kaliouby 1, Peter Robinson 1, Simeon Keates 2 1 Computer Laboratory University of Cambridge Cambridge CB3 0FD, U.K. {rana.el-kaliouby,

More information

ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network

ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network International Journal of Electronics Engineering, 3 (1), 2011, pp. 55 58 ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network Amitabh Sharma 1, and Tanushree Sharma 2

More information

A Possibility for Expressing Multi-Emotion on Robot Faces

A Possibility for Expressing Multi-Emotion on Robot Faces The 5 th Conference of TRS Conference 26-27 May 2011, Bangkok, Thailand A Possibility for Expressing Multi-Emotion on Robot Faces Trin Veerasiri 1*, Djitt Laowattana 2 Institute of Field robotics, King

More information

EECS 433 Statistical Pattern Recognition

EECS 433 Statistical Pattern Recognition EECS 433 Statistical Pattern Recognition Ying Wu Electrical Engineering and Computer Science Northwestern University Evanston, IL 60208 http://www.eecs.northwestern.edu/~yingwu 1 / 19 Outline What is Pattern

More information

PERFORMANCE ANALYSIS OF THE TECHNIQUES EMPLOYED ON VARIOUS DATASETS IN IDENTIFYING THE HUMAN FACIAL EMOTION

PERFORMANCE ANALYSIS OF THE TECHNIQUES EMPLOYED ON VARIOUS DATASETS IN IDENTIFYING THE HUMAN FACIAL EMOTION PERFORMANCE ANALYSIS OF THE TECHNIQUES EMPLOYED ON VARIOUS DATASETS IN IDENTIFYING THE HUMAN FACIAL EMOTION Usha Mary Sharma 1, Jayanta Kumar Das 2, Trinayan Dutta 3 1 Assistant Professor, 2,3 Student,

More information

Hierarchical Age Estimation from Unconstrained Facial Images

Hierarchical Age Estimation from Unconstrained Facial Images Hierarchical Age Estimation from Unconstrained Facial Images STIC-AmSud Jhony Kaesemodel Pontes Department of Electrical Engineering Federal University of Paraná - Supervisor: Alessandro L. Koerich (/PUCPR

More information

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization Daniel McDuff (djmcduff@mit.edu) MIT Media Laboratory Cambridge, MA 02139 USA Abstract This paper demonstrates

More information

Utilizing Posterior Probability for Race-composite Age Estimation

Utilizing Posterior Probability for Race-composite Age Estimation Utilizing Posterior Probability for Race-composite Age Estimation Early Applications to MORPH-II Benjamin Yip NSF-REU in Statistical Data Mining and Machine Learning for Computer Vision and Pattern Recognition

More information

A Semi-supervised Approach to Perceived Age Prediction from Face Images

A Semi-supervised Approach to Perceived Age Prediction from Face Images IEICE Transactions on Information and Systems, vol.e93-d, no.10, pp.2875 2878, 2010. 1 A Semi-supervised Approach to Perceived Age Prediction from Face Images Kazuya Ueki NEC Soft, Ltd., Japan Masashi

More information

Recognising Emotions from Keyboard Stroke Pattern

Recognising Emotions from Keyboard Stroke Pattern Recognising Emotions from Keyboard Stroke Pattern Preeti Khanna Faculty SBM, SVKM s NMIMS Vile Parle, Mumbai M.Sasikumar Associate Director CDAC, Kharghar Navi Mumbai ABSTRACT In day to day life, emotions

More information

Enhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC

Enhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC Enhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC Sushma Choudhar 1, Sachin Puntambekar 2 1 Research Scholar-Digital Communication Medicaps Institute of Technology &

More information

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION 1 R.NITHYA, 2 B.SANTHI 1 Asstt Prof., School of Computing, SASTRA University, Thanjavur, Tamilnadu, India-613402 2 Prof.,

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Term Project Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is the

More information

CPSC81 Final Paper: Facial Expression Recognition Using CNNs

CPSC81 Final Paper: Facial Expression Recognition Using CNNs CPSC81 Final Paper: Facial Expression Recognition Using CNNs Luis Ceballos Swarthmore College, 500 College Ave., Swarthmore, PA 19081 USA Sarah Wallace Swarthmore College, 500 College Ave., Swarthmore,

More information

Audio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space

Audio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space 2010 International Conference on Pattern Recognition Audio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space Mihalis A. Nicolaou, Hatice Gunes and Maja Pantic, Department

More information

Brain Tumour Detection of MR Image Using Naïve Beyer classifier and Support Vector Machine

Brain Tumour Detection of MR Image Using Naïve Beyer classifier and Support Vector Machine International Journal of Scientific Research in Computer Science, Engineering and Information Technology 2018 IJSRCSEIT Volume 3 Issue 3 ISSN : 2456-3307 Brain Tumour Detection of MR Image Using Naïve

More information

Automatic Facial Action Unit Recognition by Modeling Their Semantic And Dynamic Relationships

Automatic Facial Action Unit Recognition by Modeling Their Semantic And Dynamic Relationships Chapter 10 Automatic Facial Action Unit Recognition by Modeling Their Semantic And Dynamic Relationships Yan Tong, Wenhui Liao, and Qiang Ji Abstract A system that could automatically analyze the facial

More information

Primary Level Classification of Brain Tumor using PCA and PNN

Primary Level Classification of Brain Tumor using PCA and PNN Primary Level Classification of Brain Tumor using PCA and PNN Dr. Mrs. K.V.Kulhalli Department of Information Technology, D.Y.Patil Coll. of Engg. And Tech. Kolhapur,Maharashtra,India kvkulhalli@gmail.com

More information

Emotion based E-learning System using Physiological Signals. Dr. Jerritta S, Dr. Arun S School of Engineering, Vels University, Chennai

Emotion based E-learning System using Physiological Signals. Dr. Jerritta S, Dr. Arun S School of Engineering, Vels University, Chennai CHENNAI - INDIA Emotion based E-learning System using Physiological Signals School of Engineering, Vels University, Chennai Outline Introduction Existing Research works on Emotion Recognition Research

More information

UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2014

UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2014 UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2014 Exam policy: This exam allows two one-page, two-sided cheat sheets (i.e. 4 sides); No other materials. Time: 2 hours. Be sure to write

More information

DEEP convolutional neural networks have gained much

DEEP convolutional neural networks have gained much Real-time emotion recognition for gaming using deep convolutional network features Sébastien Ouellet arxiv:8.37v [cs.cv] Aug 2 Abstract The goal of the present study is to explore the application of deep

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Literature Survey Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is

More information

Classification of mammogram masses using selected texture, shape and margin features with multilayer perceptron classifier.

Classification of mammogram masses using selected texture, shape and margin features with multilayer perceptron classifier. Biomedical Research 2016; Special Issue: S310-S313 ISSN 0970-938X www.biomedres.info Classification of mammogram masses using selected texture, shape and margin features with multilayer perceptron classifier.

More information

Clinical Examples as Non-uniform Learning and Testing Sets

Clinical Examples as Non-uniform Learning and Testing Sets Clinical Examples as Non-uniform Learning and Testing Sets Piotr Augustyniak AGH University of Science and Technology, 3 Mickiewicza Ave. 3-9 Krakow, Poland august@agh.edu.pl Abstract. Clinical examples

More information

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS 1 KRISHNA MOHAN KUDIRI, 2 ABAS MD SAID AND 3 M YUNUS NAYAN 1 Computer and Information Sciences, Universiti Teknologi PETRONAS, Malaysia 2 Assoc.

More information

Robust system for patient specific classification of ECG signal using PCA and Neural Network

Robust system for patient specific classification of ECG signal using PCA and Neural Network International Research Journal of Engineering and Technology (IRJET) e-issn: 395-56 Volume: 4 Issue: 9 Sep -7 www.irjet.net p-issn: 395-7 Robust system for patient specific classification of using PCA

More information

Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine

Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine Valfredo Pilla Jr, André Zanellato, Cristian Bortolini, Humberto R. Gamba and Gustavo Benvenutti Borba Graduate

More information

University of East London Institutional Repository:

University of East London Institutional Repository: University of East London Institutional Repository: http://roar.uel.ac.uk This paper is made available online in accordance with publisher policies. Please scroll down to view the document itself. Please

More information

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis Emotion Detection Using Physiological Signals M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis May 10 th, 2011 Outline Emotion Detection Overview EEG for Emotion Detection Previous

More information

Blue Eyes Technology

Blue Eyes Technology Blue Eyes Technology D.D. Mondal #1, Arti Gupta *2, Tarang Soni *3, Neha Dandekar *4 1 Professor, Dept. of Electronics and Telecommunication, Sinhgad Institute of Technology and Science, Narhe, Maharastra,

More information

Research Article. Automated grading of diabetic retinopathy stages in fundus images using SVM classifer

Research Article. Automated grading of diabetic retinopathy stages in fundus images using SVM classifer Available online www.jocpr.com Journal of Chemical and Pharmaceutical Research, 2016, 8(1):537-541 Research Article ISSN : 0975-7384 CODEN(USA) : JCPRC5 Automated grading of diabetic retinopathy stages

More information

Impact of Ethnic Group on Human Emotion Recognition Using Backpropagation Neural Network

Impact of Ethnic Group on Human Emotion Recognition Using Backpropagation Neural Network Impact of Ethnic Group on Human Emotion Recognition Using Backpropagation Neural Network Nabil M. Hewahi Computer Science Department Bahrain University, Bahrain nhewahi@gmail.com Abdul Rhman M. Baraka

More information

An Improved Algorithm To Predict Recurrence Of Breast Cancer

An Improved Algorithm To Predict Recurrence Of Breast Cancer An Improved Algorithm To Predict Recurrence Of Breast Cancer Umang Agrawal 1, Ass. Prof. Ishan K Rajani 2 1 M.E Computer Engineer, Silver Oak College of Engineering & Technology, Gujarat, India. 2 Assistant

More information

A Review: Facial Expression Detection with its Techniques and Application

A Review: Facial Expression Detection with its Techniques and Application Vol.9, No.6 (2016), pp.149-158 http://dx.doi.org/10.14257/ijsip.2016.9.6.13 A Review: Facial Expression Detection with its Techniques and Application Neha Bhardwaj 1 and Manish Dixit 2 1 Department of

More information

EEG Features in Mental Tasks Recognition and Neurofeedback

EEG Features in Mental Tasks Recognition and Neurofeedback EEG Features in Mental Tasks Recognition and Neurofeedback Ph.D. Candidate: Wang Qiang Supervisor: Asst. Prof. Olga Sourina Co-Supervisor: Assoc. Prof. Vladimir V. Kulish Division of Information Engineering

More information

NMF-Density: NMF-Based Breast Density Classifier

NMF-Density: NMF-Based Breast Density Classifier NMF-Density: NMF-Based Breast Density Classifier Lahouari Ghouti and Abdullah H. Owaidh King Fahd University of Petroleum and Minerals - Department of Information and Computer Science. KFUPM Box 1128.

More information

Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners

Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Hatice Gunes and Maja Pantic Department of Computing, Imperial College London 180 Queen

More information

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals.

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. Bandara G.M.M.B.O bhanukab@gmail.com Godawita B.M.D.T tharu9363@gmail.com Gunathilaka

More information

Facial Behavior as a Soft Biometric

Facial Behavior as a Soft Biometric Facial Behavior as a Soft Biometric Abhay L. Kashyap University of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore, MD 21250 abhay1@umbc.edu Sergey Tulyakov, Venu Govindaraju University at Buffalo

More information

Electrocardiogram beat classification using Discrete Wavelet Transform, higher order statistics and multivariate analysis

Electrocardiogram beat classification using Discrete Wavelet Transform, higher order statistics and multivariate analysis Electrocardiogram beat classification using Discrete Wavelet Transform, higher order statistics and multivariate analysis Thripurna Thatipelli 1, Padmavathi Kora 2 1Assistant Professor, Department of ECE,

More information

Extraction and Identification of Tumor Regions from MRI using Zernike Moments and SVM

Extraction and Identification of Tumor Regions from MRI using Zernike Moments and SVM I J C T A, 8(5), 2015, pp. 2327-2334 International Science Press Extraction and Identification of Tumor Regions from MRI using Zernike Moments and SVM Sreeja Mole S.S.*, Sree sankar J.** and Ashwin V.H.***

More information

REVIEW ON ARRHYTHMIA DETECTION USING SIGNAL PROCESSING

REVIEW ON ARRHYTHMIA DETECTION USING SIGNAL PROCESSING REVIEW ON ARRHYTHMIA DETECTION USING SIGNAL PROCESSING Vishakha S. Naik Dessai Electronics and Telecommunication Engineering Department, Goa College of Engineering, (India) ABSTRACT An electrocardiogram

More information

Computational Cognitive Neuroscience

Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience *Computer vision, *Pattern recognition, *Classification, *Picking the relevant information

More information

A Study on Automatic Age Estimation using a Large Database

A Study on Automatic Age Estimation using a Large Database A Study on Automatic Age Estimation using a Large Database Guodong Guo WVU Guowang Mu NCCU Yun Fu BBN Technologies Charles Dyer UW-Madison Thomas Huang UIUC Abstract In this paper we study some problems

More information

A Unified Probabilistic Framework For Measuring The Intensity of Spontaneous Facial Action Units

A Unified Probabilistic Framework For Measuring The Intensity of Spontaneous Facial Action Units A Unified Probabilistic Framework For Measuring The Intensity of Spontaneous Facial Action Units Yongqiang Li 1, S. Mohammad Mavadati 2, Mohammad H. Mahoor and Qiang Ji Abstract Automatic facial expression

More information

Comparison of Deliberate and Spontaneous Facial Movement in Smiles and Eyebrow Raises

Comparison of Deliberate and Spontaneous Facial Movement in Smiles and Eyebrow Raises J Nonverbal Behav (2009) 33:35 45 DOI 10.1007/s10919-008-0058-6 ORIGINAL PAPER Comparison of Deliberate and Spontaneous Facial Movement in Smiles and Eyebrow Raises Karen L. Schmidt Æ Sharika Bhattacharya

More information

Affective Dialogue Communication System with Emotional Memories for Humanoid Robots

Affective Dialogue Communication System with Emotional Memories for Humanoid Robots Affective Dialogue Communication System with Emotional Memories for Humanoid Robots M. S. Ryoo *, Yong-ho Seo, Hye-Won Jung, and H. S. Yang Artificial Intelligence and Media Laboratory Department of Electrical

More information

Affective Modeling and Recognition of Learning Emotion: Application to E-learning

Affective Modeling and Recognition of Learning Emotion: Application to E-learning JOURNAL OF SOFTWARE, VOL. 4, NO. 8, OCTOBER 2009 859 Affective Modeling and Recognition of Learning Emotion: Application to E-learning Yanwen Wu, Tingting Wang, Xiaonian Chu Department of Information &

More information

PMR5406 Redes Neurais e Lógica Fuzzy. Aula 5 Alguns Exemplos

PMR5406 Redes Neurais e Lógica Fuzzy. Aula 5 Alguns Exemplos PMR5406 Redes Neurais e Lógica Fuzzy Aula 5 Alguns Exemplos APPLICATIONS Two examples of real life applications of neural networks for pattern classification: RBF networks for face recognition FF networks

More information

Local Image Structures and Optic Flow Estimation

Local Image Structures and Optic Flow Estimation Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk

More information

Improved Intelligent Classification Technique Based On Support Vector Machines

Improved Intelligent Classification Technique Based On Support Vector Machines Improved Intelligent Classification Technique Based On Support Vector Machines V.Vani Asst.Professor,Department of Computer Science,JJ College of Arts and Science,Pudukkottai. Abstract:An abnormal growth

More information

AFFECTIVE computing addresses issues relating to emotion

AFFECTIVE computing addresses issues relating to emotion 96 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B: CYBERNETICS, VOL. 36, NO. 1, FEBRUARY 2006 A Real-Time Automated System for the Recognition of Human Facial Expressions Keith Anderson and

More information

Comparative Study of K-means, Gaussian Mixture Model, Fuzzy C-means algorithms for Brain Tumor Segmentation

Comparative Study of K-means, Gaussian Mixture Model, Fuzzy C-means algorithms for Brain Tumor Segmentation Comparative Study of K-means, Gaussian Mixture Model, Fuzzy C-means algorithms for Brain Tumor Segmentation U. Baid 1, S. Talbar 2 and S. Talbar 1 1 Department of E&TC Engineering, Shri Guru Gobind Singhji

More information

Real-time Automatic Deceit Detection from Involuntary Facial Expressions

Real-time Automatic Deceit Detection from Involuntary Facial Expressions Real-time Automatic Deceit Detection from Involuntary Facial Expressions Zhi Zhang, Vartika Singh, Thomas E. Slowe, Sergey Tulyakov, and Venugopal Govindaraju Center for Unified Biometrics and Sensors

More information

Affective Game Engines: Motivation & Requirements

Affective Game Engines: Motivation & Requirements Affective Game Engines: Motivation & Requirements Eva Hudlicka Psychometrix Associates Blacksburg, VA hudlicka@ieee.org psychometrixassociates.com DigiPen Institute of Technology February 20, 2009 1 Outline

More information

COMPARISON BETWEEN GMM-SVM SEQUENCE KERNEL AND GMM: APPLICATION TO SPEECH EMOTION RECOGNITION

COMPARISON BETWEEN GMM-SVM SEQUENCE KERNEL AND GMM: APPLICATION TO SPEECH EMOTION RECOGNITION Journal of Engineering Science and Technology Vol. 11, No. 9 (2016) 1221-1233 School of Engineering, Taylor s University COMPARISON BETWEEN GMM-SVM SEQUENCE KERNEL AND GMM: APPLICATION TO SPEECH EMOTION

More information

A REVIEW ON CLASSIFICATION OF BREAST CANCER DETECTION USING COMBINATION OF THE FEATURE EXTRACTION MODELS. Aeronautical Engineering. Hyderabad. India.

A REVIEW ON CLASSIFICATION OF BREAST CANCER DETECTION USING COMBINATION OF THE FEATURE EXTRACTION MODELS. Aeronautical Engineering. Hyderabad. India. Volume 116 No. 21 2017, 203-208 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu A REVIEW ON CLASSIFICATION OF BREAST CANCER DETECTION USING COMBINATION OF

More information

Emotion Classification Using Neural Network

Emotion Classification Using Neural Network Emotion Classification Using Neural Network Fadzilah Siraj, Nooraini Yusoff and Lam Choong Kee Faculty of Information Technology Universiti Utara Malaysia, 06010 Sintok Kedah, Malaysia Tel: +604-9284672,

More information

When Overlapping Unexpectedly Alters the Class Imbalance Effects
