Relational Learning based Happiness Intensity Analysis in a Group


2016 IEEE International Symposium on Multimedia

Relational Learning based Happiness Intensity Analysis in a Group

Tuoerhongjiang Yusufu, Naifan Zhuang, Kai Li, Kien A. Hua
Department of Computer Science, University of Central Florida, Orlando, Florida
{yusufu, kaili, kienhua}@cs.ucf.edu, naifanzhuang@knights.ucf.edu

Abstract — Pictures and videos from social events and gatherings usually contain multiple people. Physiological and behavioral science studies indicate that there are strong emotional connections among group members. These emotional relations are indispensable for better analyzing individual emotions in a group. However, most existing affective computing methods estimate the emotion of a single subject only. In this work, we concentrate on estimating the happiness intensities of group members while considering the reciprocities among them. We propose a novel facial descriptor that effectively captures happiness-related facial action units. We also introduce two structural regression models, Continuous Conditional Random Fields (CCRF) and Continuous Conditional Neural Fields (CCNF), for estimating the emotions of group members. Our experimental results on the HAPPEI dataset demonstrate the viability of the proposed features and the two frameworks.

Keywords — Action Units, Happiness Intensity, Group, Probabilistic Graphical Model

I. INTRODUCTION

Millions of images and videos from different social events and gatherings are uploaded and shared each day. At a social event, such as a party, wedding, or graduation ceremony, many pictures and videos are taken, and they usually contain multiple people. Techniques for analyzing and understanding group images and videos have many applications.

Recently, the study of groups of people in images and videos has received much attention in the computer vision community, for different research purposes. Gallagher and Chen [22] proposed contextual features based on the group structure for estimating the age and gender of individuals. Eichner et al. [23] presented a novel multi-person pose estimation framework. In this paper we are also interested in group pictures; our topic, however, is emotions in a group. Human affect analysis is a long-studied problem because of its importance in human-computer interaction and affective computing. Most existing automatic affect analysis and recognition algorithms, however, analyze the expressions and emotions of an individual only [3][4]. Although there are some works on group affect [5][6][7], they are interested in inferring the emotional intensity of a group as a whole. Analyzing an individual's emotion in a group context is still an unexplored problem.

Figure 1: Group images from different social gatherings.

Based on human cognitive and behavioral research [1][2], group members bring their individual-level emotional experiences, such as dispositional affect, moods, emotions, emotional intelligence, and sentiments, with them to a group interaction. Then, through a variety of explicit and implicit processes, individual-level moods and emotions are spread and shared among group members. In other words, in a group, the emotions of group members are connected to each other. Assessing the reciprocity among group members is indispensable for better understanding their individual-level emotions. In this paper, we focus on modeling the relations among individual emotions in a group. After extensive research, we find that the HAPPEI [8] dataset is the only suitable dataset for our study, as it consists of group images in which each face is annotated with a happiness intensity level. Figure 1 shows some group images from the HAPPEI dataset. All pictures in this dataset are taken from different social gatherings.
Since we use the HAPPEI dataset, in this paper we study only two basic human expressions: happiness and neutral. Interestingly, as people tend to present themselves in a favorable way [30], most of

the uploaded and shared pictures on websites are positive. Studying happiness in groups has many real-world applications, such as emotion ranking, event and highlight summarization, and image search and retrieval. The key contributions of this paper are as follows: 1) We propose a novel compact facial descriptor that refers to happiness-related action units (AUs). This feature effectively represents happiness intensities. 2) We introduce a Continuous Conditional Random Fields (CCRF) based emotion prediction model. This model combines Support Vector Regression (SVR) and CCRF to model the relations between different individuals' emotions in a group image. 3) We also introduce a Continuous Conditional Neural Fields (CCNF) model for directly estimating the emotion intensities of all group members together while considering the relations among them.

This paper is organized as follows: In Section II, we discuss previous related work. In Section III, we introduce the proposed feature extraction and emotion estimation frameworks. In Section IV, we present the results of evaluating the proposed feature and structured regression models for happiness level estimation in a group. Finally, we draw our conclusions in Section V.

II. RELATED WORKS

Facial image descriptors can be classified into appearance features and geometric features. Appearance features describe the skin texture of faces. Because appearance features are usually extracted from small regions, they are robust to illumination variations. Moreover, as most appearance features are obtained by concatenating local histograms that are also normalized, the robustness of the overall representation increases further. They are also robust to registration errors, as they involve pooling over histograms. However, appearance features favor identity-related cues rather than expressions, so they are affected by identity bias.
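As a concrete illustration of this recipe (a local code per pixel, block-wise histograms, normalization, concatenation), a minimal basic-LBP descriptor can be sketched as follows. The 3x3 neighborhood, 4x4 grid, and L1 normalization here are illustrative choices, not the exact settings of any cited work:

```python
import numpy as np

def lbp_histogram(gray, grid=(4, 4)):
    """Minimal basic 3x3 LBP descriptor sketch (no uniform patterns).

    `gray` is a 2-D grayscale array. Each pixel gets an 8-bit code from
    comparing it with its 8 neighbors; codes are histogrammed per block,
    each histogram is L1-normalized, and the histograms are concatenated.
    """
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]  # center pixels (border skipped)
    # Eight neighbors, each contributing one bit of the LBP code.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    # Block-wise histograms, L1-normalized, then concatenated.
    hists = []
    for rows in np.array_split(code, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            h = np.bincount(block.ravel(), minlength=256).astype(float)
            hists.append(h / max(h.sum(), 1.0))
    return np.concatenate(hists)  # length 256 * grid[0] * grid[1]
```

The per-block normalization and the pooling of codes into histograms are exactly what gives appearance descriptors their robustness to illumination changes and small registration errors.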
The most popular appearance representations are local binary patterns (LBP) [17] and local phase quantization (LPQ) [18]. Other features such as histograms of oriented gradients (HOG) [19], pyramids of histograms of oriented gradients (PHOG) [29], quantized local Zernike moments (QLZM) [20], and Gabor wavelets [21] are also frequently used as facial descriptors. Geometric features represent the facial geometry, such as the shape of the face and the locations of facial landmarks [9][10][11]. Since these features are based on coordinate values instead of pixel values, they are more robust to illumination variations than appearance features. More importantly, geometric features are less affected by identity bias, which makes them more suitable for expression analysis. The disadvantage of geometric features, however, is that they are vulnerable to registration errors.

We want to model affect continuously. Because discretization may lose information and the relationships between neighboring classes, regression techniques are the natural choice for our problem. The most popular regression techniques are linear and logistic regression, support vector regression, neural networks, and the relevance vector machine (RVM) [26]. However, they are all designed to model input-output dependencies while disregarding output-output relations. Recently, Conditional Random Fields (CRF) based structured regression models have received much attention from researchers. The CRF is a powerful tool for relational learning because it allows modeling both the relations between objects and the content of objects. As an extension of the classic CRF to the continuous case, Continuous Conditional Random Fields (CCRF) [31] have been successfully applied to global ranking [31], emotion tracking in music [32], and dimensional affect recognition in temporal data [33]. Continuous Conditional Neural Fields (CCNF) [25] are an extension of Conditional Neural Fields (CNF).
CCNF can also define temporal and spatial relationships. It has been applied to emotion prediction in music [24], facial action unit recognition, and facial landmark detection [35]. Both CCRF and CCNF perform structured regression, and both can easily encode temporal and spatial relationships.

III. PROPOSED FRAMEWORK

A. Facial Feature Extraction

In the HAPPEI dataset, each face in a group image is annotated with one of six happiness intensity levels: Neutral, Small Smile, Large Smile, Small Laugh, Large Laugh, and Thrilled. Since we are dealing with only two kinds of basic human expressions, neutral and happiness, we propose a problem-specific and more efficient facial feature for happiness intensity estimation. Previous work in psychology and computer vision has shown the value of using Action Units (AUs) for analyzing facial expressions [11][12][13]. In the Facial Action Coding System (FACS) [27], AUs correspond to the contractions of specific facial muscles. Of the 30 AUs, 12 are for the upper face and 18 for the lower face. Any facial expression can be explained as the occurrence of a single AU or of a combination of several AUs. To clearly show different happiness levels, in Figure 2 we take some pictures of the same subject from the CK database [34] and present four levels of happiness intensity and the corresponding AUs. In a neutral face, the eyes, brows, and cheeks are relaxed, and the lips are relaxed and closed. When a person expresses happiness, the cheeks and the upper and lower eyelids are raised. At the same time, the lip corners are pulled obliquely, the lips relax and part, and the mandible may be lowered. Any

level of happiness can be expressed as a combination of AU5, AU6, AU7, AU12, AU25, and AU26. Inspired by previous works [11][14][15], we extract geometric facial features referring to happiness-related AUs. We call the new feature the Happiness Related Facial Feature (HRFF). The feature extraction steps are as follows: 1) Face detection: we use the Viola-Jones [28] face detection algorithm. 2) Facial landmark detection and non-face elimination: IntraFace [16] is applied to detect 49 facial landmarks on each detected face. Using the landmark detection results, we can also eliminate most falsely detected faces, because the expected landmarks cannot be extracted from non-face objects. Figure 3 shows the locations and indices of the corresponding 2D facial landmarks. 3) Face resizing and alignment: each face is resized to a fixed resolution, and the IntraFace results are used to perform face alignment. 4) Geometric features are calculated from the aligned landmarks. Table I presents the descriptions and measurements of the 6-dimensional facial feature that corresponds to happiness-related AUs.

Figure 2: Happiness expressions and corresponding AUs: (a) Neutral; (b) AU6+12; (c) AU; (d) AU.

Figure 3: Facial landmarks.

Table I: Happiness Related Facial Feature (HRFF)

Feature | Implication | Measurement | AUs
f1, f2 | Eyelid movement | Sum of distances between corresponding landmarks on the upper and lower eyelids | AU5, AU7
f3 | Lip tightener | Sum of distances of corresponding points on the upper and lower mouth outer contour | AU25, AU26
f4 | Lips parted | Sum of distances of corresponding points on the upper and lower mouth inner contour | AU25, AU26
f5 | Lip depressor | Angle between the mouth corners and the upper-lip center | AU12
f6 | Cheek raiser | Angle between the nose wings and the nose center | AU6

B. Group Happiness Intensity Estimation

We select CCRF and CCNF as our group happiness intensity estimation models, as they have shown promising results for continuous-variable modeling when extra context is required.
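Under the pipeline of Section III-A, each detected face yields one 6-D HRFF vector, and the vectors of all faces in a group image are stacked into the observation matrix X consumed by the models below. A minimal sketch, in which all landmark index groups are hypothetical placeholders rather than the paper's exact IntraFace indices:

```python
import numpy as np

def hrff(lm):
    """6-D Happiness Related Facial Feature (Table I) for one face.

    `lm` is a (49, 2) array of aligned 2D landmarks. All landmark index
    groups below are illustrative placeholders, not the exact indices
    used in the paper.
    """
    dist = lambda i, j: np.linalg.norm(lm[i] - lm[j])

    def angle(p, q, r):
        # Angle at vertex q formed by the segments q->p and q->r.
        v1, v2 = lm[p] - lm[q], lm[r] - lm[q]
        c = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.arccos(np.clip(c, -1.0, 1.0)))

    f1 = dist(20, 24) + dist(21, 25)  # left eyelid opening  (AU5, AU7)
    f2 = dist(26, 29) + dist(27, 30)  # right eyelid opening (AU5, AU7)
    # Outer / inner lip contour distance sums (AU25, AU26).
    f3 = sum(dist(i, j) for i, j in [(32, 38), (33, 39), (35, 41)])
    f4 = sum(dist(i, j) for i, j in [(43, 46), (44, 47), (45, 48)])
    f5 = angle(31, 34, 37)  # mouth corners vs. upper-lip center (AU12)
    f6 = angle(14, 16, 18)  # nose wings vs. nose center (AU6 proxy)
    return np.array([f1, f2, f3, f4, f5, f6])

def group_features(all_landmarks):
    """Stack per-face HRFF vectors into the observation matrix X
    (one row per detected face) used by the CCRF/CCNF models."""
    return np.vstack([hrff(lm) for lm in all_landmarks])
```

Because the features are pure distances and angles over aligned coordinates, extraction amounts to a handful of arithmetic operations per face, which is what makes HRFF so cheap to compute.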
Both CCRF and CCNF are undirected graphical models that learn the conditional probability of a continuous-valued vector y given continuous input X. They are discriminative approaches, in which the conditional probability P(y|X) is modeled explicitly. The graphical models representing CCRF and CCNF for emotion prediction in a group are presented in Figure 4. The probability density function for both CCRF and CCNF can be written as:

P(y \mid X) = \frac{\exp(\Psi)}{\int_{-\infty}^{\infty} \exp(\Psi)\, dy}    (1)

In the CCRF model, \Psi is defined as:

\Psi = \sum_i \sum_{k=1}^{K_1} \alpha_k f_k(y_i, X_i) + \sum_{i,j} \sum_{k=1}^{K_2} \beta_k g_k(y_i, y_j, X)    (2)

Above, X = \{X_1, X_2, \ldots, X_n\} is the set of facial feature vectors, which can be represented as a matrix in which each row corresponds to the feature vector of one detected face, and y = \{y_1, y_2, \ldots, y_n\} are the output variables that we want to predict; in our case, the happiness intensity of each

individual in a group image.

Figure 4: Proposed frameworks: (a) CCRF model; (b) CCNF model.

In CCRF, two types of features are defined: vertex features f_k and edge features g_k:

f_k(y_i, X_i) = -(y_i - X_{i,k})^2    (3)

g_k(y_i, y_j, X) = -\frac{1}{2} S^{(k)}_{i,j} (y_i - y_j)^2    (4)

Vertex features f_k represent the dependency between X_{i,k} and y_i; in our case, the dependency between a happiness intensity prediction from a regressor and the actual happiness intensity level. The parameter \alpha_k controls the reliability of a particular signal for a particular emotion. Edge features g_k represent the dependencies between the outputs y_i and y_j, for example, how related the happiness intensities of person A and person B in a group are. This is also controlled by the similarity measure S^{(k)}; the parameters \beta_k and the similarities S^{(k)} allow us to control the effect of such connections between emotions. Both \alpha_k and \beta_k are positive. We select our similarity function as:

S_{i,j} = \exp\left(-\frac{\lVert X_i - X_j \rVert}{\delta}\right)    (5)

In the CCNF model, \Psi is defined as:

\Psi = \sum_i \sum_{k=1}^{K_1} \alpha_k f_k(y_i, X_i, \theta_k) + \sum_{i,j} \sum_{k=1}^{K_2} \beta_k g_k(y_i, y_j, X)    (6)

Here again, \alpha_k and \beta_k are positive, and \Theta is unconstrained. Similar to CCRF, CCNF has the same edge features and uses the same similarity function to enforce smoothness between neighboring nodes. The vertex feature f_k in CCNF, however, represents the mapping from X_i to y_i through a one-layer neural network, where the parameter \theta_k is the weight vector of neuron k. The number of vertex features K_1 is determined experimentally during cross-validation. The vertex feature in CCNF can be written as:

f_k(y_i, X_i, \theta_k) = -(y_i - h(\theta_k, X_i))^2    (7)

where

h(\theta, X_i) = \frac{1}{1 + e^{-\theta^{T} X_i}}    (8)

In the learning stage, we pick the \alpha and \beta values for the CCRF model; for CCNF, we pick the \alpha, \beta, \Theta, and K_1 parameters to optimize the conditional log-likelihood of the model on the training images. All of the parameters are optimized jointly.
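Because \Psi in Eq. 2 is quadratic in y, the CCRF density is a multivariate Gaussian, so MAP inference reduces to one linear solve. A sketch under the assumption of a single edge feature (K_2 = 1) with the similarity of Eq. 5; the function names and the \delta value are illustrative, not the authors' implementation:

```python
import numpy as np

def edge_similarity(X, delta=1.0):
    """Similarity matrix of Eq. 5: S[i, j] = exp(-||X_i - X_j|| / delta).

    X is the (n, d) per-face feature matrix; delta is a bandwidth
    hyper-parameter (the default here is an arbitrary placeholder)."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-dist / delta)

def ccrf_map(X, alpha, beta, S):
    """MAP inference for the CCRF of Eqs. 1-5, single edge feature.

    X:     (n, K1) matrix; column k holds base regressor k's happiness
           prediction for each of the n faces.
    alpha: (K1,) positive vertex weights; beta: positive edge weight.
    S:     (n, n) symmetric similarity matrix.

    Expanding Psi shows it is quadratic in y, so the MAP estimate solves
        y* = (sum(alpha) * I + beta * L)^(-1) (X @ alpha),
    where L = D - S is the graph Laplacian of the similarity matrix.
    """
    n = X.shape[0]
    L = np.diag(S.sum(axis=1)) - S            # graph Laplacian of S
    Q = np.sum(alpha) * np.eye(n) + beta * L  # precision (up to a factor 2)
    return np.linalg.solve(Q, X @ alpha)      # alpha-weighted, graph-smoothed
```

With beta = 0 the edge term vanishes and the estimate reduces to the alpha-weighted combination of the base regressors' predictions; larger beta pulls the estimates of similar-looking faces toward each other.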
L(\alpha, \beta, \Theta) = \sum_{q=1}^{n} \log P(y^{(q)} \mid X^{(q)})    (9)

(\bar{\alpha}, \bar{\beta}, \bar{\Theta}) = \arg\max_{\alpha, \beta, \Theta} L(\alpha, \beta, \Theta)    (10)

Because Eq. 2 is convex in its parameters, the optimal CCRF parameters can be determined using standard techniques such as stochastic gradient ascent; the CCNF objective of Eq. 6 is optimized with the same gradient-based techniques. Since both the CCRF and CCNF models can be viewed as multivariate Gaussians [33][36], inferring the output values that maximize \Psi(y, X) is straightforward and efficient.

IV. EXPERIMENTAL ANALYSIS

Because the HAPPEI database is the only dataset providing both groups and happiness intensity levels, we evaluate our new facial feature and the introduced emotion estimation frameworks on it at the same time. All experiments are conducted in MATLAB 2015a on a machine with a 3.16 GHz CPU and 4 GB of RAM. The group images used in our experiments contain 7,248 faces. We conducted 4-fold cross-validation, where in each fold 1,500 images are selected for training and 500 for testing; the reported results are the average over the 4 folds.

First, we extracted LBP, LPQ, and PHOG features to evaluate the computational complexity of HRFF.

Table II: Average feature extraction time (feature dimension and execution time in seconds) for LBP, LPQ, PHOG, and HRFF.

As we can see from Table II, the LPQ feature takes the highest execution time. Although PHOG has the highest dimensionality (680), its extraction time is much smaller than that of LPQ.

LBP is faster than PHOG and LPQ because calculating LBP does not require any transformation, whereas LPQ is based on computing the short-term Fourier transform (STFT) on each local image patch. As an extension of HOG, PHOG is based on simple gradient operations. That is why LBP is faster than PHOG, and PHOG is faster than LPQ. However, HRFF outperforms all of these features in terms of extraction and processing speed, because it involves only a few calculations on coordinate values. Compactness and fast extraction are highly desirable in real-time emotion analysis systems, such as real-time event satisfaction analysis and tracking.

Next, we use the extracted features to train and test the emotion estimation models we introduced, which lets us evaluate each descriptor and each structured regression model at the same time. We compared the performance of CCRF and CCNF with the most popular regression model, Support Vector Regression (SVR), to show how relational learning models improve over single-face analysis methods. For the SVR-based experiments, we used 2-fold cross-validation on each fold of training data to pick the hyper-parameters; the chosen hyper-parameters were then used to train on the whole training set. For the CCRF-based experiments, each fold of training data is split into two parts, one for training the SVR and the other for training the CCRF. We then performed 2-fold cross-validation on both the SVR and CCRF training data to choose the hyper-parameters, which are then used for training on the whole training data. For the CCNF-based experiments, we likewise used 2-fold cross-validation on each fold of training data to pick the hyper-parameters and, as for CCRF, used them for training on the whole training set. The BFGS quasi-Newton method is used in both the cross-validation and training stages. We used two different evaluation metrics.
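The two metrics discussed next, MSE and the average correlation coefficient, can be sketched as follows. Treating each group image as the unit over which the Pearson correlation is computed is an assumption here, since the averaging unit is not spelled out:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error over all faces (lower is better)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def avg_corr(y_true_groups, y_pred_groups):
    """Pearson correlation per group image, averaged over images
    (higher is better). The per-image averaging unit is an assumption."""
    rs = [np.corrcoef(t, p)[0, 1]
          for t, p in zip(y_true_groups, y_pred_groups)]
    return float(np.mean(rs))
```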
For prediction accuracy, we selected the mean squared error (MSE); for prediction structure, we selected the average correlation coefficient. These are the most common evaluation metrics for regression models. Note that smaller MSE values correspond to better performance, while the opposite holds for correlation coefficients. Table III shows the average mean squared error for happiness intensity estimation with the different models and facial features, and Table IV presents the corresponding average correlation coefficients.

Table III: Mean squared error of each model (SVR; SVR + CCRF; CCNF) with each facial feature (LBP, LPQ, PHOG, HRFF).

Table IV: Correlation coefficient of each model (SVR; SVR + CCRF; CCNF) with each facial feature (LBP, LPQ, PHOG, HRFF).

As we can see from Tables III and IV, the best result is achieved when CCNF and HRFF are combined. LBP and LPQ obtain the highest MSE and the lowest correlation coefficients: both are strongly affected by identity bias, which makes them poor options for facial expression analysis. The performance of PHOG lies between HRFF and the other appearance features; PHOG performs better than LBP and LPQ because it takes both gradient orientations and spatial layout into consideration. Our geometric feature outperforms all the other face descriptors on these images collected in the wild, because HRFF is directly related to happiness-related facial AUs. We can also see from Tables III and IV that the combination of SVR and CCRF obtains consistently better results than SVR alone on both evaluation metrics. This confirms that considering the relations and reciprocities among group members improves emotion estimation. Of the two structured regression models we introduced, CCNF achieves the best result because of its learning capacity and the nonlinearity of its neural network layer. Compared to CCRF, the training process of CCNF is also simpler, because it does not have to be combined with another regression model.
It takes the facial features as direct input and trains the model while considering the emotional relations from the beginning.

V. CONCLUSION

In this paper, we proposed a novel facial descriptor and introduced two models for happiness intensity estimation in a group context. We extracted compact geometric features from facial landmarks that refer to facial action units (AUs). For emotion estimation, we used two structured regression frameworks, Continuous Conditional Random Fields (CCRF) and Continuous Conditional Neural Fields (CCNF). The combination of the feature descriptor and the emotion estimation models is used to infer the happiness intensities in a group of people. We conducted experiments on the HAPPEI database to show that the proposed facial feature considerably improves the performance of happiness intensity estimation. We also tested the performance of the two structured regression models and compared them with the most popular regression model, Support Vector Regression (SVR). The experimental results indicate that, compared to traditional single-face analysis methods, considering the relations between faces in a group improves emotion estimation accuracy significantly. The results also show that CCNF performs better than CCRF.

In the future, we will extend our method to real-time emotion tracking of multiple people in video sequences. We also expect to use deep learning methods to further improve the accuracy of emotion estimation and prediction.

VI. ACKNOWLEDGMENT

This material is based upon work partially supported by NASA under Grant Number NNX15AV40A. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

REFERENCES

[1] J. R. Kelly and S. G. Barsade, Mood and Emotions in Small Groups and Work Teams, Harlow, England: Addison-Wesley. [2] S. Barsade and D. Gibson, Group Emotion: A View from Top and Bottom, Harlow, England: Addison-Wesley. [3] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, A survey of affect recognition methods: Audio, visual, and spontaneous expressions, IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 1, Jan. [4] E. Sariyanidi, H. Gunes, and A. Cavallaro, Automatic analysis of facial affect: A survey of registration, representation and recognition, IEEE Trans. Pattern Anal. Mach. Intell., 2014. [5] A. Dhall, R. Goecke, and T. Gedeon, Automatic group happiness intensity analysis, IEEE Transactions on Affective Computing, vol. 6, no. 1, 2015. [6] W. Mou, O. Celiktutan, and H. Gunes, Group-level arousal and valence recognition in static images: Face, body and context, IEEE Int. Conf. Automatic Face and Gesture Recognition (FG). [7] A. Dhall, J. Joshi, K. Sikka, R. Goecke, and N. Sebe, The more the merrier: Analysing the affect of a group of people in images, IEEE Int. Conf. Automatic Face and Gesture Recognition (FG). [8] A. Dhall, J. Joshi, I. Radwan, and R. Goecke, Finding happiest moments in a social context, ACCV. [9] S. Lucey, A. B. Ashraf, and J.
Cohn, Investigating spontaneous facial action recognition through AAM representations of the face, Face Recognition Book, Mammendorf, Germany: Pro Literatur Verlag. [10] M. Valstar, H. Gunes, and M. Pantic, How to distinguish posed from spontaneous smiles using geometric features, Proc. ACM Int. Conf. Multimodal Interfaces (ICMI). [11] Y. L. Tian, T. Kanade, and J. Cohn, Recognizing action units for facial expression analysis, IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 2, Feb. [12] G. Littlewort, M. S. Bartlett, I. Fasel, J. Susskind, and J. Movellan, Dynamics of facial expression extracted automatically from video, IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW). [13] D. McDuff, R. El Kaliouby, K. Kassam, and R. Picard, Affect valence inference from facial action unit spectrograms, IEEE Conf. Comput. Vis. Pattern Recognit. Workshops. [14] F. Zhou, F. De la Torre, and J. F. Cohn, Unsupervised discovery of facial events, IEEE Conf. Comput. Vis. Pattern Recognit., 2010. [15] M. X. Huang, G. Ngai, and K. A. Hua, Identifying user-specific facial affects from spontaneous expressions with minimal annotation, IEEE Transactions on Affective Computing. [16] X. Xiong and F. De la Torre, Supervised descent method and its applications to face alignment, IEEE CVPR. [17] T. Ahonen, A. Hadid, and M. Pietikainen, Face description with local binary patterns: Application to face recognition, IEEE Trans. Pattern Anal. Mach. Intell., vol. 28. [18] V. Ojansivu and J. Heikkila, Blur insensitive texture classification using local phase quantization, Proc. Int. Conf. Image Signal Process., 2008. [19] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, IEEE Conf. Comput. Vis. Pattern Recognit., vol. 1, 2005. [20] E. Sariyanidi, H. Gunes, M. Gokmen, and A. Cavallaro, Local Zernike moment representations for facial affect recognition, British Machine Vision Conference. [21] C. Liu and H.
Wechsler, Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition, IEEE Transactions on Image Processing. [22] A. C. Gallagher and T. Chen, Understanding images of groups of people, IEEE CVPR. [23] M. Eichner and V. Ferrari, We are family: Joint pose estimation of multiple persons, European Conference on Computer Vision. [24] V. Imbrasaite, T. Baltrusaitis, and P. Robinson, CCNF for continuous emotion tracking in music: Comparison with CCRF and relative feature representation, IEEE Int. Conf. Multimedia and Expo. [25] T. Baltrusaitis, P. Robinson, and L.-P. Morency, Continuous Conditional Neural Fields for structured regression, ECCV. [26] C. M. Bishop, Pattern Recognition and Machine Learning, Springer-Verlag New York, Inc. [27] P. Ekman and W. V. Friesen, The Facial Action Coding System: A Technique for the Measurement of Facial Movement, San Francisco: Consulting Psychologists Press. [28] P. Viola and M. Jones, Rapid object detection using a boosted cascade of simple features, IEEE CVPR. [29] A. Bosch, A. Zisserman, and X. Munoz, Representing shape with a spatial pyramid kernel, ACM International Conference on Image and Video Retrieval (CIVR). [30] H. G. Chou and N. Edge, They are happier and having better lives than I am: The impact of using Facebook on perceptions of others' lives, Cyberpsychology, Behavior, and Social Networking, vol. 15, no. 2. [31] T. Qin, T.-Y. Liu, X. Zhang, D. Wang, and H. Li, Global ranking using Continuous Conditional Random Fields, Conference on Neural Information Processing Systems (NIPS). [32] V. Imbrasaite, T. Baltrusaitis, and P. Robinson, Emotion tracking in music using Continuous Conditional Random Fields and relative feature representation, IEEE Int. Conf. Multimedia and Expo Workshops. [33] T. Baltrusaitis, N. Banda, and P.
Robinson, Dimensional affect recognition using Continuous Conditional Random Fields, IEEE Int. Conf. Automatic Face and Gesture Recognition (FG). [34] T. Kanade, J. F. Cohn, and Y. Tian, Comprehensive database for facial expression analysis, IEEE Int. Conf. Automatic Face and Gesture Recognition (FG).


Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Hatice Gunes and Maja Pantic Department of Computing, Imperial College London 180 Queen

More information

Face Analysis : Identity vs. Expressions

Face Analysis : Identity vs. Expressions Hugo Mercier, 1,2 Patrice Dalle 1 Face Analysis : Identity vs. Expressions 1 IRIT - Université Paul Sabatier 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France 2 Websourd Bâtiment A 99, route d'espagne

More information

Statistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender

Statistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender Proc. IEEE Int. Conf. on Systems, Man and Cybernetics (SMC 2004), Den Haag, pp. 2203-2208, IEEE omnipress 2004 Statistical and Neural Methods for Vision-based Analysis of Facial Expressions and Gender

More information

Recognition of Facial Expressions for Images using Neural Network

Recognition of Facial Expressions for Images using Neural Network Recognition of Facial Expressions for Images using Neural Network Shubhangi Giripunje Research Scholar, Dept.of Electronics Engg., GHRCE, Nagpur, India Preeti Bajaj Senior IEEE Member, Professor, Dept.of

More information

Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired

Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired Daniel McDuff Microsoft Research, Redmond, WA, USA This work was performed while at Affectiva damcduff@microsoftcom

More information

Facial Expression Biometrics Using Tracker Displacement Features

Facial Expression Biometrics Using Tracker Displacement Features Facial Expression Biometrics Using Tracker Displacement Features Sergey Tulyakov 1, Thomas Slowe 2,ZhiZhang 1, and Venu Govindaraju 1 1 Center for Unified Biometrics and Sensors University at Buffalo,

More information

From Dials to Facial Coding: Automated Detection of Spontaneous Facial Expressions for Media Research

From Dials to Facial Coding: Automated Detection of Spontaneous Facial Expressions for Media Research From Dials to Facial Coding: Automated Detection of Spontaneous Facial Expressions for Media Research Evan Kodra, Thibaud Senechal, Daniel McDuff, Rana el Kaliouby Abstract Typical consumer media research

More information

Facial Feature Model for Emotion Recognition Using Fuzzy Reasoning

Facial Feature Model for Emotion Recognition Using Fuzzy Reasoning Facial Feature Model for Emotion Recognition Using Fuzzy Reasoning Renan Contreras, Oleg Starostenko, Vicente Alarcon-Aquino, and Leticia Flores-Pulido CENTIA, Department of Computing, Electronics and

More information

Facial Expression Analysis for Estimating Pain in Clinical Settings

Facial Expression Analysis for Estimating Pain in Clinical Settings Facial Expression Analysis for Estimating Pain in Clinical Settings Karan Sikka University of California San Diego 9450 Gilman Drive, La Jolla, California, USA ksikka@ucsd.edu ABSTRACT Pain assessment

More information

MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION

MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION MEMORABILITY OF NATURAL SCENES: THE ROLE OF ATTENTION Matei Mancas University of Mons - UMONS, Belgium NumediArt Institute, 31, Bd. Dolez, Mons matei.mancas@umons.ac.be Olivier Le Meur University of Rennes

More information

Accuracy of three commercial automatic emotion recognition systems across different individuals and their facial expressions

Accuracy of three commercial automatic emotion recognition systems across different individuals and their facial expressions Accuracy of three commercial automatic emotion recognition systems across different individuals and their facial expressions Dupré, D., Andelic, N., Morrison, G., & McKeown, G. (Accepted/In press). Accuracy

More information

Generalization of a Vision-Based Computational Model of Mind-Reading

Generalization of a Vision-Based Computational Model of Mind-Reading Generalization of a Vision-Based Computational Model of Mind-Reading Rana el Kaliouby and Peter Robinson Computer Laboratory, University of Cambridge, 5 JJ Thomson Avenue, Cambridge UK CB3 FD Abstract.

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Term Project Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is the

More information

A Semi-supervised Approach to Perceived Age Prediction from Face Images

A Semi-supervised Approach to Perceived Age Prediction from Face Images IEICE Transactions on Information and Systems, vol.e93-d, no.10, pp.2875 2878, 2010. 1 A Semi-supervised Approach to Perceived Age Prediction from Face Images Kazuya Ueki NEC Soft, Ltd., Japan Masashi

More information

Facial Expression Recognition Using Principal Component Analysis

Facial Expression Recognition Using Principal Component Analysis Facial Expression Recognition Using Principal Component Analysis Ajit P. Gosavi, S. R. Khot Abstract Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However,

More information

A Study on Automatic Age Estimation using a Large Database

A Study on Automatic Age Estimation using a Large Database A Study on Automatic Age Estimation using a Large Database Guodong Guo WVU Guowang Mu NCCU Yun Fu BBN Technologies Charles Dyer UW-Madison Thomas Huang UIUC Abstract In this paper we study some problems

More information

Emotion Detection Through Facial Feature Recognition

Emotion Detection Through Facial Feature Recognition Emotion Detection Through Facial Feature Recognition James Pao jpao@stanford.edu Abstract Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions.

More information

Automatic Facial Expression Recognition Using Boosted Discriminatory Classifiers

Automatic Facial Expression Recognition Using Boosted Discriminatory Classifiers Automatic Facial Expression Recognition Using Boosted Discriminatory Classifiers Stephen Moore and Richard Bowden Centre for Vision Speech and Signal Processing University of Surrey, Guildford, GU2 7JW,

More information

DEEP convolutional neural networks have gained much

DEEP convolutional neural networks have gained much Real-time emotion recognition for gaming using deep convolutional network features Sébastien Ouellet arxiv:8.37v [cs.cv] Aug 2 Abstract The goal of the present study is to explore the application of deep

More information

Automatic Facial Action Unit Recognition by Modeling Their Semantic And Dynamic Relationships

Automatic Facial Action Unit Recognition by Modeling Their Semantic And Dynamic Relationships Chapter 10 Automatic Facial Action Unit Recognition by Modeling Their Semantic And Dynamic Relationships Yan Tong, Wenhui Liao, and Qiang Ji Abstract A system that could automatically analyze the facial

More information

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization

A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization A Human-Markov Chain Monte Carlo Method For Investigating Facial Expression Categorization Daniel McDuff (djmcduff@mit.edu) MIT Media Laboratory Cambridge, MA 02139 USA Abstract This paper demonstrates

More information

Learning to Rank Authenticity from Facial Activity Descriptors Otto von Guericke University, Magdeburg - Germany

Learning to Rank Authenticity from Facial Activity Descriptors Otto von Guericke University, Magdeburg - Germany Learning to Rank Authenticity from Facial s Otto von Guericke University, Magdeburg - Germany Frerk Saxen, Philipp Werner, Ayoub Al-Hamadi The Task Real or Fake? Dataset statistics Training set 40 Subjects

More information

Action Recognition. Computer Vision Jia-Bin Huang, Virginia Tech. Many slides from D. Hoiem

Action Recognition. Computer Vision Jia-Bin Huang, Virginia Tech. Many slides from D. Hoiem Action Recognition Computer Vision Jia-Bin Huang, Virginia Tech Many slides from D. Hoiem This section: advanced topics Convolutional neural networks in vision Action recognition Vision and Language 3D

More information

Blue Eyes Technology

Blue Eyes Technology Blue Eyes Technology D.D. Mondal #1, Arti Gupta *2, Tarang Soni *3, Neha Dandekar *4 1 Professor, Dept. of Electronics and Telecommunication, Sinhgad Institute of Technology and Science, Narhe, Maharastra,

More information

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals.

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. Bandara G.M.M.B.O bhanukab@gmail.com Godawita B.M.D.T tharu9363@gmail.com Gunathilaka

More information

Gray level cooccurrence histograms via learning vector quantization

Gray level cooccurrence histograms via learning vector quantization Gray level cooccurrence histograms via learning vector quantization Timo Ojala, Matti Pietikäinen and Juha Kyllönen Machine Vision and Media Processing Group, Infotech Oulu and Department of Electrical

More information

A HMM-based Pre-training Approach for Sequential Data

A HMM-based Pre-training Approach for Sequential Data A HMM-based Pre-training Approach for Sequential Data Luca Pasa 1, Alberto Testolin 2, Alessandro Sperduti 1 1- Department of Mathematics 2- Department of Developmental Psychology and Socialisation University

More information

A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions

A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions VICTOR-EMIL NEAGOE *, ANDREI-PETRU BĂRAR *, NICU SEBE **, PAUL ROBITU * * Faculty of Electronics, Telecommunications

More information

Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition

Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition , pp.131-135 http://dx.doi.org/10.14257/astl.2013.39.24 Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition SeungTaek Ryoo and Jae-Khun Chang School of Computer Engineering

More information

HUMAN EMOTION DETECTION THROUGH FACIAL EXPRESSIONS

HUMAN EMOTION DETECTION THROUGH FACIAL EXPRESSIONS th June. Vol.88. No. - JATIT & LLS. All rights reserved. ISSN: -8 E-ISSN: 87- HUMAN EMOTION DETECTION THROUGH FACIAL EXPRESSIONS, KRISHNA MOHAN KUDIRI, ABAS MD SAID AND M YUNUS NAYAN Computer and Information

More information

Affective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results

Affective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results Affective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results Seppo J. Laukka 1, Antti Rantanen 1, Guoying Zhao 2, Matti Taini 2, Janne Heikkilä

More information

Mammogram Analysis: Tumor Classification

Mammogram Analysis: Tumor Classification Mammogram Analysis: Tumor Classification Literature Survey Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is

More information

NMF-Density: NMF-Based Breast Density Classifier

NMF-Density: NMF-Based Breast Density Classifier NMF-Density: NMF-Based Breast Density Classifier Lahouari Ghouti and Abdullah H. Owaidh King Fahd University of Petroleum and Minerals - Department of Information and Computer Science. KFUPM Box 1128.

More information

Classroom Data Collection and Analysis using Computer Vision

Classroom Data Collection and Analysis using Computer Vision Classroom Data Collection and Analysis using Computer Vision Jiang Han Department of Electrical Engineering Stanford University Abstract This project aims to extract different information like faces, gender

More information

Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine

Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine Valfredo Pilla Jr, André Zanellato, Cristian Bortolini, Humberto R. Gamba and Gustavo Benvenutti Borba Graduate

More information

Event Detection: Ultra Large-scale Clustering of Facial Expressions

Event Detection: Ultra Large-scale Clustering of Facial Expressions Event Detection: Ultra Large-scale Clustering of Facial Expressions Thomas Vandal, Daniel McDuff and Rana El Kaliouby Affectiva, Waltham, USA Abstract Facial behavior contains rich non-verbal information.

More information

Automated Embryo Stage Classification in Time-Lapse Microscopy Video of Early Human Embryo Development

Automated Embryo Stage Classification in Time-Lapse Microscopy Video of Early Human Embryo Development Automated Embryo Stage Classification in Time-Lapse Microscopy Video of Early Human Embryo Development Yu Wang, Farshid Moussavi, and Peter Lorenzen Auxogyn, Inc. 1490 O Brien Drive, Suite A, Menlo Park,

More information

Valence-arousal evaluation using physiological signals in an emotion recall paradigm. CHANEL, Guillaume, ANSARI ASL, Karim, PUN, Thierry.

Valence-arousal evaluation using physiological signals in an emotion recall paradigm. CHANEL, Guillaume, ANSARI ASL, Karim, PUN, Thierry. Proceedings Chapter Valence-arousal evaluation using physiological signals in an emotion recall paradigm CHANEL, Guillaume, ANSARI ASL, Karim, PUN, Thierry Abstract The work presented in this paper aims

More information

Facial Emotion Recognition with Facial Analysis

Facial Emotion Recognition with Facial Analysis Facial Emotion Recognition with Facial Analysis İsmail Öztel, Cemil Öz Sakarya University, Faculty of Computer and Information Sciences, Computer Engineering, Sakarya, Türkiye Abstract Computer vision

More information

Audio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space

Audio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space 2010 International Conference on Pattern Recognition Audio-visual Classification and Fusion of Spontaneous Affective Data in Likelihood Space Mihalis A. Nicolaou, Hatice Gunes and Maja Pantic, Department

More information

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS

EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS 1 KRISHNA MOHAN KUDIRI, 2 ABAS MD SAID AND 3 M YUNUS NAYAN 1 Computer and Information Sciences, Universiti Teknologi PETRONAS, Malaysia 2 Assoc.

More information

Real-time Automatic Deceit Detection from Involuntary Facial Expressions

Real-time Automatic Deceit Detection from Involuntary Facial Expressions Real-time Automatic Deceit Detection from Involuntary Facial Expressions Zhi Zhang, Vartika Singh, Thomas E. Slowe, Sergey Tulyakov, and Venugopal Govindaraju Center for Unified Biometrics and Sensors

More information

A MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015

A MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015 VI International Conference on Computational Bioengineering ICCB 2015 M. Cerrolaza and S.Oller (Eds) A MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015 SALAH SALEH *, MANISH SAHU, ZUHAIR

More information

Automatic Coding of Facial Expressions Displayed During Posed and Genuine Pain

Automatic Coding of Facial Expressions Displayed During Posed and Genuine Pain Automatic Coding of Facial Expressions Displayed During Posed and Genuine Pain Gwen C. Littlewort Machine Perception Lab, Institute for Neural Computation University of California, San Diego La Jolla,

More information

Classification and Statistical Analysis of Auditory FMRI Data Using Linear Discriminative Analysis and Quadratic Discriminative Analysis

Classification and Statistical Analysis of Auditory FMRI Data Using Linear Discriminative Analysis and Quadratic Discriminative Analysis International Journal of Innovative Research in Computer Science & Technology (IJIRCST) ISSN: 2347-5552, Volume-2, Issue-6, November-2014 Classification and Statistical Analysis of Auditory FMRI Data Using

More information

EECS 433 Statistical Pattern Recognition

EECS 433 Statistical Pattern Recognition EECS 433 Statistical Pattern Recognition Ying Wu Electrical Engineering and Computer Science Northwestern University Evanston, IL 60208 http://www.eecs.northwestern.edu/~yingwu 1 / 19 Outline What is Pattern

More information

Automated Tessellated Fundus Detection in Color Fundus Images

Automated Tessellated Fundus Detection in Color Fundus Images University of Iowa Iowa Research Online Proceedings of the Ophthalmic Medical Image Analysis International Workshop 2016 Proceedings Oct 21st, 2016 Automated Tessellated Fundus Detection in Color Fundus

More information

A Bag of Words Approach for Discriminating between Retinal Images Containing Exudates or Drusen

A Bag of Words Approach for Discriminating between Retinal Images Containing Exudates or Drusen A Bag of Words Approach for Discriminating between Retinal Images Containing Exudates or Drusen by M J J P Van Grinsven, Arunava Chakravarty, Jayanthi Sivaswamy, T Theelen, B Van Ginneken, C I Sanchez

More information

SCIENTIFIC work on facial expressions can be traced back

SCIENTIFIC work on facial expressions can be traced back JOURNAL OF L A T E X CLASS FILES, VOL. 13, NO. 9, SEPTEMBER 2014 1 Automatic Analysis of Facial Actions: A Survey Brais Martinez, Member, IEEE, Michel F. Valstar, Senior Member, IEEE, Bihan Jiang, and

More information

Neural Conditional Ordinal Random Fields for Agreement Level Estimation

Neural Conditional Ordinal Random Fields for Agreement Level Estimation Neural Conditional Ordinal Random Fields for Agreement Level Estimation Nemanja Rakicevic, Ognjen Rudovic and Stavros Petridis Department of Computing Imperial College London Email: {n.rakicevic, o.rudovic,

More information

VIDEO SALIENCY INCORPORATING SPATIOTEMPORAL CUES AND UNCERTAINTY WEIGHTING

VIDEO SALIENCY INCORPORATING SPATIOTEMPORAL CUES AND UNCERTAINTY WEIGHTING VIDEO SALIENCY INCORPORATING SPATIOTEMPORAL CUES AND UNCERTAINTY WEIGHTING Yuming Fang, Zhou Wang 2, Weisi Lin School of Computer Engineering, Nanyang Technological University, Singapore 2 Department of

More information

Development of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition

Development of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 12, Issue 9 (September 2016), PP.67-72 Development of novel algorithm by combining

More information

Lung Cancer Diagnosis from CT Images Using Fuzzy Inference System

Lung Cancer Diagnosis from CT Images Using Fuzzy Inference System Lung Cancer Diagnosis from CT Images Using Fuzzy Inference System T.Manikandan 1, Dr. N. Bharathi 2 1 Associate Professor, Rajalakshmi Engineering College, Chennai-602 105 2 Professor, Velammal Engineering

More information

Computational modeling of visual attention and saliency in the Smart Playroom

Computational modeling of visual attention and saliency in the Smart Playroom Computational modeling of visual attention and saliency in the Smart Playroom Andrew Jones Department of Computer Science, Brown University Abstract The two canonical modes of human visual attention bottomup

More information

Facial Action Unit Detection by Cascade of Tasks

Facial Action Unit Detection by Cascade of Tasks Facial Action Unit Detection by Cascade of Tasks Xiaoyu Ding Wen-Sheng Chu 2 Fernando De la Torre 2 Jeffery F. Cohn 2,3 Qiao Wang School of Information Science and Engineering, Southeast University, Nanjing,

More information

Automatic Classification of Perceived Gender from Facial Images

Automatic Classification of Perceived Gender from Facial Images Automatic Classification of Perceived Gender from Facial Images Joseph Lemley, Sami Abdul-Wahid, Dipayan Banik Advisor: Dr. Razvan Andonie SOURCE 2016 Outline 1 Introduction 2 Faces - Background 3 Faces

More information

Automatic Depression Scale Prediction using Facial Expression Dynamics and Regression

Automatic Depression Scale Prediction using Facial Expression Dynamics and Regression Automatic Depression Scale Prediction using Facial Expression Dynamics and Regression Asim Jan, Hongying Meng, Yona Falinie Binti Abd Gaus, Fan Zhang, Saeed Turabzadeh Department of Electronic and Computer

More information

Putting Context into. Vision. September 15, Derek Hoiem

Putting Context into. Vision. September 15, Derek Hoiem Putting Context into Vision Derek Hoiem September 15, 2004 Questions to Answer What is context? How is context used in human vision? How is context currently used in computer vision? Conclusions Context

More information

A framework for the Recognition of Human Emotion using Soft Computing models

A framework for the Recognition of Human Emotion using Soft Computing models A framework for the Recognition of Human Emotion using Soft Computing models Md. Iqbal Quraishi Dept. of Information Technology Kalyani Govt Engg. College J Pal Choudhury Dept. of Information Technology

More information

Shu Kong. Department of Computer Science, UC Irvine

Shu Kong. Department of Computer Science, UC Irvine Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge and philosophy 4. Fine-grained classification with

More information

Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information

Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, S. Narayanan Emotion

More information

Detection of Facial Landmarks from Neutral, Happy, and Disgust Facial Images

Detection of Facial Landmarks from Neutral, Happy, and Disgust Facial Images Detection of Facial Landmarks from Neutral, Happy, and Disgust Facial Images Ioulia Guizatdinova and Veikko Surakka Research Group for Emotions, Sociality, and Computing Tampere Unit for Computer-Human

More information

Shu Kong. Department of Computer Science, UC Irvine

Shu Kong. Department of Computer Science, UC Irvine Ubiquitous Fine-Grained Computer Vision Shu Kong Department of Computer Science, UC Irvine Outline 1. Problem definition 2. Instantiation 3. Challenge 4. Fine-grained classification with holistic representation

More information

The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression

The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression Patrick Lucey 1,2, Jeffrey F. Cohn 1,2, Takeo Kanade 1, Jason Saragih 1, Zara Ambadar 2 Robotics

More information

Utilizing Posterior Probability for Race-composite Age Estimation

Utilizing Posterior Probability for Race-composite Age Estimation Utilizing Posterior Probability for Race-composite Age Estimation Early Applications to MORPH-II Benjamin Yip NSF-REU in Statistical Data Mining and Machine Learning for Computer Vision and Pattern Recognition

More information

Beyond R-CNN detection: Learning to Merge Contextual Attribute

Beyond R-CNN detection: Learning to Merge Contextual Attribute Brain Unleashing Series - Beyond R-CNN detection: Learning to Merge Contextual Attribute Shu Kong CS, ICS, UCI 2015-1-29 Outline 1. RCNN is essentially doing classification, without considering contextual

More information

AUTOMATIC DIABETIC RETINOPATHY DETECTION USING GABOR FILTER WITH LOCAL ENTROPY THRESHOLDING

AUTOMATIC DIABETIC RETINOPATHY DETECTION USING GABOR FILTER WITH LOCAL ENTROPY THRESHOLDING AUTOMATIC DIABETIC RETINOPATHY DETECTION USING GABOR FILTER WITH LOCAL ENTROPY THRESHOLDING MAHABOOB.SHAIK, Research scholar, Dept of ECE, JJT University, Jhunjhunu, Rajasthan, India Abstract: The major

More information

Video Saliency Detection via Dynamic Consistent Spatio- Temporal Attention Modelling

Video Saliency Detection via Dynamic Consistent Spatio- Temporal Attention Modelling AAAI -13 July 16, 2013 Video Saliency Detection via Dynamic Consistent Spatio- Temporal Attention Modelling Sheng-hua ZHONG 1, Yan LIU 1, Feifei REN 1,2, Jinghuan ZHANG 2, Tongwei REN 3 1 Department of

More information

Annotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation

Annotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation Annotation and Retrieval System Using Confabulation Model for ImageCLEF2011 Photo Annotation Ryo Izawa, Naoki Motohashi, and Tomohiro Takagi Department of Computer Science Meiji University 1-1-1 Higashimita,

More information

Face Gender Classification on Consumer Images in a Multiethnic Environment

Face Gender Classification on Consumer Images in a Multiethnic Environment Face Gender Classification on Consumer Images in a Multiethnic Environment Wei Gao and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn

More information

Analysis of EEG signals and facial expressions for continuous emotion detection

Analysis of EEG signals and facial expressions for continuous emotion detection IEEE TRANSACTIONS ON AFFECTIVE COMPUTING 1 Analysis of EEG signals and facial expressions for continuous emotion detection Mohammad Soleymani, Member, IEEE, Sadjad Asghari-Esfeden, Student member, IEEE

More information

Hierarchical Convolutional Features for Visual Tracking

Hierarchical Convolutional Features for Visual Tracking Hierarchical Convolutional Features for Visual Tracking Chao Ma Jia-Bin Huang Xiaokang Yang Ming-Husan Yang SJTU UIUC SJTU UC Merced ICCV 2015 Background Given the initial state (position and scale), estimate

More information

Exploiting Privileged Information for Facial Expression Recognition

Exploiting Privileged Information for Facial Expression Recognition Exploiting Privileged Information for Facial Expression Recognition Michalis Vrigkas 1, Christophoros Nikou 1,2, Ioannis A. Kakadiaris 2 1 Department of Computer Science & Engineering, University of Ioannina,

More information

The 29th Fuzzy System Symposium (Osaka, September 9-, 3) Color Feature Maps (BY, RG) Color Saliency Map Input Image (I) Linear Filtering and Gaussian

The 29th Fuzzy System Symposium (Osaka, September 9-, 3) Color Feature Maps (BY, RG) Color Saliency Map Input Image (I) Linear Filtering and Gaussian The 29th Fuzzy System Symposium (Osaka, September 9-, 3) A Fuzzy Inference Method Based on Saliency Map for Prediction Mao Wang, Yoichiro Maeda 2, Yasutake Takahashi Graduate School of Engineering, University

More information

A Comparison of Collaborative Filtering Methods for Medication Reconciliation

A Comparison of Collaborative Filtering Methods for Medication Reconciliation A Comparison of Collaborative Filtering Methods for Medication Reconciliation Huanian Zheng, Rema Padman, Daniel B. Neill The H. John Heinz III College, Carnegie Mellon University, Pittsburgh, PA, 15213,

More information

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION

COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION COMPARATIVE STUDY ON FEATURE EXTRACTION METHOD FOR BREAST CANCER CLASSIFICATION 1 R.NITHYA, 2 B.SANTHI 1 Asstt Prof., School of Computing, SASTRA University, Thanjavur, Tamilnadu, India-613402 2 Prof.,

More information

Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions

Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions Grzegorz Brodny, Agata Kołakowska, Agnieszka Landowska, Mariusz Szwoch, Wioleta Szwoch, Michał R. Wróbel

More information

FEATURE EXTRACTION USING GAZE OF PARTICIPANTS FOR CLASSIFYING GENDER OF PEDESTRIANS IN IMAGES

FEATURE EXTRACTION USING GAZE OF PARTICIPANTS FOR CLASSIFYING GENDER OF PEDESTRIANS IN IMAGES FEATURE EXTRACTION USING GAZE OF PARTICIPANTS FOR CLASSIFYING GENDER OF PEDESTRIANS IN IMAGES Riku Matsumoto, Hiroki Yoshimura, Masashi Nishiyama, and Yoshio Iwai Department of Information and Electronics,

More information

Intelligent Edge Detector Based on Multiple Edge Maps. M. Qasim, W.L. Woon, Z. Aung. Technical Report DNA # May 2012

Intelligent Edge Detector Based on Multiple Edge Maps. M. Qasim, W.L. Woon, Z. Aung. Technical Report DNA # May 2012 Intelligent Edge Detector Based on Multiple Edge Maps M. Qasim, W.L. Woon, Z. Aung Technical Report DNA #2012-10 May 2012 Data & Network Analytics Research Group (DNA) Computing and Information Science

More information

lateral organization: maps

lateral organization: maps lateral organization Lateral organization & computation cont d Why the organization? The level of abstraction? Keep similar features together for feedforward integration. Lateral computations to group

More information

Affect Intensity Estimation using Multiple Modalities

Affect Intensity Estimation using Multiple Modalities Affect Intensity Estimation using Multiple Modalities Amol S. Patwardhan, and Gerald M. Knapp Department of Mechanical and Industrial Engineering Louisiana State University apatwa3@lsu.edu Abstract One

More information

R Jagdeesh Kanan* et al. International Journal of Pharmacy & Technology

R Jagdeesh Kanan* et al. International Journal of Pharmacy & Technology ISSN: 0975-766X CODEN: IJPTFI Available Online through Research Article www.ijptonline.com FACIAL EMOTION RECOGNITION USING NEURAL NETWORK Kashyap Chiranjiv Devendra, Azad Singh Tomar, Pratigyna.N.Javali,

More information