A Common Framework for Real-Time Emotion Recognition and Facial Action Unit Detection
Tobias Gehrig and Hazım Kemal Ekenel
Facial Image Processing and Analysis Group, Institute for Anthropomatics
Karlsruhe Institute of Technology, P.O. Box 6980, Karlsruhe, Germany
{tobias.gehrig,

Abstract

In this paper, we present a common framework for real-time action unit detection and emotion recognition that we have developed for the emotion recognition and action unit detection sub-challenges of the FG 2011 Facial Expression Recognition and Analysis Challenge. For these tasks we employed a local appearance-based face representation approach using the discrete cosine transform, which has been shown to be very effective and robust for face recognition. Using these features, we trained multiple one-versus-all support vector machine classifiers corresponding to the individual classes of the specific task. With this framework we achieve 24.2% and 7.6% absolute improvement over the overall baseline results on the emotion recognition and action unit detection sub-challenges, respectively.

1. Introduction

Facial expressions are naturally used by humans to communicate their emotions, opinions, intentions, and cognitive states, and are therefore important in natural communication. In today's human-computer interaction (HCI) scenarios, however, this kind of information is mostly neglected, even though it can be much faster and more direct than describing the affective state with words. To compensate for this neglect and improve HCI, many studies on automatic facial expression analysis have recently been conducted, and the topic has become increasingly popular. A wide range of applications [5] including, but not limited to, psychological studies, pain [2] or stress detection, online tutoring systems [19], and assistance systems for autistic persons [12] has also fueled the growing interest in this topic.
When we focus on applications in HCI scenarios, such as mobile service robots [20], or especially on safety-critical applications, such as drowsy driver detection [18], the runtime of an algorithm plays an important role, since the system should react in real-time without much latency.

Facial expression analysis can be performed at different degrees of granularity. It can either be done by directly classifying the prototypic expressions (e.g. anger, fear, joy, surprise, ...) from face images or, at a finer granularity, by detecting facial muscle activities. The latter is commonly described using the facial action coding system (FACS) [8], which defines action units (AUs) corresponding to atomic facial muscle actions. Previous approaches to automatic expression analysis utilized, besides other feature representations, Gabor wavelets [4], local binary patterns (LBP) [15], or the widely used active appearance models (AAM) to model the human face [11, 14]. Detailed surveys of work on automatic expression analysis can be found in [9, 13, 16, 21].

In this study, we considered two main points while choosing our face representation method. The first was to be able to perform multiple face classification tasks, such as face recognition, gender classification, and expression classification, with the same representation. The reason is that a common framework that can provide, for example, identity and gender information about the subject allows this information to be used to enhance the expression analysis system by employing person-specific or gender-specific expression analysis models, which has been shown in [14] to improve performance. The second was real-time processing capability. To achieve this, the feature extraction should be fast to compute and, at the same time, the representation should be compact.
These design choices led us to utilize the discrete cosine transform (DCT) for the representation. We employed a local appearance-based face representation approach, in which the DCT is used to model local facial regions [7], and an ensemble of support vector machine (SVM) classifiers for the task of automatic facial expression analysis. We evaluate the approach
within the emotion recognition and action unit detection sub-challenges of the FG 2011 Facial Expression Recognition and Analysis Challenge (FERA2011) [17] and compare our results to the baseline results, which were obtained using a local binary pattern (LBP)-based face representation [17]. We achieve 24.2% absolute improvement in terms of classification rate over the baseline results for the emotion recognition sub-challenge, and 7.6% absolute improvement in terms of F1-score for the sub-challenge on action unit detection using the same face representation. These results led to a high rank and outperformed a range of other approaches in the FERA2011 emotion recognition and action unit detection sub-challenges [1].

In Section 2, we describe the face representation framework, followed by a description of FERA2011, its dataset, tasks, and baseline system in Section 3. We then present our proposed systems for both tasks and discuss their results in Section 4. Finally, in Section 5, we give conclusions and future research directions.

2. Methodology

The face processing system is illustrated in Figure 1 and described in more detail in the following subsections.

Figure 1. Overview of the common face processing and classification framework: MCT-based face and eye detection, eye-based alignment, block-based DCT (10 coefficients per block), and one-vs-all SVMs with an RBF kernel.

2.1. Preprocessing

The face and eyes are automatically detected using a modified census transform (MCT) based face and eye detector [10]. The face image is then aligned by a Euclidean transform using the detected eye coordinates, so that the eyes are at the same position in all images, based on a given eye row and interocular distance. This reduces the amount of variation in the feature space that is due to in-plane rotation, scale variations, and small pose changes.
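The eye-based alignment step can be sketched as follows. The sketch builds a 2x3 similarity (rotation, scale, translation) matrix that maps the detected eye centers onto fixed target positions; the function name and the example coordinates are illustrative, not from the paper.

```python
import numpy as np

def eye_alignment_transform(left_eye, right_eye, target_left, target_right):
    """2x3 similarity transform mapping detected eye centers to fixed targets."""
    src = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst = np.asarray(target_right, float) - np.asarray(target_left, float)
    # scale and rotation that take the detected eye vector to the target vector
    scale = np.hypot(dst[0], dst[1]) / np.hypot(src[0], src[1])
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    # translation so that the left eye lands exactly on its target
    t = np.asarray(target_left, float) - R @ np.asarray(left_eye, float)
    return np.hstack([R, t[:, None]])  # apply to a point p as R @ p + t
```

With both eye targets on the same row, this fixes the interocular distance and eye row for every image, as described above.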
Finally, the image is converted to grayscale.

2.2. Local Appearance-based Face Representation

From the preprocessed face image, a local appearance-based face representation is retrieved. Such representations are known to provide more robustness against local appearance changes than holistic ones. This is due to the fact that, for local approaches, the face representation differs only in those regions where changes occur, e.g. due to expression, local illumination, or occlusion, whereas for holistic approaches the whole face representation changes. Therefore, local representations are widely used in automatic face analysis, e.g. for face recognition [7]. Here, we retrieve these local regions by dividing the face image into equally sized, non-overlapping blocks of N×N pixels, as shown in Figure 1. The block size is chosen such that it provides stationarity, simple transform complexity, and sufficient compression. These local blocks are then processed individually by the two-dimensional type-II DCT, which provides a compact representation of the data similar to the Karhunen-Loève transform (KLT), but with the advantage of being data independent. The DCT coefficients are extracted using zig-zag scanning and all but the first few coefficients are discarded. To reduce the effect of illumination and to balance the contribution of each coefficient, which normally has higher magnitude for small indices, the resulting coefficient vectors are normalized separately for each block, as proposed by Ekenel [7]. This is achieved by first dividing each coefficient by its standard deviation and then normalizing the resulting local feature vector to unit norm. Finally, all local feature vectors are concatenated to form the overall feature vector.

2.3. Support Vector Machines (SVM)

One of the most popular approaches to facial expression analysis is the use of SVMs [4, 11, 15, 17].
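The block-based DCT feature extraction with its two-stage normalization can be sketched as below. The per-coefficient standard deviations would be estimated on training data; here they are passed in as a parameter, and all names are illustrative.

```python
import numpy as np

def dct_matrix(n):
    # orthonormal type-II DCT basis (rows index frequency, columns index samples)
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def zigzag(n):
    # (row, col) pairs in zig-zag order over an n x n block
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def block_dct_features(face, coeff_std, block=8, n_coeff=10):
    """Concatenate normalized low-frequency DCT coefficients of all blocks."""
    C = dct_matrix(block)
    order = zigzag(block)[:n_coeff]
    feats = []
    for r in range(0, face.shape[0] - block + 1, block):
        for c in range(0, face.shape[1] - block + 1, block):
            coeffs = C @ face[r:r + block, c:c + block] @ C.T
            v = np.array([coeffs[i, j] for i, j in order])
            v = v / coeff_std                      # balance coefficient magnitudes
            v = v / (np.linalg.norm(v) + 1e-12)    # unit norm per block
            feats.append(v)
    return np.concatenate(feats)
```

For an 80×80 face with 8×8 blocks and 10 coefficients each, this yields a 1000-dimensional vector; the 800 dimensions mentioned later correspond to the paper's actual crop size.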
To prevent some attributes from dominating others due to different numeric ranges, we normalize the feature vectors F = {f_i} before feeding them into the SVM by making each attribute f_{i,j} zero-mean and unit-variance over all feature vectors F. The normalization parameters are estimated on the training set and applied to the test set before classification. We decided to use an SVM with a radial basis function (RBF) kernel, as proposed in [15], which transforms the features into a higher dimensional space, where the data can then be linearly separated by a hyperplane.

For both sub-challenges we utilize a single one-versus-all classifier per class to be detected, which is trained on the occurrences of that class as positive samples and the occurrences of all other classes as negative samples. For action unit detection, a threshold on the distance to the hyperplane directly provides the detection estimate for the particular AU in the current frame. For emotion recognition, a model for probability estimates is additionally trained for each classifier to make the classifier outputs more comparable. The decision for a frame is then obtained by selecting the class with the highest probability. Finally, majority voting over all frame decisions of a video gives the emotion estimate for that video.
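The per-frame and per-video decision rules can be sketched as follows; the class names are illustrative, and the alphabetical tie-break matches the scheme described in the experimental setup.

```python
import numpy as np
from collections import Counter

def detect_aus(distances, threshold=0.0):
    # an AU is marked present when the signed distance to the hyperplane
    # of its one-vs-all classifier exceeds the threshold
    return [d > threshold for d in distances]

def video_emotion(frame_probs, classes):
    # per frame: class of the most probable one-vs-all classifier;
    # per video: majority vote over frames, ties broken alphabetically
    labels = [classes[int(np.argmax(p))] for p in frame_probs]
    counts = Counter(labels)
    top = max(counts.values())
    return min(label for label, count in counts.items() if count == top)
```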
3. FG 2011 Facial Expression Recognition and Analysis Challenge (FERA2011)

The presented approach is evaluated within the FG 2011 Facial Expression Recognition and Analysis Challenge (FERA2011) [17]. The aim of the challenge is to overcome the lack of standardized evaluation procedures, and thus the low comparability, in the previous literature on automatic facial expression analysis. In this section, we briefly describe the database used in FERA2011, its sub-challenges, and the official baseline system. Detailed information can be found in the challenge paper [17].

3.1. Database

The GEMEP-FERA database used for the FERA challenge is a subset derived from the GEneva Multimodal Emotion Portrayals (GEMEP) database [3]. It consists of recordings of 10 actors displaying a variety of expressions while either uttering a meaningless phrase or the word "Aaah". The movement of the actors' faces varies from steady frontal to fast and wild, with out-of-plane rotations. The sequences are between 1 and 4 seconds long. For the evaluation, the dataset is provided as a strictly divided training and test set. The training set contains recordings of 7 actors, 3 of which are also present in the test set together with 3 unseen persons. Thus, the system can be evaluated on subject-dependent and subject-independent data.

3.2. Emotion Recognition Sub-Challenge

For the emotion recognition sub-challenge, the task is to predict one of five discrete, mutually exclusive emotion classes (anger, fear, joy, relief, and sadness) per video. The performance of each emotion classifier is measured by means of the classification rate, i.e. the ratio between the number of correctly classified videos and the total number of videos of the corresponding emotion.
The overall system performance is computed as the average over all individual emotion classification rates.

3.3. Action Unit Detection Sub-Challenge

For the action unit (AU) detection sub-challenge, the task is to detect the presence or absence of 12 AUs from the facial action coding system (FACS) (1, 2, 4, 6, 7, 10, 12, 15, 17, 18, 25, and 26) on a frame-by-frame basis. During speech, which was labeled as AD50, AU25 and AU26 were not annotated; these speech frames are therefore neither used for training the AU25 and AU26 classifiers nor included in the computation of the scores. The performance of each AU detector is measured by means of the F1-measure, and the overall system performance is computed as the average over all individual F1-scores.

3.4. Baseline system

The organizers of FERA2011 provide baseline results on the GEMEP-FERA corpus so that participants have a common ground to compare their results to [17]. The baseline system first uses the output of the OpenCV face and eye detectors to align and scale the face image to pixels. The aligned face image is then split into blocks of pixels, for which histograms of uniform local binary patterns (ULBP) with 8 neighbors and radius 1 are generated. These histograms are then concatenated to build a 5900-dimensional feature vector, whose dimension is then reduced by applying principal component analysis (PCA). Finally, one-versus-all SVMs with RBF kernels are used to classify the data. For the emotion recognition task, this processing is performed on the whole face, followed by majority voting over the estimates for each video. For the AU task, upper face AUs are instead detected on the upper half of the face image and lower face AUs on the lower half, respectively. Additionally, the training set was filtered such that for each video only one frame per AU combination was used for training.
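The two evaluation measures defined above, the per-class classification rate and the F1-measure, can be sketched as:

```python
def classification_rates(true, pred):
    # per-emotion rate: correctly classified videos / videos of that emotion;
    # overall: unweighted average over the emotion classes
    classes = sorted(set(true))
    rates = {c: sum(t == c and p == c for t, p in zip(true, pred)) /
                sum(t == c for t in true) for c in classes}
    return rates, sum(rates.values()) / len(classes)

def f1_score(tp, fp, fn):
    # harmonic mean of precision and recall for one AU detector
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```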
For further details about the baseline system, the interested reader is referred to the challenge paper [17].

4. Evaluation of the proposed system

In this section, we first describe the setup common to both of our systems for the two FERA2011 sub-challenges. This is followed by the setup and results of the emotion recognition and action unit detection sub-challenges, respectively. Finally, the runtime is evaluated.

4.1. Common Setup

In our proposed common framework, each frame of the video sequences is first processed by a face and eye detector. Afterwards, the face images are aligned with respect to the detected eye center locations so that the eye distance equals 31 pixels and the eyes are in the 26th row of the cropped image. The DCT is calculated for blocks of 8×8 pixels and only the first 10 coefficients are used. This leads to a feature vector of 800 dimensions. These parameters proved best in preliminary experiments on the GEMEP-FERA data using double cross-validation or a development/validation split of the training data. It was also shown in [7] that a block size of 8×8 and keeping 10 coefficients gives the best compromise between recognition rate and data compression for face recognition. If no face is detected, the frame is simply ignored. If no eyes are detected, the frame is ignored for training, but for testing the unaligned face is processed as described above.

For classification, we use the SVM implementation provided by LIBSVM [6] with an RBF kernel. The best values for the soft margin parameter C and the kernel parameter γ are estimated by a grid search utilizing 5-fold subject-independent cross-validation on the training data.

4.2. Emotion Recognition Sub-Challenge

Based on this common setup, we built a system for the emotion recognition sub-challenge, which is described in this subsection together with a discussion of the results on the GEMEP-FERA dataset.

Setup

For the emotion recognition sub-challenge we use a single one-versus-all SVM classifier per emotion class, and for each classifier a model for probability estimates is trained. For the training of such an emotion classifier we use all the frames of the videos labeled with the corresponding emotion as positive samples and all others as negative samples. The grid search is performed over C = 2^k and γ = 2^l with k = -3, ..., 1 and l = -16, ..., -7, respectively. In the classification stage, each frame of a video is first classified by all the emotion classifiers. The estimated emotion for that frame is then set to the emotion corresponding to the classifier with the highest probability. Since the task is to output one emotion estimate per video, we finally perform majority voting over the emotion estimates of all frames of each video, as is done in the baseline method [17]. In case the same number of estimates is returned by multiple classifiers, we select the emotion that comes first in alphabetical order.

Results

The confusion matrices for the emotion task are shown in Table 1, Table 2, and Table 3 for the person independent and person specific portions of the test set, as well as for the whole test set, respectively. The classification rates for our proposed DCT-based approach and the official LBP-based baseline are given in Table 4.

Table 1. Confusion matrix for emotion recognition on the part of the test set with unknown persons (person independent).
pred. \ truth   Anger   Fear   Joy   Relief   Sadness
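The subject-independent splitting behind the 5-fold cross-validation used for the grid search can be sketched as below. The round-robin fold assignment is an assumption; the paper does not specify how subjects are distributed across folds.

```python
def subject_folds(subject_ids, n_folds=5):
    # assign whole subjects to folds so no subject appears in both the
    # training and the validation part of any fold
    subjects = sorted(set(subject_ids))
    fold_of = {s: i % n_folds for i, s in enumerate(subjects)}
    folds = [[] for _ in range(n_folds)]
    for idx, s in enumerate(subject_ids):
        folds[fold_of[s]].append(idx)
    return folds
```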
(The results changed slightly after the challenge submission, since we found a small bug in the meantime. The submitted system achieved 65.8%, 94.4%, and 77.3% person independent, person specific, and overall classification rate, respectively.)

Table 2. Confusion matrix for emotion recognition on the part of the test set with known persons (person specific).
pred. \ truth   Anger   Fear   Joy   Relief   Sadness

Table 3. Confusion matrix for emotion recognition on the whole test set.
pred. \ truth   Anger   Fear   Joy   Relief   Sadness

Table 4. Classification rates for emotion recognition using DCT and the LBP baseline [17]. Results are provided for the person specific (PS), person independent (PI), and overall partitions.
                    DCT                         LBP
Emotion    PI       PS       Overall     PI     PS     Overall
anger      92.9%    100.0%   96.3%       86%    92%    89%
fear       40.0%    90.0%    60.0%       7%     40%    20%
joy        100.0%   100.0%   100.0%      70%    73%    71%
relief     68.8%    100.0%   80.8%       31%    70%    46%
sadness    40.0%    100.0%   64.0%       27%    90%    52%
Average    68.3%    98.0%    80.2%       44%    73%    56%

We can see from the confusion matrix on the person specific portion in Table 2 that there is almost no confusion between the emotions, which can also be seen from the person specific column of the classification results in Table 4, where 100% accuracy was achieved for all classes besides fear. This shows that the local appearance-based DCT face representation is very well suited for emotion classification on known subjects. On the person independent portion the results look different. Here, 100% accuracy is achieved only for joy, and, compared to the person specific portion, sadness is confused with anger to a large extent. Fear is also confused almost equally with anger and joy. Looking at the overall performance, anger and joy appear relatively easy to distinguish from the other emotions, which can also be observed from the LBP-based baseline results reproduced in Table 4.
Emotions like fear and sadness, and to some extent also relief, are very challenging for both approaches when the subject is unknown. This could simply mean that subjects tend to express these emotions differently; to get a statistically meaningful answer, one would have to evaluate on more data. It can be observed that for each of the portions the DCT-based approach gives around 24.2% absolute improvement over the LBP-based baseline. Furthermore, a classification rate of 98% on the person specific portion of the test data shows that our approach is especially well suited for person specific emotion classification tasks.
4.3. Action Unit Detection Sub-Challenge

In this subsection, we apply the same local appearance-based DCT face representation framework to the action unit detection sub-challenge and describe the setup, the differences from the emotion classification system, and the results on the GEMEP-FERA dataset.

Setup

For the action unit detection sub-challenge we also use a single one-versus-all SVM classifier per action unit. For the training of such an action unit classifier we select all the frames of the videos for which the corresponding action unit was labeled as active as positive sample candidates and all others as negative sample candidates. Only for AU25 and AU26 do we additionally remove all samples from the training set that are labeled with AD50 as being present. These sample candidates are then balanced by randomly removing samples from the larger class until there are equal numbers of positive and negative samples. The grid search for the parameter estimation of the SVM is performed over C = 2^k and γ = 2^l with k = 0, ..., 31 and l = -15, ..., -1, respectively. In the classification stage, each frame of a video is independently classified by all the action unit classifiers, and all action units for which the distance to the hyperplane is greater than 0 are set to be present in that frame. For frames that were ignored due to missing face detections, all action units are set to be inactive.

Table 5. F1 scores for action unit detection using DCT and the LBP baseline [17]. Results are provided for the person specific (PS), person independent (PI), and overall partitions.
Action   DCT                       LBP
Unit     PI      PS      Overall   PI      PS      Overall
AU1      60.6%   30.7%   50.8%     63.3%   36.2%   56.7%
AU2      52.0%   40.5%   47.9%     67.5%   40.0%   58.9%
AU4      59.9%   52.6%   57.3%     13.3%   29.8%   19.2%
AU6      82.5%   62.0%   76.1%     53.6%   25.5%   46.3%
AU7      51.6%   57.1%   53.9%     49.3%   48.1%   48.9%
AU10     …       53.9%   49.4%     44.5%   52.6%   47.9%
AU12     …       82.9%   80.5%     76.9%   68.8%   74.2%
AU15     …       22.8%   18.0%     8.2%    19.9%   13.3%
AU17     …       24.8%   42.7%     37.8%   34.9%   36.9%
AU18     …       26.8%   31.9%     12.6%   24.0%   17.6%
AU25     …       76.5%   78.0%     79.6%   80.9%   80.2%
AU26     …       40.8%   45.3%     37.1%   47.4%   41.5%
Average  55.2%   47.6%   52.7%     45.3%   42.3%   45.1%

Table 6. Two alternative forced choice (2AFC) scores for action unit detection using DCT and the LBP baseline [17]. Results are provided for the person specific (PS), person independent (PI), and overall partitions.
Action   DCT                       LBP
Unit     PI      PS      Overall   PI      PS      Overall
AU1      52.6%   58.2%   58.2%     84.5%   61.3%   79.0%
AU2      72.9%   64.7%   70.0%     81.8%   64.0%   76.7%
AU4      51.8%   49.9%   51.1%     48.1%   60.7%   52.6%
AU6      87.9%   81.5%   84.8%     69.0%   56.8%   65.7%
AU7      68.5%   62.6%   66.5%     57.2%   53.0%   55.6%
AU10     …       59.1%   53.4%     57.7%   62.7%   59.7%
AU12     …       90.0%   84.2%     73.8%   70.0%   72.4%
AU15     …       50.9%   49.8%     55.5%   56.7%   56.3%
AU17     …       58.2%   60.7%     67.9%   66.1%   64.6%
AU18     …       57.4%   70.7%     62.0%   59.9%   61.0%
AU25     …       67.1%   66.9%     54.4%   66.9%   59.3%
AU26     …       54.2%   59.4%     45.7%   55.5%   50.0%
Average  65.3%   62.8%   64.6%     63.1%   61.1%   62.8%

Results

The F1 measures and two alternative forced choice (2AFC) scores for the action unit task using our proposed DCT-based approach and the official LBP-based baseline are shown in Table 5 and Table 6, respectively. We can see in Table 5 that for AU6, AU12, and AU25 we achieve reasonable results, which shows that smiles are very well detectable. The poor performance for AU15, AU17, AU18, and AU26 could be due to the amount of training data available, since for these AUs we had only between 400 and 1000 samples, while for the other AUs around 1300 to 2700 samples were available.
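The random undersampling used in the setup above to balance each AU classifier's training set can be sketched as follows; the fixed seed is illustrative, added only to make the sketch reproducible.

```python
import random

def balance(positives, negatives, seed=0):
    # randomly drop samples from the larger class until both classes
    # contain the same number of samples
    rng = random.Random(seed)
    positives, negatives = list(positives), list(negatives)
    if len(positives) > len(negatives):
        positives = rng.sample(positives, len(negatives))
    elif len(negatives) > len(positives):
        negatives = rng.sample(negatives, len(positives))
    return positives, negatives
```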
Still, our approach performs better than the baseline on most of them, which might indicate that the DCT representation allows a classifier to generalize from fewer training samples. Since both systems show similar performance trends, these results could be due to the distribution of the AUs in the dataset, or simply a hint that these AUs are harder to detect in this dataset with these features. (The results changed slightly after the challenge submission, since we found a small bug in the meantime. The submitted system had a 52.2% overall F1-score and 64% overall 2AFC.) One interesting observation about both systems is that for about two thirds of the AUs the performance is better on the person independent portion of the test set. According to the organizers, this could be due to the distribution of AUs across subjects. Compared to the LBP-based baseline results, our approach achieves 7.6% absolute improvement in terms of the overall F1 score.

Runtime

Our system has also proven to be very fast in classifying the test data. On an Intel Core i5-750 with 4 cores at 2.67 GHz each, it processed the 4733 frames of the action unit detection test set in approximately 3.35 minutes, which corresponds to a frame rate of around 23.6 frames per second. For emotion recognition, the classification of the 7537 test frames took approximately 4.92 minutes, which corresponds to around 25.5 frames per second. Both of these timing tests include everything from loading the videos from disk, through feature extraction, to classifying the samples and saving the results back to disk. These results show that the system is real-time capable.
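As a quick check, the reported frame rates follow directly from the frame counts and timings:

```python
def frames_per_second(n_frames, minutes):
    # total frames divided by the elapsed wall-clock time in seconds
    return n_frames / (minutes * 60.0)
```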
5. Conclusion and Future Work

In this paper, we proposed a common framework for real-time action unit detection and emotion recognition using a local appearance-based DCT face representation and one-versus-all SVM classifiers. We evaluated the proposed approach within the FERA 2011 Challenge and compared its performance with the official LBP-based baseline results. We achieve 24.2% and 7.6% absolute improvement on the emotion recognition and action unit detection sub-challenges, respectively. This shows that the local appearance-based DCT face representation is well suited for emotion classification as well as action unit detection tasks. We also showed that the system runs in real-time. In the future, we plan to investigate the low performance of some of the action unit detectors as well as the generalization of the approach to other databases. In addition, incorporating temporal information, utilizing the locality of the AUs, and using an intelligent sample selection method for determining which samples the detectors should be trained on could improve the performance.

6. Acknowledgments

This work is funded by the Concept for the Future of Karlsruhe Institute of Technology within the framework of the German Excellence Initiative.

References

[1] FERA2011 website.
[2] A. B. Ashraf, S. Lucey, J. F. Cohn, T. Chen, Z. Ambadar, K. M. Prkachin, and P. E. Solomon. The painful face: Pain expression recognition using active appearance models. Image and Vision Computing, 27(12), 2009.
[3] T. Bänziger and K. R. Scherer. Introducing the Geneva Multimodal Emotion Portrayal (GEMEP) Corpus. In K. R. Scherer, T. Bänziger, and E. B. Roesch, editors, Blueprint for Affective Computing: A Sourcebook. Oxford University Press, Oxford, England, 2010.
[4] M. S. Bartlett, G. C. Littlewort, M. G. Frank, C. Lainscsek, I. R. Fasel, and J. R. Movellan. Automatic Recognition of Facial Actions in Spontaneous Expressions. Journal of Multimedia, 2006.
[5] M. S. Bartlett and J. Whitehill. Automated facial expression measurement: Recent applications to basic research in human behavior, learning, and education. In A. Calder, G. Rhodes, J. V. Haxby, and M. H. Johnson, editors, Handbook of Face Perception. Oxford University Press.
[6] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[7] H. K. Ekenel. A Robust Face Recognition Algorithm for Real-World Applications. PhD thesis, Universität Karlsruhe (TH), Karlsruhe, Germany.
[8] P. Ekman and W. V. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, California, 1978.
[9] B. Fasel and J. Luettin. Automatic facial expression analysis: a survey. Pattern Recognition, 36(1), Jan. 2003.
[10] C. Küblbeck and A. Ernst. Face detection and tracking in video sequences using the modified census transformation. Image and Vision Computing, 24(6), June 2006.
[11] P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, and Z. Ambadar. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Proceedings of the 3rd IEEE CVPR Workshop on CVPR for Human Communicative Behavior Analysis, 2010.
[12] M. Madsen, R. el Kaliouby, M. Goodwin, and R. W. Picard. Technology for Just-In-Time In-Situ Learning of Facial Affect for Persons Diagnosed with an Autism Spectrum Disorder. In Proceedings of the 10th ACM Conference on Computers and Accessibility (ASSETS), 2008.
[13] M. Pantic and L. Rothkrantz. Automatic analysis of facial expressions: The state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12), 2000.
[14] Y. Saatci and C. Town. Cascaded Classification of Gender and Facial Expression using Active Appearance Models. In Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR 06), 2006.
[15] C. Shan, S. Gong, and P. W. McOwan. Facial expression recognition based on Local Binary Patterns: A comprehensive study. Image and Vision Computing, 27(6), 2009.
[16] Y.-I. Tian, T. Kanade, and J. Cohn. Recognizing action units for facial expression analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(2):97-115, 2001.
[17] M. F. Valstar, B. Jiang, M. Méhu, M. Pantic, and K. Scherer. The First Facial Expression Recognition and Analysis Challenge. In Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition, 2011.
[18] E. Vural, M. Cetin, A. Ercil, G. Littlewort, M. Bartlett, and J. Movellan. Drowsy Driver Detection Through Facial Movement Analysis. In Proceedings of the ICCV 2007 Workshop on Human Computer Interaction, 2007.
[19] J. Whitehill, M. Bartlett, and J. Movellan. Automatic Facial Expression Recognition for Intelligent Tutoring Systems. In Proceedings of the IEEE CVPR Workshop on CVPR for Human Communicative Behavior Analysis, 2008.
[20] T. Wilhelm, H.-J. Böhme, and H.-M. Groß. Classification of Face Images for Gender, Age, Facial Expression, and Identity. In Proceedings of the 15th International Conference on Artificial Neural Networks: Biological Inspirations (ICANN 2005), 2005.
[21] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1):39-58, 2009.
More informationA MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015
VI International Conference on Computational Bioengineering ICCB 2015 M. Cerrolaza and S.Oller (Eds) A MULTIMODAL NONVERBAL HUMAN-ROBOT COMMUNICATION SYSTEM ICCB 2015 SALAH SALEH *, MANISH SAHU, ZUHAIR
More informationUtilizing Posterior Probability for Race-composite Age Estimation
Utilizing Posterior Probability for Race-composite Age Estimation Early Applications to MORPH-II Benjamin Yip NSF-REU in Statistical Data Mining and Machine Learning for Computer Vision and Pattern Recognition
More informationFacial Expression Biometrics Using Tracker Displacement Features
Facial Expression Biometrics Using Tracker Displacement Features Sergey Tulyakov 1, Thomas Slowe 2,ZhiZhang 1, and Venu Govindaraju 1 1 Center for Unified Biometrics and Sensors University at Buffalo,
More informationGeneralization of a Vision-Based Computational Model of Mind-Reading
Generalization of a Vision-Based Computational Model of Mind-Reading Rana el Kaliouby and Peter Robinson Computer Laboratory, University of Cambridge, 5 JJ Thomson Avenue, Cambridge UK CB3 FD Abstract.
More informationA framework for the Recognition of Human Emotion using Soft Computing models
A framework for the Recognition of Human Emotion using Soft Computing models Md. Iqbal Quraishi Dept. of Information Technology Kalyani Govt Engg. College J Pal Choudhury Dept. of Information Technology
More informationHUMAN EMOTION DETECTION THROUGH FACIAL EXPRESSIONS
th June. Vol.88. No. - JATIT & LLS. All rights reserved. ISSN: -8 E-ISSN: 87- HUMAN EMOTION DETECTION THROUGH FACIAL EXPRESSIONS, KRISHNA MOHAN KUDIRI, ABAS MD SAID AND M YUNUS NAYAN Computer and Information
More informationAnalysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information
Analysis of Emotion Recognition using Facial Expressions, Speech and Multimodal Information C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. M. Lee, A. Kazemzadeh, S. Lee, U. Neumann, S. Narayanan Emotion
More informationFERA Second Facial Expression Recognition and Analysis Challenge
FERA 2015 - Second Facial Expression Recognition and Analysis Challenge Michel F. Valstar 1, Timur Almaev 1, Jeffrey M. Girard 2, Gary McKeown 3, Marc Mehu 4, Lijun Yin 5, Maja Pantic 6,7 and Jeffrey F.
More informationFacial Expression Classification Using Convolutional Neural Network and Support Vector Machine
Facial Expression Classification Using Convolutional Neural Network and Support Vector Machine Valfredo Pilla Jr, André Zanellato, Cristian Bortolini, Humberto R. Gamba and Gustavo Benvenutti Borba Graduate
More informationAffective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results
Affective pictures and emotion analysis of facial expressions with local binary pattern operator: Preliminary results Seppo J. Laukka 1, Antti Rantanen 1, Guoying Zhao 2, Matti Taini 2, Janne Heikkilä
More informationEmotion Affective Color Transfer Using Feature Based Facial Expression Recognition
, pp.131-135 http://dx.doi.org/10.14257/astl.2013.39.24 Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition SeungTaek Ryoo and Jae-Khun Chang School of Computer Engineering
More informationFacial Action Unit Detection by Cascade of Tasks
Facial Action Unit Detection by Cascade of Tasks Xiaoyu Ding Wen-Sheng Chu 2 Fernando De la Torre 2 Jeffery F. Cohn 2,3 Qiao Wang School of Information Science and Engineering, Southeast University, Nanjing,
More informationDetection of Facial Landmarks from Neutral, Happy, and Disgust Facial Images
Detection of Facial Landmarks from Neutral, Happy, and Disgust Facial Images Ioulia Guizatdinova and Veikko Surakka Research Group for Emotions, Sociality, and Computing Tampere Unit for Computer-Human
More informationIMPLEMENTATION OF AN AUTOMATED SMART HOME CONTROL FOR DETECTING HUMAN EMOTIONS VIA FACIAL DETECTION
IMPLEMENTATION OF AN AUTOMATED SMART HOME CONTROL FOR DETECTING HUMAN EMOTIONS VIA FACIAL DETECTION Lim Teck Boon 1, Mohd Heikal Husin 2, Zarul Fitri Zaaba 3 and Mohd Azam Osman 4 1 Universiti Sains Malaysia,
More informationR Jagdeesh Kanan* et al. International Journal of Pharmacy & Technology
ISSN: 0975-766X CODEN: IJPTFI Available Online through Research Article www.ijptonline.com FACIAL EMOTION RECOGNITION USING NEURAL NETWORK Kashyap Chiranjiv Devendra, Azad Singh Tomar, Pratigyna.N.Javali,
More informationA Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China
A Vision-based Affective Computing System Jieyu Zhao Ningbo University, China Outline Affective Computing A Dynamic 3D Morphable Model Facial Expression Recognition Probabilistic Graphical Models Some
More informationReal-time Automatic Deceit Detection from Involuntary Facial Expressions
Real-time Automatic Deceit Detection from Involuntary Facial Expressions Zhi Zhang, Vartika Singh, Thomas E. Slowe, Sergey Tulyakov, and Venugopal Govindaraju Center for Unified Biometrics and Sensors
More informationFacial Emotion Recognition with Facial Analysis
Facial Emotion Recognition with Facial Analysis İsmail Öztel, Cemil Öz Sakarya University, Faculty of Computer and Information Sciences, Computer Engineering, Sakarya, Türkiye Abstract Computer vision
More informationFace Emotions and Short Surveys during Automotive Tasks
Face Emotions and Short Surveys during Automotive Tasks LEE QUINTANAR, PETE TRUJILLO, AND JEREMY WATSON March 2016 J.D. Power A Global Marketing Information Company jdpower.com Introduction Facial expressions
More informationEMOTION CLASSIFICATION: HOW DOES AN AUTOMATED SYSTEM COMPARE TO NAÏVE HUMAN CODERS?
EMOTION CLASSIFICATION: HOW DOES AN AUTOMATED SYSTEM COMPARE TO NAÏVE HUMAN CODERS? Sefik Emre Eskimez, Kenneth Imade, Na Yang, Melissa Sturge- Apple, Zhiyao Duan, Wendi Heinzelman University of Rochester,
More informationAutomatic Coding of Facial Expressions Displayed During Posed and Genuine Pain
Automatic Coding of Facial Expressions Displayed During Posed and Genuine Pain Gwen C. Littlewort Machine Perception Lab, Institute for Neural Computation University of California, San Diego La Jolla,
More informationEmotion Detection Through Facial Feature Recognition
Emotion Detection Through Facial Feature Recognition James Pao jpao@stanford.edu Abstract Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions.
More informationA Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions
A Deep Learning Approach for Subject Independent Emotion Recognition from Facial Expressions VICTOR-EMIL NEAGOE *, ANDREI-PETRU BĂRAR *, NICU SEBE **, PAUL ROBITU * * Faculty of Electronics, Telecommunications
More informationDimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners
Dimensional Emotion Prediction from Spontaneous Head Gestures for Interaction with Sensitive Artificial Listeners Hatice Gunes and Maja Pantic Department of Computing, Imperial College London 180 Queen
More informationLocal Image Structures and Optic Flow Estimation
Local Image Structures and Optic Flow Estimation Sinan KALKAN 1, Dirk Calow 2, Florentin Wörgötter 1, Markus Lappe 2 and Norbert Krüger 3 1 Computational Neuroscience, Uni. of Stirling, Scotland; {sinan,worgott}@cn.stir.ac.uk
More informationA Unified Probabilistic Framework For Measuring The Intensity of Spontaneous Facial Action Units
A Unified Probabilistic Framework For Measuring The Intensity of Spontaneous Facial Action Units Yongqiang Li 1, S. Mohammad Mavadati 2, Mohammad H. Mahoor and Qiang Ji Abstract Automatic facial expression
More informationFace Emotions and Short Surveys during Automotive Tasks. April 2016
Face Emotions and Short Surveys during Automotive Tasks April 2016 Presented at the 2016 Council of American Survey Research Organizations (CASRO) Digital Conference, March 2016 A Global Marketing Information
More informationDeep Learning based FACS Action Unit Occurrence and Intensity Estimation
Deep Learning based FACS Action Unit Occurrence and Intensity Estimation Amogh Gudi, H. Emrah Tasli, Tim M. den Uyl, Andreas Maroulis Vicarious Perception Technologies, Amsterdam, The Netherlands Abstract
More informationOn Shape And the Computability of Emotions X. Lu, et al.
On Shape And the Computability of Emotions X. Lu, et al. MICC Reading group 10.07.2013 1 On Shape and the Computability of Emotion X. Lu, P. Suryanarayan, R. B. Adams Jr., J. Li, M. G. Newman, J. Z. Wang
More informationAutomatic detection of a driver s complex mental states
Automatic detection of a driver s complex mental states Zhiyi Ma 1, Marwa Mahmoud 2, Peter Robinson 2, Eduardo Dias 3, and Lee Skrypchuk 3 1 Department of Engineering, University of Cambridge, Cambridge,
More informationRecognizing Emotions from Facial Expressions Using Neural Network
Recognizing Emotions from Facial Expressions Using Neural Network Isidoros Perikos, Epaminondas Ziakopoulos, Ioannis Hatzilygeroudis To cite this version: Isidoros Perikos, Epaminondas Ziakopoulos, Ioannis
More informationAutomatic Classification of Perceived Gender from Facial Images
Automatic Classification of Perceived Gender from Facial Images Joseph Lemley, Sami Abdul-Wahid, Dipayan Banik Advisor: Dr. Razvan Andonie SOURCE 2016 Outline 1 Introduction 2 Faces - Background 3 Faces
More informationDiscovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired
Discovering Facial Expressions for States of Amused, Persuaded, Informed, Sentimental and Inspired Daniel McDuff Microsoft Research, Redmond, WA, USA This work was performed while at Affectiva damcduff@microsoftcom
More informationAdvanced FACS Methodological Issues
7/9/8 Advanced FACS Methodological Issues Erika Rosenberg, University of California, Davis Daniel Messinger, University of Miami Jeffrey Cohn, University of Pittsburgh The th European Conference on Facial
More informationEMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS
EMOTION DETECTION THROUGH SPEECH AND FACIAL EXPRESSIONS 1 KRISHNA MOHAN KUDIRI, 2 ABAS MD SAID AND 3 M YUNUS NAYAN 1 Computer and Information Sciences, Universiti Teknologi PETRONAS, Malaysia 2 Assoc.
More informationThe Role of Face Parts in Gender Recognition
The Role of Face Parts in Gender Recognition Yasmina Andreu Ramón A. Mollineda Pattern Analysis and Learning Section Computer Vision Group University Jaume I of Castellón (Spain) Y. Andreu, R.A. Mollineda
More informationCOMPARISON BETWEEN GMM-SVM SEQUENCE KERNEL AND GMM: APPLICATION TO SPEECH EMOTION RECOGNITION
Journal of Engineering Science and Technology Vol. 11, No. 9 (2016) 1221-1233 School of Engineering, Taylor s University COMPARISON BETWEEN GMM-SVM SEQUENCE KERNEL AND GMM: APPLICATION TO SPEECH EMOTION
More informationNMF-Density: NMF-Based Breast Density Classifier
NMF-Density: NMF-Based Breast Density Classifier Lahouari Ghouti and Abdullah H. Owaidh King Fahd University of Petroleum and Minerals - Department of Information and Computer Science. KFUPM Box 1128.
More informationUsing Affect Awareness to Modulate Task Experience: A Study Amongst Pre-Elementary School Kids
Proceedings of the Twenty-Seventh International Florida Artificial Intelligence Research Society Conference Using Affect Awareness to Modulate Task Experience: A Study Amongst Pre-Elementary School Kids
More informationAUTOMATIC DETECTION AND INTENSITY ESTIMATION OF SPONTANEOUS SMILES. by Jeffrey M. Girard B.A. in Psychology/Philosophy, University of Washington, 2009
AUTOMATIC DETECTION AND INTENSITY ESTIMATION OF SPONTANEOUS SMILES by Jeffrey M. Girard B.A. in Psychology/Philosophy, University of Washington, 2009 Submitted to the Graduate Faculty of The Dietrich School
More informationEnhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC
Enhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC Sushma Choudhar 1, Sachin Puntambekar 2 1 Research Scholar-Digital Communication Medicaps Institute of Technology &
More informationFusion of visible and thermal images for facial expression recognition
Front. Comput. Sci., 2014, 8(2): 232 242 DOI 10.1007/s11704-014-2345-1 Fusion of visible and thermal images for facial expression recognition Shangfei WANG 1,2, Shan HE 1,2,YueWU 3, Menghua HE 1,2,QiangJI
More informationAutomated Tessellated Fundus Detection in Color Fundus Images
University of Iowa Iowa Research Online Proceedings of the Ophthalmic Medical Image Analysis International Workshop 2016 Proceedings Oct 21st, 2016 Automated Tessellated Fundus Detection in Color Fundus
More informationA Study of Facial Expression Reorganization and Local Binary Patterns
A Study of Facial Expression Reorganization and Local Binary Patterns Poonam Verma #1, Deepshikha Rathore *2 #1 MTech Scholar,Sanghvi Innovative Academy Indore *2 Asst.Professor,Sanghvi Innovative Academy
More informationFacial Feature Model for Emotion Recognition Using Fuzzy Reasoning
Facial Feature Model for Emotion Recognition Using Fuzzy Reasoning Renan Contreras, Oleg Starostenko, Vicente Alarcon-Aquino, and Leticia Flores-Pulido CENTIA, Department of Computing, Electronics and
More informationRecognition of Facial Expressions for Images using Neural Network
Recognition of Facial Expressions for Images using Neural Network Shubhangi Giripunje Research Scholar, Dept.of Electronics Engg., GHRCE, Nagpur, India Preeti Bajaj Senior IEEE Member, Professor, Dept.of
More informationPersonalized Facial Attractiveness Prediction
Personalized Facial Attractiveness Prediction Jacob Whitehill and Javier R. Movellan Machine Perception Laboratory University of California, San Diego La Jolla, CA 92093, USA {jake,movellan}@mplab.ucsd.edu
More informationPart III. Chapter 14 Insights on spontaneous facial expressions from automatic expression measurement
To appear in Giese,M. Curio, C., Bulthoff, H. (Eds.) Dynamic Faces: Insights from Experiments and Computation. MIT Press. 2009. Part III Chapter 14 Insights on spontaneous facial expressions from automatic
More informationBrain Tumor segmentation and classification using Fcm and support vector machine
Brain Tumor segmentation and classification using Fcm and support vector machine Gaurav Gupta 1, Vinay singh 2 1 PG student,m.tech Electronics and Communication,Department of Electronics, Galgotia College
More informationRecognition of facial expressions using Gabor wavelets and learning vector quantization
Engineering Applications of Artificial Intelligence 21 (2008) 1056 1064 www.elsevier.com/locate/engappai Recognition of facial expressions using Gabor wavelets and learning vector quantization Shishir
More informationClassroom Data Collection and Analysis using Computer Vision
Classroom Data Collection and Analysis using Computer Vision Jiang Han Department of Electrical Engineering Stanford University Abstract This project aims to extract different information like faces, gender
More informationIEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 5, JULY
IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 19, NO. 5, JULY 2011 1057 A Framework for Automatic Human Emotion Classification Using Emotion Profiles Emily Mower, Student Member, IEEE,
More informationIdentification of Neuroimaging Biomarkers
Identification of Neuroimaging Biomarkers Dan Goodwin, Tom Bleymaier, Shipra Bhal Advisor: Dr. Amit Etkin M.D./PhD, Stanford Psychiatry Department Abstract We present a supervised learning approach to
More informationPERFORMANCE ANALYSIS OF THE TECHNIQUES EMPLOYED ON VARIOUS DATASETS IN IDENTIFYING THE HUMAN FACIAL EMOTION
PERFORMANCE ANALYSIS OF THE TECHNIQUES EMPLOYED ON VARIOUS DATASETS IN IDENTIFYING THE HUMAN FACIAL EMOTION Usha Mary Sharma 1, Jayanta Kumar Das 2, Trinayan Dutta 3 1 Assistant Professor, 2,3 Student,
More informationEmotion AI, Real-Time Emotion Detection using CNN
Emotion AI, Real-Time Emotion Detection using CNN Tanner Gilligan M.S. Computer Science Stanford University tanner12@stanford.edu Baris Akis B.S. Computer Science Stanford University bakis@stanford.edu
More informationarxiv: v1 [cs.lg] 4 Feb 2019
Machine Learning for Seizure Type Classification: Setting the benchmark Subhrajit Roy [000 0002 6072 5500], Umar Asif [0000 0001 5209 7084], Jianbin Tang [0000 0001 5440 0796], and Stefan Harrer [0000
More informationAutomated facial expression measurement: Recent applications to basic research in human behavior, learning, and education
1 Automated facial expression measurement: Recent applications to basic research in human behavior, learning, and education Marian Stewart Bartlett and Jacob Whitehill, Institute for Neural Computation,
More informationValence-arousal evaluation using physiological signals in an emotion recall paradigm. CHANEL, Guillaume, ANSARI ASL, Karim, PUN, Thierry.
Proceedings Chapter Valence-arousal evaluation using physiological signals in an emotion recall paradigm CHANEL, Guillaume, ANSARI ASL, Karim, PUN, Thierry Abstract The work presented in this paper aims
More informationCPSC81 Final Paper: Facial Expression Recognition Using CNNs
CPSC81 Final Paper: Facial Expression Recognition Using CNNs Luis Ceballos Swarthmore College, 500 College Ave., Swarthmore, PA 19081 USA Sarah Wallace Swarthmore College, 500 College Ave., Swarthmore,
More informationVital Responder: Real-time Health Monitoring of First- Responders
Vital Responder: Real-time Health Monitoring of First- Responders Ye Can 1,2 Advisors: Miguel Tavares Coimbra 2, Vijayakumar Bhagavatula 1 1 Department of Electrical & Computer Engineering, Carnegie Mellon
More informationTWO HANDED SIGN LANGUAGE RECOGNITION SYSTEM USING IMAGE PROCESSING
134 TWO HANDED SIGN LANGUAGE RECOGNITION SYSTEM USING IMAGE PROCESSING H.F.S.M.Fonseka 1, J.T.Jonathan 2, P.Sabeshan 3 and M.B.Dissanayaka 4 1 Department of Electrical And Electronic Engineering, Faculty
More informationAge Estimation based on Multi-Region Convolutional Neural Network
Age Estimation based on Multi-Region Convolutional Neural Network Ting Liu, Jun Wan, Tingzhao Yu, Zhen Lei, and Stan Z. Li 1 Center for Biometrics and Security Research & National Laboratory of Pattern
More informationBayesian Face Recognition Using Gabor Features
Bayesian Face Recognition Using Gabor Features Xiaogang Wang, Xiaoou Tang Department of Information Engineering The Chinese University of Hong Kong Shatin, Hong Kong {xgwang1,xtang}@ie.cuhk.edu.hk Abstract
More informationLearning to Rank Authenticity from Facial Activity Descriptors Otto von Guericke University, Magdeburg - Germany
Learning to Rank Authenticity from Facial s Otto von Guericke University, Magdeburg - Germany Frerk Saxen, Philipp Werner, Ayoub Al-Hamadi The Task Real or Fake? Dataset statistics Training set 40 Subjects
More informationA Face-House Paradigm for Architectural Scene Analysis
A Face-House Paradigm for Architectural Scene Analysis Stephan K. Chalup Newcastle Robotics Lab School of Electrical Eng. and Computer Science The University of Newcastle NSW 2308 Australia stephan.chalup@newcastle.edu.au
More informationAssessment of Pain Using Facial Pictures Taken with a Smartphone
2015 IEEE 39th Annual International Computers, Software & Applications Conference Assessment of Pain Using Facial Pictures Taken with a Smartphone Mohammad Adibuzzaman 1, Colin Ostberg 1, Sheikh Ahamed
More informationACTIVE APPEARANCE MODELS FOR AFFECT RECOGNITION USING FACIAL EXPRESSIONS. Matthew Stephen Ratliff
ACTIVE APPEARANCE MODELS FOR AFFECT RECOGNITION USING FACIAL EXPRESSIONS Matthew Stephen Ratliff A Thesis Submitted to the University of North Carolina Wilmington in Partial Fulfillment of the Requirements
More informationAudio-Visual Emotion Recognition in Adult Attachment Interview
Audio-Visual Emotion Recognition in Adult Attachment Interview Zhihong Zeng, Yuxiao Hu, Glenn I. Roisman, Zhen Wen, Yun Fu and Thomas S. Huang University of Illinois at Urbana-Champaign IBM T.J.Watson
More informationAffect Recognition for Interactive Companions
Affect Recognition for Interactive Companions Ginevra Castellano School of Electronic Engineering and Computer Science Queen Mary University of London, UK ginevra@dcs.qmul.ac.uk Ruth Aylett School of Maths
More informationApoptosis Detection for Adherent Cell Populations in Time-lapse Phase-contrast Microscopy Images
Apoptosis Detection for Adherent Cell Populations in Time-lapse Phase-contrast Microscopy Images Seungil Huh 1, Dai Fei Elmer Ker 2, Hang Su 1, and Takeo Kanade 1 1 Robotics Institute, Carnegie Mellon
More informationEmotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis
Emotion Detection Using Physiological Signals M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis May 10 th, 2011 Outline Emotion Detection Overview EEG for Emotion Detection Previous
More informationA Study on Automatic Age Estimation using a Large Database
A Study on Automatic Age Estimation using a Large Database Guodong Guo WVU Guowang Mu NCCU Yun Fu BBN Technologies Charles Dyer UW-Madison Thomas Huang UIUC Abstract In this paper we study some problems
More informationFace Gender Classification on Consumer Images in a Multiethnic Environment
Face Gender Classification on Consumer Images in a Multiethnic Environment Wei Gao and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn
More informationSmileMaze: A Tutoring System in Real-Time Facial Expression Perception and Production in Children with Autism Spectrum Disorder
SmileMaze: A Tutoring System in Real-Time Facial Expression Perception and Production in Children with Autism Spectrum Disorder Jeff Cockburn 1, Marni Bartlett 2, James Tanaka 1, Javier Movellan 2, Matt
More informationDISCRETE WAVELET PACKET TRANSFORM FOR ELECTROENCEPHALOGRAM- BASED EMOTION RECOGNITION IN THE VALENCE-AROUSAL SPACE
DISCRETE WAVELET PACKET TRANSFORM FOR ELECTROENCEPHALOGRAM- BASED EMOTION RECOGNITION IN THE VALENCE-AROUSAL SPACE Farzana Kabir Ahmad*and Oyenuga Wasiu Olakunle Computational Intelligence Research Cluster,
More informationUsing Computational Models to Understand ASD Facial Expression Recognition Patterns
Using Computational Models to Understand ASD Facial Expression Recognition Patterns Irene Feng Dartmouth College Computer Science Technical Report TR2017-819 May 30, 2017 Irene Feng 2 Literature Review
More informationANALYSIS OF FACIAL FEATURES OF DRIVERS UNDER COGNITIVE AND VISUAL DISTRACTIONS
ANALYSIS OF FACIAL FEATURES OF DRIVERS UNDER COGNITIVE AND VISUAL DISTRACTIONS Nanxiang Li and Carlos Busso Multimodal Signal Processing (MSP) Laboratory Department of Electrical Engineering, The University
More informationFrom Dials to Facial Coding: Automated Detection of Spontaneous Facial Expressions for Media Research
From Dials to Facial Coding: Automated Detection of Spontaneous Facial Expressions for Media Research Evan Kodra, Thibaud Senechal, Daniel McDuff, Rana el Kaliouby Abstract Typical consumer media research
More informationHierarchical Age Estimation from Unconstrained Facial Images
Hierarchical Age Estimation from Unconstrained Facial Images STIC-AmSud Jhony Kaesemodel Pontes Department of Electrical Engineering Federal University of Paraná - Supervisor: Alessandro L. Koerich (/PUCPR
More informationDesign of Palm Acupuncture Points Indicator
Design of Palm Acupuncture Points Indicator Wen-Yuan Chen, Shih-Yen Huang and Jian-Shie Lin Abstract The acupuncture points are given acupuncture or acupressure so to stimulate the meridians on each corresponding
More informationSound Texture Classification Using Statistics from an Auditory Model
Sound Texture Classification Using Statistics from an Auditory Model Gabriele Carotti-Sha Evan Penn Daniel Villamizar Electrical Engineering Email: gcarotti@stanford.edu Mangement Science & Engineering
More informationEstimating smile intensity: A better way
Pattern Recognition Letters journal homepage: www.elsevier.com Estimating smile intensity: A better way Jeffrey M. Girard a,, Jeffrey F. Cohn a,b, Fernando De la Torre b a Department of Psychology, University
More informationGene Selection for Tumor Classification Using Microarray Gene Expression Data
Gene Selection for Tumor Classification Using Microarray Gene Expression Data K. Yendrapalli, R. Basnet, S. Mukkamala, A. H. Sung Department of Computer Science New Mexico Institute of Mining and Technology
More informationDevelopment of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition
International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 12, Issue 9 (September 2016), PP.67-72 Development of novel algorithm by combining
More informationTowards Multimodal Emotion Recognition: A New Approach
Towards Multimodal Emotion Recognition: A New Approach Marco Paleari TEleRobotics and Applications Italian Institute of Technology Genoa, Italy marco.paleari @ iit.it Benoit Huet Multimedia Department
More information