Enhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPCA


Sushma Choudhar 1, Sachin Puntambekar 2
1 Research Scholar, Digital Communication, Medicaps Institute of Technology & Management
2 Asst. Professor, Digital Communication, Medicaps Institute of Technology & Management

Abstract: The human face is a principal part of the body and can be used for many purposes. Facial expressions, resulting from movements of the facial muscles, are changes of the face in response to a person's internal emotional states, intentions, or social communications. Computer recognition of facial expressions has many important applications in intelligent human-computer interaction, awareness systems, surveillance and security, medical diagnosis, law enforcement, automated tutoring systems, and more. A face detection and facial expression recognition system is therefore a valuable source of information about a user's current state, behaviour, and expressions. Biometrics is a well-established research area owing to its many human-interaction applications. Because LBP converts a high-definition image to greyscale, it reduces the quality of the data it works on, and PCA suffers from the involvement of the least important facial parts. The weakness of both techniques is a loss of efficiency and accuracy caused by considering the least important facial features during the recognition process. The 2DPCA and E2DPCA algorithms can be used as replacements to obtain better accuracy and feature extraction. This work proposes to replace LBP with E2DPCA, an advanced variant of PCA, to obtain a better level of performance. E2DPCA removes the least important parts of the face by estimating an average value for individual classes rather than for the whole face. Subsequently, a modular approach is used to further refine the extraction process. This work also proposes the Euclidean distance measure for the expression classifier
and the Yale algorithm for automatic face detection. A notable aspect of the proposed work is its evaluation on user images: the complete work is evaluated on the JAFFE, Cohn-Kanade, and self-captured image datasets.

Keywords: ME2DPCA, Euclidean distance, facial expression, expression recognition.

I. INTRODUCTION

Facial expressions, resulting from movements of the facial muscles, are changes of the face in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin (1872) was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioural scientists (Ekman, Friesen, and Hager, 2002). Psychological studies (Mehrabian, 1968; Ambady and Rosenthal, 1992) suggest that facial expressions, as the main mode of non-verbal communication, play a vital role in human communication. Some examples of facial expressions are shown in Figure 1.

Figure 1: Facial Expressions of Barack Obama

Computer recognition of facial expressions has many important applications in intelligent human-computer interaction, awareness systems, surveillance and security, medical diagnosis, law enforcement, automated tutoring systems, and more. In this work, we present our recent work on recognizing facial expressions using discriminative local statistical features; in particular, variants of principal component analysis are investigated for facial representation. Finally, the current status, open problems, and research directions are discussed. This study investigated whether facial expressions can be accurately identified using the face region only, how much recognition is impaired relative to using facial parts, and what mechanisms account for the recognition advantage of

some expressions. In most prior studies the facial parts were used, but their significance in recognizing the respective facial expressions was not shown. Expression recognition methods use a combination of geometric and appearance-based features. Spatial features are derived from displacements of facial landmarks and carry geometric information. These features are either selected based on prior knowledge or dimension-reduced from a large pool. In this study, we produce a large number of potential spatial features using two combinations of facial landmarks [1].

Facial expressions can be described at different levels. Two mainstream description methods are facial affect (emotion) and facial muscle action (action units). Psychologists suggest that some basic emotions are universally displayed and recognized from facial expressions; the most commonly used descriptors are the six basic emotions: anger, disgust, fear, joy, surprise, and sadness. This is also reflected in the research on automatic facial expression analysis: most facial expression analysis systems developed so far target facial affect analysis and attempt to recognize a set of prototypic emotions. However, little is known about facial expression processing in the visual aspect based on facial parts.

Facial expression recognition has been an active research field for several years, and several attempts have been made to improve the reliability of recognition systems. One highly successful approach is based on Principal Component Analysis (PCA), proposed by Matthew Turk and Alex Pentland in 1991. Since then, researchers have been investigating PCA and using it as a basis for developing successful facial expression recognition techniques. In this work, a novel framework of information extraction is developed for facial expression recognition. A facial expression recognition system is based on machine learning theory; precisely, it is a classification task.
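The PCA (eigenfaces) representation mentioned above can be sketched in a few lines. This is an illustrative outline only, not the authors' code; the function names and toy data are hypothetical:

```python
import numpy as np

def pca_features(images, k):
    """Project flattened face images onto the top-k principal components.

    images: (n_samples, h*w) array of flattened greyscale faces.
    Returns (features, mean, components) so that new faces can be
    projected with (face - mean) @ components.T.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    # SVD of the centered data yields the principal axes without forming
    # the (h*w x h*w) covariance matrix explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                 # top-k "eigenfaces"
    features = centered @ components.T  # (n_samples, k) projections
    return features, mean, components

# Toy example: 10 random 8x8 "faces" reduced to 3 features each.
faces = np.random.rand(10, 64)
feats, mean, comps = pca_features(faces, k=3)
```

The projections (`feats`) rather than raw pixels are then passed to the classifier, which is the basis of the eigenface family of methods.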
The input to the classifier is a set of features retrieved from the face region in the previous stage; this feature set is formed to describe the facial expression. Classification requires supervised training, so the training set should consist of labeled data. Many machine learning techniques exist for the classification task, namely K-Nearest Neighbors, Artificial Neural Networks, Support Vector Machines, Hidden Markov Models, rule-based expert systems, Bayesian Networks, and boosting techniques (AdaBoost, GentleBoost). Three principal issues in the classification task are choosing a good feature set, an efficient machine learning technique, and a diverse training database. The feature set should be composed of features that are discriminative and characteristic of a particular expression.

II. PROBLEM STATEMENT

The facial expression research community is shifting its focus to the recognition of spontaneous expressions. As discussed earlier, the major challenge researchers face is the non-availability of spontaneous expression data. Capturing spontaneous expressions on images and video is one of the biggest challenges: if the subjects become aware of the recording and data capture process, their expressions immediately lose their authenticity [2]. To overcome this, the authors of [2] used a hidden camera to record the subjects' expressions and later asked for their consent. Although building a truly authentic expression database (one where the subjects are not aware that their expressions are being recorded) is extremely challenging, a semi-authentic expression database (one where the subjects watch emotion-eliciting videos but are aware that they are being recorded) can be built fairly easily. One of the best efforts in this direction in recent years is the creation of the MMI database. Along with posed expressions, spontaneous expressions have also been included; furthermore, the database is web-based, searchable, and downloadable.
Many systems still require manual intervention: for the purpose of tracking the face, many require facial points to be located manually on the first frame. The challenge is to make the system fully automatic, and in recent years there have been advances in building fully automatic facial expression recognizers.

III. PROPOSED SOLUTION

The facial expression recognition system is designed for recognizing facial expressions using variants of PCA. We apply Modular Equable Two-Dimensional Principal Component Analysis (ME2DPCA) for information extraction, which is performed on the face and facial-part images, while classification is performed using various classifiers; the final classifier chosen is presented in the flow diagram of the proposed system shown in Figure 2.

Figure 2: System Architecture of Facial Expression Recognition System

We consider facial expression classification in the framework of measuring similarities, mainly because it is conceptually simple and does not require direct access to the features of the samples; it only requires a similarity function to be defined for any pair of samples. Thus, a metric structure that adapts to the feature-space embeddings is preferred over the default Euclidean metric as a measure of similarity. Inspired by the success of Metric Learning (ML) in learning domain-specific distance metrics, we propose an expression classification method based on ME2DPCA. In particular, a generalized Euclidean distance matrix is learned that satisfies pairwise similarity/dissimilarity constraints on the distance between expression feature vectors. Afterwards, classifiers equipped with this distance metric are used to assign the majority class label to the query expression. Many researchers have worked on face recognition, and then on facial expression recognition, to reduce the unwanted information in an input image. The modular E2DPCA is based on dividing the face image into four parts, applying feature extraction to each part, and then applying the classifiers. Rather than working on arbitrary parts of the image, we chose to work on the facial parts that are the most significant and important parts of a face.
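As a rough sketch of the modular idea described above — dividing the face into four blocks and extracting features per block — the following illustrates standard 2DPCA applied per module. The "equable" refinement of E2DPCA is not reproduced here, and all names and toy data are hypothetical, so this should be read as an outline rather than the authors' implementation:

```python
import numpy as np

def split_modules(face, rows=2, cols=2):
    """Divide an h x w face image into rows*cols equal sub-images (modules)."""
    h, w = face.shape
    mh, mw = h // rows, w // cols
    return [face[r*mh:(r+1)*mh, c*mw:(c+1)*mw]
            for r in range(rows) for c in range(cols)]

def twodpca_projection(module_stack, d):
    """Standard 2DPCA on a stack of (n, mh, mw) sub-images.

    The image scatter matrix G = mean((A - mean)^T (A - mean)) is only
    mw x mw, so images never need to be flattened into long vectors.
    """
    mean = module_stack.mean(axis=0)
    centered = module_stack - mean
    g = sum(a.T @ a for a in centered) / len(module_stack)
    _, eigvecs = np.linalg.eigh(g)    # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :d]    # top-d projection axes

# Toy run: 20 random 16x16 "faces", four 8x8 modules, 3 axes per module.
faces = np.random.rand(20, 16, 16)
projections = []
for m in range(4):
    stack = np.stack([split_modules(f)[m] for f in faces])
    projections.append(twodpca_projection(stack, d=3))
feature = split_modules(faces[0])[0] @ projections[0]  # 8x3 per module
```

Each module thus contributes its own compact feature matrix, so local regions such as the eyes, nose, and mouth can influence the classifier independently.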

Figure 3: System Architecture of Proposed System

Figure 3 represents the system architecture of the proposed system. Facial expression recognition has three stages: face detection, feature extraction, and classification.

IV. EXPERIMENTAL ANALYSIS

In this method we apply E2DPCA on the whole image, and the resulting E2DPCA components are used as input to the classifier. We have used Euclidean distance as a nearest-neighbour classifier.

Figure 4: Process Flow for Proposed System

To summarize the experiments and compare the methods on the basis of recognition accuracy, we present bar graphs for the two methods, E2DPCA and Modular E2DPCA.

Figure 5: Comparison of Recognition Rate for Six Expressions using E2DPCA & ME2DPCA

Figure 5 shows that the recognition accuracy of the Modular E2DPCA method is higher than that of E2DPCA for five expressions: Angry, Disgust, Fear, Happy, and Sad. The accuracy for Surprise is equivalent in both cases. Table 1 presents the recognition rates of E2DPCA and ME2DPCA.
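The Euclidean nearest-neighbour step used above can be illustrated as follows; this is a minimal sketch with hypothetical names and toy feature matrices, not the evaluation code:

```python
import numpy as np

def nearest_expression(train_feats, train_labels, query):
    """Return the label of the training sample closest to the query
    under Euclidean (Frobenius) distance between feature matrices."""
    dists = [np.linalg.norm(f - query) for f in train_feats]
    return train_labels[int(np.argmin(dists))]

# Toy feature matrices for two expression classes.
train = [np.array([[0.0, 0.0]]), np.array([[5.0, 5.0]])]
labels = ["happy", "surprise"]
print(nearest_expression(train, labels, np.array([[0.3, 0.1]])))  # happy
```

In practice `train_feats` would hold the (M)E2DPCA feature matrices of the training images, and recognition accuracy is the fraction of test images whose nearest neighbour carries the correct expression label.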

Table 1: Recognition Rate for Six Expressions using E2DPCA & ME2DPCA

Expression   E2DPCA (%)   ME2DPCA (%)
Angry        70           100
Disgust      60           95
Fear         60           95
Happy        95           100
Sad          65           90
Surprise     80           80

A. Majumder and co-authors worked on various techniques, namely SVM, SOM, RBFN, and LBP; the most promising result was obtained using PCA with SVM, i.e. 65.78% [40]. S. Kumar and co-authors proposed to align neutral images of different subjects in the feature domain using Procrustes analysis; subsequently, the shape-free neutral images were modelled using Principal Component Analysis (PCA), MDA and SVM were used for classification, and the accuracy was 81.3% [4]. Y. Zhu, X. Li and G. Wu proposed face expression recognition based on equable principal component analysis and linear regression classification, with an accuracy of 91.11% [32]. C. Gacav, B. Benligiray and C. Topal proposed a sequential forward feature selection method for facial expression recognition, with an accuracy of 88.71% [11]. Table 2 lists all the methods discussed here with their accuracies and the datasets used.

Table 2: Methods and Their Recognition Accuracy

S.No   Dataset       Method Used                            Accuracy (%)
1      Cohn-Kanade   PCA+SVM                                65.78
2      Cohn-Kanade   PCA+MDA+SVM                            81.3
3      Yaleface      EPCA+LRC                               86.7
4      JAFFE         EPCA+LRC                               91.1
5      Cohn-Kanade   Selected Spatial Features Method       88.71
6      Cohn-Kanade   ME2DPCA+Euclidean (Proposed Method)    93.33

V. CONCLUSION

We present a unique expression classification method using Modular Equable Two-Dimensional Principal Component Analysis and a Euclidean distance classifier. A modular system for facial expression recognition was developed. Though the goal was to classify Ekman's basic expressions, the modular approach can be extended to any number of classes. Feature descriptors based on Principal Component Analysis and Equable Two-Dimensional Principal Component Analysis were formulated in terms of pixels around key points.
This approach can be seen as a generalization of the local appearance-based approaches often found in the literature, which divide the image into non-overlapping blocks of the same size. The Viola-Jones method was used to select significant key point locations; these locations are important facial features such as the eyes, nose, and mouth. Initially these features were detected manually; later, the AdaBoost method was used for automatic detection. To maintain an equilibrium between selecting useful information and discarding unwanted or less important face regions, we further applied the AdaBoost method to obtain the most important information from a face image, i.e. the central region of the face composed of the eyes, nose, and mouth. To derive the significance of the facial parts, we used the most important parts of the face as the modules of ME2DPCA. These are not merely four parts of the face image but four identification

pillars: the left eye, the right eye, the nose, and the mouth. The Euclidean classifier with ME2DPCA performs best, with an overall recognition accuracy of 93.33%.

REFERENCES

[1] C. Gacav, B. Benligiray and C. Topal, "Sequential forward feature selection for facial expression recognition," 2016 24th Signal Processing and Communication Application Conference (SIU), Zonguldak, 2016, pp. 1481-1484.
[2] Nicu Sebe, Michael S. Lew, Yafei Sun, Ira Cohen, Theo Gevers, and Thomas S. Huang, "Authentic Facial Expression Analysis," Image and Vision Computing, Volume 25, 2007, pp. 1856-1863.
[3] Karen L. Schmidt and Jeffrey F. Cohn, "Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research," Am. J. Phys. Anthropol., Volume 116, Supplement 33, 2001, pp. 3-14.
[4] Matthias Scheutz, "Useful Roles of Emotions in Artificial Agents: A Case Study from Artificial Life," American Association for Artificial Intelligence, 2004.
[5] P. M. Forsman, "Efficient Driver Drowsiness Detection at Moderate Levels of Drowsiness," Accident Analysis & Prevention, 2012.
[6] David Matsumoto, "Evaluating Truthfulness and Detecting Deception," FBI Law Enforcement Bulletin, 2011.
[7] Prachi Gupta and Avinash Dhole, "A Study on Modern Era of Diverse Facial Expression Recognition Techniques," International Journal of Engineering, Economics and Management, ISSN 2319-7927, Volume 2, Issue 1, 2013.
[8] K. Pearson, "On Lines and Planes of Closest Fit to Systems of Points in Space," Philosophical Magazine 2, 1901, pp. 559-572, http://pbil.univlyon1.fr/r/pearson1901.pdf
[9] Harold Hotelling, "Analysis of a Complex of Statistical Variables into Principal Components," Journal of Educational Psychology, Volume 24, 1933, pp. 417-441.
[10] Jie Chen, Guoying Zhao, and Matti Pietikäinen, "An Improved Local Descriptor and Threshold Learning for Unsupervised Dynamic Texture Segmentation," Proceedings of the Twelfth International Conference on Computer Vision, Workshop on Machine Learning for Vision-Based Motion Analysis, 2009, pp. 460-467.
[11] K. Talele, A. Shirsat, T. Uplenchwar and K. Tuckley, "Facial expression recognition using general regression neural network," 2016 IEEE Bombay Section Symposium (IBSS), Baramati, 2016, pp. 1-6.
[12] Di Huang, Caifeng Shan, Mohsen Ardabilian, Yunhong Wang and Liming Chen, "Local Binary Patterns and Its Application to Facial Image Analysis: A Survey," IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews, Volume 41, no. 6, 2011, pp. 765-781.
[13] Timo Ojala, Matti Pietikäinen and David Harwood, "A Comparative Study of Texture Measures with Classification Based on Feature Distributions," Pattern Recognition, Volume 29, 1996, pp. 51-59.
[14] Li Wang and Dong-Chen He, "Texture Classification Using Texture Spectrum," Pattern Recognition, Volume 23, no. 8, 1990, pp. 905-910.
[15] Timo Ojala, Matti Pietikäinen and Topi Mäenpää, "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 24, no. 7, 2002, pp. 971-987.
[16] L. Wang, Y. Zhang, and J. Feng, "On the Euclidean Distance of Images," IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8), 2005, pp. 1334-1339.
[17] John W. Ratcliff and David E. Metzener, "Pattern Matching: The Gestalt Approach," Dr. Dobb's Journal, 1988, p. 46.
[18] Jain, Sunny Bagga, Ramchand Hablani, Narendra Chaudhari, Sanjay Tanwani, "Facial Expression Recognition Using Local Binary Patterns with Different Distance Measures," Intelligent Computing, Networking, and Informatics, Advances in Intelligent Systems and Computing 243, DOI 10.1007/978-81-322-1665-0_86, Springer India, 2014.