R Jagdeesh Kanan* et al. International Journal of Pharmacy & Technology


ISSN: 0975-766X CODEN: IJPTFI
Research Article. Available online through www.ijptonline.com
FACIAL EMOTION RECOGNITION USING NEURAL NETWORK
Kashyap Chiranjiv Devendra, Azad Singh Tomar, Pratigyna. N. Javali, R Jagdeesh Kannan
School of Computing Science and Engineering, VIT University, Chennai.
Email: jagadeeshkannan.r@vit.ac.in
Received on 02-08-2016. Accepted on 25-09-2016.
IJPT Sep-2016 Vol. 8 Issue No.3 18727-18733

Abstract
Facial expression is the most natural and instinctive means for human beings to communicate with one another, yet automatic analysis of human facial expression remains a very challenging area of research in computer vision and machine learning. Facial emotion recognition can therefore be considered a vital vision-based tool for building systems that identify, interpret, process, and simulate human emotions. The traditional approach to facial emotion recognition is to track changes in the facial muscles, which are defined as Action Units (AUs). Although Action Units have proven quite successful for identifying facial expressions, some 7,000 combinations of different AUs have been characterized to distinguish the emotions, which can make the procedure very extensive and time-consuming.

Key Words: Facial Recognition, Facial Expression, Neural Network.

Introduction
Figure 1: Action Units for several facial configurations.

Human emotion is a visible manifestation of affective state, cognitive activity, emotional state and personality. Like face detection, human emotion analysis is also a very challenging area of research in computer vision and machine learning. Neural networks (NN) have found profound success in the area of pattern recognition. By repeatedly showing a neural network inputs classified into groups, the network can be trained to discern the criteria used to classify them, and it can do so in a generalized manner, allowing successful classification of new inputs not used during training. With the explosion of research on emotion in recent years, the application of pattern-recognition technology to emotion detection has become increasingly interesting. Automatic facial expression recognition involves two vital aspects: facial representation and classifier design. Facial representation derives from the original face a set of features that represent it effectively. The features should be well defined, because they are the key to distinguishing different facial expressions. In this study we focus on the movement of two such features, the eyebrows and the eyes, and determine how the variation in these two components helps us differentiate the seven basic emotional states of a human: happiness, fear, anger, sadness, disgust, surprise and neutral.

Literature Survey
The Facial Action Coding System (FACS) is the most commonly used research tool for tracking changes in facial muscular activity. FACS helps translate the varying changes in facial muscles into the appropriate Action Units. It is an anatomically based system for describing all notable facial movements in detail [2]. Each notable component of facial movement is called an Action Unit (AU), and all facial expressions can be decomposed into their constituent AUs [2]. According to Ekman and Friesen, these changes can be encoded into 46 Action Units, by combinations of which we can cover all the basic emotions. Maja Pantic and Ioannis Patras [8] took up the challenge of automatic analysis by recognizing the facial muscle actions generated by different expressions. They applied particle filtering to track 15 fiducial (feature) points on sequences of profile face images and obtained a recognition rate of 87%.

Figure 2: Facial points obtained through particle filtering [8].
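However the points are obtained (particle filtering here, or the swarm and optical-flow trackers discussed next), the tracked facial points ultimately reach a classifier as a flat numeric feature vector paired with an emotion label. A minimal sketch of that encoding, using invented coordinates rather than data from any of the cited systems:

```python
# Sketch: encoding tracked facial points as one flat feature vector.
# The coordinates below are illustrative placeholders, not measured data.

EMOTIONS = ["happiness", "fear", "anger", "sadness",
            "disgust", "surprise", "neutral"]

def encode_points(points):
    """Flatten a list of (x, y) facial-point positions into a single
    feature vector: [x1, y1, x2, y2, ...]."""
    vec = []
    for x, y in points:
        vec.extend([float(x), float(y)])
    return vec

# 10 hypothetical tracked points (eyebrows and eyes) -> 20 features
points = [(12, 34), (18, 33), (25, 35), (40, 35), (47, 33),
          (53, 34), (15, 45), (22, 46), (43, 46), (50, 45)]
features = encode_points(points)
label = EMOTIONS.index("happiness")  # class index of the target emotion

print(len(features))  # 20 values: one x and one y per tracked point
```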

The selection of these 15 facial points is shown in Fig. 2. The Guided Particle Swarm Optimization (GPSO) algorithm [4], a variation of the PSO algorithm implemented by Bashir Mohammed Ghandi, R. Nagarajan and Hazry Desa for facial emotion detection, tracks the relevant points, which are here treated as Action Units (AUs), and is able to detect the six emotions in real time. A drawback of the GPSO algorithm, however, was that image pre-processing had to be performed, so its application was limited to pre-recorded images. They therefore improved the system by implementing the Lucas-Kanade (LK) optical flow algorithm [4]. The LK algorithm keeps track of the positions of the AUs in real time, which eliminates the requirement for pre-processing. They also observed that the Back-Propagation Neural Network (BPNN) [7] has proven more successful in classification-based problems.

Figure 3: Facial features classified by a Bayesian network [5].

After comparing both approaches they concluded that BPNN is better than GPSO in terms of speed, although the accuracy of the BPNN results was slightly lower than that of GPSO. Yoshihiro Miyakoshi and Shohei Kato [5] gave another approach, using a Bayesian network for an emotion detection system with facial features. Bayesian network classifiers infer from the dependencies among the target attribute and the explanatory variables. The system they proposed learns the Bayesian network in two phases, internal and external. The internal phase uses the K2 algorithm to construct causal relations among the facial features, whereas the external phase constructs causal relations between the facial features and the emotions using feature selection. The facial features constructed by the K2 algorithm are depicted in Fig. 3. A facial-components detection method proposed by Byung-Hun Oh and Kwang-Seok Hong [6] uses the histogram method, the blob labeling method, and the MMGC image for face detection, with an accuracy of 81.4%.
Tie Yun and Ling Guan [3] proposed an alternative, introducing fiducial-point localization using scale-invariant-feature-based AdaBoost classifiers, and were able to achieve a 90.2% average recognition rate using Support Vector Machines (SVM). These 26 fiducial points, as per the classification of AUs [2, 10-12], can be described as in the given table. According to Tie Yun and Ling Guan, these feature points extract the most important characteristics of the face, and the points selected should be as few as possible.

Methodology
The raw images were processed for denoising and edge detection, and later for object extraction, using known methods [13-15]. We use a Back-Propagation Neural Network, a feed-forward network in which the neurons are partitioned into layers, with links from each neuron in layer n directed (only) to each neuron in layer n+1. Inputs from the environment are fed into the first layer (the input layer), and outputs from the network are obtained at the last layer (the output layer). The middle layers are referred to as hidden layers. A weight or "connection strength" is associated with each link, and the network is trained by modifying these weights, thereby modifying the network function that maps inputs to outputs.

The architecture of the network we used is shown in Fig. 4. It consists of three layers: an input layer, one hidden layer, and an output layer. The input layer has 20 neurons, one for each x and each y coordinate representing the positions of the 10 facial points we tracked from the video clips. The output layer consists of 7 neurons, one for each of the six basic emotions plus the neutral state.

Figure 4: Structure of the BPNN used for the experiment.

The network is trained using the back-propagation algorithm, which has become the standard training algorithm for artificial neural networks. We apply this method to our facial expression data set to produce the output, which covers the six facial emotions of human beings.
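The three-layer architecture described above can be sketched in plain Python. This is a minimal illustration of a 20-input, 7-output feed-forward network and one back-propagation step; the sigmoid activation, squared-error loss, hidden-layer size and learning rate are assumptions for the sketch, since the paper does not specify them.

```python
import math
import random

random.seed(0)

N_IN, N_HID, N_OUT = 20, 10, 7  # hidden size is an assumed value

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Weight matrices with the bias folded in as a last column.
w_hid = [[random.uniform(-0.5, 0.5) for _ in range(N_IN + 1)]
         for _ in range(N_HID)]
w_out = [[random.uniform(-0.5, 0.5) for _ in range(N_HID + 1)]
         for _ in range(N_OUT)]

def forward(x):
    """Feed a 20-dim input through hidden and output layers."""
    hid = [sigmoid(sum(w * xi for w, xi in zip(row, x + [1.0])))
           for row in w_hid]
    out = [sigmoid(sum(w * hi for w, hi in zip(row, hid + [1.0])))
           for row in w_out]
    return hid, out

def train_step(x, target, lr=0.5):
    """One back-propagation weight update for a single example."""
    hid, out = forward(x)
    # Output-layer deltas: (y - t) * y * (1 - y) for squared error.
    d_out = [(o - t) * o * (1 - o) for o, t in zip(out, target)]
    # Hidden-layer deltas, back-propagated through w_out.
    d_hid = [h * (1 - h) * sum(d_out[k] * w_out[k][j] for k in range(N_OUT))
             for j, h in enumerate(hid)]
    # Gradient-descent updates for both weight matrices.
    for k in range(N_OUT):
        for j, hj in enumerate(hid + [1.0]):
            w_out[k][j] -= lr * d_out[k] * hj
    for j in range(N_HID):
        for i, xi in enumerate(x + [1.0]):
            w_hid[j][i] -= lr * d_hid[j] * xi

# One synthetic example: a 20-dim point vector labelled as class 0.
x = [random.random() for _ in range(N_IN)]
target = [1.0] + [0.0] * 6  # one-hot over the 7 output neurons

def err(y):
    return sum((o - t) ** 2 for o, t in zip(y, target))

before = forward(x)[1]
for _ in range(200):
    train_step(x, target)
after = forward(x)[1]
print(err(after) < err(before))  # repeated updates reduce the error
```

In a real run, each of the 150 subjects' tracked point vectors would be fed through `train_step` over many epochs before the testing phase.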

Results
We provided a data set containing the facial expressions of 150 people, and the estimated output is compared with the expected output to check whether the estimated value is correct.

Simulation
When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects an underlying simulation that supports accurate emotion recognition.

Confusion Matrix
A confusion matrix allows a more detailed analysis than the mere proportion of correct guesses (accuracy).
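A confusion matrix for the seven emotion classes can be built from true and predicted labels as follows; the labels below are invented for illustration and are not the paper's 150-subject results.

```python
EMOTIONS = ["happiness", "fear", "anger", "sadness",
            "disgust", "surprise", "neutral"]
N = len(EMOTIONS)

def confusion_matrix(true_labels, predicted_labels):
    """matrix[i][j] counts samples whose true class is i and whose
    predicted class is j; the diagonal holds the correct guesses."""
    m = [[0] * N for _ in range(N)]
    for t, p in zip(true_labels, predicted_labels):
        m[t][p] += 1
    return m

# Hypothetical labels for 8 test samples (indices into EMOTIONS).
y_true = [0, 0, 1, 2, 3, 4, 5, 6]
y_pred = [0, 1, 1, 2, 3, 4, 5, 6]  # one "happiness" misread as "fear"

cm = confusion_matrix(y_true, y_pred)
accuracy = sum(cm[i][i] for i in range(N)) / len(y_true)
print(accuracy)  # 7 of 8 correct -> 0.875
```

Off-diagonal cells show exactly which emotions are confused with which, which a single accuracy figure cannot reveal.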

Conclusion
In this paper, we have presented the results obtained when a Back-Propagation Neural Network (BPNN) is used for emotion detection. In terms of accuracy, the results were reasonably good. The fast speed of the BPNN approach is achieved at the cost of an initial overhead in data acquisition and training. Overall, because of its remarkable speed in the testing phase, if an application is to be personalized for a single person, say an elderly person living in her house and requiring assistance, then the BPNN is perhaps the better choice, since there is enough time to adequately train the system for this particular person before it becomes operational.

References
1. Facial Action Coding System, Wikipedia.
2. P. Ekman, W.V. Friesen, J.C. Hager, The Facial Action Coding System: A Technique for the Measurement of Facial Movement.
3. Tie Yun and Ling Guan, Automatic Fiducial Points Detection for Facial Expressions Using Scale Invariant Feature.
4. Bashir Mohammed Ghandi, R. Nagarajan and Hazry Desa, Real-Time System for Facial Emotion Detection Using GPSO Algorithm.
5. Yoshihiro Miyakoshi and Shohei Kato, Facial Emotion Detection Considering Partial Occlusion of Face Using Bayesian Network.
6. Byung-Hun Oh, Kwang-Seok Hong, A Study on Facial Components Detection Method for Face-based Emotion Recognition.
7. Bashir Mohammed Ghandi, R. Nagarajan, S. Yaacob, and Hazry Desa, GPSO versus Neural Network in Facial Emotion Detection.
8. Maja Pantic and Ioannis Patras, Dynamics of Facial Expression: Recognition of Facial Actions and Their Temporal Segments From Face Profile Image Sequences.
9. Yingli Tian, Takeo Kanade, Jeffrey F. Cohn, Recognizing Action Units for Facial Expression Analysis.
10. Ankush Rai, "Artificial Intelligence for Emotion Recognition", Journal of Artificial Intelligence Research & Advances, Vol. 3, Issue 3, 2014.
11.
Ankush Rai, Attribute Based Level Adaptive Thresholding Algorithm for Object Extraction, Journal of Advancement in Robotics, Vol. 1, Issue 1, 2014.

12. Ankush Rai, Characterizing Face Encoding Mechanism by Selective Object Pattern in Brains Using Synthetic Intelligence & its Simultaneous Replication of Visual System That Encode Faces, Research & Reviews: Journal of Computational Biology, Vol. 3, Issue 2, 2014.
13. Ankush Rai, Attribute Based Level Adaptive Thresholding Algorithm (ABLATA) for Image Compression and Transmission, Journal of Mathematics and Computer Science, Vol. 12, Issue 3, pp. 211-218, 2014.
14. Abha Choubey, G.R. Sinha, Ankush Rai, Application of Image Denoising Through Comorbid Pixel Regularization Algorithm Based on Neuro-Fuzzy Rule, Journal of Embedded System & Applications, Vol. 1, Issue 2, 2014.
15. Ankush Rai, Learning-based Edge Detection for Video Sequences, Recent Trends in Parallel Computing, Vol. 2, Issue 3, 2015.

Corresponding Author:
R Jagdeesh Kanan*
Email: jagadeeshkannan.r@vit.ac.in