Recognition of Facial Expressions for Images using Neural Network

Shubhangi Giripunje
Research Scholar, Dept. of Electronics Engg., GHRCE, Nagpur, India

Preeti Bajaj
Senior IEEE Member, Professor, Dept. of Electronics, G.H. Raisoni College of Engineering, Nagpur, India

ABSTRACT
Terrorism continues to destroy lives around the world, and identifying a terrorist among other people is very difficult, if not impossible. This exploratory study investigates whether recognizing emotions from facial expressions can assist in that identification. The paper presents a view-based approach to the representation and recognition of human facial expressions. The facial expression recognition (FER) system is divided into three modules: preprocessing, feature extraction and classification. The basis of the representation is a temporal template in which the features are typically based on the local spatial position or displacement of specific points and regions of the face. Five different facial expressions were considered. For feature extraction, logarithmic Gabor filters were applied first, and an optimal subset of features was then selected for each expression. Classification was performed with a neural network, and the experiments were carried out on the YALE face database.

Keywords
Terrorism, facial expression, emotions, neural network

1. INTRODUCTION
Terrorism continues to destroy lives around the world, and identifying a terrorist in a public place, or even among a few people, is very difficult. If we can correctly identify the emotional state of a person and train a system to recognize the corresponding expressions, then terrorists may be identified more easily, even in a crowded place. Determining the emotional state of any individual requires knowledge of his or her facial expressions. A facial expression is a bodily state involving various physical structures; it is gross or fine-grained behavior, and it occurs in particular situations. Any empirical research on facial expression addresses only some part of this broad meaning of the term. After more than a century of research, researchers still debate what facial expressions are and how they are communicated. An emotion comprises more than its outward physical expression; it also consists of internal feelings and thoughts, as well as other internal processes of which the person having the emotion may not be aware [1]. Worldwide, many researchers have studied automatic facial expression recognition, which facilitates more intelligent and natural human-machine interfaces in new multimedia products. Several methods and algorithms have been proposed for facial expression analysis from both static images and image sequences. In the 26/11 Mumbai terror attack, the terrorists stayed in Mumbai for two months before the attack, and recognizing them as terrorists in a public place was very difficult for the local public and the police. Looking back at the attacks of September 11, there is a strong case that this technology should be used more widely by researchers, advocates, designers, citizen groups, political leaders and manufacturers, and, if it is used, greater emphasis must be placed on protecting the privacy of the public.
Taking all these important and sensitive issues into consideration, some airports are already considering face recognition cameras as a security measure. In this paper the authors focus on the recognition of facial expressions from single digital images, with emphasis on feature extraction. The paper is organized in three parts: preprocessing, feature extraction using Gabor filters, and the results of terrorist facial expression recognition using a neural network.

2. METHODOLOGIES
The facial expression recognition problem has lately been addressed by various approaches that fall into two main categories: static and motion-dependent. In static approaches, a facial expression is recognized from a single image of a face. Motion-dependent approaches extract temporal information from at least two instances of a face in the same emotional state [5]. In the static approach, the performance and generalization capabilities of different low-dimensional representations are compared for classifying static face images showing happy, angry, sad and neutral expressions. Three general strategies are compared. The first uses the average face of each expression as a generic template and classifies the facial expression of a new face by the best-matching template. The second trains a multi-layer perceptron with the error back-propagation algorithm on a subset of all facial expressions and then tests it on unseen face images. The third introduces a preprocessing step prior to learning an internal representation with the perceptron: the feature extraction stage computes the oriented responses to six odd-symmetric and six even-symmetric Gabor filters, which have been used in different ways by different researchers. Many researchers are interested in recognizing facial expressions and in using this information for human-computer interaction, and a number of automatic methods have been developed to recognize facial expressions in images [11-14].

3. IMPLEMENTATION
The complete work is based on the assumption that continuous video of these terrorists is recorded in which their facial expressions are captured. It is observed that for a terrorist only one expression, tending towards anger, is detected, as shown in Fig. 1, whereas if a normal person's video is captured a mix of expressions is detected, as shown in Fig. 2.

Fig. 1. Terrorist expression.

Fig. 2. Mixed expressions for normal persons from the YALE database.

3.1 Template Matching
In the automatic facial feature extraction technique, template prototypes are represented in 2-D grayscale space. The technique uses correlation as the basic tool for comparing a template with part of the image. Several computer experiments were undertaken to investigate the recognition and generalization performance of the various methods. The use of Gabor filters for extracting textured image features is motivated by several factors: Gabor filters can be considered scale-tunable edge, orientation and line (bar) detectors, and the statistics of these micro-features in a given region are often used to characterize the underlying texture information. Gabor features have been used in several image analysis applications, including texture classification and segmentation, image recognition, image registration and motion tracking.
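As a rough illustration of the correlation-based comparison described above (not the authors' implementation), the following Python/OpenCV sketch locates a facial-feature template inside a face image using normalized cross-correlation; the image and template file names are placeholders.

```python
# Hedged sketch of correlation-based template matching with OpenCV.
# "face.jpg" and "mouth_template.jpg" are placeholder file names.
import cv2

face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)               # image to search
template = cv2.imread("mouth_template.jpg", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation: the response is highest where the template
# best matches the underlying image region.
response = cv2.matchTemplate(face, template, cv2.TM_CCORR_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(response)

h, w = template.shape
print(f"best correlation {best_score:.3f} at top-left corner {best_loc}, "
      f"region size {w}x{h}")
```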
3.2 Gaussian Smoothing
The simplest and most intuitive method for image smoothing is the mean filter, which reduces the intensity variation between neighbouring pixels and is often used to reduce noise in images. The Gaussian smoothing operator is likewise a convolution operator used to blur images and remove detail and noise; it is similar to the mean filter, but uses a kernel that has the shape of a Gaussian ('bell-shaped') hump. The standard deviation of the Gaussian determines the degree of smoothing. The Gaussian outputs a weighted average of each pixel's neighbourhood, with the average weighted more towards the value of the central pixel, in contrast to the uniformly weighted average of the mean filter. A Gaussian therefore provides gentler smoothing and preserves edges better than a similarly sized mean filter. The Gaussian-smoothed colour image is then converted from RGB to the YCbCr colour space, in which every row corresponds to the same colour as the corresponding row of the RGB colormap. Extracting the luminance (Y) channel gives the grayscale image corresponding to the given RGB image.
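A minimal Python/OpenCV sketch of this preprocessing step is given below; the file name, kernel size and standard deviation are illustrative assumptions rather than values taken from the paper.

```python
# Hedged sketch of the preprocessing described above: Gaussian smoothing,
# colour-space conversion, and extraction of the luminance (Y) plane.
import cv2

bgr = cv2.imread("face.jpg")                      # OpenCV loads colour images as BGR

# Gaussian smoothing: the standard deviation controls the degree of blurring;
# a 5x5 kernel with sigma = 1.0 is an assumed, typical mild setting.
smoothed = cv2.GaussianBlur(bgr, (5, 5), sigmaX=1.0)

# Convert to YCrCb (OpenCV's ordering of the YCbCr channels) and keep the
# luminance plane, which serves as the grayscale image for later stages.
ycrcb = cv2.cvtColor(smoothed, cv2.COLOR_BGR2YCrCb)
luminance = ycrcb[:, :, 0]

cv2.imwrite("face_gray.png", luminance)           # grayscale image for feature extraction
```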

3.3 Primary Facial Expressions
The most effective systems developed so far recognize a small set of prototypic emotional expressions. The most widely used set is the human universal facial expressions of emotion, a small number of basic expression categories that have been shown to be recognizable across cultures. These expressions have been observed in people from different backgrounds, and even in people who are deaf and blind. In earlier years facial expression analysis was essentially a research topic for psychologists; today, however, image processing and pattern recognition have motivated significant research on automatic facial expression recognition. The basic facial expressions used in this paper, shown in Fig. 3, are happiness, surprise, anger, sadness and neutral [YALE database].

Fig. 3. Basic facial expression phenotypes. Fig. 3(a) shows happiness: the eyebrows are relaxed, the corners of the mouth are pulled back towards the ears, and the mouth is open. Fig. 3(b) shows surprise: the eyebrows are raised, the upper eyelids are wide open and the lower ones relaxed, and the jaw is open. Fig. 3(c) shows anger: the inner eyebrows are pulled downward and together, the eyes are wide open, and the lips are pressed tightly together. Fig. 3(d) shows sadness: the inner eyebrows are bent upward, the eyes are slightly closed, and the mouth is relaxed. Fig. 3(e) shows neutral: all face muscles are relaxed, the eyelids are tangent to the iris, the mouth is closed and the lips are in contact.

3.4 Feature Extraction
To achieve better robustness and accuracy, the normalized facial images are projected into feature spaces in which, hopefully, the faces are more separable before being fed into the classifier. In this way, image processing and classifier design can be tackled separately: this module focuses only on the image processing task, so the classifier can be designed as a general recognition task without considering image properties. In the training stage, feature selection is performed to determine which Gabor filters are useful, so that only these features are computed in the final system. The facial feature extraction algorithm should be both reliable and fast; its goals are the localization of the different facial features (mouth, eyes, eyebrows, nose, etc.) and the analysis of the emotional information they contain (through their shape and displacements). Facial expressions give important clues about emotions, and various techniques have therefore been proposed to classify human affective states. The facial features used are based on the local spatial position or displacement of specific points and regions of the face.

3.5 Gabor Filtering
The Gabor filter extracts textured image features. The Gabor representation has been shown to be optimal for minimizing the joint two-dimensional uncertainty in space and frequency. These filters act as scale-tunable edge, orientation and line detectors, and the statistics of these micro-features in a given region characterize the underlying texture information. Gabor features have been used in several image analysis applications, including texture classification and segmentation, image recognition, image registration and motion tracking. The neighbourhood of a pixel can be described by the responses of a group of Gabor filters at different frequencies and orientations, all referenced to that pixel; in that way a feature vector containing the filter responses can be formed. A Gabor filter is a linear filter whose impulse response is a harmonic function multiplied by a Gaussian function, so a filter bank usually consists of Gabor filters with different scales and rotations. Convolving the filters with the signal produces a so-called Gabor space, a process closely related to processing in the primary visual cortex. The relations between activations at a specific spatial location are very distinctive between objects, and the important activations can be extracted from the Gabor space to create a sparse object representation. Gabor filtering was therefore deemed a suitable tool: as can be deduced mathematically from the filter's form, it ensures simultaneously optimal localization in the spatial and the frequency domains. The filter is applied both to the localized area and to the template at four different spatial frequencies. According to the results reported in [7-10], Gabor wavelet decomposition is useful in facial image analysis, and Gabor filters with different parameters carry complementary information, so it is reasonable to use a group of Gabor functions to represent images. Figure 4 shows the feature vector computed by the Gabor filter for the emotion anger.

Fig. 4. Facial feature vector using the Gabor filter for an image from the YALE database.
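The sketch below illustrates one plausible way to build such a feature vector with a small Gabor filter bank in OpenCV; the number of scales and orientations, the kernel size and the pooling statistics are assumptions for illustration, not the filter parameters used by the authors.

```python
# Hedged sketch of a Gabor filter bank applied to the preprocessed grayscale face.
import cv2
import numpy as np

gray = cv2.imread("face_gray.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

features = []
for wavelength in (4.0, 8.0, 16.0):                  # assumed spatial frequencies
    for theta in np.arange(0.0, np.pi, np.pi / 6):   # six orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=wavelength, gamma=0.5, psi=0.0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        # Simple statistics of each response summarize the local texture energy.
        features.extend([response.mean(), response.std()])

feature_vector = np.array(features)                  # 3 scales x 6 orientations x 2 stats
print(feature_vector.shape)                          # (36,)
```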
4. NEURAL NETWORK: LEVENBERG-MARQUARDT ALGORITHM
Neural networks have the ability to derive meaning from complex and imprecise data; they can be used to extract patterns and detect trends that are too complex to notice otherwise. Within the category of information it has been trained to analyse, a trained neural network can be thought of as an "expert". The Levenberg-Marquardt algorithm appears to be the fastest method for training moderate-sized feedforward neural networks (up to several hundred weights). It provides a numerical solution to the problem of minimizing a sum of squares of several, generally nonlinear, functions that depend on a common set of parameters; such minimization problems arise in least-squares curve fitting. The Levenberg-Marquardt algorithm (LMA) interpolates between the Gauss-Newton algorithm (GNA) and the method of gradient descent. In comparison, the LMA is more robust than the GNA: in many cases it finds a solution even when it starts very far from the final minimum. The LMA is the most popular curve-fitting algorithm; it is used in almost any software that provides a generic curve-fitting tool, and few users will ever need another curve-fitting algorithm.
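Since the classifier is trained with Levenberg-Marquardt (in the style of MATLAB's trainlm), a hedged Python sketch is given below that uses SciPy's Levenberg-Marquardt least-squares solver to fit the weights of a tiny one-hidden-layer network; the network size and the random training data are placeholders, not the authors' actual network or dataset.

```python
# Hedged sketch: Levenberg-Marquardt fitting of a small feedforward network.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 36))                      # e.g. 36 Gabor features per image
y = rng.integers(0, 2, size=400).astype(float)      # dummy two-class targets

n_in, n_hid = X.shape[1], 8
n_params = n_hid * (n_in + 1) + (n_hid + 1)          # all weights and biases

def unpack(p):
    W1 = p[:n_hid * n_in].reshape(n_hid, n_in)       # input -> hidden weights
    b1 = p[n_hid * n_in:n_hid * (n_in + 1)]          # hidden biases
    w2 = p[n_hid * (n_in + 1):-1]                    # hidden -> output weights
    b2 = p[-1]                                       # output bias
    return W1, b1, w2, b2

def residuals(p):
    W1, b1, w2, b2 = unpack(p)
    hidden = np.tanh(X @ W1.T + b1)
    output = hidden @ w2 + b2
    return output - y                                # one residual per training sample

p0 = rng.normal(scale=0.1, size=n_params)
result = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
print("sum of squared errors:", float(np.sum(result.fun ** 2)))
```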

The flow diagram of the facial expression recognition system is shown in Fig. 5.

Fig. 5. Flow diagram of the facial expression recognition system: an image is selected from the database, Gaussian filtering removes unwanted noise, and if a frontal face is detected the system proceeds to feature extraction and emotion recognition (happy, surprise, fear, angry, sad, neutral).

5. SIMULATION RESULTS
The image is smoothed using Gaussian smoothing and then converted to YCbCr format to eliminate chrominance and enhance luminance; from the YCbCr image the grayscale image is obtained, as shown in Fig. 6. The feature values obtained earlier are fed to the neural network, which outputs a value corresponding to the facial expression of the terrorist, as shown in Fig. 7 and Fig. 8. An analysis of the confusion matrix for the YALE database (Table 1) suggests that, of all the expressions, anger and neutral were the ones most often confused with the other expressions. The database contains 125 colour images in JPG format, with images of 16 different people showing different facial expressions, with and without glasses.

Fig. 6. Image preprocessing.

Fig. 7. Simulation window showing that the facial expression anger of the terrorist Veerappan has been recognized.

Fig. 8. Simulation window showing that the facial expression anger of the terrorist Abu Salem has been recognized.
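To make the evaluation summarized in Table 1 concrete, the short sketch below builds a row-normalized confusion matrix from predicted and true expression labels; the label arrays are randomly generated placeholders standing in for the network's outputs on the YALE images.

```python
# Hedged sketch of computing a confusion matrix like Table 1 (placeholder labels).
import numpy as np

labels = ["Happiness", "Anger", "Surprise", "Sad", "Neutral"]
rng = np.random.default_rng(1)
y_true = rng.integers(0, 5, size=125)                # 125 images, 5 expressions
flip = rng.random(y_true.size) < 0.05                # corrupt ~5% of predictions
y_pred = np.where(flip, rng.integers(0, 5, size=y_true.size), y_true)

cm = np.zeros((5, 5))
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1                                    # rows: true class, columns: predicted
cm = cm / cm.sum(axis=1, keepdims=True) * 100.0      # row-normalized percentages

for name, row in zip(labels, cm):
    print(name.ljust(10), " ".join(f"{v:6.2f}%" for v in row))
```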

6. CONCLUSIONS
The above approach works very well for identifying a terrorist among other people. This exploratory study investigated whether recognizing emotions from facial expressions can assist in that identification. The basis of the representation is a temporal template in which the features are typically based on the local spatial position or displacement of specific points and regions of the face. In this paper, five different facial expressions were considered. The system works well on the YALE database as well as for identifying a few terrorists for whom Interpol has been searching over the last decade. This is only the beginning of the research, and the authors are still working on various other approaches and on real-time data with the help of the ATS department.

Table 1. Confusion matrix for the YALE database.

Expressions   Happiness   Anger     Surprise   Sad       Neutral
Happiness     99.99%      0.0%      0.0012%    0.0016%   0.004%
Anger         0.0%        99.82%    0.043%     0.035%    0.097%
Surprise      0.0%        0.004%    99.99%     0.0%      0.0045%
Sad           0.0%        0.005%    0.0%       99.98%    0.005%
Neutral       0.0%        0.010%    0.0%       0.0%      99.98%

7. ACKNOWLEDGMENTS
The authors wish to thank Ajith Abraham for the valuable comments made on the initial version of the manuscript.

8. REFERENCES
[1] R. W. Picard, E. Vyzas, and J. Healey, "Toward machine emotional intelligence: Analysis of affective physiological state," IEEE Trans. Pattern Anal. Mach. Intell., vol. 23, no. 10, pp. 1175-1191, Oct. 2001.
[2] C. H. Messom, A. Sarrafzadeh, M. J. Johnson, and F. Chao, "Affective state estimation from facial images using neural networks and fuzzy logic."
[3] B. Fasel and J. Luettin, "Automatic facial expression analysis: a survey," Pattern Recognition, vol. 36, pp. 259-275, 2003.
[4] M. Pantic and L. J. M. Rothkrantz, "Automatic analysis of facial expressions: the state of the art," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 22, pp. 1424-1444, 2000.
[5] S. M. Lajevardi and M. Lech, "Facial expression recognition using neural networks and log-Gabor filters," Digital Image Computing: Techniques and Applications, pp. 77-83, 2008.
[6] C. L. Lisetti and D. E. Rumelhart, "Facial expression recognition using a neural network," in Proceedings of the 11th International FLAIRS Conference, 1998.
[7] M. S. Bartlett, P. A. Viola, T. J. Sejnowski, B. A. Golomb, J. Larsen, J. C. Hager, and P. Ekman, "Classifying facial actions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 10, October 1999.
[8] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," 2002.
[9] W. Liu and Z. Wang, "Facial expression recognition based on fusion of multiple Gabor features," in ICPR '06: Proceedings of the 18th International Conference on Pattern Recognition, pp. 536-539, Washington, DC, USA, 2006. IEEE Computer Society.
[10] P. Michel and R. El Kaliouby, "Real time facial expression recognition in video using support vector machines," in ICMI '03: Proceedings of the 5th International Conference on Multimodal Interfaces, pp. 258-264, New York, NY, USA, 2003. ACM.
[11] Ruicong Zhi and Qiuqi Ruan, "A comparative study on region-based moments for facial expression recognition," in Congress on Image and Signal Processing, vol. 2, pp. 600-604, 2008.
[12] Irene Kotsia and Ioannis Pitas, "Facial expression recognition in image sequences using geometric deformation features and support vector machines," IEEE Transactions on Image Processing, 16(1), pp. 172-187, 2007.
[13] P. Kakumanu and N. G. Bourbakis, "A local-global graph approach for facial expression recognition," ICTAI, pp. 685-692, 2006.
[14] P. S. Aleksic and A. K. Katsaggelos, "Automatic facial expression recognition using facial animation parameters and multistream HMMs," IEEE Transactions on Information Forensics and Security, 1(1), pp. 3-11, 2006.