Classifying Facial Pain Expressions
Individual Classifiers vs. Global Classifiers

Michael Siebers¹, Miriam Kunz², Stefan Lautenbacher², and Ute Schmid¹
¹ Faculty of Information Systems and Applied Computer Science, University of Bamberg, Germany
² Physiological Psychology, University of Bamberg, Germany

Abstract. Pain is a highly affective state that is accompanied by a facial expression. In this paper we compare different classifiers with respect to their ability to classify pain from facial point data. Furthermore, we investigate whether classifiers need to be trained on each subject's data individually. We show that most of the classifiers used are suitable for classifying facial pain expressions. The second question cannot be decided conclusively.

1 Motivation and Introduction

Pain is a highly unpleasant feeling everybody is familiar with. Humans have an urge to communicate pain to others. This is done both verbally (the typical "ouch") and nonverbally. Other persons react to these pain messages, e.g. with pity or by providing help. The facial expression is important for communicating pain nonverbally. Especially for individuals who are not able to communicate pain verbally (e.g. patients with dementia), the facial expression of pain can serve as a valid communication channel [1, 2]. This finding can be valuable in the clinical context, where individualized classifiers for pain can support relatives and nursing staff in recognizing pain states of patients [3]. This pilot study explores whether it is possible to decide, from facial images, whether a person experiences pain. To be precise, the study tries to answer two questions. First, which classifiers are suitable for predicting pain from facial features? Second, is a global classifier, trained on data from different persons, sufficient, or must individualized classifiers be trained for each person, as psychological studies [1, 2] suggest? Since this is a pilot study, we only handle a limited number of data records.
The work reported in this paper is mainly exploratory, concerning the general investigation of suitable learning approaches, appropriate feature selection, and efficient preprocessing of data. Most importantly, we aimed at obtaining first support for the psychological hypothesis that pain classification must be handled with individual classifiers, and that global classifiers, as often used for the classification of emotions such as happiness [4], are significantly less reliable.
2 Data Acquisition

In order to learn classifiers we needed pain-annotated facial data. These were derived from videos obtained during a previously conducted psychophysiological study; stills of non-painful and painful events were extracted. On those images facial features were annotated, and based on these features relational measures were calculated.

2.1 Psychophysiological Study

In this study, pressure stimuli of various intensities were applied to the upper edge of the trapezius muscle using a pressure algometer (SOMETIC). Each subject received 40 stimuli, half of them painful and half not painful. As the first stimulus, a pressure of 5 kg was applied. This stimulus was repeated with increasing weight until the subject showed a pain face or the maximum of 8 kg was reached. This target intensity was used as the painful stimulus, whereas 1 kg was used to induce non-painful pressure sensations. A total of 30 subjects took part in the study. Each of them was videotaped; in the background an LED light lit up while pressure was applied. In addition to the video recordings, subjective pain ratings were assessed on a Visual Analogue Scale³. Furthermore, the pressure intensity and an expert's ratings of whether subjects displayed pain-typical facial expressions (provided by M. Kunz, who is a certified coder for facial expressions) were recorded for each stimulus application. As this study was not initially conducted for automated facial pain detection, it is no surprise that we encountered some problems concerning image quality. The main one was that some subjects' hair hid parts of their faces.

2.2 Still Extraction

Since we wanted to evaluate the classification performance of different classifiers, we decided to use ideal data only. Therefore, among all subjects only those showing the most prototypical pain faces were chosen for further consideration. Six persons showed appropriate facial expressions. From their videos, ten stills per stimulus were extracted.
Stills were taken from all 20 non-pain stimuli and from all pain stimuli showing a pain face. All stills were taken within the first second after facial expression onset. This resulted in a total of 1200 non-pain and 880 pain images. One subject was excluded from further study: he showed a pain face in only six of the 20 pain stimuli, while the other subjects did so at least twice as often. Additionally, he was the only male among the remaining subjects.

³ A visual analogue scale (VAS) is a scale on which values are stated on a continuum ranging from one extreme to the other. In this study it was implemented as a percentage value.
Fig. 1. Annotated feature points and shapes.

2.3 Facial Feature Annotation

On the images we marked 58 points, as seen in Fig. 1. These formed the face's contour (13 points), mouth (8 points), nose (11 points), eyes (8 points each) and eyebrows (5 points each). We chose these points with FACS [5] in mind. However, we didn't use FACS directly, as we doubt whether it is suited to mapping facial pain expressions. For the annotation process we used the AM Tools of Tim Cootes [6]. As the images hardly displayed variation, only the first three images of each stimulus were used. A few stimuli had to be excluded because the face was not completely within the image. The final data set thus consisted of 534 records (294 non-pain; 240 pain) of five female subjects. To eliminate the size and position of the face within the image, the x- and y-coordinates of the points were standardized to the range [0, 1].

2.4 Relational Measures

Additional attributes were added to the raw data records. Distances considered relevant were calculated and added:

- width and height of the mouth
- widths and heights of both eyes
- span from the tip to the root of the nose
- distance between the eyebrows
- distances between eye and eyebrow
- distance between mouth and nose

All distances can be seen in Fig. 2. Moreover, we added the angle between the mouth corners and the mouth centre.
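The coordinate standardization described in Sect. 2.3 can be sketched in a few lines. This is a minimal illustration under our own assumptions (per-face min-max scaling; the function name is ours), not the preprocessing tool actually used in the study:

```python
def standardize_points(points):
    """Min-max scale (x, y) landmark coordinates to the range [0, 1].

    `points` is a list of (x, y) tuples for one face. Scaling each axis
    independently removes the size and the position of the face within
    the image.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    min_x, min_y = min(xs), min(ys)
    span_x, span_y = max(xs) - min_x, max(ys) - min_y
    return [((x - min_x) / span_x, (y - min_y) / span_y)
            for x, y in points]
```

After this step, two faces of different size and position that share the same shape yield identical point sets.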
Fig. 2. Calculated distances and angles.

These distances may or may not scale proportionally. To examine this, we included proportions in the data set:

- ratio of mouth height to mouth width
- ratio of eye height to eye width (both sides) [called: eye ratio]
- ratio of left eye width to right eye width
- ratio of left eye height to right eye height
- ratio of left eye ratio to right eye ratio

It is understood that the pain face is not the usual face but a distortion of the neutral face. Taking this into account, we averaged the distances, angles and ratios over the non-pain data records and added, for every record, the ratios of these attributes to their means. As the deformation of the face is potentially highly individual, we also added the ratios of these attributes to their means over the considered subject's non-pain data. Of course, all mentioned relational measures are implicit in the point data alone. However, they would be inaccessible to some classifiers, e.g. decision trees (see Sect. 3.1). Hence we decided to include these measures in explicit form.

3 Experiments

Based on the data set with 534 entries and 178 attributes (including image id, person number and class), classifiers were trained.

3.1 Classifiers

In the following, the classifiers used are briefly described. For further reading see [7] or, for Support Vector Machines, [8].
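Measures of this kind are straightforward to derive from the standardized point data. The following sketch uses our own helper names and assumes Euclidean distances; it illustrates the idea rather than reproducing the study's exact feature code:

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def corner_angle(left, right, centre):
    """Angle in degrees at the mouth centre, spanned by the two mouth
    corners -- one way to realize the angle attribute described above."""
    v1 = (left[0] - centre[0], left[1] - centre[1])
    v2 = (right[0] - centre[0], right[1] - centre[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

def ratio_to_mean(value, mean):
    """Ratio of an attribute value to its mean over the non-pain
    records, as used for the distortion features."""
    return value / mean
```

A record's derived attributes would then combine raw distances, such ratios against the global non-pain mean, and the same ratios against the subject's own non-pain mean.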
Decision Trees classify an example according to a set of tree-structured if-then rules. Each inner node of the tree denotes an attribute and branches according to its values. The decision tree classifier is the only symbolic classifier we used. We considered it because the constructed decision model is human-understandable and thus has an explanatory element. For learning these trees we used ID3Numerical, a modification of Quinlan's ID3 [9] which can handle numerical attributes.

Naive Bayes is a statistical classifier based on Bayes' law. It estimates the probability P(c | x). For this purpose it assumes that all attributes are statistically independent (hence "naive").

Support Vector Machines try to find a hyperplane, possibly in a higher-dimensional space, that separates the classes. For classification only the data records closest to the hyperplane are used: the support vectors.

k-Nearest Neighbours (kNN) considers each data record as a point in ℝⁿ. An unseen instance is classified by searching for the k nearest points in the training example space and assigning their mode class. All training examples are kept in the feature space; therefore, kNN is called a lazy learner.

The Perceptron considers each data record as a vector of real-valued inputs (x_1, ..., x_n). These are weighted with (w_0, ..., w_n). For classification it calculates the linear combination sum_{i=0}^{n} w_i x_i, where x_0 is always 1. If the result is greater than 0 it returns positive, otherwise negative.

Neural Networks use a graph of sigmoid units to calculate the desired class. Sigmoid units are similar to perceptrons except that they produce a sigmoidal output, which serves as input for the next unit. In this study we used a linear, acyclic network.

Classification by Regression trains a regression model for each class; the class with the highest predicted value is then selected. As the base regression model we used linear regression.
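The perceptron decision rule described above can be written directly. This is a sketch with hypothetical weights, not the model trained in the study:

```python
def perceptron_classify(weights, features):
    """Return "positive" if the linear combination sum(w_i * x_i)
    exceeds 0, "negative" otherwise.

    `weights` is (w_0, ..., w_n); the constant input x_0 = 1 is
    prepended so that w_0 acts as the bias term.
    """
    inputs = [1.0] + list(features)
    total = sum(w * x for w, x in zip(weights, inputs))
    return "positive" if total > 0 else "negative"
```

With weights (-1, 1), for example, a single feature value of 2 yields -1 + 2 = 1 > 0 and is classified as positive.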
Linear regression tries to find the linear function that best predicts the examples.

3.2 Procedure

For each classifier, learning generally consisted of two phases: first its optimal parameters were determined, then its performance was measured in a cross-validation. As kNN suffers from the curse of dimensionality, the attributes were weighted for this classifier before parameter optimization. Cross-validation was done using 10 partitions. The partitions were drawn over single data records by means of stratified sampling⁴.

⁴ Stratified sampling means randomly assigning data records to folds while trying to preserve the class distribution.
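Stratified partitioning as used for the cross-validation can be sketched as follows. This is an illustrative stand-in under our own naming, not the sampling implementation of the tool used:

```python
import random
from collections import defaultdict

def stratified_folds(labels, k=10, seed=0):
    """Assign record indices to k folds so that each fold roughly
    preserves the overall class distribution."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, label in enumerate(labels):
        by_class[label].append(i)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        rng.shuffle(indices)          # random assignment within a class
        for j, i in enumerate(indices):
            folds[j % k].append(i)    # deal records round-robin to folds
    return folds
```

Distributing each class round-robin after shuffling guarantees that the per-fold class counts differ by at most one record.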
Parameters were optimized via a systematic grid search: parameter combinations were systematically evaluated using 10-fold cross-validations, and the combination which performed best was selected. For the Neural Network no systematic approach was chosen; taking the long training times and the size of the parameter space into account, we decided on an evolutionary technique.

Attribute weighting was performed using forward weighting. Initially, every attribute was assigned a weight of 0. Each attribute was then independently weighted using a linear search.

Attribute selection was used to overcome Naive Bayes' assumption that attributes are statistically independent. Carried out as forward selection, an initial population is created with one individual per attribute. Then further attributes are added to the best ones as long as performance increases.

This procedure was carried out once with the whole data set (global classifier) and once for each subject using only that subject's data (individual classifiers). We ran the experiments with RapidMiner on a Fujitsu Siemens Computers LIFEBOOK T Series (Intel Core 2 Duo P8400, 2 GB) and a Fujitsu Siemens Computers Esprimo (Intel Pentium, 1 GB).

4 Results

The results of the different classifiers are shown in Tab. 1. Evidently, most classifiers are suitable for this task.

Table 1. Experiment results

Classifier                global    individual
Decision Tree
Support Vector Machine
Regression
Perceptron                (a)       (b)
Neural Network            (c)       (d)
Naive Bayes
k-Nearest Neighbours

(a) Parameter optimization canceled after 19 days; the best interim result is displayed.
(b) Parameters not optimized due to time constraints.
(c) Parameter optimization canceled after 10 days; non-optimal parameters used.
(d) Parameters not optimized due to time constraints.
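The grid search behind these parameter optimizations amounts to exhaustively scoring every parameter combination. A minimal sketch, with `evaluate` standing in for a 10-fold cross-validation of one classifier (names and signature are ours):

```python
from itertools import product

def grid_search(evaluate, param_grid):
    """Systematically evaluate every parameter combination and return
    the best one together with its score.

    `param_grid` maps parameter names to candidate value lists;
    `evaluate` maps a parameter dict to a performance score, e.g. the
    mean accuracy of a 10-fold cross-validation.
    """
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The long running times noted above follow directly from this structure: the number of evaluations is the product of all candidate-list lengths, each multiplied by the cost of a full cross-validation.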
4.1 Classifier Performance

Apparently, Support Vector Machines and k-Nearest Neighbours perform best. Unfortunately, it is not possible to establish an exact ranking: due to the small number of subjects, no significance tests were performed. Decision Tree, Support Vector Machine, Regression, Naive Bayes and k-Nearest Neighbours perform similarly to adults classifying clinical pain videos [10]. No detailed statement about the Neural Network can be made. The global parameter optimization crashed after 10 days; since no interim results were available, performance estimation was done with manually chosen parameters. A repetition of the parameter optimization was not possible due to time limitations.

The detail plot in Fig. 3 shows that the data is probably not linearly separable. This explains the bad performance of the Perceptron classifier, which performs even worse than guessing⁶.

Fig. 3. Detail plot of the used data set. The topmost attributes of the global decision tree model were used as axes.

4.2 Global vs. Individual Classification

For most classifiers the individual approach seems to deliver better results. But due to the small number of subjects, no detailed comparison is possible here either.

⁶ Guessing probability: for global classification, for individual classification.
5 Conclusion and Future Work

We showed that many classifiers are suitable for predicting pain from facial expressions. Global and individual classifiers perform nearly equally well. But due to the low number of subjects, and since we only selected subjects who showed prototypical facial pain displays [11], it is as yet questionable whether these findings can be generalized.

Currently we are able to decide whether a shown face is prototypical for pain or not. However, the true question is whether the person whose face is depicted is experiencing pain. Therefore, in further work we will deal with predicting the self-reported VAS values. Additionally, we will tackle the issue of confusion between different emotions, first trying to distinguish between pain and disgust. For this we will use more subjects in the initial psychological study. Further studies should also address a broader variety of subjects, for example regarding age or gender.

Acknowledgements. We thank Simone Burkhardt and Ina Schulz for the data collection and picture extraction. The study was supported by the Dr. Werner Jackstädt-Stiftung.

References

1. Kunz, M., Mylius, V., Scharmann, S., Schepelman, K., Lautenbacher, S.: Influence of dementia on multiple components of pain. European Journal of Pain 13 (2009)
2. Kunz, M., Scharmann, S., Hemmeter, U., Schepelman, K., Lautenbacher, S.: The facial expression of pain in patients with dementia. PAIN 133 (2007)
3. Salomon, P.E., Prkachin, K.M., Farewell, V.: Enhancing sensitivity to facial expression of pain. PAIN 71 (1997)
4. Strupp, S., Schmitz, N., Berns, K.: Visual-based emotion detection for natural man-machine interaction. In Dengel, A.R., Berns, K., Breuel, T.M., Bomarius, F., Roth-Berghofer, T.R., eds.: KI 2008: Advances in Artificial Intelligence: 31st Annual German Conference on AI, KI 2008, Kaiserslautern, Germany, September 23-26, 2008; Proceedings. Volume 5243 of LNAI, Berlin, Springer (2008)
5. Ekman, P., Friesen, W.V.: Facial Action Coding System. Consulting Psychologists Press, Palo Alto, Calif. (1978)
6. Cootes, T.F.: AM Tools. timothy.f.cootes/software/am tools doc/index.html
7. Mitchell, T.M.: Machine Learning. McGraw-Hill, Boston, Mass. (1997)
8. Bishop, C.M.: Pattern Recognition and Machine Learning. 1st edn. Springer Science + Business Media LLC, New York, NY (2006)
9. Quinlan, J.R.: Induction of decision trees. Machine Learning 1(1) (1986)
10. Deyo, K.S., Prkachin, K.M., Mercer, S.R.: Development of sensitivity to facial expression of pain. PAIN 107(1-2) (2004)
11. Prkachin, K.M.: The consistency of facial expressions of pain: a comparison across modalities. PAIN 51 (1992)
More informationSpotting Liars and Deception Detection skills - people reading skills in the risk context. Alan Hudson
Spotting Liars and Deception Detection skills - people reading skills in the risk context Alan Hudson < AH Business Psychology 2016> This presentation has been prepared for the Actuaries Institute 2016
More informationABSTRACT I. INTRODUCTION. Mohd Thousif Ahemad TSKC Faculty Nagarjuna Govt. College(A) Nalgonda, Telangana, India
International Journal of Scientific Research in Computer Science, Engineering and Information Technology 2018 IJSRCSEIT Volume 3 Issue 1 ISSN : 2456-3307 Data Mining Techniques to Predict Cancer Diseases
More informationPerformance Analysis of Different Classification Methods in Data Mining for Diabetes Dataset Using WEKA Tool
Performance Analysis of Different Classification Methods in Data Mining for Diabetes Dataset Using WEKA Tool Sujata Joshi Assistant Professor, Dept. of CSE Nitte Meenakshi Institute of Technology Bangalore,
More information1. INTRODUCTION. Vision based Multi-feature HGR Algorithms for HCI using ISL Page 1
1. INTRODUCTION Sign language interpretation is one of the HCI applications where hand gesture plays important role for communication. This chapter discusses sign language interpretation system with present
More informationBackground Information
Background Information Erlangen, November 26, 2017 RSNA 2017 in Chicago: South Building, Hall A, Booth 1937 Artificial intelligence: Transforming data into knowledge for better care Inspired by neural
More informationKeywords Missing values, Medoids, Partitioning Around Medoids, Auto Associative Neural Network classifier, Pima Indian Diabetes dataset.
Volume 7, Issue 3, March 2017 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Medoid Based Approach
More informationLearning to Rank Authenticity from Facial Activity Descriptors Otto von Guericke University, Magdeburg - Germany
Learning to Rank Authenticity from Facial s Otto von Guericke University, Magdeburg - Germany Frerk Saxen, Philipp Werner, Ayoub Al-Hamadi The Task Real or Fake? Dataset statistics Training set 40 Subjects
More informationApplication of Artificial Neural Networks in Classification of Autism Diagnosis Based on Gene Expression Signatures
Application of Artificial Neural Networks in Classification of Autism Diagnosis Based on Gene Expression Signatures 1 2 3 4 5 Kathleen T Quach Department of Neuroscience University of California, San Diego
More informationTo What Extent Can the Recognition of Unfamiliar Faces be Accounted for by the Direct Output of Simple Cells?
To What Extent Can the Recognition of Unfamiliar Faces be Accounted for by the Direct Output of Simple Cells? Peter Kalocsai, Irving Biederman, and Eric E. Cooper University of Southern California Hedco
More informationAMBCO 1000+P AUDIOMETER
Model 1000+ Printer User Manual AMBCO 1000+P AUDIOMETER AMBCO ELECTRONICS 15052 REDHILL AVE SUITE #D TUSTIN, CA 92780 (714) 259-7930 FAX (714) 259-1688 WWW.AMBCO.COM 10-1004, Rev. A DCO 17 008, 11 13 17
More informationBayesian modeling of human concept learning
To appear in Advances in Neural Information Processing Systems, M. S. Kearns, S. A. Solla, & D. A. Cohn (eds.). Cambridge, MA: MIT Press, 999. Bayesian modeling of human concept learning Joshua B. Tenenbaum
More informationThe Role of Feedback in Categorisation
The Role of in Categorisation Mark Suret (m.suret@psychol.cam.ac.uk) Department of Experimental Psychology; Downing Street Cambridge, CB2 3EB UK I.P.L. McLaren (iplm2@cus.cam.ac.uk) Department of Experimental
More informationBiologically-Inspired Human Motion Detection
Biologically-Inspired Human Motion Detection Vijay Laxmi, J. N. Carter and R. I. Damper Image, Speech and Intelligent Systems (ISIS) Research Group Department of Electronics and Computer Science University
More informationFacial expression recognition with spatiotemporal local descriptors
Facial expression recognition with spatiotemporal local descriptors Guoying Zhao, Matti Pietikäinen Machine Vision Group, Infotech Oulu and Department of Electrical and Information Engineering, P. O. Box
More informationFacial Behavior as a Soft Biometric
Facial Behavior as a Soft Biometric Abhay L. Kashyap University of Maryland, Baltimore County 1000 Hilltop Circle, Baltimore, MD 21250 abhay1@umbc.edu Sergey Tulyakov, Venu Govindaraju University at Buffalo
More informationMotives as Intrinsic Activation for Human-Robot Interaction
Motives as Intrinsic Activation for Human-Robot Interaction Jochen Hirth and Karsten Berns Abstract For humanoid robots that should assist humans in their daily life the capability of an adequate interaction
More informationFinding Information Sources by Model Sharing in Open Multi-Agent Systems 1
Finding Information Sources by Model Sharing in Open Multi-Agent Systems Jisun Park, K. Suzanne Barber The Laboratory for Intelligent Processes and Systems The University of Texas at Austin 20 E. 24 th
More informationPSY111 Notes. For Session 3, Carrington Melbourne. C. Melbourne PSY111 Session 3,
PSY111 Notes For Session 3, 2015. Carrington Melbourne C. Melbourne PSY111 Session 3, 2015 1 Psychology111: Week 1 Psychology is the scientific investigation of mental processes and behaviour. It understands
More informationAuto-Encoder Pre-Training of Segmented-Memory Recurrent Neural Networks
Auto-Encoder Pre-Training of Segmented-Memory Recurrent Neural Networks Stefan Glüge, Ronald Böck and Andreas Wendemuth Faculty of Electrical Engineering and Information Technology Cognitive Systems Group,
More information1/12/2012. How can you tell if someone is experiencing an emotion? Emotion. Dr.
http://www.bitrebels.com/design/76-unbelievable-street-and-wall-art-illusions/ 1/12/2012 Psychology 456 Emotion Dr. Jamie Nekich A Little About Me Ph.D. Counseling Psychology Stanford University Dissertation:
More informationUnderstanding Emotions. How does this man feel in each of these photos?
Understanding Emotions How does this man feel in each of these photos? Emotions Lecture Overview What are Emotions? Facial displays of emotion Culture-based and sex-based differences Definitions Spend
More informationValence-arousal evaluation using physiological signals in an emotion recall paradigm. CHANEL, Guillaume, ANSARI ASL, Karim, PUN, Thierry.
Proceedings Chapter Valence-arousal evaluation using physiological signals in an emotion recall paradigm CHANEL, Guillaume, ANSARI ASL, Karim, PUN, Thierry Abstract The work presented in this paper aims
More informationDetection of Cognitive States from fmri data using Machine Learning Techniques
Detection of Cognitive States from fmri data using Machine Learning Techniques Vishwajeet Singh, K.P. Miyapuram, Raju S. Bapi* University of Hyderabad Computational Intelligence Lab, Department of Computer
More informationTemporal Context and the Recognition of Emotion from Facial Expression
Temporal Context and the Recognition of Emotion from Facial Expression Rana El Kaliouby 1, Peter Robinson 1, Simeon Keates 2 1 Computer Laboratory University of Cambridge Cambridge CB3 0FD, U.K. {rana.el-kaliouby,
More informationThe Impact of Schemas on the Placement of Eyes While Drawing.
The Red River Psychology Journal PUBLISHED BY THE MSUM PSYCHOLOGY DEPARTMENT The Impact of Schemas on the Placement of Eyes While Drawing. Eloise M. Warren. Minnesota State University Moorhead Abstract.
More informationEmotion Theory. Dr. Vijay Kumar
Emotion Theory Dr. Vijay Kumar Emotions Just how many emotions are there? Basic Emotions Some have criticized Plutchik s model as applying only to English-speakers Revised model of basic emotions includes:
More informationLearning Classifier Systems (LCS/XCSF)
Context-Dependent Predictions and Cognitive Arm Control with XCSF Learning Classifier Systems (LCS/XCSF) Laurentius Florentin Gruber Seminar aus Künstlicher Intelligenz WS 2015/16 Professor Johannes Fürnkranz
More informationIs it possible to give a philosophical definition of sexual desire?
Issue 1 Spring 2016 Undergraduate Journal of Philosophy Is it possible to give a philosophical definition of sexual desire? William Morgan - The University of Sheffield pp. 47-58 For details of submission
More informationChapter 1. Introduction
Chapter 1 Introduction Artificial neural networks are mathematical inventions inspired by observations made in the study of biological systems, though loosely based on the actual biology. An artificial
More informationFormulating Emotion Perception as a Probabilistic Model with Application to Categorical Emotion Classification
Formulating Emotion Perception as a Probabilistic Model with Application to Categorical Emotion Classification Reza Lotfian and Carlos Busso Multimodal Signal Processing (MSP) lab The University of Texas
More informationViewpoint-dependent recognition of familiar faces
Perception, 1999, volume 28, pages 483 ^ 487 DOI:10.1068/p2901 Viewpoint-dependent recognition of familiar faces Nikolaus F Trojeô Max-Planck Institut fïr biologische Kybernetik, Spemannstrasse 38, 72076
More informationMammogram Analysis: Tumor Classification
Mammogram Analysis: Tumor Classification Term Project Report Geethapriya Raghavan geeragh@mail.utexas.edu EE 381K - Multidimensional Digital Signal Processing Spring 2005 Abstract Breast cancer is the
More informationERA: Architectures for Inference
ERA: Architectures for Inference Dan Hammerstrom Electrical And Computer Engineering 7/28/09 1 Intelligent Computing In spite of the transistor bounty of Moore s law, there is a large class of problems
More informationNatural Scene Statistics and Perception. W.S. Geisler
Natural Scene Statistics and Perception W.S. Geisler Some Important Visual Tasks Identification of objects and materials Navigation through the environment Estimation of motion trajectories and speeds
More informationPerformance Based Evaluation of Various Machine Learning Classification Techniques for Chronic Kidney Disease Diagnosis
Performance Based Evaluation of Various Machine Learning Classification Techniques for Chronic Kidney Disease Diagnosis Sahil Sharma Department of Computer Science & IT University Of Jammu Jammu, India
More informationAnalysis of Hoge Religious Motivation Scale by Means of Combined HAC and PCA Methods
Analysis of Hoge Religious Motivation Scale by Means of Combined HAC and PCA Methods Ana Štambuk Department of Social Work, Faculty of Law, University of Zagreb, Nazorova 5, HR- Zagreb, Croatia E-mail:
More informationClassification of Epileptic Seizure Predictors in EEG
Classification of Epileptic Seizure Predictors in EEG Problem: Epileptic seizures are still not fully understood in medicine. This is because there is a wide range of potential causes of epilepsy which
More informationECG Beat Recognition using Principal Components Analysis and Artificial Neural Network
International Journal of Electronics Engineering, 3 (1), 2011, pp. 55 58 ECG Beat Recognition using Principal Components Analysis and Artificial Neural Network Amitabh Sharma 1, and Tanushree Sharma 2
More informationPrediction Models of Diabetes Diseases Based on Heterogeneous Multiple Classifiers
Int. J. Advance Soft Compu. Appl, Vol. 10, No. 2, July 2018 ISSN 2074-8523 Prediction Models of Diabetes Diseases Based on Heterogeneous Multiple Classifiers I Gede Agus Suwartane 1, Mohammad Syafrullah
More information