Empirical Cognitive Modeling


1 Empirical Cognitive Modeling Tanja Schultz Felix Putze Dominic Heger Lecture Cognitive Modeling (SS 2012) 1/49

2 Outline What are empirical cognitive modeling and recognition of human cognitive and affective states? General architecture for empirical cognitive modeling Training data generation Example: Emotion recognition from bio-potential signals Interplay of empirical modeling and formal cognitive models 2/49

3 What is Empirical Cognitive Modeling? Empirical Science Knowledge must be based on data/information from observations or experiments Empirical Cognitive Modeling is a data-driven approach of learning a model of a user state from empirical data (e.g. sensory data) User States are results of a person's mental processes at a particular point in time Mental or physiological configurations of a human Reflecting universal patterns of affect or cognition Usually have behavioral implications 3/49

4 Human Cognitive States Cognitive States States of information processing Examples: Workload Relax -> load -> overload Stress, flow Activities Listening, reading, contemplating, Many others: Constitutions and personality Intentions, motivations 4/49 The four ancient temperaments: choleric, melancholic, sanguine, phlegmatic (Wikimedia)

5 Human Affective States Affective States States of the experience of feeling or emotion Emotions Discrete basic emotion models: Anger, fear, sadness, surprise, joy, disgust, Dimensional models: Valence, arousal, dominance Appraisal models Mood Dimensional models: Good/bad, awake/tired, nervous/calm Boredom, anxiety Disorders: mania, major depressive disorder, 5/49 Basic emotions by Paul Ekman

6 Goals Assessment of the current state of a person is an important aspect of enriching human-machine interaction (adaptive interfaces) Human needs are widely ignored in today's human-machine interaction Natural communication and implicit interfaces (machine integrates into human interaction styles) Entertaining interaction (e.g. gaming) Empathic behavior (e.g. humanoid robots) Intelligent assistants ARMAR III thinkinghead.edu.au Social robotics 6/49

7 Other Applications Evaluation of usability and product quality Self-assessments by questionnaires are often difficult Get quantitative physiological measures Medical/Psychological use Communication for locked-in patients, people with autism, Diagnosis New (neuro-)psychological insights Others: Virtual realities Smart surveillance Edutainment, 7/49

8 How to assess cognitive/affective states? How do humans do that? Theory of mind: The ability to make inferences about others' mental states (Premack & Woodruff, 1978) Methods Dialog, questionnaires How are you doing today? Difficult to interpret, easy to fool Context, environment Driver might be tired after 10 hours in the car Strongly situation dependent Biosignals and their correlates Known correlations between physiological measures and user states 8/49

9 Modalities Biosignals are messages that originate from physical (or chemical) actions of the human body Biosignals and correlates for user state recognition Brain activity (EEG, fMRI, NIRS, ) Eye activity (EOG, eye tracking) Heart rate (ECG, PPG) Muscle activity (EMG) Speech (Prosody, choice of words, ) Skin conductance (EDA) Body temperature Vision (Facial expressions, posture, gesture, motion, ) More details on biosignals are presented in our winter term lecture Biosignale und Benutzerschnittstellen 9/49

10 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 10/49

11 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 11/49

12 Taxonomy of Biosignals Measurable biosignals and their correlates, grouped by signal type: Mechanical (gestures, mimics, motion) Thermal (warmth) Electrical (brain, eyes, muscles, heart) Chemical Acoustic (body sounds, nonverbal articulation, speech) 12/49

13 Multimodality Multimodal user state recognition systems use information from more than one input modality Pro: More information (higher recognition accuracy) Redundant information (higher robustness) Continuously available stream of input data (many signals are not emitted all the time, e.g. speech) Con: Higher system complexity Need synchronization of input data streams Need suitable fusion scheme More data needs to be processed (CPU, memory) 13/49

14 Acquisition of Physiological Signals: EDA Electrodermal Activity (EDA) = Skin conductance (SC) Mostly dependent on the activity of sweat glands Related to physical activity and physiological arousal Reacts to stimuli with sharp rises of the signal Characteristic response parameters (figure): latency, rise time, amplitude, half-life (50% decay), 37% decay point 14/49
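As an illustration of how such response parameters could be extracted from a sampled EDA signal, here is a minimal Python sketch; the function, its parameters, and the simplified definitions of latency and half-life are illustrative assumptions, not material from the lecture.

```python
import numpy as np

def scr_features(eda, fs, stimulus_idx, window_s=5.0):
    """Simplified skin-conductance-response features after a stimulus onset.

    eda: 1-D EDA signal, fs: sampling rate in Hz,
    stimulus_idx: sample index of the stimulus onset."""
    window = eda[stimulus_idx:stimulus_idx + int(window_s * fs)]
    baseline = window[0]
    peak_idx = int(np.argmax(window))           # maximum within the response window
    amplitude = window[peak_idx] - baseline     # rise relative to the onset level
    latency = peak_idx / fs                     # here: seconds from stimulus to peak
    post_peak = window[peak_idx:]               # half-life: time to 50% decay of the rise
    below = np.where(post_peak <= baseline + 0.5 * amplitude)[0]
    half_life = below[0] / fs if below.size else np.nan
    return amplitude, latency, half_life
```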

15 Acquisition of Physiological Signals: BVP Blood Volume Pulse (BVP): Measures the change of blood volume in an extremity (e.g. finger, ear) Signal measured using a photoplethysmograph Based on the optical properties of blood Send red light into the tissue, capture the reflected light Can be used to deduce heart rate (frequency) and blood pressure (amplitude) 15/49
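A minimal sketch of deducing heart rate from the BVP signal by peak picking; the peak-distance heuristic and the function name are assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_from_bvp(bvp, fs):
    """Estimate heart rate in beats per minute from a BVP/PPG signal."""
    # assume at most ~3 beats per second -> minimum peak distance of fs/3 samples
    peaks, _ = find_peaks(bvp, distance=int(fs / 3))
    ibi = np.diff(peaks) / fs                  # inter-beat intervals in seconds
    return 60.0 / ibi.mean() if ibi.size else np.nan
```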

16 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 16/49

17 Pre-Processing - Framing Cut the stream of digital input data into suitable portions (frames) for recognition Design frames to represent the smallest information entity of the recognition system Frame length depends on modality and features to be extracted Time duration for stable features (not too short and not too long) Few milliseconds for phoneme parts in speech recognition Multiple seconds for variabilities in skin conductance Overlapping Compensate for the small slopes of window functions Higher temporal resolution 17/49
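A minimal sketch of cutting a signal stream into overlapping frames; the frame length and overlap values are illustrative defaults, not the lecture's.

```python
import numpy as np

def frame_signal(x, fs, frame_s=2.0, overlap=0.5):
    """Cut a 1-D signal into (possibly overlapping) frames.

    frame_s: frame length in seconds, overlap: fraction of overlap in [0, 1).
    Returns an array of shape (n_frames, frame_len)."""
    frame_len = int(frame_s * fs)
    hop = max(1, int(frame_len * (1.0 - overlap)))
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.stack([x[s:s + frame_len] for s in starts])
```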

18 Preprocessing - Artifacts Recorded sensor data is often highly contaminated with noise, artifacts and outliers Artifact filtering techniques suitable for modality and feature extraction Drop data if it is not within the range of physiologically possible values (e.g. EEG data larger than 150 μV) Decompose the signal by source separation techniques (e.g. Independent Component Analysis) to identify artifact components and reconstruct the signal without those components (e.g. EEG eye artifacts) 18/49
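A minimal sketch of the first, range-based rule for framed EEG data; the function name is assumed, and source-separation approaches such as ICA go beyond this simple filter.

```python
import numpy as np

def drop_artifact_frames(frames, max_abs_uv=150.0):
    """Discard frames containing values outside the physiologically plausible range.

    frames: array of shape (n_frames, frame_len), e.g. EEG in microvolts."""
    keep = np.all(np.abs(frames) <= max_abs_uv, axis=1)
    return frames[keep]
```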

19 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 19/49

20 Feature extraction Extract relevant information from the raw data and represent it in a suitable vector space Statistical properties of the signal portion Mean, variance Histogram values Exploit sensor and task specific domain knowledge E.g. the 2nd derivative of a fitted 2nd-order polynomial 20/49
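A minimal sketch of a per-frame feature vector combining simple statistics with the polynomial-curvature example mentioned above; the bin count and the exact feature set are illustrative choices.

```python
import numpy as np

def frame_features(frame):
    """Statistical features plus the 2nd derivative of a fitted 2nd-order polynomial."""
    t = np.arange(len(frame))
    coeffs = np.polyfit(t, frame, deg=2)    # coeffs[0] is the quadratic coefficient a
    curvature = 2.0 * coeffs[0]             # 2nd derivative of a*t^2 + b*t + c is 2a
    hist, _ = np.histogram(frame, bins=10, density=True)
    return np.concatenate(([frame.mean(), frame.var(), curvature], hist))
```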

21 Feature extraction Often the following techniques are applied within feature extraction: Filtering / Filter banks Finite Impulse Response, Infinite Impulse Response, Spectral transformation FFT, Wavelets, Feature selection or dimensionality reduction Sequential Forward Selection, Principal Component Analysis, Filtering based on mutual information, correlation, Normalization Z-scores, Linear scaling within a certain value range, 21/49
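Two of the listed techniques as a minimal sketch: one band of a filter bank (e.g. an EEG alpha-band power feature) and z-score normalization; band limits, filter order, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(x, fs, low=8.0, high=12.0):
    """Band-pass filter a signal and return the mean power in that band."""
    b, a = butter(4, [low, high], btype="band", fs=fs)
    return np.mean(filtfilt(b, a, x) ** 2)

def zscore(features):
    """Normalize a feature matrix (n_samples, n_features) to zero mean, unit variance."""
    return (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12)
```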

22 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 22/49

23 Pattern Recognition Classification: Mapping between feature space and a discrete number of classes Techniques: Naïve Bayes, SVM, ANN, DT, Example: Recognition of discrete emotions Regression: Model for a functional description of the features Continuous output variables Techniques: Ordinary least squares, generalized linear regression, Example: Recognition of continuous workload Reinforcement learning: Find suitable actions to take in a given situation in order to maximize a reward We will have a complete lecture on this topic 23/49
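A minimal classification sketch in the spirit of this slide, training an RBF-kernel SVM on a feature matrix; the data here is random placeholder data, and the pipeline is only one of many possible setups.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(100, 12)              # placeholder feature matrix (n_frames, n_features)
y = np.random.randint(0, 3, size=100)     # placeholder labels, e.g. three discrete emotions

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
predicted_state = clf.predict(X[:1])      # recognized user state for one new frame
```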

24 Sequence modeling User state can have temporal dynamics (e.g. facial expressions in videos) Model dynamic progression of elementary patterns Techniques: Hidden Markov Models (HMMs) States, observations, state transitions Dynamic Bayesian Nets (DBNs) Directed Acyclic Graphs Edges represent conditional probabilities Tracking algorithms (Kalman filters, Particle filters, ) Recursively apply system model Update state estimation by (noisy) observations 24/49
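As one concrete instance of the tracking algorithms mentioned above, a minimal 1-D Kalman filter that recursively fuses a random-walk system model with noisy observations; the noise variances are illustrative.

```python
import numpy as np

def kalman_1d(observations, process_var=1e-3, obs_var=1e-1):
    """Recursively estimate a slowly varying scalar state from noisy observations."""
    x_est, p_est = 0.0, 1.0                    # initial state estimate and its variance
    estimates = []
    for z in observations:
        p_pred = p_est + process_var           # predict: uncertainty grows by process noise
        k = p_pred / (p_pred + obs_var)        # Kalman gain
        x_est = x_est + k * (z - x_est)        # update with the new observation
        p_est = (1.0 - k) * p_pred
        estimates.append(x_est)
    return np.array(estimates)
```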

25 Pattern Recognition Taxonomy of pattern recognition Algorithm examples: Artificial Neural Nets (ANN) Decision Trees (DT) Gaussian Mixture Models (GMM) Generalized Linear Regression (GLR) Support Vector Machine (SVM) using RBF kernel 25/49

26 Which pattern recognition algorithm to use? Comparisons of different general purpose recognition algorithms often show similar performances Differences in flexibility are equally important Adaptability Amount of training data needed Training convergence Computational costs (training and recognition) More details of the different pattern recognition algorithms: Pattern recognition lecture (Prof. Beyerer) Machine learning lectures (Prof. Dillmann, Prof. Zöllner) Bishop: Pattern Recognition and Machine Learning Duda, Hart, Stork: Pattern Classification 26/49

27 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 27/49

28 Sensor Fusion Features usually have different stochastic distributions A few features can dominate, e.g. with high values and variance Need to be normalized to be comparable Data from different sensors are not received at the same time Different sampling rates Mapping of corresponding features Interpolation of features Unsynchronized recording Use synchronization channel (marker) Timestamps synchronized by Network Time Protocol (NTP) Fusion level Data level: Combine raw data Feature level: Concatenate feature vectors Decision level: Combine recognition results 28/49
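A minimal sketch of feature-level fusion for unsynchronized streams: interpolate each stream onto common timestamps, then stack the values per time step; the helper function and its interface are assumptions for illustration.

```python
import numpy as np

def feature_level_fusion(t_common, streams):
    """Interpolate several (timestamps, values) feature streams onto common
    timestamps and stack them into one fused feature vector per time step."""
    resampled = [np.interp(t_common, t, v) for t, v in streams]
    return np.stack(resampled, axis=1)         # shape: (len(t_common), n_streams)
```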

29 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Output: Recognized user state 29/49

30 Training Data Acquisition To train (statistical) models, supervised recognition algorithms need labeled data Often: There is no data like more data But equally important: High data quality Typically recorded sensor data has noise, artifacts, outliers, incomplete data, Clever design of system setup and sensors can help Normalization and signal processing (e.g. ICA for EEG eye artifacts) Good coverage of all situations in the training data Training situations should be as similar as possible to run-time situations Representative training data Problem: Real-world training data is rare and hard to collect, and acted data is unnatural Emotion Elicitation 30/49

31 Elicitation of User States Approach: Induce cognitive/affective states using user study experiments Example: International Affective Picture System (IAPS) for emotion elicitation + Large data collection (pictures, videos, sounds) + Freely available for research purposes + Evaluated using questionnaires + Simple experimental setup - Unrealistic experimental situation - People are affected differently (ground truth?) - Ambiguities in stimuli Trial timeline (figure): fixation, relaxation; durations 2 sec, 6 sec, 15 sec 31/49

32 Wizard of Oz Paradigm Collect experimental data for building a new system for Human-Machine-Interaction Chicken-and-egg problem: Enrollment of a new system that cannot be built (trained) yet Wizard of Oz experiment for data collection: Users believe they are interacting with an autonomous system A human operator (wizard) controls the system The users are not aware of this Pros: The recorded interactions can be used as training data Gain insights into human behavior in certain interactions Cons: Hard to design Often still differences to real-world scenarios Time-consuming (only possible to collect small amounts of data) 32/49

33 Generating Ground Truth For training (supervised) statistical models, we need to assign ground truth labels to the data A-priori assignment E.g. the setting is designed to create the emotion anger, so label it as such What if we are wrong about the effect of our treatment? Rating by judges Have (naïve or expert) judges watch/listen to the data and rate it Relies on humans as experts Judges cannot peek inside a subject's mind Rating by the subjects Have subjects label the data themselves Yields their true subjective emotions Approach 1: During the experiment, e.g. think-aloud (intrusive) Approach 2: Label afterwards (memory might betray subjects) 33/49

34 More on experiment design and evaluation in our winter-term lecture Design and Evaluation of innovative User Interfaces 34/49

35 Example: Multimodal Emotion Recognition K. Takahashi (2004): Remarks on emotion recognition from bio-potential signals Emotion recognition from multimodal biosignals Discrete emotions: joy, anger, sadness, fear, and relax Modalities: Frontal brain activity using an EEG head band Pulse at earlobe by Photoplethysmography (PPG) Skin conductance at finger tips (EDA) 35/49

36 Emotion recognition from bio-potential signals Data collection: Audio-visual stimuli presented at a PC 10 film clips (evaluated to be emotional) 12 subjects, age: years Feature extraction: Data: EDA, PPG: Time domain signals EEG: Three frequency bands Calculated features for each modality: Six statistical features: mean, standard deviation, means of the absolute values of the first and second differences, Feature fusion, normalized by their standard deviation 36/49
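A sketch of the per-channel statistical features named on this slide. The slide lists four of the six explicitly; the remaining two are assumed here to be the same difference statistics computed on the normalized signal, which is common practice but not confirmed by the lecture.

```python
import numpy as np

def six_statistical_features(x):
    """Mean, standard deviation, and means of the absolute 1st and 2nd differences
    of the raw and (assumed) of the normalized signal."""
    x = np.asarray(x, dtype=float)
    x_norm = (x - x.mean()) / (x.std() + 1e-12)
    d1, d2 = np.diff(x), np.diff(x, n=2)
    d1n, d2n = np.diff(x_norm), np.diff(x_norm, n=2)
    return np.array([x.mean(), x.std(),
                     np.abs(d1).mean(), np.abs(d1n).mean(),
                     np.abs(d2).mean(), np.abs(d2n).mean()])
```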

37 Emotion recognition from bio-potential signals Recognition: Support vector machines for discriminating the 5 emotions Leave-one-out cross-validation Average recognition rate is rather low (41.7%) but far above chance level The multimodal system using all three sensors gives the best accuracies for joy, anger, and fear 37/49
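A minimal sketch of this evaluation scheme (SVM plus leave-one-out cross-validation) on placeholder data; it reproduces the procedure, not the study's data or results.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.randn(60, 18)               # placeholder fused biosignal features
y = np.random.randint(0, 5, size=60)      # placeholder labels for the five emotions

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("average recognition rate: %.1f%%" % (100 * scores.mean()))
```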

38 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 38/49

39 Recognition of Cognitive and Affective States Up to now: Recognition of cognitive and affective states is based on interpretation of low-level biosignal features (pure bottom-up approach) Problems of the presented general architecture What is an adequate system reaction to e.g. a certain emotion of a user? Can we recognize all cognitive/affective states in the described way? Appraisal-based emotions? Multiple user states can co-occur at one point in time The same biosignals can have different meanings in different contexts The same user states can emit different biosignals in different contexts Ignores intentions/goals/tasks/ that might influence the user's perception We need to model influences of additional background knowledge (top-down) Cognitive Modeling 39/49

40 Combination of Empirical and Formal Models Combine the empirical model with knowledge contained in a formal cognitive model Augment recognition with information from the formal cognitive model Augment the formal model with live information from the empirical recognizer Results in a feedback loop (figure): the formal model provides predictions to the empirical model, which returns observations to the formal model A way to integrate validated a-priori knowledge with state-of-the-art empirical models 40/49

41 Integration of Arousal in a Cognitive Architecture Update a formal cognitive model of arousal using recognized arousal from electrodermal activity Arousal describes reactivity to stimuli, e.g. lethargic - calm - engaged - hyperactive Arousal has an impact on memory (e.g. Yerkes-Dodson law) Correlation between electrodermal activity and arousal Adaptation of the (ACT-R) memory model to match the currently recognized arousal state of the person 41/49

42 Acquisition of Physiological Signals: EDA Electrodermal Activity (EDA) = Skin conductance (SC) Mostly dependent on the activity of sweat glands Related to physical activity and physiological arousal Reacts to stimuli with sharp rises of the signal Characteristic response parameters (figure): latency, rise time, amplitude, half-life (50% decay), 37% decay point 42/49

43 Arousal Impact on Memory Experiment by Kleinsmith and Kaplan (1964) Paired-associates task: Present 8 emotional words together with a number Arousal determined by electrodermal activity (EDA) for each word Subjects should recall the paired associations of the 3 words with highest arousal and the 3 words with lowest arousal Recall at 2 min, 20 min, 45 min, 1 day, 1 week Surprising results Low arousal: Recall rate drops High arousal: Initial blocking Words are better recalled after some time The experiment has been criticized because EDA might have measured more than just arousal effects (Mather, 2007) 43/49

44 Activation of memory in the ACT-R model Remember: ACT-R memory consists of chunks Each chunk has a base-level activation The activation equation has no parameter for arousal Resulting activation (d = 0.05) The equation does not model the high-arousal recall curve adequately 44/49
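For reference, the standard ACT-R base-level learning equation referred to here, where $t_j$ is the time since the $j$-th use of chunk $i$ and $d$ is the decay parameter:

```latex
B_i = \ln\left( \sum_{j=1}^{n} t_j^{-d} \right)
```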

45 Adaptation of the Activation Equation to Match the Data Adaptation of the base-level activation equation: Replace the decay parameter d by a linear function of the arousal a at the time of encoding, d_i = d_0 (1 - a_i / a_h), i.e. decay becomes growth once the arousal exceeds the high-arousal threshold a_h An additional term involving s and (a_n - a_h) causes the initial blocking for high arousal levels 45/49
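A minimal numeric sketch of this adaptation as reconstructed above: the fixed decay is replaced by a linear function of the arousal at encoding, turning decay into growth above the high-arousal threshold; parameter names and default values are illustrative, not the lecture's.

```python
import numpy as np

def base_level_activation(ages, arousal, d0=0.5, a_high=1.0):
    """Base-level activation with arousal-modulated decay (illustrative sketch).

    ages: times (s) since each past use of the chunk;
    the decay d0 * (1 - arousal / a_high) becomes negative, i.e. growth,
    when the arousal at encoding exceeds the threshold a_high."""
    d = d0 * (1.0 - arousal / a_high)
    ages = np.asarray(ages, dtype=float)
    return np.log(np.sum(ages ** (-d)))
```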

46 Resulting Activation Resulting activation curves for typical parameter values (figure; parameters: scaling factor, nominal value, high-arousal threshold) 46/49

47 General Architecture for User State Recognition Input: Multimodal digital sensor data Pre-processing Feature extraction Recognition Sensor Fusion Integrating User States in cognitive models 47/49

48 Wrap up Recognition of cognitive and affective states can be involved, but it is important for many human-computer interaction systems General architecture for Empirical Cognitive Modeling Connecting empirical and formal Cognitive Modeling Empirical modeling is an exciting research area We have lots of interesting Bachelor and Master theses 48/49
