Distributed Multisensory Signals Acquisition and Analysis in Dyadic Interactions


Ashish Tawari, Cuong Tran, Anup Doshi, Thorsten Zander (Max Planck Institute for Intelligent Systems, Department Empirical Inference, Tuebingen, Germany), Mohan M. Trivedi

Copyright is held by the author/owner(s). CHI '12, May 5-10, 2012, Austin, Texas, USA. ACM /12/05.

Abstract

Human-machine interaction could be enhanced by providing information about the user's state, allowing for automated adaptation of the system. Such a context-aware system, however, must be able to deal with spontaneous and subtle user behavior. The artificial intelligence behind such systems hence also needs spontaneous behavior data for training as well as evaluation. Although harder to collect and annotate, spontaneous behavior data are preferable to posed data, as they are representative of real-world behavior. Towards this end, we have designed a distributed testbed for multisensory signal acquisition that facilitates spontaneous interactions. We recorded audio-visual as well as physiological signals from 6 pairs of subjects while they played a bluffing dice game against each other. In this paper, we introduce the collected database and provide our preliminary results on bluff detection based on spatio-temporal face image analysis.

Author Keywords

Multimodal database; emotion recognition; deception detection; facial expression analysis.

ACM Classification Keywords

I.5.m [PATTERN RECOGNITION]: Miscellaneous

Introduction

The human-computer interaction paradigm suggests that user interfaces of the future need to perceive subtleties and changes in the user's behavior and to initiate interactions based on this information, rather than simply responding to the user's commands. Future human-centered multimodal HCI will change the ways in which we interact with computer systems. The key component in the design of context-sensitive systems is the ability to recognize and generate social signals and social behaviors, in order to become more effective and more efficient. Among humans, social interactions involve explicit information (i.e., intentionally sent messages) as well as implicit information. Such information might also be relevant for a more intuitive HCI. For example, insight into users' affective state is critical for a good evaluation of their experience when interacting with a system. An estimate of affect is of paramount importance for recreational systems such as games, where the whole point is to have fun and be engaged. Successful games carefully balance frustration, accomplishment, delight, pleasure, etc. to deliver fun. Direct measures of affect that do not interrupt the flow of a game would therefore be extraordinarily useful. Consequently, integrating information on aspects of user state into HCI could lead to a more natural way of interaction between human and machine.

Such information can broadly be divided into three modalities: audio, visual and physiological. Among these, physiological signals are often neglected, since they cannot be sensed at all times. Yet research in psychophysiology has produced strong evidence that a range of somatic and physiological measurements, including pupillary diameter, heart rate, galvanic skin response, temperature, respiration rate, and brain signals such as the electroencephalogram (EEG), correlate strongly, individually or in combination, with affective states such as arousal [1] as well as with cognitive states [11]. Active research in this field, along with the recent advent of non-intrusive sensors and wearable computers that promise less invasive physiological sensing [8], is pushing these technologies out of the lab and into society and onto the market. On the other hand, speech and vision being the primary senses for human expression and perception, significant research effort has been focused on developing intelligent systems with audio and video interfaces [6]. Moreover, the ready availability of non-contact and non-intrusive sensors has encouraged researchers in both academia and industry to pursue the design and development of intelligent systems using the visual and auditory channels.

One of the aims of such intelligent systems is to perform automatic analysis of human behavior. Tawari and Trivedi use the audio modality for affect recognition in real-world environments as well as in controlled studio settings [9]. Doshi and Trivedi [2] propose attention estimation using a Bayesian framework that incorporates vision-based gaze estimation (looking at the driver) and visual saliency maps (looking at the environment), as well as cognitive models of the relationship between gaze and attention. The importance of such intelligent systems in HCI is undisputed. Hence, to understand user state and behavior, HCI should incorporate one or more of the above-mentioned modalities. The first step in automatic analysis of human behavior is the development of a database.
Although harder to collect and annotate, spontaneous behavior data are preferred to posed/acted data, as they are representative of real-world behavior. In recent years, a number of efforts have been made to develop such databases. However, databases

with all three modalities are lacking. One notable effort is that of Soleymani et al. [7], who captured synchronized recordings of audio-visual and physiological signals during spontaneous behavior, with the goal of supporting affect recognition research. The emotions, however, were induced in the participants using affective stimuli. In this paper, we present a novel testbed for investigating the temporal dynamics of multisensory signals in spontaneous HCI settings, the collected database, and a preliminary study using the visual signal. To the best of our knowledge, ours is the first database collected in spontaneous dyadic interactions between two human players that includes all three modalities.

Distributed Multi-modal Multi-sensory Testbed

It is well recognized that interpreting the mix of audio-visual signals is essential in human-human communication. Hence, it seems natural to strive for a multimodal intelligent system with the ability to perceive, analyze and respond to its surroundings in a way that is seamless to humans. There exists a vast literature on multimodal interfaces [3], in part because of their many advantages: they prevent errors, bring robustness to the interface, help the user to correct errors or recover from them more easily, bring more bandwidth to the communication, and add alternative communication methods for different situations and environments. The artificial intelligence behind such interfaces, however, needs spontaneous behavior data for training as well as evaluation, to establish its suitability in real-world environments. Towards this end, we introduce a novel distributed testbed for investigating the Temporal Dynamics of multi-sensory Signals (TDSS) in spontaneous dyadic interactions between humans.

The testbed consists of two separate rooms, each with similar equipment and the capability to synchronously acquire multisensory signals. The players interact through a video-conferencing-like setup in which they can see each other's face and hear each other's voice. Figure 1 shows the various components of the testbed. The testbed captures multimodal data that includes audio-visual and physiological signals. The video feeds cover each player's face and upper body; we used three cameras, as shown in Figure 1, along with eye tracker systems. The audio signal comprises two channels per player, in close (attached to the player's body) and far-field settings. The physiological signals consist of high-density EEG (Brain Products actiCAP) to capture the brain's spontaneous electrical activity, along with electrooculogram (EOG) to record eye movements, electromyogram (EMG) to record the physiological properties of muscles, electrocardiogram (ECG) to record the physiological properties of the heart, and galvanic skin response (GSR). A summary of the database characteristics is given in Table 1.

Investigating Bluffing

With the ambitious goal of mind reading, which plays a key role in realistic interactions, we explore deception behavior in this first study utilizing our TDSS testbed. We adapted a German drinking game called Mäxchen, which includes states dedicated to the player's decision of whether to bluff or to quit [4]. In the experiment, the two players were situated in the two rooms of the distributed testbed. Starting with an account of 20 points, each player rolled two dice, indicating a two-digit number, in alternation. The player whose turn it was needed a higher number than his predecessor; otherwise, he/she had to bluff or to quit. Bluffing, however, was riskier: its detection by the opponent cost 2 points, while quitting cost only 1 point.
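To make the game logic concrete, the following is a minimal sketch of the scoring rules just described; the function and constant names (START_POINTS, apply_outcome, game_over) are ours, not from the paper:

```python
# Minimal sketch of the Mäxchen scoring described above; names are illustrative.

START_POINTS = 20

TRIAL_COSTS = {
    "bluff_caught": 2,      # a detected bluff costs the bluffer 2 points
    "quit": 1,              # quitting costs the quitting player 1 point
    "false_accusation": 2,  # wrongly accusing a bluff costs the accuser 2 points
}

def apply_outcome(points: int, outcome: str) -> int:
    """Return the updated score of the player who loses points this trial."""
    return points - TRIAL_COSTS[outcome]

def game_over(points_a: int, points_b: int) -> bool:
    """A game is won when the opponent has lost all 20 points."""
    return points_a <= 0 or points_b <= 0

# Example: player A bluffs and is caught, then player B wrongly accuses.
a, b = START_POINTS, START_POINTS
a = apply_outcome(a, "bluff_caught")      # A: 18 points
b = apply_outcome(b, "false_accusation")  # B: 18 points
print(a, b, game_over(a, b))              # 18 18 False
```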

Table 1: Multimodal synchronized database content summary

Modalities
  Audio:         2 channels per subject (far and near field); sampling rate 48 kHz; resolution 32 bits/sample
  Video:         3 video feeds (one for the face, two for the upper body); frame rate ... fps; resolution ...; eye gaze (30 Hz)
  Physiological: 64-channel high-density electroencephalogram (EEG), electromyogram (EMG), electrocardiogram (ECG), electrooculogram (EOG) and galvanic skin response (GSR); sampling rate 500 Hz

Participants and Sessions
  No. of participants: 12 (2 per session)
  Session length: 4 hours per session

Figure 1: Distributed testbed for multisensory signal acquisition in a social interactive setting.
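The streams in Table 1 run at very different rates (48 kHz audio, 500 Hz physiology, 30 Hz gaze), so any joint analysis first has to bring them onto a common timeline. The paper does not describe its synchronization implementation; the following is a minimal nearest-timestamp alignment sketch in NumPy, with illustrative variable names:

```python
import numpy as np

def align_to_reference(ref_ts, stream_ts, stream_vals):
    """Map each reference timestamp to the nearest sample of another stream.

    ref_ts      -- (N,) sorted timestamps of the reference stream (e.g. video frames)
    stream_ts   -- (M,) sorted timestamps of the stream to align (e.g. 500 Hz EEG)
    stream_vals -- (M, C) samples of that stream
    Returns an (N, C) array with one sample per reference timestamp.
    """
    idx = np.searchsorted(stream_ts, ref_ts)       # insertion points
    idx = np.clip(idx, 1, len(stream_ts) - 1)
    left, right = stream_ts[idx - 1], stream_ts[idx]
    idx -= (ref_ts - left) < (right - ref_ts)      # pick the nearer neighbour
    return stream_vals[idx]

# Example: attach the nearest GSR sample to every video frame timestamp.
video_ts = np.arange(0, 10, 1 / 30.0)              # 10 s of 30 Hz video timestamps
gsr_ts = np.arange(0, 10, 1 / 500.0)               # 500 Hz physiological clock
gsr = np.random.randn(len(gsr_ts), 1)              # placeholder GSR samples
gsr_per_frame = align_to_reference(video_ts, gsr_ts, gsr)
```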

Bluffing game

A typical trial had the following timeline: player A pressed a button to roll his dice, and the result appeared on screen 2 seconds later. He had to wait for an auditory go-signal before announcing his true or alleged number, or quitting. Responses were given verbally, synchronously with a button press to cue the time of the response. After player A's response, player B had to decide whether to accuse A of having bluffed, or to accept the number and roll the dice in his turn. If a bluff was wrongly accused, the accuser lost 2 points. A game was won when the opponent had lost all 20 points. Each pair of subjects played eight games. Twelve paid subjects were invited for the study; for motivation, subjects received an extra monetary bonus for each game won.

Facial dynamics in bluffing

In this preliminary study, our goal is to classify trials into the categories Bluff, Truth or Quit based on visual signals. In a typical experiment, one player rolled the dice 250 times; of these, 140 were truths (non-bluffs), 50 were quits and 60 were bluffs.

Facial feature extraction and classification

For face analysis, we compute spatio-temporal features. Figure 2 shows an overview of the system. From each video sequence, we extract three types of features: geometric (facial landmarks), appearance-based (Gabor filter responses) and statistical (histograms over the sequence of frames). From each frame, we automatically extract 66 facial landmarks using deformable model fitting [5] and align the face such that the centers of the eyes are roughly 50 pixels apart and horizontally aligned. Table 2 lists the features extracted from the aligned face.

Table 2: Features extracted from each video sequence for the bluffing classification experiment

  Geometric (per frame):      2-D image coordinates of 49 facial landmark locations corresponding to the eyebrow, eye, nose and mouth regions, and their first derivatives over time
  Appearance (per frame):     Gabor filter (8 orientations and 5 spatial frequencies) magnitude responses at the 49 facial landmark locations, and their first derivatives over time
  Statistical (per sequence): 5-bin histograms of the per-frame features, computed over the whole sequence

The calculated features are then used for the classification tasks. We use the discriminative relevance vector machine (RVM) classifier, based on sparse Bayesian learning and developed by Michael Tipping [10]. The algorithm is a Bayesian counterpart to the popular support vector machine; it trains a classifier that maps a given feature vector to a class-membership probability, which can then be thresholded to obtain true-positive and false-positive rates for the classification tasks.

Figure 2: Facial feature extraction and training
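To make the per-frame features of Table 2 concrete, here is a rough sketch of the geometric and appearance feature computation using OpenCV's Gabor kernels. Landmark detection itself is assumed done (e.g., by the deformable model fitting of [5]), and the filter parameters (kernel size, sigma, wavelengths) are illustrative, not taken from the paper:

```python
import cv2
import numpy as np

def gabor_bank(n_orient=8, n_freq=5, ksize=21):
    """Build an 8-orientation x 5-frequency Gabor bank (parameters illustrative)."""
    kernels = []
    for f in range(n_freq):
        lambd = 4.0 * (2 ** f)                  # wavelength per frequency band
        for o in range(n_orient):
            theta = np.pi * o / n_orient
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=4.0,
                                              theta=theta, lambd=lambd,
                                              gamma=0.5, psi=0))
    return kernels

def frame_features(gray_face, landmarks, kernels):
    """Geometric + appearance features for one aligned grayscale face frame.

    landmarks -- (49, 2) array of (x, y) landmark coordinates
    """
    geometric = landmarks.flatten()             # 2-D landmark coordinates
    # Real-kernel response magnitude; a full complex Gabor magnitude would
    # combine the psi=0 and psi=pi/2 responses.
    responses = [np.abs(cv2.filter2D(gray_face.astype(np.float32), -1, k))
                 for k in kernels]
    xs, ys = landmarks[:, 0].astype(int), landmarks[:, 1].astype(int)
    appearance = np.concatenate([r[ys, xs] for r in responses])
    return np.concatenate([geometric, appearance])

def sequence_features(per_frame, n_bins=5):
    """5-bin histogram of every feature dimension over the whole sequence,
    after stacking per-frame vectors with their first temporal derivatives."""
    F = np.vstack(per_frame)                    # (T, D) per-frame features
    F = np.hstack([F[1:], np.diff(F, axis=0)])  # values + first derivatives
    hists = [np.histogram(F[:, d], bins=n_bins)[0] for d in range(F.shape[1])]
    return np.concatenate(hists)
```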
Results

In this study, we investigate two classification tasks, Bluff vs. Truth and Bluff vs. Quit, using video data from one session. The video data correspond to the time at which the player makes a decision (truth, bluff or quit); the player presses a button before announcing his/her decision. We explored different durations around this button press, and the best performance, obtained for a window from 0.5 sec before to ... sec after the button press, is reported in this paper. We also compare two configurations: full face and left/right half face. Using a 5-fold cross-validation approach, we generated average Receiver Operating Characteristic (ROC) curves.

Figure 3 shows the ROC for the Bluff vs. Truth classification. The area under the curve (AUC) is 69% for the full-face condition and 58% for the half-face condition. Figure 4 shows the ROC for the Bluff vs. Quit classification; here the AUC is 79% for the full face and 75% for the left half face. An AUC of 50% indicates chance-level performance; clearly, the full-face performance is well above chance for both tasks. The higher AUC for the Bluff vs. Quit classification can also be attributed in part to the fact that, when quitting, a player does not speak any number.
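A minimal sketch of the 5-fold cross-validated ROC/AUC evaluation described above. Scikit-learn has no built-in RVM, so an SVM with probability outputs stands in for the paper's relevance vector machine; `X` and `y` denote the per-sequence feature vectors and binary labels, and the synthetic data below only mirrors the reported class sizes:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, auc

def cross_validated_roc(X, y, n_splits=5):
    """Average ROC over stratified folds, interpolated on a common FPR grid."""
    grid = np.linspace(0, 1, 100)
    tprs, aucs = [], []
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in folds.split(X, y):
        clf = SVC(probability=True).fit(X[train], y[train])  # RVM stand-in
        scores = clf.predict_proba(X[test])[:, 1]            # class probability
        fpr, tpr, _ = roc_curve(y[test], scores)
        tprs.append(np.interp(grid, fpr, tpr))
        aucs.append(auc(fpr, tpr))
    return grid, np.mean(tprs, axis=0), np.mean(aucs)

# Synthetic usage with the reported class sizes (140 truths, 60 bluffs):
X = np.random.randn(200, 32)
y = np.array([0] * 140 + [1] * 60)
fpr_grid, mean_tpr, mean_auc = cross_validated_roc(X, y)
print(f"mean AUC over 5 folds: {mean_auc:.2f}")
```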

Figure 3: ROC curves for the Truth vs. Bluff classification task. The red curve shows performance using the full face; the blue curve shows performance using the left half face.

Figure 4: ROC curves for the Bluff vs. Quit classification task. The red curve shows performance using the full face; the blue curve shows performance using the left half face.

It is also important to note the drop in AUC when using the half face, especially for the Bluff vs. Truth classification. This can be attributed to the fact that microexpressions during bluffing are subtle, and bluff detection benefits from the full face, which can also capture asymmetries in facial geometry.

Concluding remarks

In this paper, we presented a novel testbed capable of acquiring distributed multisensory data in a time-synchronized fashion while facilitating communication in dyadic interactions. In a preliminary study of bluff detection, our analysis suggests that facial dynamics do provide useful information. While the analysis is based on one subject's data and more subjects must be included before drawing firm conclusions, the reported findings are encouraging. Our focus in this paper, however, is to introduce the testbed and dataset, which we believe is the unique contribution. The availability of a natural, spontaneous multimodal database of human behavior will certainly benefit studies of user understanding, which in turn is essential for effective HCI. Our continuing, longer-term goal is to study multimodal systems; towards this end, we will analyze the other available modalities, namely the physiological and audio signals.

References

[1] J. Cacioppo, G. G. Berntson, J. T. Larsen, K. M. Poehlmann, and T. A. Ito. The psychophysiology of emotion. In M. Lewis and J. Haviland-Jones, editors, Handbook of Emotions. Guilford Press, 2000.

[2] A. Doshi and M. Trivedi. Attention estimation by simultaneous observation of viewer and view. In IEEE Conf. on Computer Vision and Pattern Recognition Workshops, pages 21-27, 2010.

[3] A. Jaimes and N. Sebe. Multimodal human-computer interaction: A survey. Computer Vision and Image Understanding, 108(1-2), 2007.

[4] J. Reissland and T. O. Zander. Automated detection of bluffing in a game: revealing a complex covert user state with a passive BCI. In Proc. of the Human Factors and Ergonomics Society Europe Chapter.

[5] J. Saragih, S. Lucey, and J. Cohn. Face alignment through subspace constrained mean-shifts. In Int. Conf. on Computer Vision, 2009.

[6] S. Shivappa, M. M. Trivedi, and B. Rao. Audio-visual information fusion in human computer interfaces and intelligent environments: A survey. Proceedings of the IEEE, 98(10), October 2010.

[7] M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic. A multimodal affective database for affect recognition and implicit tagging. IEEE Transactions on Affective Computing, 2012.

[8] T. Starner. The challenges of wearable computing: Part 1. IEEE Micro, 21(4):44-52, July/August 2001.

[9] A. Tawari and M. M. Trivedi. Speech emotion analysis in noisy real-world environment. In Int. Conf. on Pattern Recognition, 2010.

[10] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res., 1:211-244, 2001.

[11] T. O. Zander and C. Kothe. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general. Journal of Neural Engineering, 8(2), 2011.
