EMOTION RECOGNITION FROM USERS' EEG SIGNALS WITH THE HELP OF STIMULUS VIDEOS

Yachen Zhu, Shangfei Wang
School of Computer Science and Technology
University of Science and Technology of China, 230027, Hefei, Anhui

Qiang Ji
Department of Electrical, Computer, and Systems Engineering
Rensselaer Polytechnic Institute, Troy, NY

Shangfei Wang is the corresponding author. This work has been supported by the National 863 Program (2008AA01Z122), the National Science Foundation of China, a project from the Anhui Science and Technology Agency (116c858) and the Fundamental Research Funds for the Central Universities.

ABSTRACT

In this paper, we propose a novel approach to recognize users' emotions from electroencephalogram (EEG) signals by using the stimulus videos as privileged information, which is only available during training. First, five frequency features are extracted from each channel of the EEG signals, and several audio/visual features are extracted from the video stimuli. Second, features are selected by statistical analyses. Then, a new EEG feature space is constructed using Canonical Correlation Analysis with the help of the video content. Finally, a support vector machine is adopted as the classifier on the constructed EEG feature space. Experimental results on two benchmark databases demonstrate that video content, as the context, can improve emotion recognition performance when employed as privileged information.

Index Terms: emotion recognition, videos, EEG, privileged information, CCA

1. INTRODUCTION

The ability to understand human emotions is desirable for human-computer interaction. Users' emotions can be detected from voice, visual behavior and physiological signals. To the best of our knowledge, one of the earliest studies on emotion recognition was conducted by A. J. Fridlund et al. [1], who applied pattern recognition to classify emotions from physiological features. Since then, thousands of papers have been published. Early research mainly recognized deliberately displayed and exaggerated emotions from a single modality. Recently, an increasing number of studies recognize natural and spontaneous emotions from multiple modalities. Detailed surveys of emotion recognition can be found in [2][3][4].

Although present research recognizes users' emotions from multiple modalities, including face, voice and physiological signals, and has achieved great progress, many challenges remain [3]. One of them is context, such as the environment, the observed subject and the current task. Apart from the users' spontaneous responses, the context in which the emotional behavior is displayed is very important for emotion recognition [3]. Specifically, for analyzing the emotions induced by videos from EEG signals, the stimulus video can provide important assistance in building a video-based context model for emotion recognition. Therefore, in this paper, we propose a new emotion recognition approach that classifies emotions from EEG signals with the help of the stimulus videos.

The user's emotional response, such as the EEG, is available during both training and testing; we call it the available information. The context, such as the stimulus, is often available during training but not during testing. This kind of information is called privileged information [5], which may be exploited to find a better feature space or to construct a better classifier for the available information, and thereby improve the recognition performance obtained with the available information alone. In this paper, we recognize emotions from electroencephalograph (EEG) signals, which are collected while subjects watch emotion-inducing videos.
The stimulus videos are regarded as the privileged information. First, five frequency features are extracted from each channel of the EEG signals, and several audio/visual features are extracted from the video stimuli. Second, statistical analyses are conducted to explore the relations between the emotional tags and the EEG/video features. Third, a new EEG feature space is constructed with the help of the video content using Canonical Correlation Analysis (CCA). Finally, a Support Vector Machine (SVM) is trained on the new EEG feature space. Experimental results on two benchmark databases demonstrate that our approach outperforms methods that merely use EEG features, and even several fusion methods from previous work, in both the valence and arousal spaces. This shows that the model can benefit from the physical-physiological related space constructed by CCA.

The outline of this paper is as follows. Section 2 proposes the framework of our emotion recognition approach and introduces the method used in each step. Section 3 presents the experimental results on the dataset from [6] and on MAHNOB-HCI, together with analyses and a comparison with current work. Section 4 gives the conclusion.

2. METHODOLOGY

Fig. 1 shows the framework of our emotion recognition approach. First, EEG and video features are extracted. Then, statistical analyses for hypothesis testing are conducted to check whether there is a significant difference in each feature between the two groups of emotions. After this step, the relations between the EEG signals and the video content are exploited using CCA. The EEG features are then transformed according to this relationship and used for training classifiers. The video features are privileged information used to optimize the EEG features during training.

Fig. 1. Framework of recognizing emotions with privileged information by exploiting the relations between the physical and physiological spaces.

2.1. Feature extraction

2.1.1. EEG features

First, noise mitigation is carried out. HEOG and VEOG are removed, and a bandpass filter with a lower cutoff frequency of 0.3 Hz and a higher cutoff frequency of 45 Hz is used to remove DC drifts and suppress the 50 Hz power-line interference [7, 8]. Then the power spectrum (PS) is calculated and divided into five segments [9]: the delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz) and gamma (30-45 Hz) frequency bands. The ratio of the power in each frequency band to the overall power is extracted as the feature.
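To make this step concrete, the sketch below computes the five band-power-ratio features per channel. It is not from the paper: the 250 Hz sampling rate, the filter order and the window length are assumptions, and the function name is illustrative.

```python
# Sketch of the EEG band-power features (Sec. 2.1.1), assuming a
# channels x samples array sampled at 250 Hz (hypothetical rate).
import numpy as np
from scipy.signal import butter, filtfilt, welch

BANDS = {"delta": (0.3, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_ratios(eeg, fs=250.0):
    # Band-pass 0.3-45 Hz to remove DC drift and 50 Hz line noise.
    b, a = butter(4, [0.3 / (fs / 2), 45.0 / (fs / 2)], btype="bandpass")
    filtered = filtfilt(b, a, eeg, axis=-1)
    # Welch power spectrum with a 2 s window.
    freqs, psd = welch(filtered, fs=fs, nperseg=int(fs * 2), axis=-1)
    total = psd.sum(axis=-1, keepdims=True)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        # Ratio of band power to overall power, one value per channel.
        feats.append(psd[..., mask].sum(axis=-1, keepdims=True) / total)
    return np.concatenate(feats, axis=-1)  # channels x 5
```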
2.1.2. Visual-audio features

We extract both visual and audio features from the videos. For the visual features, lighting, color and motion are powerful tools for establishing the mood of a scene and affecting the emotions of the viewer, according to cinematography and psychology. Thus, three features, named lighting key, color energy and visual excitement, are extracted from the video clips; their details can be found in [10]. For the audio features, thirty-one commonly used audio features, including average energy, average energy intensity, spectrum flux, Zero Crossing Rate (ZCR), the standard deviation of ZCR, 12 Mel-frequency Cepstral Coefficients (MFCCs), the log energy of the MFCCs, and the standard deviations of the above 13 MFCCs [11], are extracted from each video using PRAAT (V5.2.35) [12].

2.2. Relations between EEG / video features and users' emotions

We explore the relations between the EEG / video features and users' emotions with the approach in [6]. After feature extraction, we conduct statistical analyses for hypothesis testing to determine whether there is a significant difference in each feature between the two groups of emotional tags. The null hypothesis H0 is that the median difference between positive and negative valence (or high and low arousal) for a feature is zero. The alternative hypothesis H1 is that this median difference is not zero. We reject the null hypothesis when the P-value is less than the significance level. The procedure is as follows. First, a normality test is performed on each feature. If the feature is not normally distributed, the Kolmogorov-Smirnov test is used. Otherwise, the homogeneity of variance of the feature is tested: if the variance is homogeneous, a t-test with homogeneity of variance is performed; otherwise, a t-test with inhomogeneity of variance is performed. In our study, the P-value threshold is set to 0.05 to capture more features that could represent the nature of the EEG signals or video contents. This procedure also serves as feature selection.
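A minimal sketch of this selection procedure follows. The concrete tests (Shapiro-Wilk for normality, Levene's test for homogeneity of variance) and all names are our own illustrative choices; the paper does not specify which normality test it uses.

```python
# Per-feature significance test used as feature selection (Sec. 2.2):
# normality check, then KS test or t-test at the 0.05 level.
import numpy as np
from scipy import stats

def select_features(X, y, alpha=0.05):
    keep = []
    for j in range(X.shape[1]):
        pos, neg = X[y == 1, j], X[y == 0, j]
        normal = (stats.shapiro(pos).pvalue > alpha and
                  stats.shapiro(neg).pvalue > alpha)
        if not normal:
            p = stats.ks_2samp(pos, neg).pvalue
        else:
            # Levene's test decides which flavor of t-test to run.
            equal_var = stats.levene(pos, neg).pvalue > alpha
            p = stats.ttest_ind(pos, neg, equal_var=equal_var).pvalue
        if p < alpha:
            keep.append(j)
    return np.asarray(keep)  # indices of significant features
```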

2.3. Constructing a new EEG feature space

After feature selection, we construct a new EEG feature space with the help of the video content using CCA. The main idea of CCA is to assess the relationship between multiple metric independent variables and multiple dependent measures. In our approach, denote the original training features of video and EEG by V and E. The goal of CCA is to find projection vectors a and b and use them to construct the canonical components F and G:

F = V a,  (1)
G = E b,  (2)

where F and G are the corresponding canonical components with the highest Pearson correlation coefficient; G is the representation actually obtained for training. Here we use the approach described in [13]. The relationship between V, E, a and b is:

(V^T V)^{-1} V^T E (E^T E)^{-1} E^T V a = β^2 a,  (3)
(E^T E)^{-1} E^T V (V^T V)^{-1} V^T E b = β^2 b,  (4)

where β = a^T V^T E b. Thus, β^2 can be solved as the largest eigenvalue of (V^T V)^{-1} V^T E (E^T E)^{-1} E^T V, and a is the corresponding eigenvector. In the same way, b is the leading eigenvector of (E^T E)^{-1} E^T V (V^T V)^{-1} V^T E. With this calculation, the relationship between E and V is encoded in b, which can be used for training and testing. Moreover, the relationship between the two spaces can also be described as:

F = (1/β) V (V^T V)^{-1} V^T G,  (5)
G = (1/β) E (E^T E)^{-1} E^T F.  (6)

After exploiting this relation, we adopt it in the training and testing phases. In the training phase, G is employed for training the model. In the testing phase, let E_t be the testing EEG features; since the video features have been used only as privileged information, we transfer E_t into the video-related space E_t' by

E_t' = E_t b.  (7)
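The following sketch shows how b could be obtained from Eq. (4) with plain NumPy. The ridge term and the function name are our own additions (the former for numerical stability), not part of the paper.

```python
# CCA projection for the EEG features (Eqs. 1-7): b is the leading
# eigenvector of (E^T E)^-1 E^T V (V^T V)^-1 V^T E.
import numpy as np

def cca_eeg_projection(V, E, reg=1e-6):
    Svv = V.T @ V + reg * np.eye(V.shape[1])  # V^T V (regularized)
    See = E.T @ E + reg * np.eye(E.shape[1])  # E^T E (regularized)
    Sve = V.T @ E                             # V^T E
    # M = (E^T E)^-1 E^T V (V^T V)^-1 V^T E, as in Eq. (4).
    M = np.linalg.solve(See, Sve.T) @ np.linalg.solve(Svv, Sve)
    w, vecs = np.linalg.eig(M)
    b = np.real(vecs[:, np.argmax(np.real(w))])
    return b

# Training features: G = E @ b (Eq. 2).
# Testing features:  E_t' = E_t @ b (Eq. 7); no video features needed.
```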
2.4. Classifier and emotion recognition

An SVM classifier is employed to recognize users' emotions from the EEG signals. During training, both the EEG signals and the video stimuli are available; therefore, we obtain the EEG feature space using G of Eq. (2). During testing, only the EEG signals are available, so we obtain the EEG feature space using E_t' of Eq. (7).

3. EXPERIMENTS

3.1. Experimental conditions

To validate the performance of our approach, we conducted experiments on two benchmark databases: one collected by Wang et al. [6], the other MAHNOB-HCI [14].

Wang et al. collected users' 197 EEG responses to 92 video stimuli. Users' emotional self-assessments are five-scale evaluations (i.e., -2, -1, 0, 1, 2) for both valence and arousal. In our work, we divide them into two groups based on whether they are higher than zero or not. In the arousal space, 149 EEG recordings are high and 48 are low; 70 videos are high and 22 are low. In the valence space, 77 EEG recordings are positive and 120 are negative; 30 video clips are positive and 62 are negative.

MAHNOB-HCI is a multi-modal database for emotion recognition and implicit tagging. It includes the physiological signals of 27 participants in response to 20 videos. Subjects' emotional self-assessments are nine-scale evaluations, from 1 to 9, for both valence and arousal. In our work, we define the ratings as positive or high if they are larger than 5; otherwise, we define them as negative or low. Thus, we get 533 EEG segments corresponding to the 20 stimulus videos. For valence, there are 289 positive and 244 negative EEG segments, as well as 7 positive and 13 negative videos. For arousal, there are 268 high and 265 low EEG segments, as well as 10 high and 10 low video stimuli.

To validate the effectiveness of our proposed emotion recognition approach, we conduct three experiments: emotion recognition from EEG signals only, emotion recognition from EEG signals after Principal Component Analysis (PCA), and emotion recognition from EEG signals with the help of the video content. The first experiment recognizes emotions from the selected EEG features as described in Section 2.1.1; the second recognizes emotions from the EEG features transformed by PCA from the selected EEG features; the third recognizes emotions in the new EEG feature space, which is built with the help of the video content using CCA.

Furthermore, to test the generalization ability of our approach, leave-one-video-out cross-validation is performed to simulate the EEG response to an unknown video. In addition, in the training procedure of each fold, a model selection procedure is carried out to tune the best candidate hyper-parameters of the model.
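A sketch of this evaluation protocol with scikit-learn is given below, assuming each EEG segment is tagged with the id of its stimulus video. The RBF kernel and the fixed C are placeholders for the per-fold model selection described above; all names are illustrative.

```python
# Leave-one-video-out cross-validation (Sec. 3.1) over CCA-projected
# EEG features, grouping folds by stimulus-video id.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.metrics import f1_score

def loo_video_eval(G_feats, labels, video_ids):
    preds = np.empty_like(labels)
    splitter = LeaveOneGroupOut()
    for tr, te in splitter.split(G_feats, labels, groups=video_ids):
        clf = SVC(kernel="rbf", C=1.0)  # hyper-parameter tuning omitted
        clf.fit(G_feats[tr], labels[tr])
        preds[te] = clf.predict(G_feats[te])
    accuracy = (preds == labels).mean()
    return accuracy, f1_score(labels, preds)
```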

3.2. Experimental results and analyses

3.2.1. Experimental results for feature selection

Fig. 2 shows the frequency distribution of the selected EEG features in both datasets. Fig. 2(a) and Fig. 2(c) show the selected features in the valence space. We find that the selected features occur mainly in the frontal area, which indicates that the frontal area is highly correlated with human emotions. Fig. 2(b) and Fig. 2(d) illustrate the areas that are active for the arousal response; the features gather mainly in the occipital area, which means this area is highly relevant to the excitement of human emotions [15].

Fig. 2. (a) The distribution of EEG features with significant differences on valence in [6]. (b) The distribution of EEG features with significant differences on arousal in [6]. (c) The distribution of EEG features with significant differences on valence in MAHNOB-HCI. (d) The distribution of EEG features with significant differences on arousal in MAHNOB-HCI.

For the video features, we also calculated the mean selected frequencies over all folds of cross-validation. Fig. 3 shows the selected frequencies of the video features in both datasets. Fig. 3(a) and Fig. 3(c) show the selected frequencies in the valence space. For the visual features, even though only three of them are extracted, two of them are selected, which suggests that visual stimuli strongly influence emotions on valence [16]. For the audio features, the MFCCs are discriminating features [16], since a majority of them have high selected frequencies. In the arousal space, Fig. 3(b) and Fig. 3(d) illustrate the features that are highly related to the assessment. The selected frequencies of the visual features are lower than those of most audio features, which further confirms that the MFCC features are important for emotion recognition. Moreover, none of the visual features is selected in Fig. 3(b); one possible reason is that there are only three visual features, a much smaller number than the 31 audio features.

Fig. 3. (a) The selected frequencies of video features with significant differences on valence in [6]. (b) The selected frequencies of video features with significant differences on arousal in [6]. (c) The selected frequencies of video features with significant differences on valence in MAHNOB-HCI. (d) The selected frequencies of video features with significant differences on arousal in MAHNOB-HCI.

3.2.2. Valence recognition results

The emotional valence recognition results on the two databases are shown in Table 1 and Table 2.

Table 1. Emotion recognition results on the data of [6] in the valence space.

  Method                                  Accuracy
  EEG                                     -
  EEG + PCA                               -
  Video as privileged information         76.65%
  Independent feature-level fusion [6]    78.68%
  Decision-level fusion [6]               77.66%
  Dependent feature-level fusion [6]      76.65%

Table 2. Emotion recognition results on MAHNOB-HCI in the valence space.

  Method                             Accuracy
  EEG                                55.72%
  EEG + PCA                          54.03%
  Video as privileged information    58.16%

From these two tables, we find the following. First, the method that obtains the video as privileged information outperforms those merely using EEG features, i.e., with and without PCA. Second, the single-modality method using PCA shows a negative influence on emotion recognition. On the data of [6], when the PCA-processed EEG features are used for training, all three metrics, i.e., accuracy, F1-score and average precision, decrease slightly, by 2.53%, 0.0488 and 0.031, respectively. However, when the video features are employed as privileged information through CCA, these metrics increase by 3.05%, 0.0764 and 0.0584, respectively. Table 2 shows the recognition results on MAHNOB-HCI: compared with the EEG-only method, when the video features are obtained as privileged information, the metrics increase by 2.46%, 0.0492 and 0.0268, while, as on the data of [6], employing PCA before training the SVM reduces them slightly, by 1.69%, 0.0144 and 0.0143.

The improvement from using privileged information indicates the calibration gained by merging video features through CCA, i.e., the video features are beneficial for recognizing emotions from the physiological response in the valence space. Moreover, the reduced performance with PCA might be caused by a loss of information, since the unfaithful features have already been partly removed during feature selection. These two phenomena show the validity of obtaining video features as privileged information through CCA in the valence space.

3.2.3. Arousal recognition results

The emotion recognition results on arousal are shown in Table 3 and Table 4.

Table 3. Emotion recognition results on the data of [6] in the arousal space.

  Method                                  Accuracy
  EEG                                     -
  EEG + PCA                               -
  Video as privileged information         -
  Independent feature-level fusion [6]    -
  Decision-level fusion [6]               -
  Dependent feature-level fusion [6]      -

Table 4. Emotion recognition results on MAHNOB-HCI in the arousal space.

  Method                             Accuracy
  EEG                                60.23%
  EEG + PCA                          55.53%
  Video as privileged information    61.35%

From these two tables, similar phenomena can be observed. First, obtaining the video information as privileged information improves performance compared with merely using the single modality. Second, the single-modality method using PCA again shows a negative influence on emotion recognition.

The recognition results on the collected data in the arousal space are shown in Table 3. In the EEG-only setting, when the PCA-processed EEG features are used for training, the F1-score and average precision decrease by 0.0563 and 0.0282. When the video features are employed as privileged information through CCA, even though the accuracy is reduced by 5.7%, the F1-score and average precision increase by 0.0339 and 0.0159; this is because in both of the EEG-only results a majority of the low-arousal samples are misclassified. Table 4 presents the recognition results on MAHNOB-HCI. Compared with the EEG-only method, when the video features are obtained as privileged information, the accuracy increases slightly, by 0.75%, and the F1-score and average precision increase by 0.0531 and 0.0119. As on Wang et al.'s data, when PCA is employed before training the SVM, these values are reduced by 4.70%, 0.0397 and 0.0467.

The improvement from using privileged information again indicates that the video features are beneficial for recognizing emotions from the physiological response in the arousal space. From the recognition results on valence and arousal, it is clear that obtaining video features as privileged information through CCA performs better than merely using EEG features, which in turn performs better than applying PCA after the feature selection phase.

3.3. Comparison with related works

One related work uses the database collected by Wang et al. [6], and another uses the MAHNOB-HCI database [17]. Since the feature extraction and selection methods used in this work differ from those in [17] but are the same as those in [6], we only compare our work with [6], as shown in Table 1 and Table 3.

3.3.1. Valence

From Table 1, we find that, compared with the three fusion methods proposed in [6], our method performs slightly better than the dependent feature-level fusion, which also builds a relationship between the EEG and video features: we obtain the same accuracy, a 0.0082 higher F1-score and a 0.0047 higher average precision. Even though the increase is not distinct, considering that only EEG features are used for recognition, our method utilizes less information at test time than the fusion methods and still reaches similar performance.

3.3.2. Arousal

Compared with the results of the three fusion methods in Table 3, our method performs better than the decision-level fusion and the dependent feature-level fusion, with 0.014/0.0828 higher F1-scores and 0.051/0.0344 higher average precision, respectively. The decision-level fusion and the dependent feature-level fusion proposed in [6] use the EEG signals and the video content during both training and testing, while our method uses the video content only in training.

It further demonstrates that our proposed method can recognize emotions effectively by employing the stimulus videos during training only.

4. CONCLUSION

In this paper, we proposed a novel emotion recognition approach with privileged information by exploiting the relations between EEG signals and stimulus videos. Specifically, CCA is used to construct the relationship between the EEG and video features, which helps calibrate the features by mapping them into a highly context-aware space. The experimental results on two benchmark databases verify the effectiveness of our approach. With the growing availability of built-in sensors, we believe emotion recognition with the help of users' spontaneous responses will attract more and more attention. Thus, in the future, we will expand this study to other modalities, such as users' expressions and gestures, to broaden the range of applications. Besides, in this work we only build relationships between EEG and video features implicitly through the SVM; we shall discover and employ such relations explicitly to improve EEG classification in the future.

References

[1] A. J. Fridlund and Carroll E. Izard, Electromyographic studies of facial expressions of emotions and patterns of emotions, in Social Psychophysiology: A Sourcebook, pp. 243-286, 1983.

[2] Moataz El Ayadi, Mohamed S. Kamel, and Fakhri Karray, Survey on speech emotion recognition: Features, classification schemes, and databases, Pattern Recognition, vol. 44, no. 3, pp. 572-587, 2011.

[3] Zhihong Zeng, Maja Pantic, Glenn I. Roisman, and Thomas S. Huang, A survey of affect recognition methods: Audio, visual, and spontaneous expressions, Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 31, no. 1, pp. 39-58, 2009.

[4] Hatice Gunes, Björn Schuller, Maja Pantic, and Roddy Cowie, Emotion representation, analysis and synthesis in continuous space: A survey, in Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on. IEEE, 2011, pp. 827-834.

[5] Vladimir Vapnik and Akshay Vashist, A new learning paradigm: Learning using privileged information, Neural Networks, vol. 22, no. 5, pp. 544-557, 2009.

[6] Shangfei Wang, Yachen Zhu, Guobing Wu, and Qiang Ji, Hybrid video emotional tagging using users' EEG and video content, Multimedia Tools and Applications, pp. 1-27, 2013.

[7] Sander Koelstra, Christian Mühl, and Ioannis Patras, EEG analysis for implicit tagging of video data, in Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009. 3rd International Conference on. IEEE, 2009.

[8] Sander Koelstra, Ashkan Yazdani, Mohammad Soleymani, Christian Mühl, Jong-Seok Lee, Anton Nijholt, Thierry Pun, Touradj Ebrahimi, and Ioannis Patras, Single trial classification of EEG and peripheral physiological signals for recognition of emotions induced by music videos, Brain Informatics, pp. 89-100, 2010.

[9] Zhong Ji and Shuren Qin, Detection of EEG basic rhythm feature by using band relative intensity ratio (BRIR), in Acoustics, Speech, and Signal Processing, 2003. Proceedings (ICASSP '03). 2003 IEEE International Conference on. IEEE, 2003, vol. 6, pp. VI-429.

[10] Hee Lin Wang and Loong Fah Cheong, Affective understanding in film, Circuits and Systems for Video Technology, IEEE Transactions on, vol. 16, no. 6, pp. 689-704, 2006.

[11] Sirko Molau, Michael Pitz, Ralf Schlüter, and Hermann Ney, Computing mel-frequency cepstral coefficients on the power spectrum, in Acoustics, Speech, and Signal Processing, 2001. Proceedings (ICASSP '01). 2001 IEEE International Conference on, 2001, vol. 1, pp. 73-76.
[12] Paul Boersma, Praat, a system for doing phonetics by computer, Glot International, vol. 5, no. 9/10, pp. 341-345, 2002.

[13] David Weenink, Canonical correlation analysis, in IFA Proceedings, 2003, vol. 25.

[14] M. Soleymani, J. Lichtenauer, T. Pun, and M. Pantic, A multimodal database for affect recognition and implicit tagging, Affective Computing, IEEE Transactions on, vol. 3, no. 1, pp. 42-55, 2012.

[15] Pierre Krolak-Salmon, Marie-Anne Hénaff, Alain Vighetto, Olivier Bertrand, and François Mauguière, Early amygdala reaction to fear spreading in occipital, temporal, and frontal cortex: a depth electrode ERP study in human, Neuron, vol. 42, no. 4, pp. 665-676, 2004.

[16] Min Xu, Jesse S. Jin, Suhuai Luo, and Lingyu Duan, Hierarchical movie affective content analysis based on arousal and valence features, in Proceedings of the 16th ACM International Conference on Multimedia. ACM, 2008, pp. 677-680.

[17] Sander Koelstra and Ioannis Patras, Fusion of facial expressions and EEG for implicit affective tagging, Image and Vision Computing, 2012.
