Musicolor: is there a link among mood, color and music?


Andressa Eloisa Valengo, Universidade Federal do Paraná, Departamento de Informática, aev11@inf.ufpr.br
Francine Machado Resende, Universidade Federal do Paraná, Departamento de Informática, fmr12@inf.ufpr.br

Abstract

In order to connect color, mood and music, the present work aims to find the best way to classify musical and color data, relating them to emotions and mood. The database was created through an online form that linked three colors to each song. After that, the songs were classified manually and each song name was replaced by its respective mood. The next step was the classification and clustering tests. For this problem, techniques implemented in the scikit-learn Python package were used. Classification was performed using the k-neighbors Classifier and Support Vector Machines (SVM), while clustering was done with the K-means algorithm. Besides the use of different classification methods, variations of the dataset were also adopted to test different approaches. The best results were obtained for classification using the knn algorithm and binary datasets (happy/sad only).

1. Introduction

It is an almost intuitive notion that music can evoke feelings in human beings. This notion has long been used for purposes such as marketing [4] and therapeutic effects [2]. More recently, scientists have explored the means by which these emotions are induced [8] and which aspects of music are related to the emotions [7]. Another curious notion is that we relate music to colors [14]. As colors are also associated with emotions [21, 13], we can hypothesize that the relations of music to colors are mediated by emotions. When the same piece of music was played with different emotional intentions, listeners attributed different color profiles to the piece [3]. In a recent study [15], this notion that color-music associations are mediated by emotion was investigated.
In this study, participants first associated pieces of music with emotions and chose colors that matched the music. Then, participants associated emotional faces with the colors and with the music. The results show that music-color associations do indeed seem to be mediated by emotions [15]. Research in this field seems to point to the fact that faster or major-key music is associated with brighter, yellower colors [3, 10, 15, 16, 20]. Recently, Lindborg and Friberg [10] designed a beautiful study to further analyze color-music associations. After asking listeners to attribute a color to several pieces of music, they tried to use the data to predict, by multiple regression analysis, the color that would be associated with the music. The prediction was successful for the Lightness parameter of the color. Additionally, the model used for the prediction was more successful when it included the emotions attributed to the music in addition to acoustic parameters [10]. From this curious connection between color and music came the idea for a project that aims to use color, as a reflection of a person's mood, to predict the music the person wants to listen to. The main objective of the present work was to find the best way to classify musical and color data, relating them to emotions and mood, as well as to verify whether the data are consistent and sufficient to start the project.

1.1. Related Works

Attempts to automatically classify the emotion of music are being made. The main difficulties lie in which audio features and which mood categories to use. Using BP neural networks, [5] classified music into four mood categories with a precision of 67%. A Support Vector Machine (SVM) active learning method was also used, achieving only 50% accuracy on mood classification [12]. Lu et al. [11] implemented a hierarchical framework for mood detection and achieved 86% accuracy. Using K-means clusterization, Hu et al. [6] established a ground-truth set for Music Information Retrieval (MIR) systems.
The 3 established clusters seem to match categories classified emotionally as aggressive/angry, mellow/calm, or upbeat/happy [6]. To check whether style and mood tags could be propagated, Sordo et al. [19] used content-based (CB) similarity; propagation was successful for happy, angry, and sad tags, but not for mysterious tags. Skowronek et al. [17] classified music into 12 mood categories, and the performance varied from 77% to 91%, depending on the category. A comparison of different kinds of audio features for SVM mood classification showed that spectral features performed better than rhythm or dynamics features [18]. An attempt to reduce the audio features, using both MRMR and PCA methods with SVM classifiers, showed that only 27.9% of the 538 features were really contributing to the classification [1]. Adding the lyrics to the analyses seemed to optimize music classification for some emotions, especially for the happy and sad categories [9]. A mood classification based only on the lyrics was also tried, achieving 60% accuracy [23]. An important cultural issue was also analyzed: whether audio features, mood categories, and classification models developed for Western music were also applicable to non-Western, more specifically Chinese, music. The mood categories were applicable and the same features could be used; however, when using English music as training to classify Chinese music, or vice-versa, the performance was not as good [22]. To the best of our knowledge, no studies have included Brazilian music, as the present study does (in addition to North American and European music). Most importantly, this is the first attempt to use colors as a way of classifying music through a machine learning method.

2. Methods

2.1. Dataset Creation

To establish the database, a survey was created and shared through social networks. The survey included a question for the name of the person, a question with 24 color options (Figure 1) from which the person had to choose three (those that suited him/her best at that moment), and a question asking the person to write the song (artist - song name) he/she would like to listen to at that moment. The color options range from bright shades to dark ones, and they were shuffled to separate similar colors or shades. The survey received a total of 654 answers. In the dataset, the input was not the song itself but a classification into which the song was fitted.
To classify the songs, the answers were first organized alphabetically according to the name of the artist and the song, to ensure that the same song always received the same classification. Repeated answers (n = 5), with the same person, same song and same colors and a short period of time between the answers, were excluded. Incomplete or incorrect answers (songs that do not exist) were also excluded. A total of 642 answers remained for classification. The songs were listened to on YouTube, at least until the first chorus, always preferring the original audio, not live performances, and not looking at the video. For the classification, the rhythm of the song was considered; the lyrics were taken into consideration only when their meaning caught the attention of the listener.

Table 1. Number (N) of answers for each category

Category            N    Category             N
angry              15    relax               19
betrayed            6    religious            1
childish            8    romantic excited    38
goodbye             4    romantic calm       68
happy              73    sad                 75
hope               26    sexy                 8
lady-killer         2    spooky              12
missing someone    15    thankful             5
needy               6    thoughtful excited  50
party              32    thoughtful calm     56
politicized         4    very happy           6
reborn             24    very sad            60
regret             26    victory              3

The categories were: angry, betrayed, childish, goodbye, happy, hope, lady-killer, missing someone, needy, party, politicized, reborn, regret, relax, religious, romantic, sad, sexy, spooky, thankful, thoughtful, very happy, very sad, and victory. As there were too many songs in some categories, subcategories were created to be included in the tests: calm romantic, excited romantic, calm thoughtful, and excited thoughtful.

2.2. Dataset Representation

The complete dataset has 642 instances, each of which has 3 colors and the label (category) membership. The Red Green Blue (RGB) system was selected to represent each color; this way, each instance has nine attributes, because colors in the RGB system are represented by 3 values ranging from 0 to 255.
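As a concrete illustration of this representation, one instance could be assembled as below. The color values and the label are invented for the example, not taken from the actual survey data:

```python
# Three colors chosen by one hypothetical respondent, as (R, G, B) triples.
colors = [(255, 200, 40), (230, 180, 60), (120, 60, 200)]

# Flatten the three colors into the nine attributes of the instance.
attributes = [channel for color in colors for channel in color]

label = "happy"  # category assigned during the manual classification
instance = (attributes, label)
```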
In addition, three other attributes were created for each instance, consisting of the mean of the three red values, the mean of the three green values, and the mean of the three blue values. Tests were performed using (a) the 9 values for the 3 colors and (b) the 3 average (AVG) values.

2.3. Algorithms and Techniques

Once the dataset was created, the next task was to perform the color-to-category classification. In this step, machine learning was applied by using classification and clustering techniques implemented in the scikit-learn Python package. Even though this problem clearly involves supervised learning, clustering was also applied in order to evaluate the music-to-category classification and to verify how close the categories were to each other. Supervised learning was performed using the k-neighbors Classifier and Support Vector Machines (SVM), whilst K-means was used for clustering. Different algorithms were used for classification to find out which one best handles this kind of problem.
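A minimal sketch of this setup, using invented random colors and labels in place of the survey data (the variable names, the random stand-in data, and the 90%/10% split are illustrative, not taken from the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: 642 instances, each with three RGB colors and a
# binary mood label (0 = sad, 1 = happy). Invented, not the real survey.
raw_colors = rng.integers(0, 256, size=(642, 3, 3)).astype(float)
labels = rng.integers(0, 2, size=642)

all_features = raw_colors.reshape(642, 9)  # (a) the 9 raw RGB values
avg_features = raw_colors.mean(axis=1)     # (b) the 3 per-channel averages

# 90% training / 10% test, as described in the Tests section.
X_train, X_test, y_train, y_test = train_test_split(
    avg_features, labels, test_size=0.1, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
svm = SVC().fit(X_train, y_train)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(avg_features)

print(knn.score(X_test, y_test), svm.score(X_test, y_test))
```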

Figure 1. Colors included on the survey.

2.4. Tests

Figure 2 represents the pipeline for the tests. Different input datasets for each representation were used for each classification or clustering method and their parameter variations. Firstly, tests were done using all instances (N = 642) with and without (default) the subcategories of the romantic and thoughtful categories. Because some feelings are easier to recognize than others, other dataset configurations were created: only happy or sad (N = 214), only positive or negative categories (N = 532), and strict positive or negative categories (N = 364). The configuration only happy or sad includes only the instances classified as happy, very happy, sad, and very sad. The configuration only positive or negative represents the categories classified as negative (sad, very sad, needy, missing someone, regret, spooky, angry, betrayed, goodbye) or positive (happy, very happy, reborn, victory, thankful, hope, relax, religious, sexy, lady-killer, party, romantic). The configuration strict positive or negative is an intermediate between only happy or sad and only positive or negative, including as positive the categories happy, very happy, reborn, victory, thankful, and party, and as negative sad, very sad, needy, missing someone, regret, spooky, angry, betrayed, and goodbye. Each configuration was split into 90% training and 10% test by using the train_test_split function from scikit-learn. In the second round of tests using the k-Neighbors Classifiers, different numbers of neighbors were used for each dataset configuration, according to the respective performances in the first round: only happy or sad = [3, 5, 7, 9, 11, 13, 15], only positive or negative = [10, 20, 30, 40, 50, 60, 70], and strict positive or negative = [20, 25, 30, 35, 40]. In addition, these tests included only the AVG representation.

3. Results

3.1. Classification

When testing the weights parameter of the k-neighbors Classifier, the uniform parameter always performed better than the distance parameter (Table 2), probably because the instances in the training sets are close to each other in the space, so it is better that all neighbors have the same weight when voting for the category of the new instance. Considering the three-color representation (all features) and the average (AVG), in the vast majority of cases tests with the AVG representation were more accurate (Table 2), because the AVG represents a mix of the three colors, which is reasonable considering that the colors are supposed to stand for moods. The main idea is that, for a given mood, one would choose colors that are in some way close to each other (brightness, shade, and saturation). The chosen colors may also have similar RGB values. Tests with restrictive configurations (only happy or sad, only positive or negative, and strict positive or negative) showed more accurate results (Table 2) than the default datasets (with and without subcategories), because it is easier to assign categories to the songs when the moods are

more contrasting. This reflects the difficulty encountered when making the music-to-category classification. Even though the results for the binary configurations were more accurate (Table 2), it must be said that if random assignment were made, a 50% accuracy would be expected; thus, the accuracy has to be interpreted considering 50% as the baseline. Hence, the results obtained using the default datasets may not be that bad.

Figure 2. Pipeline of the tests.

Table 2. Accuracy of k-neighbors Classifier tests. The accuracy is the value of the highest peak obtained in the tests among the k-nearest Neighbor (KNN) algorithm and its two variants, KdTree and BallTree. ALL: all features; AVG: average.

                             distance         uniform
                             ALL     AVG      ALL     AVG
Default                      18%     20%      20%     24%
Default + Subcategories      14%     10%      16%     14%
Only Happy/Sad               65%     70%      65%     75%
Only Positive/Negative       50%     42%      54%     55%
Strict Positive/Negative     46%     54%      55%     65%

The three k-neighbors classifiers (knn and its variants) had almost the same accuracy for different numbers of neighbors (Figure 3). In addition, for the only happy or sad configuration, classification using five neighbors was more accurate (Figure 3 - A). On the other hand, for only positive or negative and strict positive or negative, better accuracy values were achieved using eighty-five (Figure 3 - B) and twenty-five neighbors (Figure 3 - C), respectively.

Figure 3. Accuracy for different numbers of neighbors. The configurations only happy or sad, only positive or negative, and strict positive or negative were used in A, B, and C, respectively.

Considering the results obtained in the second round of tests (Figure 3), cross-validation was performed in order to evaluate the classifier (knn - brute variation). As shown in Table 3, the average accuracy when classifying the only happy or sad configuration was 64% (+/- 24%), while the configurations only positive or negative and strict positive or negative had mean accuracy values of 57% (+/- 16%) and 59% (+/- 9%), respectively, confirming that only happy or sad is the best configuration.

Table 3. Cross-validation for KNN. The cross-validation was performed on the training data (AVG representation) using ten folds. The second, third, and fourth columns show the accuracy values for the only happy or sad, only positive or negative, and strict positive or negative configurations.

Fold    Happy/Sad      Only +/-       Strict +/-
1st     45.0%          73.5%          60.6%
2nd     65.0%          61.2%          57.7%
3rd     60.0%          59.2%          63.6%
4th     75.0%          55.1%          60.6%
5th     85.0%          53.1%          60.6%
6th     63.2%          55.3%          60.6%
7th     52.6%          40.4%          48.5%
8th     77.7%          51.1%          51.5%
9th                                   62.5%
10th    50.0%          58.7%          61.3%
Mean    64 (+/-24)%    57 (+/-16)%    59 (+/-9)%

When testing the representations of the dataset using the SVM classifier (average and three-color), once again the average (AVG) representation showed better accuracy values (Table 4) than the three-color representation (Table 5) for two of the configurations.

Table 4. SVM accuracy for the average color representation

Only Happy/Sad    Only +/-    Strict +/-
50%               42.59%      40.54%

Table 5. SVM accuracy for the three color representation

Only Happy/Sad    Only +/-    Strict +/-
                  40.74%      43.24%

These SVM results (Table 4, Table 5) show that k-Neighbors classifiers performed better than SVM on our dataset. On the other hand, in the literature, music mood classification is best achieved using SVM. This difference is probably because in our dataset the moods are represented in a different way, by colors.

3.2. Clustering

In order to evaluate the music-to-category classification, clustering was performed. The results (Figure 4, Figure 5) show that, for all representation configurations, the higher the number of clusters, the better the silhouette value.
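A sketch of this silhouette analysis, again with invented random color data standing in for the real AVG-representation instances:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Stand-in data: 642 instances in the 3-value AVG representation.
rng = np.random.default_rng(1)
X = rng.integers(0, 256, size=(642, 3)).astype(float)

# Silhouette value for increasing numbers of clusters; values close to
# zero indicate overlapping clusters.
for n_clusters in (2, 3, 4, 5):
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    score = silhouette_score(X, model.fit_predict(X))
    print(n_clusters, round(score, 3))
```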
This improvement is probably not relevant, because it may reflect overlapping clusters, since all values are close to zero. When considering two clusters, some categories were assigned more to one cluster than to the other. The percentage of each category that was placed in each cluster is shown in Table 6. The categories sad, very sad, and angry were mostly assigned to the same cluster (C1), whilst happy, party, and reborn were mostly assigned to the remaining cluster (C2). This supports the idea that binary classifications (as in the happy/sad configuration) are better than complex classifications.

Figure 4. Silhouette analysis for K-means clustering on the average representation.

Figure 5. Silhouette analysis for K-means clustering on the three colors representation.

Table 6. Statistics summary of the 2 clusters of the default dataset

C1       C2       Category
43.5%    56.5%    happy
60.9%    39.1%    regret
42.3%    57.7%    party
44.4%    55.6%    calm
36.4%    63.6%    hope
58.2%    41.8%    very sad
90.0%    10.0%    spooky
64.7%    35.3%    sad
67.4%    32.6%    thoughtful excited
55.1%    44.9%    thoughtful calm
71.4%    28.6%    angry
40.0%    60.0%    romantic excited
37.3%    62.7%    romantic calm
46.2%    53.8%    missing someone
36.4%    63.6%    reborn

4. Conclusion and Future Work

To the best of our knowledge, this is the first study to relate color, mood, and music by applying machine learning. For classification, k-neighbors and SVM were tested, and K-means was used for clustering. Among knn and its implementation variations (KdTree and BallTree), knn was the most accurate. In addition, knn showed better results than SVM. Different datasets were created, and better accuracy values were obtained when the binary ones were used for classification. This is probably because it is easier to classify music moods when they are contrasting, as was the case in the only happy or sad configuration. One limitation of the present study was the small number of instances in the datasets. The results would probably be better if the datasets were larger. Additionally, the music-category classification could be improved if at least three people classified the data, or even if an automatic classification method was used. In further work, the ground-truth mood classification of music could be obtained from existing datasets, for example from the last.fm API; one drawback, however, would be the lack of Brazilian music in those databases. In the case of the creation of a new dataset (color-mood-music), it would be interesting to develop a system where a person could listen to a song and attribute colors to that song. This work was a first step toward relating color and music through mood, and it shows that the idea is feasible and worthy of further studies.

References

[1] B. K. Baniya and C. S. Hong. Music Mood Classification using Reduced Audio Features.
[2] C. F. Barber. The use of music and colour theory as a behaviour modifier. British Journal of Nursing (Mark Allen Publishing), 8(7):443-8.
[3] R. Bresin. What is the color of that music performance? Proceedings of the International Computer Music Conference.
[4] G. C. Bruner II. Music, Mood, and Marketing. Journal of Marketing, 54(4):94-104.
[5] Y. Feng, Y. Zhuang, and Y. Pan. Music information retrieval by detecting mood via computational media aesthetics. Proceedings IEEE/WIC International Conference on Web Intelligence (WI 2003).
[6] X. Hu, M. Bay, and J. Downie. Creating a Simplified Music Mood Classification Ground-Truth Set. ISMIR.
[7] P. G. Hunter, E. G. Schellenberg, and U. Schimmack.
Feelings and perceptions of happiness and sadness induced by music: Similarities, differences, and mixed emotions. Psychology of Aesthetics, Creativity, and the Arts, 4(1):47-56.
[8] P. N. Juslin and D. Västfjäll. Emotional Responses to Music: The Need to Consider Underlying Mechanisms. Behavioral and Brain Sciences, 31(5).
[9] C. Laurier, J. Grivolla, and P. Herrera. Multimodal Music Mood Classification Using Audio and Lyrics. Machine Learning and Applications, ICMLA '08.
[10] P. Lindborg and A. K. Friberg. Colour Association with Music Is Mediated by Emotion: Evidence from an Experiment Using a CIE Lab Interface and Interviews. PLoS ONE, 10(12).
[11] L. Lu, D. Liu, and H. J. Zhang. Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech and Language Processing, 14(1):5-18.
[12] M. I. Mandel, G. E. Poliner, and D. P. W. Ellis. Support vector machine active learning for music retrieval. Multimedia Systems, 12(1):3-13.

[13] K. Naz and H. Epps. Relationship between color and emotion: a study of college students. College Student Journal, 38(3).
[14] H. S. Odbert, T. F. Karwoski, and A. B. Eckerson. Studies in Synesthetic Thinking: I. Musical and Verbal Associations of Color and Mood. The Journal of General Psychology, 26.
[15] S. E. Palmer, K. B. Schloss, Z. Xu, and L. R. Prado-Leon. Music-color associations are mediated by emotion. Proceedings of the National Academy of Sciences, 110(22).
[16] D. J. Polzella and J. L. Hassen. Aesthetic preferences for combinations of color and music. Perceptual and Motor Skills, 85(3), Dec.
[17] J. Skowronek, M. F. McKinney, and S. Van De Par. A Demonstrator for Automatic Music Mood Estimation. ISMIR.
[18] Y. Song, S. Dixon, and M. Pearce. Evaluation of Musical Features for Emotion Classification. International Society for Music Information Retrieval Conference (ISMIR).
[19] M. Sordo, C. Laurier, and Ò. Celma. Annotating Music Collections: How Content-Based Similarity Helps to Propagate Labels. ISMIR.
[20] T. Tsang and K. B. Schloss. Associations between Color and Music are Mediated by Emotion and Influenced by Tempo. The Yale Review of Undergraduate Research in Psychology, pages 82-93.
[21] L. B. Wexner. The Degree to Which Colors (Hues) Are Associated with Mood-Tones. The Journal of Applied Psychology, 38(6).
[22] Y.-H. Yang and X. Hu. Cross-cultural music mood classification: A comparison on English and Chinese songs. 13th International Society for Music Information Retrieval Conference (ISMIR), pages 19-24.
[23] T. C. Ying, S. Doraisamy, and L. N. Abdullah. Genre and mood classification using lyric features. International Conference on Information Retrieval & Knowledge Management (CAMP).


More information

HUMAN ABILITY DEVELOPMENT ORGNIZATION BRAIN ABILITY DEVELOPMENT PROGRAM

HUMAN ABILITY DEVELOPMENT ORGNIZATION BRAIN ABILITY DEVELOPMENT PROGRAM HUMAN ABILITY DEVELOPMENT ORGNIZATION Concept BRAIN ABILITY DEVELOPMENT PROGRAM According to the scientist's reviews, man only incorporates less than 10% of his brain capacity. This shows how awesome a

More information

Research Proposal on Emotion Recognition

Research Proposal on Emotion Recognition Research Proposal on Emotion Recognition Colin Grubb June 3, 2012 Abstract In this paper I will introduce my thesis question: To what extent can emotion recognition be improved by combining audio and visual

More information

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis

Emotion Detection Using Physiological Signals. M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis Emotion Detection Using Physiological Signals M.A.Sc. Thesis Proposal Haiyan Xu Supervisor: Prof. K.N. Plataniotis May 10 th, 2011 Outline Emotion Detection Overview EEG for Emotion Detection Previous

More information

Development of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition

Development of novel algorithm by combining Wavelet based Enhanced Canny edge Detection and Adaptive Filtering Method for Human Emotion Recognition International Journal of Engineering Research and Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 12, Issue 9 (September 2016), PP.67-72 Development of novel algorithm by combining

More information

Sound Texture Classification Using Statistics from an Auditory Model

Sound Texture Classification Using Statistics from an Auditory Model Sound Texture Classification Using Statistics from an Auditory Model Gabriele Carotti-Sha Evan Penn Daniel Villamizar Electrical Engineering Email: gcarotti@stanford.edu Mangement Science & Engineering

More information

Improved Intelligent Classification Technique Based On Support Vector Machines

Improved Intelligent Classification Technique Based On Support Vector Machines Improved Intelligent Classification Technique Based On Support Vector Machines V.Vani Asst.Professor,Department of Computer Science,JJ College of Arts and Science,Pudukkottai. Abstract:An abnormal growth

More information

Visualization of emotional rating scales

Visualization of emotional rating scales Visualization of emotional rating scales Stephanie Greer UC Berkeley CS294-10 smgreer@berkeley.edu ABSTRACT Visual ratings scales have the potential to be a powerful tool for recording emotional information

More information

Journal of Education and Practice ISSN (Paper) ISSN X (Online) Vol.5, No.10, 2014

Journal of Education and Practice ISSN (Paper) ISSN X (Online) Vol.5, No.10, 2014 Mirrors and Self Image: From the Perspective of the Art Students (An Applied Aesthetic Study at Al-Yarmouk University, Irbid Jordan) Dr. Insaf Rabadi, Associate Professor, Faculty of Arts, Yarmouk University

More information

IDENTIFYING STRESS BASED ON COMMUNICATIONS IN SOCIAL NETWORKS

IDENTIFYING STRESS BASED ON COMMUNICATIONS IN SOCIAL NETWORKS IDENTIFYING STRESS BASED ON COMMUNICATIONS IN SOCIAL NETWORKS 1 Manimegalai. C and 2 Prakash Narayanan. C manimegalaic153@gmail.com and cprakashmca@gmail.com 1PG Student and 2 Assistant Professor, Department

More information

Application of Soft-Computing techniques for analysis of external influential factors in the emotional interpretation of visual stimuli.

Application of Soft-Computing techniques for analysis of external influential factors in the emotional interpretation of visual stimuli. Int'l Conf Artificial Intelligence ICAI'17 275 Application of Soft-Computing techniques for analysis of external influential factors in the emotional interpretation of visual stimuli Arturo Peralta 1,

More information

Using Fuzzy Classifiers for Perceptual State Recognition

Using Fuzzy Classifiers for Perceptual State Recognition Using Fuzzy Classifiers for Perceptual State Recognition Ian Beaver School of Computing and Engineering Science Eastern Washington University Cheney, WA 99004-2493 USA ibeaver@mail.ewu.edu Atsushi Inoue

More information

CPSC81 Final Paper: Facial Expression Recognition Using CNNs

CPSC81 Final Paper: Facial Expression Recognition Using CNNs CPSC81 Final Paper: Facial Expression Recognition Using CNNs Luis Ceballos Swarthmore College, 500 College Ave., Swarthmore, PA 19081 USA Sarah Wallace Swarthmore College, 500 College Ave., Swarthmore,

More information

Reader s Emotion Prediction Based on Partitioned Latent Dirichlet Allocation Model

Reader s Emotion Prediction Based on Partitioned Latent Dirichlet Allocation Model Reader s Emotion Prediction Based on Partitioned Latent Dirichlet Allocation Model Ruifeng Xu, Chengtian Zou, Jun Xu Key Laboratory of Network Oriented Intelligent Computation, Shenzhen Graduate School,

More information

Gathering a dataset of multi-modal mood-dependent perceptual responses to music

Gathering a dataset of multi-modal mood-dependent perceptual responses to music Gathering a dataset of multi-modal mood-dependent perceptual responses to music Matevž Pesek 1, Primož Godec 1, Mojca Poredoš 1, Gregor Strle 2, Jože Guna 3, Emilija Stojmenova 3, Matevž Pogačnik 3 and

More information

Analyzing Emotional Semantics of Abstract Art Using Low-Level Image Features

Analyzing Emotional Semantics of Abstract Art Using Low-Level Image Features Analyzing Emotional Semantics of Abstract Art Using Low-Level Image Features He Zhang 1, Eimontas Augilius 1, Timo Honkela 1, Jorma Laaksonen 1, Hannes Gamper 2, and Henok Alene 1 1 Department of Information

More information

Implementation of Automatic Retina Exudates Segmentation Algorithm for Early Detection with Low Computational Time

Implementation of Automatic Retina Exudates Segmentation Algorithm for Early Detection with Low Computational Time www.ijecs.in International Journal Of Engineering And Computer Science ISSN: 2319-7242 Volume 5 Issue 10 Oct. 2016, Page No. 18584-18588 Implementation of Automatic Retina Exudates Segmentation Algorithm

More information

Source and Description Category of Practice Level of CI User How to Use Additional Information. Intermediate- Advanced. Beginner- Advanced

Source and Description Category of Practice Level of CI User How to Use Additional Information. Intermediate- Advanced. Beginner- Advanced Source and Description Category of Practice Level of CI User How to Use Additional Information Randall s ESL Lab: http://www.esllab.com/ Provide practice in listening and comprehending dialogue. Comprehension

More information

Analyzing Personality through Social Media Profile Picture Choice

Analyzing Personality through Social Media Profile Picture Choice Analyzing Personality through Social Media Profile Picture Choice Leqi Liu, Daniel Preoţiuc-Pietro, Zahra Riahi Mohsen E. Moghaddam, Lyle Ungar ICWSM 2016 Positive Psychology Center University of Pennsylvania

More information

Find Similarities and Differences Across Texts

Find Similarities and Differences Across Texts Find Similarities and Differences Across Texts When you read two selections about the same topic, compare and contrast them. To compare and contrast selections about the same topic, ask yourself these

More information

ECTA Handouts Keynote Address. Affective Education. Cognitive Behaviour Therapy. Affective Education. Affective Education 19/06/2010

ECTA Handouts Keynote Address. Affective Education. Cognitive Behaviour Therapy. Affective Education. Affective Education 19/06/2010 ECTA Handouts Keynote Address ECTA: International Trends in Behavioural Guidance Approaches 26 th June 2010 Cognitive Behaviour Therapy Affective Development (maturity, vocabulary and repair). Cognitive

More information

GfK Verein. Detecting Emotions from Voice

GfK Verein. Detecting Emotions from Voice GfK Verein Detecting Emotions from Voice Respondents willingness to complete questionnaires declines But it doesn t necessarily mean that consumers have nothing to say about products or brands: GfK Verein

More information

Predicting Diabetes and Heart Disease Using Features Resulting from KMeans and GMM Clustering

Predicting Diabetes and Heart Disease Using Features Resulting from KMeans and GMM Clustering Predicting Diabetes and Heart Disease Using Features Resulting from KMeans and GMM Clustering Kunal Sharma CS 4641 Machine Learning Abstract Clustering is a technique that is commonly used in unsupervised

More information

Perception. Sensation and Perception. Sensory Systems An Overview of the Perception Process

Perception. Sensation and Perception. Sensory Systems An Overview of the Perception Process Perception Sensation and Perception Cross Cultural Studies in Consumer Behavior Assist. Prof. Dr. Özge Özgen Department of International Business and Trade Sensation: The immediate response of our sensory

More information

Resource File: Body Image

Resource File: Body Image Resource File: Body Image By Caitlin Erickson S00136290 1 Contents Page PAGE # Activity 1... 3 Activity 2... 4 Activity 3... 5 Activity 4... 7 Activity 5... 8 Appendix 1... 10 Appendix 2... 11 Appendix

More information

A Comparison of Perceptions on the Investment Theory of Creativity between Chinese and American

A Comparison of Perceptions on the Investment Theory of Creativity between Chinese and American 2009 Fifth International Conference on Natural Computation A Comparison of Perceptions on the Investment Theory of Creativity between Chinese and American Pingping Liu, Xingli Zhang, Jiannong Shi * Institute

More information

Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon

Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon Contrastive Analysis on Emotional Cognition of Skeuomorphic and Flat Icon Xiaoming Zhang, Qiang Wang and Yan Shi Abstract In the field of designs of interface and icons, as the skeuomorphism style fades

More information

The Importance Of Colour

The Importance Of Colour The Importance Of Colour Colour is the first thing we register when we are assessing anything and we make an immediate response to it before anything else. Colour is one of the most effective tools that

More information

A HMM-based Pre-training Approach for Sequential Data

A HMM-based Pre-training Approach for Sequential Data A HMM-based Pre-training Approach for Sequential Data Luca Pasa 1, Alberto Testolin 2, Alessandro Sperduti 1 1- Department of Mathematics 2- Department of Developmental Psychology and Socialisation University

More information

E-MRS: Emotion-based Movie Recommender System

E-MRS: Emotion-based Movie Recommender System E-MRS: Emotion-based Movie Recommender System Ai Thanh Ho Ilusca L. L. Menezes Yousra Tagmouti Department of Informatics and Operations Research University of Montreal, Quebec, Canada E-mail: {hothitha,lopedei,tagmouty}@iro.umontreal.ca

More information

Gender Based Emotion Recognition using Speech Signals: A Review

Gender Based Emotion Recognition using Speech Signals: A Review 50 Gender Based Emotion Recognition using Speech Signals: A Review Parvinder Kaur 1, Mandeep Kaur 2 1 Department of Electronics and Communication Engineering, Punjabi University, Patiala, India 2 Department

More information

2 Psychological Processes : An Introduction

2 Psychological Processes : An Introduction 2 Psychological Processes : An Introduction 2.1 Introduction In our everyday life we try to achieve various goals through different activities, receive information from our environment, learn about many

More information

Classification of ECG Data for Predictive Analysis to Assist in Medical Decisions.

Classification of ECG Data for Predictive Analysis to Assist in Medical Decisions. 48 IJCSNS International Journal of Computer Science and Network Security, VOL.15 No.10, October 2015 Classification of ECG Data for Predictive Analysis to Assist in Medical Decisions. A. R. Chitupe S.

More information

Classroom Data Collection and Analysis using Computer Vision

Classroom Data Collection and Analysis using Computer Vision Classroom Data Collection and Analysis using Computer Vision Jiang Han Department of Electrical Engineering Stanford University Abstract This project aims to extract different information like faces, gender

More information

(SAT). d) inhibiting automatized responses.

(SAT). d) inhibiting automatized responses. Which of the following findings does NOT support the existence of task-specific mental resources? 1. a) It is more difficult to combine two verbal tasks than one verbal task and one spatial task. 2. b)

More information

Language Volunteer Guide

Language Volunteer Guide Language Volunteer Guide Table of Contents Introduction How You Can Make an Impact Getting Started 3 4 4 Style Guidelines Captioning Translation Review 5 7 9 10 Getting Started with Dotsub Captioning Translation

More information

Handwritten Kannada Vowels and English Character Recognition System

Handwritten Kannada Vowels and English Character Recognition System Handwritten Kannada Vowels and English Character Recognition System B.V.Dhandra, Gururaj Mukarambi Department of P.G.Studies and Research in Computer Science, Gulbarga University, Gulbarga, Karnataka dhandra_b_v@yahoo.co.in

More information

Introducing a dataset of emotional and color responses to music

Introducing a dataset of emotional and color responses to music Introducing a dataset of emotional and color responses to music Matevž Pesek 1, Primož Godec 1, Mojca Poredoš 1, Gregor Strle 2, Jože Guna 3, Emilija Stojmenova 3, Matevž Pogačnik 3, Matija Marolt 1 1

More information

Using simulated body language and colours to express emotions with the Nao robot

Using simulated body language and colours to express emotions with the Nao robot Using simulated body language and colours to express emotions with the Nao robot Wouter van der Waal S4120922 Bachelor Thesis Artificial Intelligence Radboud University Nijmegen Supervisor: Khiet Truong

More information

An Improved Algorithm To Predict Recurrence Of Breast Cancer

An Improved Algorithm To Predict Recurrence Of Breast Cancer An Improved Algorithm To Predict Recurrence Of Breast Cancer Umang Agrawal 1, Ass. Prof. Ishan K Rajani 2 1 M.E Computer Engineer, Silver Oak College of Engineering & Technology, Gujarat, India. 2 Assistant

More information

AC : USABILITY EVALUATION OF A PROBLEM SOLVING ENVIRONMENT FOR AUTOMATED SYSTEM INTEGRATION EDUCA- TION USING EYE-TRACKING

AC : USABILITY EVALUATION OF A PROBLEM SOLVING ENVIRONMENT FOR AUTOMATED SYSTEM INTEGRATION EDUCA- TION USING EYE-TRACKING AC 2012-4422: USABILITY EVALUATION OF A PROBLEM SOLVING ENVIRONMENT FOR AUTOMATED SYSTEM INTEGRATION EDUCA- TION USING EYE-TRACKING Punit Deotale, Texas A&M University Dr. Sheng-Jen Tony Hsieh, Texas A&M

More information

Design of Palm Acupuncture Points Indicator

Design of Palm Acupuncture Points Indicator Design of Palm Acupuncture Points Indicator Wen-Yuan Chen, Shih-Yen Huang and Jian-Shie Lin Abstract The acupuncture points are given acupuncture or acupressure so to stimulate the meridians on each corresponding

More information

The Role of Feedback in Categorisation

The Role of Feedback in Categorisation The Role of in Categorisation Mark Suret (m.suret@psychol.cam.ac.uk) Department of Experimental Psychology; Downing Street Cambridge, CB2 3EB UK I.P.L. McLaren (iplm2@cus.cam.ac.uk) Department of Experimental

More information

REPORT ON EMOTIONAL INTELLIGENCE QUESTIONNAIRE: GENERAL

REPORT ON EMOTIONAL INTELLIGENCE QUESTIONNAIRE: GENERAL REPORT ON EMOTIONAL INTELLIGENCE QUESTIONNAIRE: GENERAL Name: Email: Date: Sample Person sample@email.com IMPORTANT NOTE The descriptions of emotional intelligence the report contains are not absolute

More information

Ch. 1 Collecting and Displaying Data

Ch. 1 Collecting and Displaying Data Ch. 1 Collecting and Displaying Data In the first two sections of this chapter you will learn about sampling techniques and the different levels of measurement for a variable. It is important that you

More information

Color Difference Equations and Their Assessment

Color Difference Equations and Their Assessment Color Difference Equations and Their Assessment In 1976, the International Commission on Illumination, CIE, defined a new color space called CIELAB. It was created to be a visually uniform color space.

More information

SUPPRESSION OF MUSICAL NOISE IN ENHANCED SPEECH USING PRE-IMAGE ITERATIONS. Christina Leitner and Franz Pernkopf

SUPPRESSION OF MUSICAL NOISE IN ENHANCED SPEECH USING PRE-IMAGE ITERATIONS. Christina Leitner and Franz Pernkopf 2th European Signal Processing Conference (EUSIPCO 212) Bucharest, Romania, August 27-31, 212 SUPPRESSION OF MUSICAL NOISE IN ENHANCED SPEECH USING PRE-IMAGE ITERATIONS Christina Leitner and Franz Pernkopf

More information

Facial Expression Recognition Using Principal Component Analysis

Facial Expression Recognition Using Principal Component Analysis Facial Expression Recognition Using Principal Component Analysis Ajit P. Gosavi, S. R. Khot Abstract Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However,

More information

Bootstrapped Integrative Hypothesis Test, COPD-Lung Cancer Differentiation, and Joint mirnas Biomarkers

Bootstrapped Integrative Hypothesis Test, COPD-Lung Cancer Differentiation, and Joint mirnas Biomarkers Bootstrapped Integrative Hypothesis Test, COPD-Lung Cancer Differentiation, and Joint mirnas Biomarkers Kai-Ming Jiang 1,2, Bao-Liang Lu 1,2, and Lei Xu 1,2,3(&) 1 Department of Computer Science and Engineering,

More information

Evaluating Classifiers for Disease Gene Discovery

Evaluating Classifiers for Disease Gene Discovery Evaluating Classifiers for Disease Gene Discovery Kino Coursey Lon Turnbull khc0021@unt.edu lt0013@unt.edu Abstract Identification of genes involved in human hereditary disease is an important bioinfomatics

More information

Discovering Meaningful Cut-points to Predict High HbA1c Variation

Discovering Meaningful Cut-points to Predict High HbA1c Variation Proceedings of the 7th INFORMS Workshop on Data Mining and Health Informatics (DM-HI 202) H. Yang, D. Zeng, O. E. Kundakcioglu, eds. Discovering Meaningful Cut-points to Predict High HbAc Variation Si-Chi

More information

Estimating Multiple Evoked Emotions from Videos

Estimating Multiple Evoked Emotions from Videos Estimating Multiple Evoked Emotions from Videos Wonhee Choe (wonheechoe@gmail.com) Cognitive Science Program, Seoul National University, Seoul 151-744, Republic of Korea Digital Media & Communication (DMC)

More information

From Sentiment to Emotion Analysis in Social Networks

From Sentiment to Emotion Analysis in Social Networks From Sentiment to Emotion Analysis in Social Networks Jie Tang Department of Computer Science and Technology Tsinghua University, China 1 From Info. Space to Social Space Info. Space! Revolutionary changes!

More information

Audio-based Emotion Recognition for Advanced Automatic Retrieval in Judicial Domain

Audio-based Emotion Recognition for Advanced Automatic Retrieval in Judicial Domain Audio-based Emotion Recognition for Advanced Automatic Retrieval in Judicial Domain F. Archetti 1,2, G. Arosio 1, E. Fersini 1, E. Messina 1 1 DISCO, Università degli Studi di Milano-Bicocca, Viale Sarca,

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training. Supplementary Figure 1 Behavioral training. a, Mazes used for behavioral training. Asterisks indicate reward location. Only some example mazes are shown (for example, right choice and not left choice maze

More information

IMPLEMENTATION OF AN AUTOMATED SMART HOME CONTROL FOR DETECTING HUMAN EMOTIONS VIA FACIAL DETECTION

IMPLEMENTATION OF AN AUTOMATED SMART HOME CONTROL FOR DETECTING HUMAN EMOTIONS VIA FACIAL DETECTION IMPLEMENTATION OF AN AUTOMATED SMART HOME CONTROL FOR DETECTING HUMAN EMOTIONS VIA FACIAL DETECTION Lim Teck Boon 1, Mohd Heikal Husin 2, Zarul Fitri Zaaba 3 and Mohd Azam Osman 4 1 Universiti Sains Malaysia,

More information

Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition

Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition , pp.131-135 http://dx.doi.org/10.14257/astl.2013.39.24 Emotion Affective Color Transfer Using Feature Based Facial Expression Recognition SeungTaek Ryoo and Jae-Khun Chang School of Computer Engineering

More information

Classification of normal and abnormal images of lung cancer

Classification of normal and abnormal images of lung cancer IOP Conference Series: Materials Science and Engineering PAPER OPEN ACCESS Classification of normal and abnormal images of lung cancer To cite this article: Divyesh Bhatnagar et al 2017 IOP Conf. Ser.:

More information

Detection and Recognition of Sign Language Protocol using Motion Sensing Device

Detection and Recognition of Sign Language Protocol using Motion Sensing Device Detection and Recognition of Sign Language Protocol using Motion Sensing Device Rita Tse ritatse@ipm.edu.mo AoXuan Li P130851@ipm.edu.mo Zachary Chui MPI-QMUL Information Systems Research Centre zacharychui@gmail.com

More information

EECS 433 Statistical Pattern Recognition

EECS 433 Statistical Pattern Recognition EECS 433 Statistical Pattern Recognition Ying Wu Electrical Engineering and Computer Science Northwestern University Evanston, IL 60208 http://www.eecs.northwestern.edu/~yingwu 1 / 19 Outline What is Pattern

More information

IJESRT. Scientific Journal Impact Factor: (ISRA), Impact Factor: 1.852

IJESRT. Scientific Journal Impact Factor: (ISRA), Impact Factor: 1.852 IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY Performance Analysis of Brain MRI Using Multiple Method Shroti Paliwal *, Prof. Sanjay Chouhan * Department of Electronics & Communication

More information

Outline. Teager Energy and Modulation Features for Speech Applications. Dept. of ECE Technical Univ. of Crete

Outline. Teager Energy and Modulation Features for Speech Applications. Dept. of ECE Technical Univ. of Crete Teager Energy and Modulation Features for Speech Applications Alexandros Summariza(on Potamianos and Emo(on Tracking in Movies Dept. of ECE Technical Univ. of Crete Alexandros Potamianos, NatIONAL Tech.

More information

Variable Features Selection for Classification of Medical Data using SVM

Variable Features Selection for Classification of Medical Data using SVM Variable Features Selection for Classification of Medical Data using SVM Monika Lamba USICT, GGSIPU, Delhi, India ABSTRACT: The parameters selection in support vector machines (SVM), with regards to accuracy

More information

This project is designed to be a post-reading reflection of Dark Water Rising. Please read this carefully and follow all instructions!

This project is designed to be a post-reading reflection of Dark Water Rising. Please read this carefully and follow all instructions! Dark Water Rising Project -Incoming 7th grade enrolled in Pre-AP Incoming 7th grade students registered for Pre-AP English are expected to read Dark Water Rising by Marian Hale for summer reading. It is

More information

Jia Jia Tsinghua University 25/01/2018

Jia Jia Tsinghua University 25/01/2018 Jia Jia jjia@tsinghua.edu.cn Tsinghua University 25/01/2018 Mental health is a level of psychological wellbeing, or an absence of mental illness. The WHO states that the well-being of an individual is

More information

Using Deep Convolutional Networks for Gesture Recognition in American Sign Language

Using Deep Convolutional Networks for Gesture Recognition in American Sign Language Using Deep Convolutional Networks for Gesture Recognition in American Sign Language Abstract In the realm of multimodal communication, sign language is, and continues to be, one of the most understudied

More information

Retinal Blood Vessel Segmentation Using Fuzzy Logic

Retinal Blood Vessel Segmentation Using Fuzzy Logic Retinal Blood Vessel Segmentation Using Fuzzy Logic Sahil Sharma Chandigarh University, Gharuan, India. Er. Vikas Wasson Chandigarh University, Gharuan, India. Abstract This paper presents a method to

More information

Facial Emotion Recognition with Facial Analysis

Facial Emotion Recognition with Facial Analysis Facial Emotion Recognition with Facial Analysis İsmail Öztel, Cemil Öz Sakarya University, Faculty of Computer and Information Sciences, Computer Engineering, Sakarya, Türkiye Abstract Computer vision

More information

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals.

An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. An assistive application identifying emotional state and executing a methodical healing process for depressive individuals. Bandara G.M.M.B.O bhanukab@gmail.com Godawita B.M.D.T tharu9363@gmail.com Gunathilaka

More information

PERCEPTUAL ANALYSES OF ACTION-RELATED IMPACT SOUNDS

PERCEPTUAL ANALYSES OF ACTION-RELATED IMPACT SOUNDS PERCEPTUAL ANALYSES OF ACTION-RELATED IMPACT SOUNDS Marie-Céline Bézat 12, Vincent Roussarie 1, Richard Kronland-Martinet 2, Solvi Ystad 2, Stephen McAdams 3 1 PSA Peugeot Citroën 2 route de Gisy 78943

More information