The Analysis of Learner's Concentration by Facial Expression Changes & Movements
The Analysis of Learner's Concentration by Facial Expression Changes & Movements

Seunghui Cha 1 and Wookhyun Kim 2

1 Ph.D. Student, Department of Computer Engineering, Yeungnam University, Gyeongsan, 280 Daehak-Ro, South Korea.
2 Professor, Department of Computer Engineering, Yeungnam University, Gyeongsan, 280 Daehak-Ro, South Korea.

Abstract

This paper proposes a system to analyze the concentration of learners by detecting and analyzing features of facial expressions in video data. The learner's concentration is important to class effectiveness, and facial expression changes with the learner's environment and emotion; the system analyzes and reflects the learner's current state. First, we detect the learner's face in the learning video image and detect feature points within the detected face. Second, we determine the learner's facial criteria values and analyze the coordinate values of the feature points; the criteria values are taken from the frontal face with opened eyes and a closed mouth. Third, we compare the values of the detected feature points with the criteria values and decide among the turned face, the downed face, the closed eye, the opened mouth, and facial emotions (smile, surprise, sadness, anger). Lastly, we determine the state of concentration or non-concentration while learning: the learner is not concentrating when the coordinates fall outside the criteria values, and is concentrating when they remain within them. We have confirmed the learner's concentration from the changes in the distribution of feature points using the proposed system.

Keywords: Analysis of concentration, learner's concentration, feature point, image analysis, facial movement, facial emotion analysis, facial expression.

INTRODUCTION

In recent years, research in the field of facial recognition has developed rapidly.
Facial recognition has been applied to various fields such as facial expression recognition and iris recognition. Within human recognition there are other types, such as pedestrian recognition and movement recognition (for example, of the hands). Beyond human recognition there are also vehicle recognition, license plate recognition, and several kinds of object recognition. In particular, research on facial recognition continues in the facial imaging and security fields, but research that applies facial images to learning is lacking in the field of education. In this paper, we propose a method to analyze the concentration of learning by detecting facial feature points in images of the learner's facial expressions. We capture a variety of facial images during the learning process and convert them to coordinates. To determine whether the learner's state is concentration or non-concentration, we examine smile, surprise, sadness, and anger in the facial images.

This paper is organized as follows. First, we describe the concentration of learning through related work and introduce the face detection and feature extraction algorithms. Second, we describe in detail the design of the system that analyzes the learner's concentration using the facial feature point detection proposed in this paper. Finally, we present conclusions and describe the limitations of this study and future research.

RELATED WORK

The learner's concentration is important [1][2][3] to class effectiveness. The opposite of concentration includes non-concentration, distraction, and dispersion. Research related to concentration is divided into two areas: studies that improve concentration and studies that measure it.
A study measuring the learner's concentration in an e-learning environment suggests ways to measure concentration while learning by examining the learner's eyes and brain waves [4]. Another study handles the case where a learner in an e-learning context bows his head to write, slowing the video playback while examining the learner's concentration state [5]. We use the Viola-Jones algorithm [6], one of the appearance-based methods, for face detection, and extract feature points using the Shi & Tomasi algorithm [7].

DESIGN OF THE LEARNER'S CONCENTRATION ANALYSIS SYSTEM

We examine changes in the learner's facial features in the video image. We check the following facial features: smile, surprise, sadness, anger, closed mouth, opened mouth (the shape of the mouth changes with pronunciation), opened eye, closed eye, frontal face, turned face, and downed face.
To find the feature points of the face, we extract facial feature points after detecting the facial area. We set default values for the face, the eyes, and the mouth from the coordinates of the extracted feature points, and we determine the concentration and non-concentration states by comparing these default values with the learner's facial feature values. First, we detect the face in the learner's image and extract facial feature points, then determine the criteria values of the frontal state from the coordinate values of the face, eyes, and mouth of the frontal facial feature points. Next, after determining the criteria values, we analyze the learner's state by comparison and recognize facial expressions. Lastly, we determine whether the learner's state is concentration or non-concentration. The determination of criteria values and the recognition of facial expressions are explained in detail below.

A. Definition of concentration and non-concentration

Concentration is defined as the situation of the opened eye, the closed mouth, and the frontal face while learning. Non-concentration is defined as the situation of the closed eye, the opened mouth, the downed face, the turned face, and facial expressions of emotion while learning. If the length value of the eyes is less than the criteria value and this persists for more than 0.9 ms, we determine the state of the closed eye, which indicates non-concentration while learning. If the value is larger than the criteria value, or it stays below it for less than 0.9 ms, it is a blinking eye, which is a state of concentration while learning.

B. Criteria values

Figure 1. Criteria values

Fig. 1(a) is the frontal facial length, Fig. 1(b) is the facial center, Fig. 1(c) is the length of the opened eye, Fig. 1(d) is the length of the closed mouth, and Fig. 1(e) is the width of the closed mouth. With these, we determine five criteria values in the state of the frontal face. After determining the criteria values, we compare them (see Fig. 1) with the learner's coordinate values during learning and determine the state of the turned face, the downed face, the closed eye, the opened mouth, and the expressions of facial emotion (see Fig. 2). Fig. 1(a) spans the minimum Y coordinate of the eye and the maximum Y coordinate of the mouth; this difference is the criteria value of the facial length. We compute the same difference from the learner's coordinate values during learning, and if it is less than the criteria value we determine the state of the downed face (see Fig. 2(h)).
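As a rough sketch of how the five criteria values of Fig. 1 could be computed from grouped feature-point coordinates (the grouping and function names are illustrative assumptions, not the paper's implementation; image Y grows downward, so the mouth has larger Y values than the eyes):

```python
def criteria_values(eye_pts, mouth_pts, right_eye_pts, left_eye_pts):
    """Derive the five criteria values from one frontal-face frame.
    Each argument is a list of (x, y) feature-point coordinates."""
    eye_ys = [y for _, y in eye_pts]
    mouth_ys = [y for _, y in mouth_pts]
    mouth_xs = [x for x, _ in mouth_pts]
    return {
        # (a) facial length: min eye Y down to max mouth Y
        "facial_length": max(mouth_ys) - min(eye_ys),
        # (b) facial center: max X of the right eye to min X of the left eye
        "facial_center": (min(x for x, _ in left_eye_pts)
                          - max(x for x, _ in right_eye_pts)),
        # (c) opened-eye length: Y extent of the eye points
        "eye_length": max(eye_ys) - min(eye_ys),
        # (d) closed-mouth length: Y extent of the mouth points
        "mouth_length": max(mouth_ys) - min(mouth_ys),
        # (e) closed-mouth width: X extent of the mouth points
        "mouth_width": max(mouth_xs) - min(mouth_xs),
    }
```

The same extents computed on later frames are then compared against these frontal-face values.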
Figure 2. The analysis of the feature points. (a): frontal face; (b): closed eye; (c): smile; (d): surprise; (e): sadness; (f): anger; (g): turned face; (h): downed face; (i): [ah]; (j): [ae]; (k): [e]; (l): [o]

To determine the state of the turned face, we use the criteria value of the facial center (see Fig. 1(b)): the difference between the maximum X coordinate of the right eye and the minimum X coordinate of the left eye. We compute the same difference from the feature points extracted while the learner is studying. If the value is less than the criteria value, we determine the face is turned to the left (see Fig. 2(g)); if it is larger, the face is turned to the right. In both cases we determine the state of the turned face. To distinguish the closed eye from the opened eye, we use the criteria value of the opened eye (see Fig. 1(c)): the difference between the maximum and minimum Y coordinates of the eye. If the corresponding value from the learner's coordinates stays below the criteria value for more than 0.9 ms, we determine the state of the closed eye (see Fig. 2(b)); if it stays below for less than 0.9 ms, it is a blinking eye. To decide the opened mouth, we use the criteria values of the closed mouth (see Figs. 1(d) and 1(e)): the maximum and minimum Y and X coordinates of the mouth.
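The comparisons above might be sketched as follows, assuming the current frame's extents and the criteria values are held in dictionaries (the names and the duration bookkeeping are illustrative assumptions; the 0.9 ms threshold is the one stated in the paper):

```python
CLOSED_EYE_MS = 0.9  # duration threshold stated in the paper

def face_states(current, criteria, eye_below_ms=0.0):
    """Return the set of non-frontal states detected in one frame.
    eye_below_ms: how long the eye length has stayed below its criteria."""
    states = set()
    # Downed face: eye-to-mouth Y extent shrinks below the criteria value.
    if current["facial_length"] < criteria["facial_length"]:
        states.add("downed_face")
    # Turned face: inter-eye X extent differs from the frontal value
    # (smaller -> turned left, larger -> turned right).
    if current["facial_center"] != criteria["facial_center"]:
        states.add("turned_face")
    # Closed eye vs. blink: eye Y extent below criteria, split by duration.
    if current["eye_length"] < criteria["eye_length"]:
        states.add("closed_eye" if eye_below_ms > CLOSED_EYE_MS
                   else "blinking_eye")
    # Opened mouth: wider ([ah], [ae], [e]) or narrower ([o]) than the
    # closed-mouth width, or taller than the closed-mouth length.
    if (current["mouth_width"] != criteria["mouth_width"]
            or current["mouth_length"] > criteria["mouth_length"]):
        states.add("opened_mouth")
    return states
```

A real implementation would compare against the criteria values with a small tolerance rather than exact equality, since feature-point coordinates are noisy.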
If the difference in the X coordinate of the mouth computed from the learner's coordinates is larger than the criteria value, we determine the opened mouth shapes [ah], [ae], and [e] (see Figs. 2(i), 2(j) and 2(k)); if it is smaller, we determine the opened mouth shape [o] (see Fig. 2(l)). If the difference in the Y coordinate is larger than the criteria value, we determine the opened mouth ([ah], [ae], [e], [o]). Thus, whether these values are smaller or larger than the criteria values, both cases determine the state of the opened mouth. We determine expressions of facial emotion (smile, surprise, sadness, anger) using the coordinate values of the mouth and the eyes, based on the criteria values of the opened eye and the closed mouth (see Figs. 1(c), 1(d) and 1(e)). If the X and Y differences of the mouth are larger than the criteria values and the Y difference of the eye is less than its criteria value, we determine the smiling face (see Fig. 2(c)); if the mouth differences are larger and the Y difference of the eye is also larger, we determine the surprised face (see Fig. 2(d)). If only the Y difference of the eye is less than the criteria value, we determine the sad face (see Fig. 2(e)); if it is larger, we determine the angry face (see Fig. 2(f)). Thus, whether the Y difference of the eye is smaller or larger than the criteria value, both cases determine facial expressions of emotion (smile, surprise, sadness, anger).

EXPERIMENTS

We take a video of the learner's state and detect the learner's face in the image. We determine the criteria values after extracting the coordinate values of the feature points in the detected face. From these, we determine whether a student shows the frontal face, the downed face, the turned face, opened eyes, closed
eyes, expressions of emotion (smile, surprise, sadness, anger), or the opened mouth ([ah], [ae], [e], [o]) from the detected coordinate values, and then the state of concentration while learning is determined. Fig. 3 shows the state of the subject over three minutes, i.e., the changing feature points in the face image after taking the learner's video. From this, we identify the states of the downed face, the turned face, closed eyes, smile, surprise, sadness, anger, and the opened mouth from the changing distribution of feature points in each frame. When feature points move out of the criteria range (see Fig. 3), they indicate the turned face, closed eyes, smile, surprise, sadness, anger, or the opened mouth. The period where the facial center value fell sharply (see Fig. 3(a)) is where the subject actually turned his head to the left. The changes in facial length (see Fig. 3(b)) correspond to the non-concentration states of surprise, anger, and the opened mouth ([ah], [ae] and [o]); the period where the facial length value fell sharply is the period of the downed face. The changes in mouth width (see Fig. 3(c)) correspond to the non-concentration states of smile, surprise, the turned face, and the opened mouth ([ae], [e], [o]). The changes in mouth length (see Fig. 3(d)) correspond to the non-concentration states of surprise, sadness, the downed face, and the opened mouth ([ah], [ae], [o]). Red circles (see Fig. 3(e)) mark blinking eyes lasting 0.1 seconds; blinking appears six times over the three minutes. These are genuine blinks, not states of non-concentration. The other changes in eye length correspond to the non-concentration states of smile, surprise, sadness, anger, and the closed eye. Red arrows other than the red circles (see Fig. 3) mark actual non-concentration states of the closed eye, the opened mouth, the downed face, the turned face, and facial expressions of emotion (smile, surprise, sadness, anger); we observed the same states in the chart. The rest corresponds to the concentration state of the frontal face with opened eyes and a closed mouth, which we also observed in the chart.
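The facial-emotion rules used in this determination can be sketched as a small decision function; each argument below is a current coordinate extent minus its criteria value (positive = larger than the frontal criteria), and the precedence among the four rules is an assumption, since the paper states only the individual comparisons:

```python
def classify_emotion(d_mouth_x, d_mouth_y, d_eye_y):
    """Map mouth/eye extent differences to one of the four emotions."""
    mouth_enlarged = d_mouth_x > 0 and d_mouth_y > 0
    if mouth_enlarged and d_eye_y < 0:
        return "smile"      # mouth opens wide, eye narrows
    if mouth_enlarged and d_eye_y > 0:
        return "surprise"   # mouth opens wide, eye widens
    if d_eye_y < 0:
        return "sadness"    # eye narrows without the enlarged mouth
    if d_eye_y > 0:
        return "anger"      # eye widens without the enlarged mouth
    return "neutral"
```

Any of the four non-neutral outcomes counts toward the non-concentration state in the per-frame analysis above.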
Figure 3. The analysis of the learner's state. (a): Turned face; (b): Length of face; (c): Width of mouth; (d): Length of mouth; (e): Closed eye

As a result of the experiments on 30 students in a classroom, the detection rate of feature points and the comparison values of each detected feature are shown in Table 1. The detection rates of smile, surprise, sadness, anger, turned face, downed face, closed eye, and opened mouth ([ah], [ae], [e], [o]) are 95.31%, 93.94%, 89.98%, 88.76%, 94.45%, 95.24%, 93.94%, 92.63%, 90.12%, 87.25% and 97.45%, respectively. The smile, the turned face, and the opened mouth ([ah], [o]) have high detection rates, while sadness, anger, and the opened mouth ([e]) have lower rates than the others. Two problems occurred in the experiment: when the subject suddenly moves relatively far away, the detection rate drops, and when more light shines on one side of the face, the detection rate also drops.

Table 1: Detection Rate and Analysis of Detected Features. X: x coordinate difference; Y: y coordinate difference, each relative to the criteria value; "-": not used.

Facial Movement   Eye (Y)   Mouth (X)   Mouth (Y)   Facial length (Y)   Facial center (X)   Detection Rate (%)
Smile             Small     Large       Large       -                   -                   95.31
Surprise          Large     Large       Large       -                   -                   93.94
Sadness           Small     -           -           -                   -                   89.98
Anger             Large     -           -           -                   -                   88.76
Turned Face       -         -           -           -                   Small or Large      94.45
Downed Face       -         -           -           Small               -                   95.24
Closed Eye        Small     -           -           -                   -                   93.94
[ah]              -         Large       Large       -                   -                   92.63
[ae]              -         Large       Large       -                   -                   90.12
[e]               -         Large       -           -                   -                   87.25
[o]               -         Small       Large       -                   -                   97.45

CONCLUSION

In this paper, we detected facial feature points for the concentration analysis of learners. We determined the state of non-concentration from feature point values outside the criteria values. Based on the analysis of the data, we confirmed whether the learner is in a state of concentration or non-concentration according to how the learner's face falls down or turns from side to side, whether the eyes open or close, and whether the mouth moves up and down or left and right.

ACKNOWLEDGEMENT

The corresponding author is Wookhyun Kim.

REFERENCES

[1] Seunghui Cha and Wookhyun Kim, "An Analysis Method of Class Concentration by Facial Feature Detection," International Information Institute (Tokyo), Information, vol. 18, no. 6(B), Jun.
[2] Seunghui Cha and Wookhyun Kim, "Concentration analysis by detecting face features of learners," Communications, Computers and Signal Processing (PACRIM), 2015 IEEE Pacific Rim Conference on, pp. 46-51, 2015.
[3] Seunghui Cha, Jong Wook Kwak and Wookhyun Kim, "Performance Analysis of Face Detection Algorithms for Efficient Comparison of Prediction Time and Accuracy," Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV).
[4] Hyung-Mo Ahn, Sang-Cheon Nam and Ki-Sang Song, "Application of bio-signal measurement to identify learning concentration in e-learning environment," The Korean Association of Computer Education, vol. 16.
[5] Joohee Kang, Minjea Park, Yujung Yun, Hoyang Choi, Seongwon Park and Kwangsu Cho, "Study on e-learner's Attention improvements," Korea HCI Society Conference, Feb.
[6] P. Viola and M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, 2001.
[7] J. Shi and C. Tomasi, "Good Features to Track," IEEE Conference on Computer Vision and Pattern Recognition, Jun. 1994.
Mathematical Problems in Engineering Volume 2015, Article ID 502121, 8 pages http://dx.doi.org/10.1155/2015/502121 Research Article Decision Making Method Based on Importance-Dangerousness Analysis for
More informationEvaluating the Comprehensive Model of Ego-integrity for Senior Patients in Convalescent Hospitals: Influence Factors and Outcome Variables
, pp.317-326 http://dx.doi.org/10.14257/ijbsbt.2015.7.5.30 Evaluating the Comprehensive Model of Ego-integrity for Senior Patients in Convalescent Hospitals: Influence Factors and Outcome Variables HyeSun
More informationA Vision-based Affective Computing System. Jieyu Zhao Ningbo University, China
A Vision-based Affective Computing System Jieyu Zhao Ningbo University, China Outline Affective Computing A Dynamic 3D Morphable Model Facial Expression Recognition Probabilistic Graphical Models Some
More informationBio-Feedback Based Simulator for Mission Critical Training
Bio-Feedback Based Simulator for Mission Critical Training Igor Balk Polhemus, 40 Hercules drive, Colchester, VT 05446 +1 802 655 31 59 x301 balk@alum.mit.edu Abstract. The paper address needs for training
More informationCan discoid lateral meniscus be returned to the correct anatomic position and size of the native lateral meniscus after surgery?
Can discoid lateral meniscus be returned to the correct anatomic position and size of the native lateral meniscus after surgery? Seong Hwan Kim,*M.D. 1, Joong Won Lee M.D. 2, and Sang Hak Lee, M.D. 2 From
More informationDesign of Sporty SQI using Semantic Differential and Verification of its Effectiveness
Design of Sporty SQI using Semantic Differential and Verification of its Effectiveness Gahee KWON 1 ; Jae Hyuk PARK 1 ; Han Sol PARK 1 ; Sang Il LEE 2 ; Yeon Soo KIM 3 ; Yeon June Kang 1 1 Seoul National
More informationResult of screening and surveillance colonoscopy in young Korean adults < 50 years
SEP 25, 2017 Result of screening and surveillance colonoscopy in young Korean adults < 50 years Jae Myung Cha, MD. PhD. Department of Internal Medicine, Kyung Hee University Hospital at Gang Dong, Kyung
More informationRisk diagnosis based on diameter of abdominal aortic aneurysm
Technology and Health Care 24 (2016) S569 S575 DOI 10.3233/THC-161183 IOS Press S569 Risk diagnosis based on diameter of abdominal aortic aneurysm Jin-Hyoung Jeong a, Jun-Tae Kim a,nam-sunkim b, Jae-Hyun
More informationThis is the accepted version of this article. To be published as : This is the author version published as:
QUT Digital Repository: http://eprints.qut.edu.au/ This is the author version published as: This is the accepted version of this article. To be published as : This is the author version published as: Chew,
More informationEstimation of Systolic and Diastolic Pressure using the Pulse Transit Time
Estimation of Systolic and Diastolic Pressure using the Pulse Transit Time Soo-young Ye, Gi-Ryon Kim, Dong-Keun Jung, Seong-wan Baik, and Gye-rok Jeon Abstract In this paper, algorithm estimating the blood
More informationEMOTIONS difficult to define (characteristics =)
LECTURE 6: EMOTIONS MOTIVATION desires, needs, and interests that arouse or activate an organism and direct it toward a specific goal EMOTIONS difficult to define (characteristics =) a) b) c) Smiles: Fake-polite:
More informationCPSC81 Final Paper: Facial Expression Recognition Using CNNs
CPSC81 Final Paper: Facial Expression Recognition Using CNNs Luis Ceballos Swarthmore College, 500 College Ave., Swarthmore, PA 19081 USA Sarah Wallace Swarthmore College, 500 College Ave., Swarthmore,
More informationRecognising Emotions from Keyboard Stroke Pattern
Recognising Emotions from Keyboard Stroke Pattern Preeti Khanna Faculty SBM, SVKM s NMIMS Vile Parle, Mumbai M.Sasikumar Associate Director CDAC, Kharghar Navi Mumbai ABSTRACT In day to day life, emotions
More informationFinding Cultural Differences and Motivation Factors of Foreign Construction Workers
Journal of Building Construction and Planning Research, 2015, 3, 35-46 Published Online June 2015 in SciRes. http://www.scirp.org/journal/jbcpr http://dx.doi.org/10.4236/jbcpr.2015.32005 Finding Cultural
More informationEstimation of Stellate Ganglion Block Injection Point Using the Cricoid Cartilage as Landmark Through X-ray Review
Original Article Korean J Pain 2011 September; Vol. 24, No. 3: 141-145 pissn 2005-9159 eissn 2093-0569 http://dx.doi.org/10.3344/kjp.2011.24.3.141 Estimation of Stellate Ganglion Block Injection Point
More informationEdge Based Grid Super-Imposition for Crowd Emotion Recognition
Edge Based Grid Super-Imposition for Crowd Emotion Recognition Amol S Patwardhan 1 1Senior Researcher, VIT, University of Mumbai, 400037, India ---------------------------------------------------------------------***---------------------------------------------------------------------
More informationACUTE LEUKEMIA CLASSIFICATION USING CONVOLUTION NEURAL NETWORK IN CLINICAL DECISION SUPPORT SYSTEM
ACUTE LEUKEMIA CLASSIFICATION USING CONVOLUTION NEURAL NETWORK IN CLINICAL DECISION SUPPORT SYSTEM Thanh.TTP 1, Giao N. Pham 1, Jin-Hyeok Park 1, Kwang-Seok Moon 2, Suk-Hwan Lee 3, and Ki-Ryong Kwon 1
More informationModule 1 Worksheet: INTRODUCTION TO END-OF-LIFE DEMENTIA CARE
Your Name: Date: Module 1 Worksheet: INTRODUCTION TO END-OF-LIFE DEMENTIA CARE 1. Circle all of the following that are examples of a reflex: Sucking Talking c) Grasping d) Waving 2. Define mottled. (Hint:
More informationThe Relationship between Media Sports Involvement Experiences and Sports Values and Sports Participation
The Relationship between Media Sports Involvement Experiences and Sports Values and Sports Participation Nam-Ik Kim* and Sun-Mun Park** * Department of Physical Education Graduate School, Catholic Kwadong
More informationHUMAN EMOTION DETECTION THROUGH FACIAL EXPRESSIONS
th June. Vol.88. No. - JATIT & LLS. All rights reserved. ISSN: -8 E-ISSN: 87- HUMAN EMOTION DETECTION THROUGH FACIAL EXPRESSIONS, KRISHNA MOHAN KUDIRI, ABAS MD SAID AND M YUNUS NAYAN Computer and Information
More informationRachael E. Jack, Caroline Blais, Christoph Scheepers, Philippe G. Schyns, and Roberto Caldara
Current Biology, Volume 19 Supplemental Data Cultural Confusions Show that Facial Expressions Are Not Universal Rachael E. Jack, Caroline Blais, Christoph Scheepers, Philippe G. Schyns, and Roberto Caldara
More informationAffective Character Network for Understanding Plots of Narrative Multimedia Contents
Affective Character Network for Understanding Plots of Narrative Multimedia Contents O-Joun Lee 1 and Jason J. Jung 1 Department of Computer Engineering Chung-Ang University Seoul, Korea 156-756 {concerto34,j3ung}@cau.ac.kr
More informationA Subjectivity Study on Eating Habits among Female College Students
Indian Journal of Science and Technology, Vol 9(), DOI: 0.8/ijst/0/v9i/089, December 0 ISSN (Print) : 09-8 ISSN (Online) : 09- A Subjectivity Study on Eating Habits among Female College Students JeeHee
More informationMultiple Intelligences of the High Primary Stage Students
Multiple Intelligences of the High Primary Stage Students Dr. Emad M. Al-Salameh Special Education Department, Al- Balqa' Applied University PO box 15, Salt, Jordan Tel: 962-777-238-617 E-mail: imad_alsalameh@yahoo.com
More informationDEEP LEARNING BASED VISION-TO-LANGUAGE APPLICATIONS: CAPTIONING OF PHOTO STREAMS, VIDEOS, AND ONLINE POSTS
SEOUL Oct.7, 2016 DEEP LEARNING BASED VISION-TO-LANGUAGE APPLICATIONS: CAPTIONING OF PHOTO STREAMS, VIDEOS, AND ONLINE POSTS Gunhee Kim Computer Science and Engineering Seoul National University October
More informationStrategies using Facial Expressions and Gaze Behaviors for Animated Agents
Strategies using Facial Expressions and Gaze Behaviors for Animated Agents Masahide Yuasa Tokyo Denki University 2-1200 Muzai Gakuendai, Inzai, Chiba, 270-1382, Japan yuasa@sie.dendai.ac.jp Abstract. This
More informationImpact of Sound Insulation in a Combine Cabin
Original Article J. of Biosystems Eng. 40(3):159-164. (2015. 9) http://dx.doi.org/10.5307/jbe.2015.40.3.159 Journal of Biosystems Engineering eissn : 2234-1862 pissn : 1738-1266 Impact of Sound Insulation
More informationA Study on CCTV-Based Dangerous Behavior Monitoring System
, pp.95-99 http://dx.doi.org/10.14257/astl.2013.42.22 A Study on CCTV-Based Dangerous Behavior Monitoring System Young-Bin Shim 1, Hwa-Jin Park 1, Yong-Ik Yoon 1 1 Dept. of Multimedia, Sookmyung Women
More informationMeasurement of Sleep and Understanding of Sleep Apnea in Video
, pp.115-122 http://dx.doi.org/10.14257/ijmue.2015.10.7.12 Measurement of Sleep and Understanding of Sleep Apnea in Video Seong-Yoon Shin 1 and Sang-Won Lee 2* 1 First Author, Professor, Department of
More informationStudy N Ages Study type Methodology Main findings m;f mean (sd) FSIQ mean (sd) Del;UPD Range Range
Table 5 Social Cognition Study N Ages Study type Methodology Main findings m;f mean (sd) FSIQ mean (sd) Del;UPD Range Range Lo et al.(2013) 66 median 11 ToM 14 stories, 33 questions, 3 pretence Median
More informationThe Effects of Mind Subtraction Meditation on Depression, Social Anxiety, Aggression, and Cortisol Levels of Elementary School Children in South Korea
외부학술지게재논문요약 The Effects of Mind Subtraction Meditation on Depression, Social Anxiety, Aggression, and Cortisol Levels of Elementary School Children in South Korea Authors: Yang-Gyeong Yoo (Department of
More informationDivide-and-Conquer based Ensemble to Spot Emotions in Speech using MFCC and Random Forest
Published as conference paper in The 2nd International Integrated Conference & Concert on Convergence (2016) Divide-and-Conquer based Ensemble to Spot Emotions in Speech using MFCC and Random Forest Abdul
More informationMusic Recommendation System for Human Attention Modulation by Facial Recognition on a driving task: A Proof of Concept
Music Recommendation System for Human Attention Modulation by Facial Recognition on a driving task: A Proof of Concept Roberto Avila - Vázquez 1, Sergio Navarro Tuch 1, Rogelio Bustamante Bello, Ricardo
More informationEnhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC
Enhanced Facial Expressions Recognition using Modular Equable 2DPCA and Equable 2DPC Sushma Choudhar 1, Sachin Puntambekar 2 1 Research Scholar-Digital Communication Medicaps Institute of Technology &
More informationPERFORMANCE ANALYSIS OF THE TECHNIQUES EMPLOYED ON VARIOUS DATASETS IN IDENTIFYING THE HUMAN FACIAL EMOTION
PERFORMANCE ANALYSIS OF THE TECHNIQUES EMPLOYED ON VARIOUS DATASETS IN IDENTIFYING THE HUMAN FACIAL EMOTION Usha Mary Sharma 1, Jayanta Kumar Das 2, Trinayan Dutta 3 1 Assistant Professor, 2,3 Student,
More informationStudy on Aging Effect on Facial Expression Recognition
Study on Aging Effect on Facial Expression Recognition Nora Algaraawi, Tim Morris Abstract Automatic facial expression recognition (AFER) is an active research area in computer vision. However, aging causes
More informationTime Series Changes in Cataract Surgery in Korea
pissn: 111-8942 eissn: 292-9382 Korean J Ophthalmol 218;32(3):182-189 https://doi.org/1.3341/kjo.217.72 Time Series Changes in Cataract Surgery in Korea Original Article Ju Hwan Song 1*, Jung Youb Kang
More informationDiagnosis of human operator behaviour in case of train driving: interest of facial recognition
Diagnosis of human operator behaviour in case of train driving: interest of facial recognition Cyril LEGRAND, Philippe Richard, Vincent Benard, Frédéric Vanderhaegen, Patrice Caulier To cite this version:
More informationArtificial Intelligence for Robot-Assisted Treatment of Autism
Artificial Intelligence for Robot-Assisted Treatment of Autism Giuseppe Palestra, Berardina De Carolis, and Floriana Esposito Department of Computer Science, University of Bari, Bari, Italy giuseppe.palestra@uniba.it
More informationThe Accuracy of Current Methods in Deciding the Timing of Epiphysiodesis
The Accuracy of Current Methods in Deciding the Timing of Epiphysiodesis Soon Chul Lee MD 1, Sung Wook Seo MD 2, Kyung Sup Lim MD 2, Jong Sup Shim MD 2 Department of Orthopaedic Surgery, 1 Bundang CHA
More informationEGIS BILIARY STENT. 1. Features & Benefits 2. Ordering information 3. References
EGIS BILIARY STENT 1. Features & Benefits 2. Ordering information 3. References 1. Features & Benefits (1) Features Superior flexibility & conformability 4 Types Single bare, Single cover, Double bare,
More informationEmotion based E-learning System using Physiological Signals. Dr. Jerritta S, Dr. Arun S School of Engineering, Vels University, Chennai
CHENNAI - INDIA Emotion based E-learning System using Physiological Signals School of Engineering, Vels University, Chennai Outline Introduction Existing Research works on Emotion Recognition Research
More informationINTRODUCTION. ORIGINAL ARTICLE Copyright 2016 Korean Neuropsychiatric Association
ORIGINAL ARTICLE https://doi.org/10.4306/pi.2016.13.6.590 Print ISSN 1738-3684 / On-line ISSN 1976-3026 OPEN ACCESS A Comparative Study of Computerized Memory Test and The Korean version of the Consortium
More informationFacial Expression Biometrics Using Tracker Displacement Features
Facial Expression Biometrics Using Tracker Displacement Features Sergey Tulyakov 1, Thomas Slowe 2,ZhiZhang 1, and Venu Govindaraju 1 1 Center for Unified Biometrics and Sensors University at Buffalo,
More informationSociable Robots Peeping into the Human World
Sociable Robots Peeping into the Human World An Infant s Advantages Non-hostile environment Actively benevolent, empathic caregiver Co-exists with mature version of self Baby Scheme Physical form can evoke
More information