Facial Event Classification with Task Oriented Dynamic Bayesian Network


Haisong Gu, Dept. of Computer Science, University of Nevada, Reno
Qiang Ji, Dept. of ECSE, Rensselaer Polytechnic Institute

Abstract

Facial events include all activities of the face and facial features in space or time, such as facial expressions, face gestures, gaze, and furrow happenings. Developing an automated system for facial event classification is a challenging task due to the richness, ambiguity and dynamic nature of facial expressions. This paper presents an efficient approach to real-world facial event classification. By integrating a Dynamic Bayesian Network (DBN) with a general-purpose facial behavior description language, a task-oriented stochastic and temporal framework is constructed to systematically represent and classify the facial events of interest. Based on the task-oriented DBN, we can spatially and temporally incorporate results from previous times and prior knowledge of the application domain. With top-down inference, the system can actively select among multiple visual channels to identify the most effective sensory channels to use. With bottom-up inference from the observed evidence, the current facial event can be classified with a desired confidence level via belief propagation. We applied the task-oriented DBN framework to monitoring driver vigilance. Experimental results demonstrate the feasibility and efficiency of our approach.

1. Introduction

General-purpose facial expression analysis has been explored for decades and numerous techniques have been proposed; a recent survey of existing work can be found in [1]. In terms of accuracy and robustness, however, it remains difficult to deploy in real-world applications. The difficulty is twofold. The first part is the richness and complexity of facial expressions. The number of expressions occurring daily is very large, and it is hard to classify them accurately, even for an experienced human expression coder. The second part is the variety of expression appearances. Expressions can appear in the form of geometric deformation of feature points, eye movements, or furrow happenings, and a single visual cue cannot efficiently capture all of these changes. However, each specific application involves only a limited number of expressions of interest, so a task-oriented approach can be used to remove the ambiguity caused by the richness of expressions. Furthermore, multiple visual cues provide several effective sensing channels; by actively selecting the most informative channel, the effective features can be captured so as to make the recognition highly efficient. These two ideas constitute the fundamental parts of our facial event classification system.

In order to infer mental states from measured facial information, a quantitative description system for facial expressions is necessary. Ekman's group [2] proposed a unified description method for expressions: the Facial Action Coding System (FACS). It has 71 primitive units in total, called Action Units (AUs). Based on them, any expression display can be represented by a single AU or a combination of AUs. Several research groups have explored FACS-based classification [3, 4, 5, 6]. The FACS representation of facial expression, however, is static and deterministic, whereas facial events develop over time and the detected facial features contain uncertainties. Recognizing facial events solely based on FACS is therefore inadequate.
In this paper, we introduce a dynamic and stochastic facial expression representation framework based on Dynamic Bayesian Networks, so as to efficiently recognize facial events from an image sequence. Fig. 1 shows our DBN-based processing flow. For each application, a task-oriented DBN is created from the expression database of the application domain. Based on FACS, each facial event of interest is decomposed through several layers into single AUs, and each single AU is associated with its most informative sensor(s). With the Bayesian network, we not only classify the current facial event but also determine which sensing channel is the most effective to use next.

Figure 1: The DBN-based recognition flow.

2. Multiple visual channels

The face is a rich resource for inferring the mental state. Different mental states often appear in different forms of facial features, so a sensing system providing multiple visual cues is required to efficiently extract information from faces. So far, we have developed the following methods for facial sensing:

1. IR-based eye detection, which uses an IR camera to robustly detect and track the eye pupils in real time [7].

2. Facial feature tracking. Each landmark on the face is modeled with a set of Gabor wavelets. Simultaneously using the global head movement and smoothing constraints yields a practical tracking system for multiple facial features under variable real-world conditions [8].

3. Furrow detection. As a result of facial feature tracking, each tracked facial landmark is identified with its physical meaning. Based on the locations of the feature points, a region is set for each possible furrow, and the presence of furrows and wrinkles within the targeted regions is determined by edge feature analysis.

4. Head orientation estimation, which utilizes the correlation between 3D head orientation and properties of the pupils. We build a so-called Pupil Feature Space (PFS), constructed from seven pupil features such as inter-pupil distance, pupil size and orientation. The PCA method is used to represent the pose distribution in PFS, and the head orientation is determined by mapping the pupil parameters into PFS (a sketch of this mapping is given at the end of this section).

5. Head motion estimation. The results of facial feature tracking provide the spatial relationship among feature points in the form of a local facial graph. Based on the 2D local similarity transformation, we determine the inter-frame global similarity parameters for the in-plane motion. Then, using the tip of the nose as the center, we further map them to the feature points in the current frame by a scaling transformation. The resulting zooming parameter can be used to determine the head motion in depth.

All of the above methods run in nearly real-time mode. Detailed descriptions of these methods can be found in [7, 8, 9].
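The following Python sketch illustrates the PCA-based head orientation estimation of item 4 under stated assumptions: the seven-dimensional pupil feature vectors, the training poses and the nearest-neighbor lookup in the reduced space are hypothetical stand-ins, since the text does not specify the exact PFS construction.

```python
import numpy as np

# Hypothetical sketch of PCA-based head pose estimation from pupil features.
# The 7-D feature vectors (inter-pupil distance, pupil sizes, orientation, ...)
# and training poses are synthetic placeholders.

def build_pfs(train_features, n_components=3):
    """Build a PCA-based Pupil Feature Space from an (N x 7) training matrix."""
    mean = train_features.mean(axis=0)
    centered = train_features - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]            # principal axes, (n_components x 7)
    coords = centered @ basis.T          # training coordinates in the PFS
    return mean, basis, coords

def estimate_pose(query, mean, basis, coords, train_poses):
    """Project a 7-D pupil feature vector into the PFS and return the
    (pan, tilt) of the nearest training sample."""
    q = (query - mean) @ basis.T
    idx = int(np.argmin(np.linalg.norm(coords - q, axis=1)))
    return train_poses[idx]

# Synthetic usage example.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 7))
poses = rng.uniform(-45, 45, size=(200, 2))        # (pan, tilt) in degrees
mean, basis, coords = build_pfs(features)
print(estimate_pose(features[10], mean, basis, coords, poses))
```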
3. Facial event classification

3.1. Spatial dependency

A particular application always focuses on a limited number of facial events of interest. For instance, to monitor driver vigilance, we try to classify the drowsiness-related facial events so as to identify the in-vehicle driver state. The events related to Inattention, Yawning and Falling asleep are targeted. The single AUs associated with these fatigue events are shown in Fig. 2, and their descriptions are given in Table 1.

Figure 2: Action units for the components of fatigue facial expressions (AU7, AU9, AU25, AU26, AU27, AU43, AU51/52, AU55, AU56, AU58, AU61, AU62; some adapted from [2]).

Table 1: Descriptions of the AUs in Fig. 2
  AU7   Lid Tightener       AU9   Nose Wrinkler
  AU25  Lips Part           AU26  Jaw Drop
  AU27  Mouth Stretch       AU43  Eye Closure
  AU51  Head Turn Left      AU52  Head Turn Right
  AU53  Head Up             AU55  Head Tilt Left
  AU56  Head Tilt Right     AU58  Head Back
  AU61  Eyes Turn Left      AU62  Eyes Turn Right

A facial expression is an entire facial behavior, whereas a single AU usually captures only a local feature movement. We therefore propose a whole-to-part model structure that connects the single AUs to their entire facial event. Fig. 3 shows the spatial relationship of the entities relating each vigilance level to the local facial features and the associated visual channels. The model consists of three layers: entire facial display, partial facial display and single AU. The entire display layer includes all of the events (or vigilance levels) of interest. In the partial display layer, an entire facial display is divided into partial displays of local facial features, such as eye motion, face gesture, upper face and lower face. The connection between the entire and partial layers is based on each entire display and its corresponding feature appearances. In the single AU layer, all of the related single AUs are listed and connected to their partial facial displays according to FACS. A sketch of this layered task model as a plain data structure is given after Fig. 3.

Figure 3: BN for vigilance detection.
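As an illustration of the whole-to-part structure, the sketch below encodes the three layers as a nested mapping from entire facial displays to partial displays, single AUs and sensing channels. The grouping of AUs under partial displays and channels here is an assumption made for the example; the paper defines the actual links through Fig. 3 and FACS.

```python
# A minimal sketch of the three-layer task model for driver vigilance
# (entire display -> partial display -> single AUs -> sensing channel).
# The specific groupings below are illustrative assumptions.
TASK_MODEL = {
    "Inattention": {
        "eye motion":   {"AUs": ["AU61", "AU62"], "channel": "eye tracker"},
        "face gesture": {"AUs": ["AU51", "AU52"], "channel": "gaze detector"},
    },
    "Yawning": {
        "lower face":   {"AUs": ["AU25", "AU26", "AU27"], "channel": "facial tracker"},
        "upper face":   {"AUs": ["AU7", "AU9"], "channel": "furrow detector"},
    },
    "Falling asleep": {
        "eye motion":   {"AUs": ["AU43"], "channel": "eye tracker"},
        "face gesture": {"AUs": ["AU55", "AU56", "AU58"], "channel": "head motion detector"},
    },
}

def channels_for_event(event):
    """Top-down lookup: which sensing channels serve a targeted facial event."""
    return sorted({part["channel"] for part in TASK_MODEL[event].values()})

print(channels_for_event("Yawning"))   # ['facial tracker', 'furrow detector']
```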

Since in this spatial modeling the nodes within each layer are conditionally independent and non-exclusive of each other, and the connections between layers are causal, we can build a Bayesian Network (BN) for facial event analysis. In the BN model, the top layer can be viewed as a simple naive Bayes classifier. It consists of a hypothesis variable C with three mutually exclusive states c1, c2, c3, for Inattention, Yawning and Falling asleep respectively. The hypothesis variable serves as the parent of three child nodes A1, A2 and A3, corresponding to the facial events Ia, Yw and Fa respectively. The goal for this layer is to find the probability of state ci given Aj = aj:

    Pr(C = ci | A1 = a1, A2 = a2, A3 = a3)

In other words, this probability represents the chance of the class state ci when each attribute variable Aj takes the value aj. The state with the maximal probability is the one to which the facial event most likely belongs. The conditional probabilities between the states of the hypothesis variable and the corresponding event variables are given in Table 2. Since in our case the Ai are not directly observable, their values are inferred through the lower layers. A code sketch of this top-layer computation is given at the end of this subsection.

Table 2: Conditional probabilities between the fatigue levels (Inattention, Yawning, Falling asleep) and the event variables Ia, Yw and Fa (each taking the values T/F).

Figure 4: The geometric relations of facial features and furrows.

In the intermediate layers, the spatial relationships are represented by the typical BN connection types: serial, diverging and converging connections [10]. In the bottom layer, each single AU is associated with a specific movement measurement of facial features, and each single AU is connected to its effective visual channel. Table 3 gives the quantitative description of each AU, its quantification using the features marked in Fig. 4, and the associated visual channel.

Table 3: Measurement of AUs
  AU    Measurement                           Appearance        Sensing channel
  AU7   IFH non-increased and HGF increased   Upper face        Facial tracker
  AU9   Wrinkles increased in JFF             Upper face        Furrow detector
  AU25  DB < T1 and NA non-increased          Lower face        Facial tracker
  AU26  T1 < DB < T2                          Lower face        Facial tracker
  AU27  DB > T2 and l1 increased              Lower face        Facial tracker
  AU43  Pupils lost for a while               Pupil             Eye tracker
  AU51  OZ turns left                         Head orientation  Gaze detector
  AU52  OZ turns right                        Head orientation  Gaze detector
  AU55  Head moves left-down                  Head motion       Head motion detector
  AU56  Head moves right-down                 Head motion       Head motion detector
  AU58  Head moves back                       Head motion       Head motion detector
  AU61  Both pupils move left                 Pupil             Eye tracker
  AU62  Both pupils move right                Pupil             Eye tracker
  Note: T1 and T2 are predefined thresholds; IFH, HGF, JFF, DB, NA, OZ and l1 denote the feature measurements marked in Fig. 4.
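A minimal sketch of the top-layer naive Bayes computation Pr(C = ci | A1, A2, A3) described above is given below. The prior and CPT values are illustrative placeholders rather than the entries of Table 2.

```python
import numpy as np

# Minimal sketch of the top-layer naive Bayes computation:
# Pr(C = ci | A1, A2, A3) ∝ Pr(C = ci) * Π_j Pr(Aj = aj | C = ci).
# The prior and CPT values below are placeholders, not the values of Table 2.

STATES = ["Inattention", "Yawning", "Falling asleep"]
PRIOR = np.array([1/3, 1/3, 1/3])          # equal priors, as in the experiments

# P(Aj = True | C = ci): rows are fatigue states, columns are events Ia, Yw, Fa.
CPT_TRUE = np.array([
    [0.90, 0.05, 0.05],   # Inattention
    [0.05, 0.90, 0.05],   # Yawning
    [0.30, 0.05, 0.90],   # Falling asleep
])

def posterior(evidence):
    """evidence: dict with keys 'Ia', 'Yw', 'Fa' and boolean values."""
    obs = np.array([evidence["Ia"], evidence["Yw"], evidence["Fa"]], dtype=bool)
    lik = np.where(obs, CPT_TRUE, 1.0 - CPT_TRUE).prod(axis=1)
    joint = PRIOR * lik
    return dict(zip(STATES, joint / joint.sum()))

print(posterior({"Ia": True, "Yw": False, "Fa": False}))
```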
3.2. Temporal dependency

A natural facial event evolves over time through the onset, the apex and the offset. The facial appearance can be classified into the underlying mental state by accumulating evidence up to the apex. Static BN (SBN) modeling of facial expression works only with visual evidence and beliefs from a single time instant; it lacks the ability to express temporal relationships and dependencies in a video sequence. To overcome this limitation, we use Dynamic BNs (DBNs) to model the dynamic aspect of the facial event.

As shown in Fig. 5, the DBN is made up of interconnected time slices of the SBN described above. The relationships between two neighboring time slices are modeled by a first-order Hidden Markov Model, i.e., random variables at temporal instance T are affected by observable variables at T, as well as by the corresponding random variables at the preceding temporal instance T-1 only. The evolution of the facial expression is tracked by moving one time frame forward in step with the video sequence, so that the visual information at the previous time provides classification support for the current hypothesis. Eventually, the belief in the current hypothesis of the mental state is inferred from the combined information: the current visual cues through the causal dependencies in the current time slice, and the preceding evidence through the temporal dependencies. A sketch of this frame-by-frame filtering is given below.

Figure 5: DBN in terms of the temporal instance T.
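The following sketch shows the frame-by-frame temporal roll-up implied by the first-order HMM assumption: the posterior from slice T-1 is propagated through a transition matrix and combined with the evidence likelihood of slice T. Interpreting the experiments' uniform transition probability of 0.5 (Section 4.2) as the self-transition weight is an assumption, and the per-frame likelihoods stand in for the bottom-up output of each static slice.

```python
import numpy as np

# Sketch of the first-order temporal roll-up between consecutive DBN slices:
#   belief_T  ∝  P(evidence_T | state) * (A^T @ belief_{T-1})
# A encodes the transition between slices; the off-diagonal weights are
# illustrative assumptions.

STATES = ["Inattention", "Yawning", "Falling asleep"]
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def dbn_filter(likelihoods, prior=(1/3, 1/3, 1/3)):
    """likelihoods: sequence of length-3 arrays P(evidence_T | state)."""
    belief = np.asarray(prior, dtype=float)
    history = []
    for lik in likelihoods:
        predicted = A.T @ belief          # temporal support from slice T-1
        belief = np.asarray(lik, dtype=float) * predicted
        belief /= belief.sum()            # posterior belief for slice T
        history.append(belief.copy())
    return history

# Hypothetical per-frame likelihoods: no evidence, then yawning evidence.
frames = [(0.34, 0.33, 0.33), (0.2, 0.7, 0.1), (0.2, 0.7, 0.1)]
for t, b in enumerate(dbn_filter(frames)):
    print(t, dict(zip(STATES, np.round(b, 3))))
```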

In contrast to previous DBN-based sequence analysis, which uses one frame as the temporal instance, we also define another time scale in the task-oriented network: the phase instance. One phase instance is a continuous facial change which proceeds for a limited period of time, covering a single ONSET, APEX or OFFSET duration. The temporal dependency in a complicated facial event can therefore be modeled by the phase instance as well as the frame instance. At each frame instance, bottom-up inference is executed to classify the current facial display; in each phase instance, top-down inference is conducted to actively select the effective visual channel(s). The next subsection details the facial event inference mechanism.

3.3. Two-scale active inference mechanism

Figure 6: DBN in terms of the phase instance.

We cast the multiple visual channel system into a DBN framework: the DBN provides a dynamic knowledge representation and control structure that allows sensory information to be selected and combined according to the rules of probability theory and the current recognition task. Usually, finding the best feature region would require an exhaustive search over all the visual channels; the task-oriented approach instead provides an efficient method of purposive selection. As described before, the temporal dependency of our DBN has two time scales: the phase instance, which usually consists of several consecutive frames, and the frame instance. Fig. 6 depicts an example of temporal dependency in terms of phase instances. Each facial event recognition proceeds in two phases: Detection and Verification. At first, the Inattention detection (Ia-Detection) phase is triggered. With the specified detection target, the related visual channels are selected and activated in this phase, based on the top-down inference of our task-oriented BN. After one Inattention facial display is detected, the system evolves into the Inattention verification (Ia-Verification) phase, and meanwhile the Yawning detection (Ya-Detection) phase is triggered. In this way, we can determine the entire targeted facial displays according to the current phase instance. From the entire facial display, a top-down BN inference is conducted to identify the associated single AUs and activate the corresponding visual channels. Once the phase instance changes, we conduct the decision making for the visual channels and also adjust the CPT parameters within the same BN structure according to the current phase configuration. At each frame slice, the system checks the information extracted from the activated visual channels and collects the evidence. With the observed evidence, a bottom-up BN inference is conducted to classify the current facial display. We summarize the purposive sensing in one recognition cycle as follows (a code sketch of this cycle is given after the list):

1. Set the entire facial displays for the detection phase. With the top-down BN inference, identify the associated single AUs and activate the corresponding visual channels.

2. In each frame slice, classify the current facial display by bottom-up BN inference from the observed evidence, and seek the ONSET of the facial event from the curve of the posterior probability.

3. If the ONSET is identified with a high confidence, evolve into the verification phase; otherwise go to step 2.

4. Based on the targeted display(s) in the verification phase, use the top-down inference to activate the most informative visual channels, where the most informative channel is defined as the one that can maximally reduce the uncertainty of the hypothesis.

5. From the activated visual channels, collect the observation evidence and seek the APEX of the current facial event frame by frame.

6. If the APEX in the verification phase is identified with a high confidence, reset the recognition cycle and go to step 1; otherwise go to step 5.
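The sketch below captures the detection/verification cycle as a simple controller over per-frame posteriors. It is a simplification: the thresholds, the fixed event order and the sequential (rather than concurrent) handling of verification and next-event detection are assumptions for illustration, not the paper's inference procedure.

```python
# Simplified sketch of the two-phase (detection / verification) recognition
# cycle of Section 3.3. ONSET/APEX are detected by thresholding the posterior
# curve; in the paper they come from the task-oriented DBN inference (Fig. 6).

EVENT_ORDER = ["Inattention", "Yawning", "Falling asleep"]
ONSET_T, APEX_T = 0.6, 0.85     # illustrative confidence thresholds

def recognition_cycle(posteriors_per_frame):
    """posteriors_per_frame: list of dicts {event: posterior} per frame."""
    target, phase = 0, "detection"
    log = []
    for t, post in enumerate(posteriors_per_frame):
        event = EVENT_ORDER[target]
        p = post.get(event, 0.0)
        if phase == "detection" and p >= ONSET_T:
            log.append((t, event, "ONSET -> verification"))
            phase = "verification"   # the paper also triggers the next event's detection here
        elif phase == "verification" and p >= APEX_T:
            log.append((t, event, "APEX verified"))
            target = (target + 1) % len(EVENT_ORDER)
            phase = "detection"      # reset for the next targeted event
    return log

frames = [{"Inattention": x} for x in (0.3, 0.65, 0.9)] + \
         [{"Yawning": x} for x in (0.7, 0.9)]
print(recognition_cycle(frames))
```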
4. Experimental Results

4.1. Static facial event classification

Figure 7: Some typical facial displays in fatigue sequences, with the tracked features attached.

Fig. 7 collects some frames of typical expressions from two different fatigue sequences. Fig. 7(a), (b) and (c) depict the neutral, inattention and yawning states of one person; Fig. 7(d), (e) and (f) show the inattention, yawning and falling asleep states in the other sequence. In the fatigue-detection-oriented Bayesian Network (Fig. 3), we set the prior probabilities of all the levels (Inattention, Yawning and Falling asleep) to be equal and conduct the bottom-up classification from the observed evidence.

Table 4: Static classification
  No.  ED  OE    HD    FT          FD    Result
  (a)  x   x     x     x           x     Neutral
  (b)  x   AU51  x     x           x     Inattention
  (c)  x   x     x     AU7, AU27   AU9   Yawning
  (d)  x   AU52  x     x           x     Inattention
  (e)  x   x     x     AU7, AU27   x     Yawning
  (f)  x   AU51  AU55  x           x     Ambiguous (Ia / Fa)

Table 4 shows the observed evidence from each visual channel and the classification result for each image.

ED, OE, HD, FT and FD in the table indicate the Eye Detector, Head Orientation estimator, Head motion Detector, Facial Feature Tracker and Furrow Detector, respectively; (Ia), (Yw) and (Fa) represent Inattention, Yawning and Falling asleep, and x means no evidence. Row (a) of Table 4 shows that no fatigue-related evidence is available from any of the sensory channels in the image of Fig. 7(a). The posterior probabilities of the three fatigue states obtained from the static BN inference are equal, which means the person is in the neutral state, i.e., a state with no fatigue-related facial display. Row (b) shows that in Fig. 7(b) the evidence of AU51 is detected from the OE channel; the maximum posterior probability is obtained for Inattention, so the facial event is classified as Inattention. Row (c) shows that AU7, AU27 and AU9 are detected from the channels FT and FD, and the classification result is Yawning. Rows (d) and (e) show that the corresponding classification results are the Inattention and Yawning states, respectively. Row (f) shows that AU51 and AU55 are detected from the channels OE and HD, respectively; the posterior probabilities of Inattention and Falling asleep are the same (0.47). At this moment there is ambiguity in the static facial display, and it is difficult for the system to identify the fatigue state.

The above static recognition experiments verify that a good classification can easily be obtained for typical facial displays with the task-oriented BN. However, when ambiguity exists among the displays, or during the transition period between states, additional temporal information is often required to resolve the ambiguity.

4.2. Dynamic event classification

4.2.1. A moderate fatigue sequence

Fig. 8 shows a moderate fatigue sequence, consisting of samples from one 210-frame sequence at an interval of 10 frames. The clip starts with 20 frames of neutral states. Then several facial events of Inattention are performed, with some neutral states in the transition moments. Finally, the subject opens his mouth and performs facial events of Yawning.

Figure 8: One video clip with blended facial displays of the neutral, Inattention and Yawning states.

Fig. 9 depicts the posterior probability curves of the three fatigue levels obtained by the task-oriented DBN system. We uniformly set the transition probability between two consecutive temporal slices to 0.5. In the first 20 frames, no fatigue-related evidence is extracted and the probability values of the three levels remain equal. From frame 21 to 185, the posterior probability of Inattention stays the highest among the three states. In this period, the probability curve of Inattention goes down several times because of neutral states between different Inattention facial displays; at these moments, no fatigue-related evidence is observed at all. With the temporal dependency from the previous classification results, the mental state of the subject is still classified as Inattention, although the value is comparatively lower. After frame 185, the Yawning evidences (AU25, AU26, AU27) are consistently detected and the probability curve of Yawning increases gradually. During this period, the Yawning curve also drops once because evidences of both Yawning and Inattention are detected simultaneously at around frame 200.

Figure 9: The posterior probability curves.

The experiments on this sequence validate that the DBN-based approach successfully integrates temporal information from previous moments to remove the ambiguity due to multiple conflicting evidences or transition periods.

4.2.2. Dynamic inference and sensing

Fig. 10 shows samples from a 120-frame sequence at an interval of 5 frames, which consists of Inattention, Yawning and Falling asleep states in order.

Figure 10: A typical fatigue sequence.

Fig. 11 depicts the posterior probability values of the three states obtained from the inference of the DBN.

Figure 11: The posterior probability curves.

We can see in this graph that the confidence level goes down when some supporting evidence is missing, such as at frames 70 and 80. Because of the integration of the previous results and the knowledge of the application domain, the classification remains stable even at these moments. Frame 110 here corresponds to Fig. 7(f) in the static experiment. At this frame, the posterior probability of Yawning is only 0.0034, and the temporal dependency of the DBN significantly removes the ambiguity present in the static classification of row (f) in Table 4.

Figure 12: Sensor selection.

Fig. 12 depicts the selection of activated visual channels during the recognition of the above clip. According to the task-oriented DBN, the Inattention-related visual channels, namely the Head motion Detector (HD), Head Orientation estimator (OE) and Eye Detector (ED), were initially activated. At frame 5, the evidence of Inattention was detected from the OE channel and the system started to focus on OE. At frame 20, the ONSET of the Inattention display was detected based on the curve of the posterior probability of Inattention in Fig. 11, so the system finished the detection phase of Inattention and evolved into the verification phase of Inattention. At the same time, the detection phase of Yawning was triggered and the associated visual channels, the Facial Feature Tracker (FT) and Furrow Detector (FD), were activated. At frame 40, the system identified the APEX of the Inattention expression and verified the Inattention state. After that, at frame 45, new evidence of Yawning was detected from the FT channel and the system focused on it. At frame 60, the ONSET of the Yawning expression was identified; the system evolved into the verification phase of Yawning and the detection phase of Falling asleep, and the HD channel was activated. At frames 75 and 85, one APEX of Yawning was identified and the Yawning state was verified, respectively. From frame 85, new evidence of Falling asleep was detected and the system focused on the HD channel. At frame 95, the ONSET of Falling asleep was identified and the system further evolved into the verification phase of Falling asleep. Finally, the APEX of Falling asleep was identified from the curve of the posterior probability of Falling asleep in Fig. 11.

The experimental sequences here simulate typical fatigue evolution processes with only short transition periods. In reality, the Yawning state can sometimes last a rather long period without evolving into the Falling asleep state; in this case, the activated channels stay the same for a while. Sometimes the Yawning expression may disappear for some reason; in this case, the system cannot detect any evidence of Yawning for a certain period of time, all three posterior probabilities tend to decrease and become equal after a while, the fatigue states disappear at this point, and the system resets to the detection phase for Inattention.

5. Conclusion

In this paper, we presented a practicable and efficient framework for real-world facial event recognition. The proposed method has several favorable properties:

- A dynamic and stochastic facial expression representation framework based on combining FACS and DBNs is proposed. This framework can be adapted to different applications.

- The domain knowledge and previous analysis results are systematically integrated to remove ambiguities in facial event displays.
- The selective sensing structure among the multiple visual channels makes the recognition of facial events very efficient, and it can easily be extended to encapsulate other effective visual channels, such as estimation of gaze and eyelid movement, and even sensors of different modalities.

Acknowledgment

This project is supported in part by a grant from AFOSR under grant number F.

References

[1] M. Pantic and L. Rothkrantz, "Automatic analysis of facial expressions: The state of the art," IEEE Trans. Pattern Anal. Machine Intell., vol. 22, no. 12.
[2] P. Ekman, W. V. Friesen, and J. C. Hager, Facial Action Coding System (FACS): Manual. San Francisco, CA: CD-ROM.
[3] J. Zhao and G. Kearney, "Classifying facial emotions by backpropagation neural networks with fuzzy inputs," in Proc. Int'l Conf. Neural Information Processing.
[4] M. Pantic and L. Rothkrantz, "Expert system for automatic analysis of facial expression," J. Image and Vision Computing, vol. 18, no. 11.
[5] Y. Tian, T. Kanade, and J. F. Cohn, "Recognizing action units for facial expression analysis," IEEE Trans. Pattern Anal. Machine Intell., vol. 23, no. 2.
[6] J. J. Lien, T. Kanade, J. F. Cohn, and C. Li, "Detection, tracking, and classification of action units in facial expression," Int'l J. Robotics and Autonomous Systems, vol. 31.
[7] Z. Zhu, Q. Ji, K. Fujimura, and K. Lee, "Combining Kalman filtering and mean shift for real time eye tracking under active IR illumination," in Proc. Int'l Conf. Pattern Recognition, Aug.
[8] H. Gu, Q. Ji, and Z. Zhu, "Active facial tracking for fatigue detection," in Proc. IEEE Workshop on Applications of Computer Vision, Florida, USA.
[9] Q. Ji and Z. Zhu, "Eye and gaze tracking for interactive graphic display," in Smart Graphics, Hawthorne, NY, USA.
[10] F. V. Jensen, Bayesian Networks and Decision Graphs. Springer-Verlag New York, 2001.
