Perception of emotional gaits using avatar animation of real and artificially synthesized gaits


2013 Humaine Association Conference on Affective Computing and Intelligent Interaction

Halim Hicheur, Hideki Kadone, Julie Grèzes and Alain Berthoz

Department of Medicine, University of Fribourg, 90 boulevard de Pérolles, Fribourg, Switzerland. Center for Cybernics Research, University of Tsukuba, Tsukuba, Japan. Laboratory of Cognitive Neuroscience, INSERM U960, Ecole Normale Superieure, Paris, France. Laboratoire de Physiologie de la Perception et de l'Action, Collège de France, Paris, France.

Abstract: The emotional component of human walking patterns can be characterized by a limited set of kinematic cues [8]. Here, we tested whether the artificial synthesis of emotional gaits based on these cues can facilitate emotion perception in human observers. To this purpose, we recorded neutral gaits and artificially modified the walking speed, the upper-body posture, or some combination thereof. Naïve observers had to judge the emotion conveyed by these animated avatars or by real emotional gaits recorded from professional actors. We found that the recognition rates of both animated and real movies were comparable and particularly high (e.g. > 85%): all emotions but fear were unambiguously perceived in walking avatars thanks to a particular combination of walking speed and head/trunk posture. This reveals a strong coupling between the motor and perceptual processes underlying emotion expression and recognition. The potential applications of these findings in the fields of animated motion pictures and humanoid robotics are discussed.

Keywords: emotion, locomotion, posture, avatar animation, gait analysis

I. INTRODUCTION

Emotional body language (EBL) provides reliable cues for recognizing emotions even when viewed from a distance and when facial expressions are not visible. As revealed by a number of studies in the literature, EBL can readily be recognized in static postures [13], [4], whole-body movements [15], [1] or even in simple dynamic point-light displays [5]. At the methodological level, previous investigations of the neural basis of EBL perception stressed the relevance of using stimuli consisting of whole-body emotional expressions, as well as those including biological movements [7], [4], [6], [11], in addition to static facial expressions of emotions. While these studies extended the question of the expression of emotions to whole-body posture, the kinematic properties of emotional whole-body movements have received little attention. In the case of arm movements, Pollick and colleagues [12] showed that humans perceive particularly well the different emotions conveyed by arm movements, using kinematic cues such as peak wrist velocities and accelerations. Atkinson and colleagues [1], [2] extended these findings to the case of body gestures. In their study, naïve observers had to judge the emotions conveyed in grey-scale movies of people expressing different emotional actions (including walking). Inverting the movies or playing them backwards impaired emotion recognition significantly more for patch-light displays (where the body was not visible as a whole) than for the identical but fully illuminated displays; however, recognition performance remained above chance level for patch-light displays, providing evidence for distinct contributions of form-related and motion-related cues to emotion recognition from whole-body movements.
Troje [14] proposed a computational approach for analyzing and synthesizing human gait patterns, successfully applied to gender recognition during the production of locomotor behavior. A comparable approach, based on ICA-based algorithms [10], was recently applied to the recognition of emotional gaits. Here, we aimed at investigating whether a minimal set of behaviorally relevant variables can be used to generate realistic emotional gaits. For instance, how emotions affect the production of walking behaviors was recently described [8] as a specific combination of walking speed and upper-body changes. We tested here the hypothesis that these combined motion-related and posture-related changes also determine the processes underlying emotion perception. We believe that such a bio-inspired approach could be beneficial for the implementation of emotional gaits in bipedal robots as well as for the animation of artificial walkers in arts, entertainment and multimedia.

II. METHODS: SYNTHESIS AND PERCEPTION OF REAL AND ANIMATED MOTION

A. Kinematic features of emotional gaits

Actors walked in five emotional states (neutral NE, joy JO, anger AN, sadness SA and fear FE). Their whole-body movements were recorded using a VICON motion capture system equipped with 24 cameras. We used a total of 41 markers placed on the body according to the Vicon Plug-in Gait model (VICON; Oxford Metrics Limited, Oxford, UK). Importantly, the actors were instructed to feel the emotion before starting to walk. They were then free to interpret and express the emotions with minimal guidance from their (professional) instructor. In the case of fear, a scenario of walking forward in a dark and dangerous room was specified to some actors in order to have them walk (and not run, as did some actors who spontaneously expressed fearful escape behaviours). The comparison between emotional and neutral gait patterns (the latter performed at normal/natural, slow and fast speeds) allowed us to discriminate between speed-related and emotion-specific changes. The kinematic analysis was performed at both the global (i.e. walking speed, step length, duration and width, as well as stance/swing phase durations) and local (angular motion of the head, trunk, arms and legs across step cycles) levels. The discriminative power of each of these motion components revealed that almost all emotional gaits were characterized by a particular combination of speed and head-trunk orientation in space [8]. In what follows, we describe how we integrated these findings in order to simulate emotions in virtual characters (avatars) walking at different speeds.

B. Animation of avatars from real motion

We followed a procedure combining motion capture and character animation techniques to create avatar animations. Two avatar models were created from the motion capture of two actors (one male and one female). We first created the skeleton of each actor using the marker coordinates (Autodesk MotionBuilder 7.5). We then selected a particular character (Plastic Man, MotionBuilder software, see figure 1) and created a dynamic calibration of this character by mapping the body segments of the character onto the (motion-captured) skeleton of the actor; this was done after recording specific movements aimed at acquiring information at every joint level. The dynamic calibration information was then used to establish a correspondence between the joints of the experimentally recorded skeleton and those of the animated character. Once this manual phase of animation was completed for both actors, we obtained avatar models of the actors, which we then used to animate all recorded gaits. This phase resulted in animations of 50 emotional gaits.

Figure 1. Snapshots of real and animated movies, illustrating here the neutral and angry gaits (panel labels: NEUTRAL, ANGER; REAL MOVIES, ANIMATED MOVIES). Animated movies were generated either using real motion-capture data recorded from two actors or through artificial modulations of gait patterns recorded in the neutral condition (see text for details).

C. Animation of avatars from artificially synthesized motion

The animated gaits recorded in the neutral condition were used to generate artificial gaits by manipulating two parameters: the walking speed and the upper-body orientation (either the head orientation alone or the head and trunk orientations). In order to test whether a gait could convey emotional information, three types of manipulations were applied: at the level of the walking speed only, the upper-body orientation only, or some combination thereof. This led to a large range of configurations, as detailed below.

1) Speed changes: The walking speed was either decreased (by 60%, 50%, 40%, 30% or 20%, corresponding to the S40, S50, S60, S70 and S80 conditions, respectively) or increased (by 20%, 30%, 40%, 50% or 60%, corresponding to the S120, S130, S140, S150 or S160 conditions, respectively) relative to the neutral condition. The choice of these speed values was based on the observations of emotional gaits reported in [8].
Speed changes were performed by scaling the marker coordinates along the direction of heading as well as by changing the playback duration of the neutral movies (changing the playback duration alone generated unrealistic steps). To change the speed by a factor S, the marker-coordinate scaling produced a speed change of S^(1/2), and the playback duration was then adjusted so that the overall change reached the factor S. The scaling procedure produced changes coherent with the step-length changes reported in [8]. It was achieved as follows: for every marker M_i, we applied a single scaling coefficient α (adjusted as a function of the desired speed) to the coordinate x corresponding to the direction of locomotion, using the following equation:

X_i(t) = α (x_i(t) - x_i(0)) + x_i(0)    (1)

where X_i is the new coordinate of the marker M_i at instant t, obtained after scaling the original coordinate x_i(t) with the coefficient α. This was applied for both real and virtual gaits. The playback speed of the movie was then modified to complete the speed change.

2) Orientation changes: Changes in upper-body orientation were performed at the head (H), the trunk (T) or at both the head and trunk (HT) levels. For each segment, four types of increments/decrements were applied, resulting in 12 changes. A rotation of the segment (H, T or HT) was performed either in the downwards (D) direction (by -8 or -16 degrees, leading to the HD08, HD16, TD08, TD16, HTD08 and HTD16 postures for the head HD, trunk TD and head-trunk HTD conditions, respectively) or in the upwards (U) direction (by 8 or 16 degrees, leading to the HU08, HU16, TU08, TU16, HTU08 and HTU16 postures). These rotations were directly applied to the head and trunk joint angles computed from the modeled skeleton of each actor. The choice of this rotation range (from -16 to 16 degrees) was based on the range of changes reported for the emotional gaits described in [8].
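As an illustration of the two manipulations above, the following minimal Python sketch (ours, not the authors' code; the data layout, with per-marker heading-axis trajectories and per-segment joint-angle series, is assumed) implements the scaling of equation (1), the split of a speed factor S into a spatial share of S^(1/2) and a playback-rate share, and the constant pitch offsets applied to the head and trunk angles.

```python
import numpy as np

def scale_heading_coordinate(x, alpha):
    """Eq. (1): scale a marker's heading-axis trajectory x(t) around its
    initial position x(0) by the coefficient alpha."""
    x = np.asarray(x, dtype=float)
    return alpha * (x - x[0]) + x[0]

def synthesize_speed_change(markers_x, speed_factor):
    """Change gait speed by `speed_factor` (e.g. 0.4 for S40, 1.6 for S160).
    Spatial scaling contributes sqrt(speed_factor); the remainder is obtained
    by adjusting the movie playback rate (returned separately).
    `markers_x` maps marker names to heading-axis trajectories."""
    alpha = np.sqrt(speed_factor)            # spatial share of the change
    playback_rate = speed_factor / alpha     # temporal share (also sqrt(S))
    scaled = {m: scale_heading_coordinate(x, alpha) for m, x in markers_x.items()}
    return scaled, playback_rate

def offset_upper_body(joint_angles, segments=("head", "trunk"), offset_deg=-16.0):
    """Apply a constant pitch offset (degrees) to the chosen segments'
    joint-angle time series, e.g. segments=("head", "trunk") with -16
    corresponds to the HTD16 posture."""
    out = dict(joint_angles)
    for s in segments:
        out[s] = np.asarray(joint_angles[s], dtype=float) + offset_deg
    return out
```

Under these assumptions, a hypothetical Speed40HTD16 stimulus would correspond to synthesize_speed_change(markers_x, 0.4) followed by offset_upper_body(joint_angles, ("head", "trunk"), -16.0).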

3) Speed and orientation changes: Finally, we also combined speed and orientation changes similar to those described above. Note that here the speed was reduced or increased by 30% or 60%. This yielded a total of 48 different speed-orientation combinations (3 segments x 4 speeds x 4 angles) for each actor. All 50 real emotional gaits were converted into movie format with a black background (figure 1). The artificially generated gaits represented a total of 140 movies, also converted into this format. A total of 95 movies per actor (25 recorded gaits, 10 accelerated or decelerated gaits, 12 gaits with orientation changes and 48 gaits with both speed and orientation changes) was generated (thus 95 x 2 actors = 190 movies in total, including 140 artificially generated movies). The visual height of the avatars on the screen was 8.6 centimetres for the actress and 9.2 centimetres for the actor. Real and animated movies were mixed and presented in a randomized order.

D. Control condition

Since the artificial 3D environment and the absence of facial expression could have biased the perception of animated gaits, we recorded real movies using a digital camera (HDR-SR7E Sony Handycam, 50 Hz sampling frequency) of 3 actors (2 males and 1 female) who had already participated in the initial study [8]. They wore black clothes and were filmed in profile against a white background (figure 1). The distance between the camera and the actors was 7.38 meters and the actors walked for 5 meters (5 repetitions were performed for each condition). The movie resolution was reduced so that facial expressions could not be perceived. The visual height of the actors on the screen was 7.53 ± 1.15 centimetres. These settings were chosen to match the visual height of our animated avatars. A total of 75 movies was generated following this procedure and used as control stimuli.

E. Emotion discrimination task

Naïve observers had to judge the emotions conveyed in real or animated movies. The video clips were presented using the Presentation software v12.0. For each movie, observers were first asked to judge the emotion conveyed by the gait as quickly as possible and then to rate its intensity. For the first task, observers had to choose among six buttons displayed on the computer screen (neutral, fear, anger, joy, sad, unknown) using the keyboard. The unknown button was used to avoid having observers spend too much time on a movie for which a particular emotion was hardly identifiable. To avoid any learning effect, stimuli were presented in a random order. After this first choice, observers judged the intensity of the perceived emotion (if any) by rating it on a scale whose extremities were labeled Low and High. They had to slide a mouse cursor along this scale, and the collected scores ranged from 0 to 500. For all tested trials (20 observers x 75 movies: 3 actors x 5 emotions x 5 repetitions), we computed the recognition rate, the response time for both correct and incorrect responses, and the perceived intensity. Since the perceived intensity is a subjective measurement, we normalized the intensity scores of each observer into a percentage representation, using the maximal and minimal intensity scores delivered by that observer. The absolute response time was expressed in seconds and the normalized response time was expressed as a percentage of the movie duration (for the purpose of comparison between the tested emotions, which were characterized by different speeds).
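The per-observer normalizations just described are simple min-max and ratio transforms; the following minimal Python sketch (ours, purely illustrative) makes them concrete.

```python
import numpy as np

def normalize_intensity(scores):
    """Map one observer's raw intensity scores (0-500 scale) to percentages
    of that observer's own range (per-observer min-max normalization).
    Assumes the observer used at least two distinct scale values."""
    s = np.asarray(scores, dtype=float)
    return 100.0 * (s - s.min()) / (s.max() - s.min())

def normalize_response_time(rt_seconds, movie_duration_seconds):
    """Express a response time as a percentage of the movie duration;
    values above 100% mean the observer answered after the movie ended."""
    return 100.0 * rt_seconds / movie_duration_seconds
```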
We also calculated the duration of the intensity response. Two types of movies (real and animated) were used as stimuli in two separate groups, as detailed below.

1) Control condition (real movies): Twenty (20) naïve observers (12 males and 8 females, 29.9 ± 5.9 years old) participated in this experimental session.

2) Simulated emotional gaits (real-motion or artificially-generated animated movies): Thirty-two (32) naïve observers (18 males and 14 females, 26.8 ± 6.5 years old, different from those tested in 1) participated in this session. They were randomly assigned to two groups (N=16) for which the movies were generated from the actor's or the actress's movements. This was done to avoid any confusion, as the avatars had the same appearance. Indeed, given that the tested actors had different natural walking speeds, shifting from one avatar to another during movie presentation might have considerably biased the observers' perception. Note that animated movies based on real motion capture and on artificially modified neutral gaits were mixed and presented in a randomized order.

F. Statistical analysis

We performed repeated-measures analyses of variance (ANOVA) and t tests with the Statistica 5.1 software package (Statsoft) in order to compare the mean recognition rates, response times, emotional intensities and intensity response times. As the observers' performances for the actor and the actress movies did not significantly differ (ANOVA performed on the recognition rates of all tested gaits between the two groups of observers, F(1,15), p > 0.05), we pooled all movies together for the analysis.

III. RESULTS

A. Real movies (Control condition)

1) Recognition rates: The distribution of the observers' responses to real movies is presented in figure 2a. On average, the emotion recognition rate was above 90%, except for the JO condition, where it was around 75%. Interestingly, observers answered anger instead of joy in more than 10% of the cases.

2) Response times: The response times were longer for fear and sadness than for the other conditions (F(1, 222), p < 0.01; see figure 2b).

Figure 2. Perception of emotion from real movies. a- Distribution of the observers' responses across emotional gaits (fear FE, sadness SA, neutral NE, joy JO, anger AN and unknown UN). b- Response times (in seconds or in % of movie duration m.d.), perceived emotional intensity (in % of maximal intensity M.I.) and intensity response time duration. (Mean ± SD)

The response times observed for the neutral, joy and anger conditions did not significantly differ (F(2, 444) = 3.87, p > 0.01). Observers delivered their answers after 8 seconds for the fear condition and around 7 seconds for sadness (this difference was statistically significant, F(1, 222) = 22.21, p < 0.01), and after 6 seconds in the other conditions. In contrast, the response times computed after normalization (i.e. after dividing by the movie duration) revealed that fear and sadness were recognized faster than anger and joy (F(1, 222), p < 0.01). Indeed, observers answered at 55% of the movie duration for these two emotions, while they waited until after the end of the movie (around 110%) for anger and joy. The response time for the neutral condition was intermediate (around 83%) and was significantly different from the fear-sadness and joy-anger groups, respectively (F(1, 222), p < 0.01 in both cases). Observers answered significantly faster for fear than for sadness (F(1, 222) = 7.32, p < 0.01), although the response times of these two emotions were close to one another. Joy was also recognized faster than anger (F(1, 222) = 27.23, p < 0.01). Since the actors walked the same distance for the different emotions, the observed differences between the absolute and normalized response times were a function of the movie duration only.

3) Perceived emotional intensity: The perceived emotional intensity was comparable across emotions and was around 65% of the maximal intensity. However, statistical comparisons revealed that only the sadness, joy and anger scores were not significantly different (p > 0.05) and that the perceived intensity for fear was significantly higher than for joy, anger and sadness (F(1, 222) = 12.89, p < 0.01). As expected, the score for the neutral condition was close to 0. The intensity response time was comparable across emotions (p > 0.05) and close to 2 seconds: it was significantly longer (F(1, 222) = 80.26, p < 0.01) than for neutral gaits (around 1.2 seconds). These results showed that the expressiveness of the actors' gaits was good: recognition rates were around 90% (with the exception of joy) and intensity scores were significantly higher for emotional than for neutral gaits.

B. Animated movies

1) Real-motion animated movies

1a) Recognition rate: The distribution of the observers' responses to avatar movies is presented in figure 3a. On average, the emotion recognition rate was above 90%, except for the JO condition, for which it was around 65%. Interestingly, here observers answered anger instead of joy in about 20% of the movies. The same observation was made for the control condition.

1b) Response times: The response times (figure 3b) significantly differed across emotions (F(4, 412) = 4.96, p < 0.01). They were equal to 5 seconds for sadness, joy and anger, and were not found to significantly differ from the neutral condition (p > 0.05). However, they were significantly longer (around 7 seconds) for fear than for the other conditions (F(1, 103) = 4.62, p = 0.03). They were 1 to 2 seconds shorter than those observed in real movies for the fear and sadness conditions.
The response times computed after normalization (figure 3b) revealed that fear and sadness were recognized faster than anger and joy, as observed for real movies (F(1, 103), p < 0.01). Indeed, observers answered at 55% of the movie duration for fear and sadness, while they waited until after the end of the movie (around 150% of the movie duration) for anger and joy. The response time for the neutral condition (around 135%) was close to, but significantly lower than, that of the joy-anger group (F(1, 103) = 17.08, p < 0.01). Observers answered significantly faster to fear than to sadness (F(1, 103) = 11.60, p < 0.01), although the normalized response times were close to one another. Joy and anger were recognized after the same duration (p > 0.05). Since the actors walked the same distance for all emotions, the differences observed between the absolute and normalized response times were, here also, a function of the movie duration only.

1c) Perceived emotional intensity: The perceived emotional intensity was systematically higher for animated than for real movies.

Figure 3. Perception of emotion from avatar movies animated using real motion capture data. Same legend as figure 2. Note that the recognition rates are similar to those observed for real movies.

While the intensity was around 65% for the real movies, the perceived intensity of the emotion ranged between 60% (for joy) and 80% (for anger and fear) for the avatars (figure 3a). Statistical comparisons revealed that the fear, sadness and anger scores (around 80%) were not significantly different (p > 0.05) and that the perceived intensity of joy was significantly lower (around 55%) than that of the fear-anger-sadness group (F(1, 103) = 44.70, p < 0.01). As expected, the score for the neutral condition was close to 0 here also. The intensity response time was around 1.5 seconds (figure 3b) and was comparable across emotions (p > 0.05), while it was significantly longer (F(1, 103) = 58.78, p < 0.01) than that for neutral gaits (around 0.9 seconds). The intensity response time was around 500 milliseconds shorter for animated than for real movies. These results showed that the expressiveness of the avatars' gaits was good, as the recognition rates were around 90% (with the exception of joy): intensity scores were higher for emotional gaits and the time taken to choose a particular intensity was shorter than for real movies.

2) Artificially-generated animated movies

In this section, we examined the respective and combined effects of speed and upper-body postural changes artificially introduced into neutral gaits.

2a) Speed changes: The effects of decelerating or accelerating the neutral gaits on the observers' responses are illustrated in figure 4a1. For speeds ranging between 60% and 130% of the original speed, no emotion was dominantly perceived and observers answered neutral in more than 40% of the cases. For decelerated gaits, fear and sadness progressively reached significant scores at speeds S40 and S50, where they were recognized in 60% of the observers' responses. The opposite was observed for accelerated gaits, where joy and anger were reported for S150 and S160 in more than 60% of the cases. The only situation where a single emotion was perceived above chance level (50%) was speed S160: here, anger was perceived in more than 60% of the cases. Thus, speed changes provided a first cue for discriminating emotions, yet one insufficient on its own to reach significant emotion recognition rates.

2b) Head and trunk postural changes: The effects of inclining the upper body downwards or upwards on the observers' responses are illustrated in figure 4a2. A change in the head/trunk orientation by up to 16 degrees yielded significant emotion recognition rates. An orientation change in the downwards direction (for both head and trunk segments) resulted in either sadness or anger recognition in 40% of the cases, while a change in the upwards direction resulted in joy recognition in more than 50% of the cases. Fear was the least frequently perceived response (rates < 5%). Orientation changes thus provided emotional cues (except for fear), yet these were insufficient to reach significant and systematic recognition rates.

2c) Combined speed and postural changes: The effects of changing both speed and upper-body posture are illustrated in figure 4a3. These combined changes yielded significant emotion recognition (except for fear).

2c1) Recognition rate: A decelerated gait (S40 or S70) combined with a head-trunk downwards inclination resulted in sadness perception.
An accelerated gait (Speed160) combined with a similar downwards inclination resulted in anger perception and, finally, an accelerated gait (Speed130) combined with an upwards orientation change of the upper body resulted in joy recognition. More specifically, the highest scores were obtained for the conditions Speed40HTD16, Speed40TU16 and Speed160TD16 (figure 4b), where the recognition rates were around 95%, 70% and 90% for the sadness, joy and anger responses, respectively. The recognition rate of fear was not increased by adding orientation changes (as expected from the results reported in [8]), and a gait decelerated by up to 50% (S50) resulted in fear perception in 50% of the cases.

2c2) Response times: Observers answered after 8, 6, 6 and 4 seconds for the S50 (fear-like), Speed40HTD16 (sadness-like), Speed40TU16 (joy-like) and Speed160TD16 (anger-like) conditions, respectively (not shown). These response delays were 1 to 2 seconds longer than for the real emotional gaits, except for joy and anger recognition, for which the delay was shorter for the Speed160TD16 condition than for real anger. When normalized with respect to movie duration, the longer response time became even more evident for fear (150% of the movie duration for the Speed50 condition instead of 50% for real fear) and sadness (100% of the movie duration for Speed40HTD16 instead of 60% for real sadness).

Figure 4. Perception of emotions from artificially-generated animated movies. a1-a2-a3: Distribution of the observers' responses (in % of the total number of responses) across artificial gaits synthesized by modifying the walking speed (a1), the upper-body posture (a2) or a combination thereof (a3) in neutral gaits. The colored arrows in a1-a2-a3 correspond to the most perceived emotions in b.

2c3) Perceived emotional intensity: The perceived emotional intensity was also higher for real emotional gaits than for artificially generated ones. This was particularly true for the fear condition (80% versus 20%) but also held for the other conditions (75% versus 60%, 60% versus 40% and 75% versus 55% for the real versus artificial sadness, joy and anger conditions, respectively). The intensity response times were comparable between real and animated movies (they were even shorter for animated movies in the fear and anger conditions, by up to 500 ms). Taken together, these results indicate that, except for fear, the emotions were recognized with scores comparable to those observed for real movies, yet with a lower perceived intensity. These speed and orientation changes matched the emotion-specific values reported in [8].
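To summarize these results in executable form before turning to the discussion, the emotion-specific combinations identified above can be read as a small lookup table. The sketch below is purely illustrative (the mapping, names and helper functions are ours, reusing the sketches from the Methods section; for joy, the Results and Discussion name slightly different conditions, and we use the narrative Speed130 value here).

```python
# Illustrative mapping: emotion -> (speed factor, segments to tilt, pitch offset in degrees),
# taken from the best-recognized artificial conditions reported above.
EMOTION_GAIT_PARAMS = {
    "sadness": (0.4, ("head", "trunk"), -16.0),  # Speed40HTD16 (~95% recognition)
    "joy":     (1.3, ("trunk",),        +16.0),  # per narrative; Results also name Speed40TU16
    "anger":   (1.6, ("trunk",),        -16.0),  # Speed160TD16 (~90%)
    "fear":    (0.5, (),                  0.0),  # S50: deceleration only, weakly recognized
}

def emotional_gait(markers_x, joint_angles, emotion):
    """Derive an 'emotional' gait from a neutral one by applying the speed and
    posture manipulations sketched in the Methods section (relies on the
    synthesize_speed_change and offset_upper_body helpers defined there)."""
    speed_factor, segments, offset_deg = EMOTION_GAIT_PARAMS[emotion]
    scaled_markers, playback_rate = synthesize_speed_change(markers_x, speed_factor)
    angles = offset_upper_body(joint_angles, segments, offset_deg) if segments else dict(joint_angles)
    return scaled_markers, angles, playback_rate
```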
IV. DISCUSSION

We observed that real and animated emotional gaits were recognized with similar scores. This was mainly explained by a particular combination of two kinematic cues (namely the walking speed and the upper-body orientation), as evidenced by the strong emotional percepts (with the exception of fear) produced by an artificial modulation of these parameters in animated neutral gaits. However, the perceived intensity was weaker for the artificial than for the real-motion animated gaits, suggesting a role for other cues in the generation of emotional percepts. The underlying motor-perceptual interactions as well as the potential impact of these findings are discussed below.

A. Perception of the emotional walking behaviour

The human ability to recognize a gait from point-light displays [9], or to recognize gender [14] or emotions from different arm or whole-body gestures [12], [1], depends upon the visual availability of kinematic and form-related cues [2]. Based on the kinematic changes occurring during emotional vs neutral gaits [8], we hypothesized that emotions can be inferred from the visual processing of walking speed and upper-body orientation. According to this hypothesis, we created artificial gaits by manipulating these parameters in neutral gaits. It should be noted that observers answered unknown in few cases in our study, meaning that both the visual appearance of the avatars and the animated motion fairly reproduced the natural gaits produced by the actors. We observed that neither walking speed changes alone nor postural changes alone could correctly produce the perception of emotional gaits. In contrast, artificial gaits with specific combinations of speed and postural changes (Speed130TU16, Speed40HTD16 and Speed160HTD16) were unambiguously recognized as joy, sadness and anger, respectively. These quantitative modulations of neutral gaits strikingly match the kinematic variations observed across emotional gait patterns [8], revealing a strong coupling between the motor and perceptual processes involved in emotion production and recognition, respectively. The nature of these sensorimotor interactions has been investigated in previous work: shared motor representations used to predict the sensory consequences of one's own actions were shown to be also used to predict another's behavior and its associated predicted somatosensory consequences (involving the parietal and premotor cortices [16], [17]). This kind of approach may be useful in identifying the cortical structures involved in the processing of the real and artificial stimuli tested in our study. Our approach proved to be particularly helpful for all emotions except fear. Fearful gaits are also characterized by very weak discriminative features at the behavioural level [8]. One possible explanation might be that fear requires additional motion features (like specific arm movements) that are more context-dependent than those of other emotions (see the Methods section). As walking speed and postural changes were sufficient to induce emotion perception for the other emotions, we can only speculate that the fearful behaviour tested in the present study requires taking into account additional and probably discrete features which remain to be identified. The present study, at least, clearly confirms that emotions can be inferred from whole-body movements, independently of facial expressions.

B. Potential applications

The ability of humanoid robots to interact adequately with humans can be substantially improved by taking into account variables that inform them about the emotional states of the humans around them. While such cues rely greatly on the analysis of facial expressions at close distances, whole-body motion cues also convey information about more distant humans. Compared with the pattern-recognition algorithms required to interpret facial expressions, the efficient perception of emotions from human gait can be based on a limited set of behaviorally relevant cues: the walking speed and the upper-body orientation. The same variables could also be integrated into the motion controllers of walking robots. This is particularly interesting if one considers the high dimensionality of any bipedal system. One could also imagine implementing such a bio-inspired approach in the animation of artificial walkers in arts, entertainment and multimedia. Indeed, the realistic animation of virtual characters often relies on computation over huge motion capture databases: the use of such simple bio-inspired variables can contribute to reducing the computational cost associated with the generation of artificial emotional gaits. While further studies are required to provide an exhaustive understanding of the factors determining emotion recognition from gaits (including for fear, for instance), our findings clearly show that a limited set of kinematic cues is sufficient to generate strong emotional percepts during whole-body movements.

ACKNOWLEDGMENT

This work was supported by the EU-COBOL and French Locanthrope projects.

REFERENCES

[1] Atkinson, A. P., Dittrich, W. H., Gemmell, A. J., & Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33, 717-746.
[2] Atkinson, A. P., Tunstall, M. L., & Dittrich, W. H. (2007). Evidence for distinct contributions of form and motion information to the recognition of emotions from body gestures. Cognition, 104.
[3] Bonda, E., Petrides, M., Ostry, D., & Evans, A. (1996). Specific involvement of human parietal systems and the amygdala in the perception of biological motion. J. Neurosci., 16.
[4] De Gelder, B., Snyder, J., Greve, D., Gerard, G., & Hadjikhani, N. (2004). Fear fosters flight: a mechanism for fear contagion when perceiving emotion expressed by a whole body. Proc. Natl. Acad. Sci. U.S.A., 101.
[5] Dittrich, W. H., Troscianko, T., Lea, S. E., & Morgan, D. (1996). Perception of emotion from dynamic point-light displays represented in dance. Perception, 25.
[6] Grèzes, J., Pichon, S., & De Gelder, B. (2007). Perceiving fear in dynamic body expressions. Neuroimage, 35.
[7] Hadjikhani, N. & de Gelder, B. (2003). Seeing fearful body expressions activates the fusiform cortex and amygdala. Current Biology, 13.
[8] Hicheur, H., Kadone, H., Grèzes, J., & Berthoz, A. (2013). The combined role of motion-related cues and upper body posture for the expression of emotions during human walking. In K. Mombaur & K. Berns (Eds.), Modeling, Simulation and Optimization of Bipedal Walking, COSMOS, 18.
[9] Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Percept. Psychophys., 14(2).
[10] Omlor, L. & Giese, M. A. (2006). Unsupervised learning of spatio-temporal primitives of emotional gait. Perception and Interactive Technologies 2006, Lecture Notes in Artificial Intelligence, 4021.
[11] Pichon, S., De Gelder, B., & Grèzes, J. (2007). Emotional modulation of visual and motor areas by dynamic body expressions of anger. Social Neuroscience, 1-14.
[12] Pollick, F. E., Paterson, H. M., Bruderlin, A., & Sanford, A. J. (2001). Perceiving affect from arm movement. Cognition, 82, B51-B61.
[13] Sprengelmeyer, R., Rausch, M., Eysel, U. T., & Przuntek, H. (1998). Neural structures associated with recognition of facial expressions of basic emotions. Proc. Biol. Sci., 265.
[14] Troje, N. F. (2002). Decomposing biological motion: a framework for analysis and synthesis of human gait patterns. J. Vis., 2.
[15] Wallbott, H. G. & Scherer, K. R. (1986). Cues and channels in emotion recognition. Journal of Personality and Social Psychology, 51.
[16] Jeannerod, M. (2001). Neural simulation of action: a unifying mechanism for motor cognition. NeuroImage, 14, S103-S109.
[17] Wilson, M. & Knoblich, G. (2005). The case for motor involvement in perceiving conspecifics. Psychological Bulletin, 131.


More information

The Effect of Posture and Dynamics on the Perception of Emotion

The Effect of Posture and Dynamics on the Perception of Emotion University of Pennsylvania ScholarlyCommons Center for Human Modeling and Simulation Department of Computer & Information Science 2013 The Effect of Posture and Dynamics on the Perception of Emotion Aline

More information

Changing expectations about speed alters perceived motion direction

Changing expectations about speed alters perceived motion direction Current Biology, in press Supplemental Information: Changing expectations about speed alters perceived motion direction Grigorios Sotiropoulos, Aaron R. Seitz, and Peggy Seriès Supplemental Data Detailed

More information

A computational model of cooperative spatial behaviour for virtual humans

A computational model of cooperative spatial behaviour for virtual humans A computational model of cooperative spatial behaviour for virtual humans Nhung Nguyen and Ipke Wachsmuth Abstract This chapter introduces a model which connects representations of the space surrounding

More information

Facial Expression Biometrics Using Tracker Displacement Features

Facial Expression Biometrics Using Tracker Displacement Features Facial Expression Biometrics Using Tracker Displacement Features Sergey Tulyakov 1, Thomas Slowe 2,ZhiZhang 1, and Venu Govindaraju 1 1 Center for Unified Biometrics and Sensors University at Buffalo,

More information

Selective bias in temporal bisection task by number exposition

Selective bias in temporal bisection task by number exposition Selective bias in temporal bisection task by number exposition Carmelo M. Vicario¹ ¹ Dipartimento di Psicologia, Università Roma la Sapienza, via dei Marsi 78, Roma, Italy Key words: number- time- spatial

More information

PERCEPTION and COMMUNICATION. Week 2

PERCEPTION and COMMUNICATION. Week 2 PERCEPTION and COMMUNICATION Week 2 1 Perception Perception a sequence of events from the receipt of a stimulus to the response to it. It is how people interpret what happens around them what they think

More information

University of Manitoba - MPT: Neurological Clinical Skills Checklist

University of Manitoba - MPT: Neurological Clinical Skills Checklist Name: Site: Assessment Skills Observed Performed Becoming A. Gross motor function i. Describe movement strategies (quality, devices, timeliness, independence): supine sidelying sit stand supine long sitting

More information

Investigation of Human Whole Body Motion Using a Three-Dimensional Neuromusculoskeletal Model

Investigation of Human Whole Body Motion Using a Three-Dimensional Neuromusculoskeletal Model Investigation of Human Whole Body Motion Using a Three-Dimensional Neuromusculoskeletal Model 1 Akinori Nagano, 2 Senshi Fukashiro, 1 Ryutaro Himeno a-nagano@riken.jp, fukashiro@idaten.c.u-tokyo.ac.jp,

More information

The Simon Effect as a Function of Temporal Overlap between Relevant and Irrelevant

The Simon Effect as a Function of Temporal Overlap between Relevant and Irrelevant University of North Florida UNF Digital Commons All Volumes (2001-2008) The Osprey Journal of Ideas and Inquiry 2008 The Simon Effect as a Function of Temporal Overlap between Relevant and Irrelevant Leslie

More information

Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1. Influence of facial expression and skin color on approachability judgment

Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1. Influence of facial expression and skin color on approachability judgment Running head: FACIAL EXPRESSION AND SKIN COLOR ON APPROACHABILITY 1 Influence of facial expression and skin color on approachability judgment Federico Leguizamo Barroso California State University Northridge

More information

Psychology Perception

Psychology Perception Psychology 343 - Perception James R. Sawusch, 360 Park Hall jsawusch@buffalo.edu 645-0238 TA is Timothy Pruitt, 312 Park tapruitt@buffalo.edu Text is Sensation & Perception by Goldstein (8th edition) PSY

More information

The Effects of Carpal Tunnel Syndrome on the Kinematics of Reach-to-Pinch Function

The Effects of Carpal Tunnel Syndrome on the Kinematics of Reach-to-Pinch Function The Effects of Carpal Tunnel Syndrome on the Kinematics of Reach-to-Pinch Function Raviraj Nataraj, Peter J. Evans, MD, PhD, William H. Seitz, MD, Zong-Ming Li. Cleveland Clinic, Cleveland, OH, USA. Disclosures:

More information

An Overview of BMIs. Luca Rossini. Workshop on Brain Machine Interfaces for Space Applications

An Overview of BMIs. Luca Rossini. Workshop on Brain Machine Interfaces for Space Applications An Overview of BMIs Luca Rossini Workshop on Brain Machine Interfaces for Space Applications European Space Research and Technology Centre, European Space Agency Noordvijk, 30 th November 2009 Definition

More information

TAKE-OFF CHARACTERISTICS OF DOUBLE BACK SOMERSAULTS ON THE FLOOR

TAKE-OFF CHARACTERISTICS OF DOUBLE BACK SOMERSAULTS ON THE FLOOR TAKE-OFF CHARACTERISTICS OF DOUBLE BACK SOMERSAULTS ON THE FLOOR H. Geiblinger, W. E. Morrison & P. A. McLaughlin Biomechanics Unit, Dep't of Physical Education and Recreation and Centre for Rehabilitation,

More information

Effect of Positive and Negative Instances on Rule Discovery: Investigation Using Eye Tracking

Effect of Positive and Negative Instances on Rule Discovery: Investigation Using Eye Tracking Effect of Positive and Negative Instances on Rule Discovery: Investigation Using Eye Tracking Miki Matsumuro (muro@cog.human.nagoya-u.ac.jp) Kazuhisa Miwa (miwa@is.nagoya-u.ac.jp) Graduate School of Information

More information

Application of ecological interface design to driver support systems

Application of ecological interface design to driver support systems Application of ecological interface design to driver support systems J.D. Lee, J.D. Hoffman, H.A. Stoner, B.D. Seppelt, and M.D. Brown Department of Mechanical and Industrial Engineering, University of

More information

Inventions on expressing emotions In Graphical User Interface

Inventions on expressing emotions In Graphical User Interface From the SelectedWorks of Umakant Mishra September, 2005 Inventions on expressing emotions In Graphical User Interface Umakant Mishra Available at: https://works.bepress.com/umakant_mishra/26/ Inventions

More information

Richard C. Gershon, PhD.

Richard C. Gershon, PhD. Richard C. Gershon, PhD. gershon@northwestern.edu www.healthmeasures.net Collectively PROMIS and the NIH Toolbox consist of over 250 measures. This particular presentation is limited to describing the

More information

A Computational Model For Action Prediction Development

A Computational Model For Action Prediction Development A Computational Model For Action Prediction Development Serkan Bugur 1, Yukie Nagai 3, Erhan Oztop 2, and Emre Ugur 1 1 Bogazici University, Istanbul, Turkey. 2 Ozyegin University, Istanbul, Turkey. 3

More information

Emotions of Living Creatures

Emotions of Living Creatures Robot Emotions Emotions of Living Creatures motivation system for complex organisms determine the behavioral reaction to environmental (often social) and internal events of major significance for the needs

More information

Perceptual and Motor Skills, 2010, 111, 3, Perceptual and Motor Skills 2010 KAZUO MORI HIDEKO MORI

Perceptual and Motor Skills, 2010, 111, 3, Perceptual and Motor Skills 2010 KAZUO MORI HIDEKO MORI Perceptual and Motor Skills, 2010, 111, 3, 785-789. Perceptual and Motor Skills 2010 EXAMINATION OF THE PASSIVE FACIAL FEEDBACK HYPOTHESIS USING AN IMPLICIT MEASURE: WITH A FURROWED BROW, NEUTRAL OBJECTS

More information

Temporal properties in masking biological motion

Temporal properties in masking biological motion Perception & Psychophysics 2005, 67 (3), 435-443 Temporal properties in masking biological motion ERIC HIRIS, DEVON HUMPHREY, and ALEXANDRA STOUT St. Mary s College of Maryland, St. Mary s City, Maryland

More information

0-3 DEVELOPMENT. By Drina Madden. Pediatric Neuropsychology 1

0-3 DEVELOPMENT. By Drina Madden. Pediatric Neuropsychology   1 0-3 DEVELOPMENT By Drina Madden DrinaMadden@hotmail.com www.ndcbrain.com 1 PHYSICAL Body Growth Changes in height and weight are rapid in the first two years of life. Development moves from head to tail

More information

The role of spatial and temporal information in biological motion perception

The role of spatial and temporal information in biological motion perception 2007 volume 3 no 4 419-428 Advances in Cognitive Psychology The role of spatial and temporal information in biological motion perception Joachim Lange 1,2 and Markus Lappe 1 1 Department of Psychology

More information

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics

Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Human Journals Research Article October 2017 Vol.:7, Issue:4 All rights are reserved by Newman Lau Characterization of 3D Gestural Data on Sign Language by Extraction of Joint Kinematics Keywords: hand

More information

Introduction to Psychology. Lecture no: 27 EMOTIONS

Introduction to Psychology. Lecture no: 27 EMOTIONS Lecture no: 27 EMOTIONS o Derived from the Latin word Emovere emotion means to excite, stir up or agitate. o A response that includes feelings such as happiness, fear, sadness, grief, sorrow etc: it is

More information

arxiv: v1 [cs.hc] 20 Feb 2014

arxiv: v1 [cs.hc] 20 Feb 2014 arxiv:1402.5047v1 [cs.hc] 20 Feb 2014 Real-time Automatic Emotion Recognition from Body Gestures Stefano Piana stefano.piana@dist.unige.it Francesca Odone francesca.odone@unige.it ABSTRACT Although psychological

More information

Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification. and. Evidence for a Face Superiority Effect. Nila K Leigh

Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification. and. Evidence for a Face Superiority Effect. Nila K Leigh 1 Who Needs Cheeks? Eyes and Mouths are Enough for Emotion Identification and Evidence for a Face Superiority Effect Nila K Leigh 131 Ave B (Apt. 1B) New York, NY 10009 Stuyvesant High School 345 Chambers

More information

Building Better Balance

Building Better Balance Building Better Balance The Effects of MS on Balance Individuals with MS experience a decline in their balance due to various MS related impairments. Some of these impairments can be improved with exercise

More information