28. Embodied Mental Imagery in Cognitive Robots
Alessandro Di Nuovo, Davide Marocco, Santo Di Nuovo, Angelo Cangelosi


This chapter is focused on discussing the concept of mental imagery as a fundamental cognitive capability to enhance the performance of cognitive robots. Indeed, the emphasis will be on the embodied imagery mechanisms applied to build artificial cognitive models of motor imagery and mental simulation to control complex behaviors of humanoid platforms, which represent the artificial body. With the aim of providing a panorama of the research activity on the topic, we first give an introduction to the neuroscientific and psychological background of mental imagery, in order to help the reader contextualize the multidisciplinary environment in which we operate. Then, we review the work done in the field of artificial cognitive systems and robotics to mimic the process behind the human ability of creating mental images of events and experiences, and to use this process as a cognitive mechanism to improve the behavior of complex robots. Finally, we report the details of three recent empirical studies in which mental imagery approaches were modelled through artificial neural networks (ANNs) to endow a cognitive robot with some human-like capabilities. These empirical studies exemplify how proprioceptive information can be used by mental imagery models to enhance the performance of the robot, giving evidence for the embodied cognition theories in the context of artificial cognitive systems.

28.1 Mental Imagery Research Background
28.2 Models and Approaches Based on Mental Imagery in Cognitive Systems and Robotics
28.3 Experiments
    The Humanoid Robotic Platform: The icub
    First Experimental Study: Motor Imagery for Performance Improvement
    Second Experimental Study: Mental Training Evoked by Language
    Third Experimental Study: Spatial Imagery
Conclusion
References

The work presented in this chapter takes inspiration from the human capability to build representations of the physical world in the mind. In particular, we studied motor imagery, which is considered a multimodal simulation that activates the same, or very similar, sensory and motor modalities that are activated when human beings interact with the environment in real time. This is strictly related to the embodied cognition hypothesis, which affirms that all aspects of human cognition are shaped by aspects of the body. Similarly, when artificial intelligence was taking its first steps, Alan Turing argued that in order to think and speak a machine may need a human-like body, and that the development of robot cognitive skills might be just as simple as teaching a child [28.1]. This is the motivation behind the strongly humanoid design of some of the recent and most advanced robotic platforms, for example, icub [28.2], NAO [28.3], and Advanced Step in Innovative MObility (ASIMO) [28.4]. These platforms are equipped with sophisticated motors and sensors, which replicate animal or human sensorimotor input-output streams. The arrangement of sensors and actuators determines the highly redundant morphological structure of humanoid robots, which are traditionally difficult to control and, thus, require complex models implementing more sophisticated and efficient mechanisms that resemble human cognition [28.5]. In this multidisciplinary context, improving the skill of a robot in terms of motor control and navigation capabilities, especially in the case of a complex robot with many degrees of freedom, is a timely and important issue in current robotics research.

Among the many bio-inspired mechanisms and models already tested in the field of robot control and navigation, the use of mental imagery principles is of interest for modeling mental imagery as a complex, goal-directed and flexible motor planning strategy, and for going further in the development of artificial cognitive systems capable of better interacting with the environment and of refining their cognitive motor skills in an open-ended process.

28.1 Mental Imagery Research Background

Mental imagery, as the process behind the human ability of creating mental images of events and experiences, has long been the subject of research and debate in philosophy, psychology, cognitive science and, more recently, neuroscience [28.6]. Some of the main effects of mental practice on physical performance have been well established in experiments with humans in fields such as sports science, work psychology, and motor rehabilitation. Neuropsychological research has long highlighted the complexity of brain activation in the activity of imagination. Studies have demonstrated localizations that are partly similar and partly different between imagery, perception, and visual memory [28.7, 8], while in [28.9] a clinical case demonstrated that visuospatial perception and imagery can be functionally separated in brain activation. In [28.], the authors proposed a revision of the constructs relevant to cognitive styles, placing them into a complex framework of heuristics regarding multiple levels of information processing, from the attentional and perceptual to the metacognitive ones. These heuristics are grouped according to the type of regulatory function they perform, from the automatic coding of data to the conscious use of cognitive resources. In this view, also at the level of cerebral area activation, the distinction between the elaboration of object properties (like shape or color) and of spatial relations is more representative of a different style in the use of mental images than the old verbal-visual dichotomy [28.].

But only quite recently has a growing amount of evidence from empirical studies begun to demonstrate the relationship between bodily experiences and mental processes that actively involve body representations. This is also due to the fact that, in the past, philosophical and scientific investigations of the topic primarily focused upon visual mental imagery. Contemporary imagery research has now broadly extended its scope to include every experience that resembles the experience of perceiving from any sensory modality. From this perspective, understanding the role of the body in cognitive processes is a key goal, and psychological and neuroscience studies are extremely important in this regard. Wilson [28.2] identified six claims in the current view of embodied cognition:

1. Cognition is situated
2. Cognition is time-pressured
3. We off-load cognitive work onto the environment
4. The environment is part of the cognitive system
5. Cognition is for action
6. Offline cognition is bodily based.

Among these six claims, the last is particularly important. According to this claim, sensorimotor functions that evolved for action and perception are used during offline cognition, which occurs when the perceiver represents social objects, situations, or events in times and places other than the ordinary ones.
This principle is reinforced by the concept of embodied cognition [28.3, 4], which affirms that the nature of intelligence is largely determined by the form of the body. Indeed, the body, and every physical experience made through the body, shapes the form of intelligence that can be observed in any autonomous system. This means that even when the mind does not directly interact with the environment, it is able to apply mechanisms of sensory processing and motor control by using innate abilities such as memory (implicit, short, and long term), problem solving, and mental imagery. These capabilities have been well studied in psychology and neuroscience, but the debate is still open on the issue of mental imagery, which is defined as a sensation activated without sensorial stimulation. Much evidence from the empirical sciences has demonstrated the relationship between bodily experiences and mental processes that involve body representations. Neuropsychological research has demonstrated that the same brain areas are activated during seeing and during recalling by images [28.5], and that the areas controlling perception are also needed for maintaining mental images active in working memory. Therefore, mental imagery may be considered a kind of biological simulation. In [28.6], the author observed that the primary motor cortex M1 is activated during the production of motor images as well as during the production of active movement. Similarly, experimental studies show that the neural mechanisms underlying real-time visual perception and mental visualization are the same when a task is mentally recalled [28.7]. Nevertheless, the neural mechanisms involved in the active elaboration of mental images might be different from those involved in passive elaborations [28.8].

These studies demonstrate the tight relationship between mental imagery and motor activities (i.e., how the image in mind can influence movements and motor skills). Recent research, in both experimental and practical contexts, suggests that imagined and executed movement planning relies on internal models for action [28.9]. These representations are frequently associated with the notion of internal (forward) models and are hypothesized to be an integral part of action planning [28.2, 2]. Furthermore, in [28.22], the authors suggest that motor imagery may be a necessary prerequisite for motor planning. In [28.23], Jeannerod studied the role of motor imagery in action planning and proposed the so-called equivalence hypothesis, suggesting that motor simulation and motor control processes are functionally equivalent [28.24, 25]. These studies, together with many others (e.g., [28.26]), demonstrate how the images that we have in mind might influence movement execution and the acquisition and refinement of motor skills. For this reason, understanding the relationship between mental imagery and motor activities has become a relevant topic in domains in which improving motor skills is crucial for obtaining better performance, such as sport and rehabilitation. Therefore, it is also possible to exploit mental imagery for improving a human's motor performance, and this is achieved thanks to special mental training techniques. Mental training is widely used among professional athletes, and many researchers have begun to study its beneficial effects for the rehabilitation of patients, in particular after cerebral lesions. In [28.24], for example, the authors analyzed motor imagery during mental training procedures in patients and athletes. Their findings support the notion that mental training procedures can be applied as a therapeutic tool in rehabilitation and in applications for empowering standard training methodologies. Other studies have shown that motor skills can be improved through mental imagery techniques. Jeannerod and Decety [28.8] discuss how training based on mental simulation can influence motor performance in terms of muscular strength and reduction of variability. In [28.27], the authors show that imagined fifth-finger abductions led to an increased level of muscular strength. The authors note that the observed increment in muscle strength is not due to a gain in muscle mass. Rather, it is based on higher-level changes in cortical maps, presumably resulting in a more efficient recruitment of motor units. These findings are in line with other studies, specifically focused on motor imagery, which show the enhancement of mobility range [28.28] or increased accuracy [28.29] after mental training. Interestingly, it should be noted that such effects operate both ways: mental imagery can influence motor performance, and the extent of physical practice can change the areas activated by mental imagery [28.3]. As a result of these studies, new opportunities for the use of mental training have opened up in collateral fields, such as medical and orthopaedic traumatologic rehabilitation. For instance, mental practice has been used to rehabilitate motor deficits in a variety of neurological disorders [28.3].
Mental training can be successfully applied in helping a person to regain lost movement patterns after joint operations or joint replacements and in neurological rehabilitation. Mental practice has also been used in combination with actual practice to rehabilitate motor deficits in a patient with subacute stroke [28.32], and several studies have also shown improvement in strength, function, and use of both upper and lower extremities in chronic stroke patients [28.33, 34]. In sport, the beneficial effects of mental training for the performance enhancement of athletes are well established, and several works are focused on this topic with tests, analyses, and new training principles [ ]. In [28.39], for example, a cognitive-behavioral training program was implemented to improve the free-throw performance of college basketball players, finding improvements of over 5%. Furthermore, the trial in [28.4], where mental imagery was used to enhance the training phase of hockey athletes in scoring a goal, showed that imagery practice allowed athletes to achieve better performance. Despite the fact that there is ample evidence that mental imagery, and in particular motor imagery, contributes to enhancing motor performance, the topic still attracts new research, such as [28.4], which investigated the effect of mental practice on improving game plans or strategies of play in open skills in a trial with female players. Results of the trial support the assumption that motor imagery may lead to improved motor performance in open skills when compared to a no-practice condition. Another recent paper [28.42] demonstrated that sports experts show more focused activation patterns in prefrontal areas while performing imagery tasks than novices.

28.2 Models and Approaches Based on Mental Imagery in Cognitive Systems and Robotics

The introduction of humanoid robots has had a great impact on the fast-growing field of cognitive robotics, which represents the intersection of artificial cognitive modeling and robotic engineering. Cognitive robotics aims to provide new understanding of how human beings develop their higher cognitive functions. Thanks to the many potential applications, researchers in cognitive robotics are still facing several challenges in developing complex behaviors [28.43]. As exemplified in the following literature review, a key role is played by mental imagery and its mechanisms in enhancing motor control in autonomous robots, and in developing autonomous systems that are capable of exploiting the characteristics of mental imagery training to better interact with the environment and refine their motor skills in an open-ended process [28.44]. Indeed, among the many hypotheses and models already tested in the field of cognitive systems and robotics, the use of mental imagery as a cognitive tool capable of enhancing robot behaviors is both innovative and well grounded in experimental data at different levels.

A model-based learning approach for mobile robot navigation was presented in [28.45], where it is discussed how a behavior-based robot can construct a symbolic process that accounts for its deliberative thinking processes using internal models of the environment. The approach is based on a forward modeling scheme using recurrent neural learning, and results show that the robot is capable of learning the grammatical structure hidden in the geometry of the workspace from local sensory inputs through its navigational experiences. An example of the essential role mental imagery can play in human-robot interaction was recognized by Roy et al. [28.46], who presented a robot, called Ripley, which is able to translate spoken language into actions for object manipulation guided by visual and haptic perception. The robot maintained a dynamic mental model, a three-dimensional model of its immediate physical environment, which it used to mediate perception, manipulation planning, and language. The contents of the robot's mental model could be updated based on linguistic, visual, or haptic input. The mental model endowed Ripley with object permanence, remembering the position of objects when they were out of its sensory field. Experiments on internal simulation of perception using ANN robot controllers are presented by Ziemke et al. [28.47]. The paper focuses on a series of experiments in which feedforward neural networks (FFNNs) were evolved to control collision-free corridor-following behavior in a simulated Khepera robot and to predict the sensory input of the next time step as accurately as possible. The trained robot is actually able to move blindly in a simple environment for hundreds of time steps, successfully handling several multistep turns. In [28.48], the authors present a neurorobotics experiment in which the developmental learning processes of the goal-directed actions of a robot were examined. The robot controller was implemented with a multiple-timescales recurrent neural network (RNN) model, which is characterized by the coexistence of slow and fast dynamics in generating anticipatory behaviors. Through the iterative tutoring of the robot for multiple goal-directed actions, interesting developmental processes emerged.
Behavior primitives in the earlier fast context network part were self-organized, while they appeared to be sequenced in the later, slow context part. It was also observed that motor images were generated in the early stage of development. The study presented in [28.49] shows how simulated robots evolved for the ability to display a context-dependent periodic behavior can spontaneously develop an internal model and rely on it to fulfil their task when sensory stimulation is temporarily unavailable. Results suggest that internal models might have arisen for behavioral reasons and successively been exapted for other cognitive functions. Moreover, the obtained results suggest that self-generated internal states need not match in detail the corresponding sensory states and might rather encode more abstract and motor-oriented information. Fascinatingly, in [28.5], the authors explore the idea of dreams as a form of mental imagery and the possible role they might play in mental simulations and in the emergence and refinement of the ability to generate predictions of the possible outcomes of actions. In brief, what the authors propose is that robots might first need to possess some of the characteristics related to the ability to dream (particularly those found in infants and children) before they can acquire a robust ability to use mental imagery. This ability to dream, according to them, would assist robots in the generation of predictions of future sensory states and of situations in the world. Internal simulations can also help artificial agents to solve the stereo-matching problem, operating in the sensorimotor domain, with retinal images that mimic the cone distribution of the human retina [28.5]. This is accomplished by applying internal sensorimotor simulation and (subconscious) mental imagery to the process of stereo matching.

Such predictive matching is competitive with classical approaches from computer vision and has, moreover, the considerable advantage that it is fully adaptive and can cope with highly distorted images. A computational model of mental simulation that includes biological aspects of the brain circuits that appear to be involved in goal-directed navigation processes is presented in [28.52]. The model supports the view of the brain as a powerful anticipatory system, capable of generating and exploiting mental simulation for predicting and assessing future sensory-motor events. The authors show how mental simulations can be used to evaluate future events in a navigation context, in order to support mechanisms of decision-making. The proposed mechanism is based on the assumption that choices about actions are made by simulating movements and their sensory effects using the same brain areas that are active during overt action execution. An interpretation of mental imagery in the context of homeostatic adaptation is presented in [28.53], where the internal dynamics of a highly complex self-organized system is loosely coupled with a sensory-motor dynamic guided by the environment. This original view is supported by the analysis of a neural network model that controls a simulated agent facing sensor shifts. The agent is able to perceive a light in the environment through light sensors placed around its body, and its task is to approach the light. When the sensors are swapped, the agent perceives the light in the opposite direction from its real position, and the control system has to autonomously detect the shifted sensors and act accordingly. The authors speculate that mental imagery could be a viable way for creating self-organized internal dynamics that are loosely coupled with sensory-motor dynamics. The loose coupling allows the creation of endogenous input stimulations, similar to real ones, that could allow the internal system to sustain its internal dynamics and, eventually, reshape such dynamics while modifying the endogenous input stimulations. Lallee and Dominey [28.54] suggest that mental imagery can be seen as a way for an autonomous system to generate internal representations and exploit the convergence of different multimodal contingencies. That is, given a set of sensory-motor contingencies specific to many different modalities, learned by an autonomous agent in interaction with the environment, mental imagery constitutes the bridge toward even more complex multimodal convergence. The model proposed by the authors draws on the hierarchical organization of the cortex and is based on a set of interconnected artificial neural networks that control the humanoid robot icub in tasks that involve coordination between vision, hand-arm control, and language. The chapter also highlights interesting relations between the model and neurophysiological and neuropsychological findings that the model can account for. An extension of the neurocomputational model TRoPICAL (two route, prefrontal instruction, competition of affordances, language simulation) is proposed in [28.55] to implement an embodied cognition approach to mental rotation processes, a classic task in mental imagery research.
The extended model develops new features that allow it to implement mental simulation and sensory prediction, as well as enhancing the model's capacity to encode somatosensory information. The model, applied to a simulated humanoid robot (icub) in a series of mental rotation tests, shows the ability to solve mental rotation tasks in line with results coming from psychology research. The authors also claim the emergence of links between overt movements and mental rotations, suggesting that affordance and embodied processes play an important role in mental rotation capacities. Starting from the fact that some evidence in experimental psychology has suggested that imagery ability is crucial for the correct understanding of social intention, an interesting study investigating intention-from-movement understanding is presented in [28.56]. The authors' aim is to show the importance of including the more cognitive aspects of social context for the further development of optimal theories of motor control, with positive effects on robot companions that afford true interaction with human users. In the paper, the authors present a simple but thoroughly executed experiment, first to confirm that the nature of the motor intention leads to early modulations of movement kinematics, and second to test whether humans use imagery to read an agent's intention when observing the very first element of a complex action sequence. A neural network model that produces anticipatory behavior by means of a multimodal off-line Hebbian association is proposed in [28.57]. The model emulates a process of mental imagery in which visual and tactile stimuli are associated during a long-term predictive simulation chain motivated by covert actions. This model was studied by means of two experiments with a physical Pioneer 3-DX robot that developed a mechanism to produce visually conditioned obstacle avoidance behavior.

28.3 Experiments

In this section, we present three experimental studies that exemplify the capabilities and the performance improvements achievable by an imagery-enabled robot. Results of experimental tests with the simulator of the icub humanoid robot platform are presented as evidence of the opportunities offered by the use of artificial mental imagery in cognitive artificial systems. The first study [28.58] details a model of a controller, based on a dual network architecture, which allows the humanoid robot icub to autonomously improve its sensorimotor skills. This is achieved by endowing the controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve its performance through simulated mental training. The second study [28.59] builds on the previous one, showing that the robot can imagine, or mentally recall, and accurately execute movements learned in previous training phases, strictly on the basis of verbal commands. Further tests show that data obtained with imagination can be used to simulate mental training processes, such as those that have been employed with human subjects in sports training, in order to enhance precision in the performance of new tasks through the association of different verbal commands. The third study [28.6] explored how spatial mental imagery practice in a training phase can increase accuracy in sport-related performance. The focus is on the capability to estimate, after a period of training with proprioceptive and visual stimuli, the robot's position in a soccer field once it has acquired the goal.

The Humanoid Robotic Platform: The icub

The cognitive robotic platform used for the experiments presented here is the simulation of the icub humanoid robot controlled by artificial neural networks.

Fig. 28.1a,b The icub humanoid robot platform: (a) the realistic simulator; (b) the real platform

The icub (Fig. 28.1) is an open-source humanoid robot platform designed to facilitate cognitive developmental robotics research, as detailed in [28.2]. In its current state, the icub platform is a child-like humanoid robot 1.05 m tall, with 53 degrees of freedom (DoF) distributed in the head, arms, hands, and legs. The implementation used for the experiments presented here is a simulation of the icub humanoid robot (Fig. 28.1). The simulator, which was developed with the aim of accurately reproducing the physics and the dynamics of the physical icub [28.6], allows the creation of realistic physical scenarios in which the robot can interact with a virtual environment. Physical constraints and interactions that occur between the environment and the robot are simulated using a software library that provides an accurate simulation of rigid body dynamics and collisions.

First Experimental Study: Motor Imagery for Performance Improvement

The first experimental study explored the application of mental simulation to robot controllers, with the aim of mimicking the mental training techniques used to improve the motor performance of the robot. To this end, a model of a controller based on neural networks was designed to allow the icub to autonomously improve its sensorimotor skills. The experimental task is to throw a small cube of side size 2 cm and weight 4 g as far as possible according to an externally given velocity for the movement.
The task phases are shown in Fig. 28.2. The task is the realization of a ballistic action, involving the simultaneous movement of the right arm and of the torso, with the aim of throwing a small object as far as possible according to an externally given velocity for the movement. Ballistic movements can be defined as rapid movements initiated by muscular contraction and continued by momentum [28.62]. These movements are typical of sport actions, such as throwing and jumping (e.g., a soccer kick, a tennis serve, or a boxing punch). In this experiment, we focus on two main features that characterize a ballistic movement: (1) it is executed by the brain with a predefined order, which is fully programmed before the actual movement realization, and (2) it is executed as a whole and will not be subject to interference or modification until its completion. This definition of ballistic movement implies that proprioceptive feedback is not needed to control the movement and that its development is based only on the starting conditions [28.63].

Fig. 28.2a-c Three phases of the movement: (a) Preparation: the object is grabbed and the shoulder and wrist joints are positioned at 90°; (b) Acceleration: the shoulder joint accelerates until a given angular velocity is reached, while the wrist rotates down; (c) Release: the object is released and thrown away

It should be noted here that, since ballistic movements are by definition not affected by external interference, the training can be performed without considering the surrounding environment, as well as vision and auditory information. To build the input-output training and testing sets, all values were normalized in the range [0, 1]. To control the robot, we designed a dual neural network architecture, which can operate to autonomously improve the robot's motor skills with techniques inspired by those employed with human subjects in sports training. This is achieved through two interconnected neural networks that control the robot: an FFNN that directly controls the robot's joints, and an RNN that is able to generate additional imaginary training data by exploiting the sensorimotor skills acquired by the robot. The new data, in turn, can be used to generate additional imaginary examples that can be used by the controller itself to improve its performance through an additional learning process. We show that data obtained with artificial imagination can be used to simulate mental training so as to learn new tasks and enhance performance.

The artificial neural network architecture that models the mental training process is detailed in Fig. 28.3a. It is a three-layer dual recurrent neural network (DRNN) with two recurrent sets of connections, one from the hidden layer and one from the output layer, which feed directly into the hidden layer through a set of context units added to the input layer. At each iteration (epoch), the context units hold a copy of the previous values of the hidden and output units. This creates an internal memory of the network, which allows it to exhibit dynamic temporal behavior [28.64]. It should be noted that, in preliminary experiments, this architecture showed better performance and improved stability with respect to classical architectures, for example, Jordan [28.65] or Elman [28.66]. The DRNN comprises 5 output neurons, 20 neurons in the hidden layer, and 31 neurons in the input layer (6 of them encode the proprioceptive inputs from the robot joints and 25 are the context units). Neuron activations are computed according to (28.1) and (28.2). The six proprioceptive inputs encode, respectively, the shoulder pitch angular velocity (constant during the movement), the positions of the shoulder pitch and hand wrist pitch (at time t), the elapsed time, the expected duration time (at time t), and the grab/release command (1 if the object is grabbed, 0 otherwise). The five outputs encode the predictions of the shoulder and wrist positions at the next time step, the estimation of the elapsed time, the estimation of the movement duration, and the grab/release state.

Fig. 28.3a,b Design of the cognitive architecture of the first experimental study: (a) The dual network architecture (FFNN + RNN). (b) Detail of the RNN: brown connections (recurrences and predicted output for the FFNN) are active only in imagery mode, while light grey links (external input-output from the real world) are deactivated
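To make the closed-loop use of the recurrent controller more concrete, the following is a minimal Python/NumPy sketch of a recurrent step with context units and an imagery mode in which the network's own predictions are fed back as inputs, in the spirit of Fig. 28.3b. The class name, unit counts, and weight-initialization range are illustrative assumptions based on the description above, not the authors' implementation.

import numpy as np

def logistic(u):
    # eq. (28.2): y = 1 / (1 + exp(-u))
    return 1.0 / (1.0 + np.exp(-u))

class ImageryRNNSketch:
    """Toy recurrent layer with context units (copies of the previous hidden
    and output activations), runnable in closed-loop 'imagery' mode."""

    def __init__(self, n_in=6, n_hidden=20, n_out=5, seed=0):
        rng = np.random.default_rng(seed)
        n_ctx = n_hidden + n_out                      # context = hidden + output copies
        self.W_h = rng.uniform(-0.1, 0.1, (n_hidden, n_in + n_ctx))
        self.W_o = rng.uniform(-0.1, 0.1, (n_out, n_hidden))
        self.ctx = np.zeros(n_ctx)

    def step(self, x_in):
        x = np.concatenate([x_in, self.ctx])
        h = logistic(self.W_h @ x)
        y = logistic(self.W_o @ h)
        self.ctx = np.concatenate([h, y])             # stored for the next time step
        return y

    def imagine(self, velocity, state0, n_steps):
        """Imagery mode: the five predicted outputs (joint positions, timing,
        grab state) are fed back as the next proprioceptive input, while the
        commanded velocity stays fixed; no real sensor data is read."""
        state, seq = np.asarray(state0, dtype=float), []
        for _ in range(n_steps):
            y = self.step(np.concatenate([[velocity], state]))
            state = y
            seq.append(y)
        return np.array(seq)

The same object can thus be driven either by real proprioceptive readings (normal mode, via step) or by its own predictions (imagery mode, via imagine), which is the key idea exploited for mental training in this study.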

The DRNN retrieves information from the joint encoders to predict the movement duration during the acceleration phase. The activation time step of the DRNN is 30 ms. The object is released when, at time t, the predicted time to release t_p is lower than the next step time t2. A functional representation of the neural system that controls the robot is given in Fig. 28.3a: a three-layer FFNN, which implements the actual motor controller, and the RNN. The RNN models the motor imagery and is represented in detail in Fig. 28.3b. The choice of the FFNN architecture as the controller was made according to the nature of the problem, which does not need proprioceptive feedback during movement execution. Therefore, the FFNN sets the duration of the entire movement according to the given speed in input, and the robot's arm is activated according to the duration indicated by the FFNN. On the other hand, the RNN was chosen because of its capability to predict and extract new information from previous experiences, so as to produce new imaginary training data, which are used to integrate the training set of the FFNN. The FFNN comprises one neuron in the input layer, which is fed with the desired angular velocity of the shoulder by the experimenter, four neurons in the hidden layer, and one neuron in the output layer. The output unit encodes the duration of the movement given the velocity in input. To train the FFNN, we used a classic backpropagation algorithm as the learning process. The learning phase lasted 6 epochs with a learning rate (η) of 0.2, without momentum (i.e., α = 0). The network parameters were initialized with randomly chosen values in the range [-0.1, 0.1]. The FFNN was trained by providing a desired angular velocity of the shoulder joint as input and the movement duration as output. After the movement duration is set by the FFNN, the arm is activated according to the desired velocity and for the duration indicated by the FFNN output.

Activations of the hidden and output units y_i are calculated at discrete time steps by passing the net input u_i to the logistic function, as described in (28.1) and (28.2):

u_i = \sum_j y_j w_{ij} - k_i ,    (28.1)

y_i = \frac{1}{1 + e^{-u_i}} .    (28.2)

As the learning process, we used the classic backpropagation algorithm, the goal of which is to find optimal values of the synaptic weights that minimize the error E, defined as the error between the teaching sequences and the output sequences produced by the network. The error function E is calculated as follows:

E = \frac{1}{2} \sum_{i=1}^{p} \| y_i - t_i \|^2 ,    (28.3)

where p is the number of outputs, t_i is the desired activation value of the output unit i, and y_i is the actual activation of the same unit produced by the neural network, calculated using (28.1) and (28.2). During the training phase, the synaptic weights at learning step n + 1 are updated using the error calculated at the previous learning step n, which in turn depends on the error E. Activations of the hidden and output units y_i are calculated by passing the net input u_i to the logistic function, as described in (28.1) and (28.2). The backpropagation algorithm updates the link weights and neuron biases, with a learning rate (η) of 0.2 and a momentum factor (α) of 0.6, according to the following equation:

\Delta w_{ij}(n+1) = \eta \, \delta_i \, y_j + \alpha \, \Delta w_{ij}(n) ,    (28.4)

where y_j is the activation of unit j, η is the learning rate, and α is the momentum.
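As a concrete reading of (28.1)-(28.4), the following Python/NumPy sketch performs one backpropagation update with momentum for a small logistic network of the shape described above (1 input, 4 hidden, 1 output). It is an illustrative sketch only: the bias is added to the weighted sum rather than subtracted as the threshold k_i of (28.1), and the function and variable names are ours, not the authors'.

import numpy as np

def logistic(u):
    # eq. (28.2): y = 1 / (1 + exp(-u))
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, W1, b1, W2, b2):
    # weighted sums passed through the logistic, in the spirit of (28.1)-(28.2)
    h = logistic(W1 @ x + b1)
    y = logistic(W2 @ h + b2)
    return h, y

def backprop_step(x, t, params, deltas_prev, lr=0.2, momentum=0.6):
    """One training update in the spirit of (28.3)-(28.4):
    E = 1/2 * sum_i (y_i - t_i)^2 and
    dW(n+1) = -lr * dE/dW + momentum * dW(n)."""
    W1, b1, W2, b2 = params
    h, y = forward(x, W1, b1, W2, b2)
    delta_out = (y - t) * y * (1.0 - y)             # dE/du for the logistic output units
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)  # back-propagated to the hidden layer
    grads = (np.outer(delta_hid, x), delta_hid, np.outer(delta_out, h), delta_out)
    new_params, new_deltas = [], []
    for p, g, d_prev in zip(params, grads, deltas_prev):
        d = -lr * g + momentum * d_prev             # eq. (28.4) with momentum term
        new_params.append(p + d)
        new_deltas.append(d)
    E = 0.5 * float(np.sum((y - t) ** 2))           # eq. (28.3)
    return new_params, new_deltas, E

# tiny usage example with the FFNN shape described above (1 input, 4 hidden, 1 output)
rng = np.random.default_rng(0)
params = [rng.uniform(-0.1, 0.1, (4, 1)), np.zeros(4),
          rng.uniform(-0.1, 0.1, (1, 4)), np.zeros(1)]
deltas = [np.zeros_like(p) for p in params]
params, deltas, E = backprop_step(np.array([0.5]), np.array([0.3]), params, deltas)
print(E)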
To calculate the error at the first step of the backpropagation algorithm, the initial values of the back links are initialized to one. The network parameters are initialized with randomly chosen values in the range [-0.1, 0.1]. The experimental study is divided into two phases: in the first phase, the FFNN is trained to predict the duration of the movement for a given angular velocity in input. Meanwhile, using the same movements, the RNN was trained by a simple heuristic to predict its own subsequent sensorimotor state. To this end, joint angle information over time was sampled in order to build 2 input-output sequences corresponding to different directions of the movement. In addition, in order to model the autonomous throw of an object, the primitive action to grab/release was also considered in the motor information fed to the network. In the second phase, the RNN operates in offline mode and, thus, its prediction is made according only to the internal model built during the training phase. The normalized joint positions of the shoulder pitch, torso yaw, and hand wrist pitch are the proprioceptive information for the input and output neurons. Another neuron implements the grab/release command, with values 1 and 0, respectively. In this study, we intended to test the impact of mental training on action performance in a different speed range that was not experienced before. Because of this, we split both the learning and testing datasets into two subsets according to the duration of the movement:

- Fast range subset: comprises examples that last less than 0.3 s.
- Slow range subset: comprises all the others (i.e., those that last more than 0.3 s).

A reference benchmark controller (RBC) was used to build the training and testing datasets for the learning phase. The training dataset comprises data collected during the execution of the RBC with the angular velocity of the shoulder pitch ranging from 5 to 85°/s, using a step of 25°/s. Thus, the learning dataset comprises 22 input-output series. Similarly, the testing dataset was built in the range from 6 to 84°/s, using a step of 4°/s, and it comprises 8 input-output series. The learning and testing datasets for the FFNN comprise one pair of data: the angular velocity of the shoulder as input and the execution time, that is, the duration, as desired output. The learning and testing datasets for the RNN comprise sequences of 25 elements collected using a time step of 0.03 s. All data in both the learning and testing datasets are normalized in the range [0, 1]. Results are shown for the testing set only. To test the mental training, we compared results on three different case studies:

1. Full range: For benchmarking purposes, this is the performance obtained by the FFNN when it is trained using the full range of examples (slow + fast).
2. Slow range only training: The performance obtained by the FFNN when it is trained using only the slow-range subset. This case stresses the generalization capability of the controller when it is tested with the fast-range subset.
3. Slow range plus mental training: In this case the two architectures operate together as a single hierarchical architecture, in which first both nets are trained with the slow-range subset, then the RNN runs in mental imagery mode to build a new dataset of fast examples for the FFNN, which is incrementally trained in this way (see the sketch below).

Fig. 28.4 Comparison of the distance (m) reached by the object after throwing, with the FFNN as controller and different training approaches (benchmark, full-range training, slow range only training, slow range plus mental training), as a function of the angular velocity (degrees/s). Negative values represent the objects falling backward

As expected, the FFNN is the best controller for the task if the full range is given as training; thus, it is the ideal controller for the task (Table 28.1). However, not surprisingly, Table 28.2 shows that the FFNN is not able to generalize to the fast range when it is trained with the slow range only. These results show that the generalization capability of the RNN helps to feed the FFNN with new data to cover the fast range, simulating mental training. In fact, the FFNN, trained only with the slow subset, is not able to foresee the trend of the duration in the fast range; this implies that fast movements last longer than needed and, because the inclination angle goes over 90°, the object falls backward (Fig. 28.4).
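The slow range plus mental training condition (case 3) can be summarized as the following pipeline, sketched here in Python. The imagery network is replaced by a toy stand-in so that the snippet runs end to end; the stand-in dynamics, the simplified release criterion, and all names are our assumptions for illustration, not the authors' model.

import numpy as np

def rollout_duration(rnn_step, velocity, dt=0.03, max_steps=25):
    """Closed-loop imagery rollout: start from a neutral internal state, repeatedly
    feed predictions back in, and read off the duration at which the network
    predicts the object should be released (simplified release criterion)."""
    state = np.zeros(5)
    for k in range(max_steps):
        state = rnn_step(velocity, state)
        if state[4] < 0.5:                    # grab/release prediction drops -> release
            return (k + 1) * dt
    return max_steps * dt

def toy_rnn_step(velocity, state):
    # Toy stand-in for the trained imagery RNN (NOT the authors' model): it only
    # integrates the predicted elapsed time and flips the grab unit once the arm
    # has swung 'far enough' for the commanded velocity.
    new = state.copy()
    new[2] = state[2] + 0.03                  # predicted elapsed time
    new[4] = 1.0 if velocity * new[2] < 0.05 else 0.0
    return new

slow_velocities = np.linspace(0.1, 0.5, 5)    # range covered by real executions
fast_velocities = np.linspace(0.6, 1.0, 5)    # range never executed for real
slow_data = [(v, rollout_duration(toy_rnn_step, v)) for v in slow_velocities]
imagined_data = [(v, rollout_duration(toy_rnn_step, v)) for v in fast_velocities]
augmented_set = slow_data + imagined_data     # velocity->duration pairs used to retrain the FFNN
print(augmented_set)

The design point is that the imagined fast-range pairs are produced purely by closed-loop prediction, so the FFNN can be incrementally retrained on velocities the robot has never physically executed.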
Table 28.1 Full-range training: comparison of average results of feedforward and recurrent artificial neural nets

Test range   Feedforward net                                      Recurrent net
             Duration (s)  Error%   Release point (deg)  Error%   Duration (s)  Error%   Release point (deg)  Error%
Slow         :472          :75      3:78                 6:46     :482          3:6      33:345               :96
Fast         :22           :87      3:976                6:34     :94           5:67     28:88                22:8
Full         :37           :2       3:486                6:39     :36           4:86     3:32                 8:2

Table 28.2 FFNN: comparison of average performance improvement with artificial mental training

Test range   Slow range only training                             Slow range plus mental training
             Duration (s)  Error%   Release point (deg)  Error%   Duration (s)  Error%   Release point (deg)  Error%
Slow         :474          :38      3:63                 7:88     :47           :74      3:774                8:8
Fast         :247          26:92    64:95                :72      :88           7:2      2:9                  35:89
Full         :335          6:99     5:593                7:34     :298          5:3      24:74                25:

The FFNN's failure in predicting temporal dynamics is explained by the simplistic information used to train it, which seems not to be enough to reliably predict the duration time in a faster range never experienced before. On the contrary, the greater amount of information that comes from proprioception, and the fact that the RNN has to integrate this information over time in order to perform the movement, make the RNN able to create a sort of internal model of the robot's body behavior. This allows the RNN to generalize better and, therefore, to guide the FFNN in enhancing its performance. This interesting aspect of the RNN can be partially unveiled by analyzing the internal dynamics of the neural network, which can be done by reducing the complexity of the hidden neuron activations through a principal component analysis. Figure 28.5a, for example, presents the values of the first principal component at the last time step, that is, after the neural network has finished the throwing movement, for all the test cases, both slow and fast, showing that the internal representations are very similar, also in the case in which the RNN is trained with the slow range only.

Fig. 28.5 (a) Hidden units activation analysis: lines represent the final values of the first principal component for all test cases (full range, plus mental training, slow only); (b) hidden units activation analysis: lines represent the values of the first principal component over time steps for a slow velocity, a medium velocity, and a fast velocity

Interestingly, the series shown in Fig. 28.5a are highly correlated with the duration time, which is never explicitly given to the neural network. The correlation is 97.5 for the full-range training, 99.4 for slow only, and 99.37 for slow plus mental training. This result demonstrates that the RNN is able to extrapolate the duration time from the input sequences and to generalize when operating with new sequences never experienced before. Similarly, Fig. 28.5b shows the first principal component of the RNN in relation to different angular velocities: slow being the slowest test velocity, medium the fastest within the slow range, and fast the fastest possible velocity tested in the experiment. As can be seen, the RNN is able to uncover the similarities in the temporal dynamics that link slow and fast cases. Hence, it is finally able to better approximate the correct trajectory of joint positions also in situations not experienced before.

Second Experimental Study: Mental Training Evoked by Language

In this experimental study, we dealt with motor imagery and how verbal instruction may evoke the ability to imagine movements, either ones already seen before or new ones obtained by combination of past experiences. These imagined movements either replicate the expected new movement required by the verbal commands or correspond in accuracy to those learned and executed during the training phases. Motor imagery is defined as a dynamic state during which representations of a given motor act are internally rehearsed in working memory without any overt motor output [28.67]. This study extends the first experimental study presented above by focusing on the integration of auditory stimuli, in the form of verbal instructions, with the motor stimuli already experienced by the robot in past simulations.
Simple verbal instructions are added to the training phase of the robot, in order to explore the impact that linguistic stimuli can have on its processes of mental imagery practice and subsequent motor execution and performance. In particular, we tested the ability of our model to use imagery to execute new orders obtained by combining two single instructions. This study has been inspired by embodied language approaches, which are based on evidence that language comprehension is grounded in the same neural systems that are used to perceive, plan, and take action in the external world. Figure 28.6 presents pictures of the action with the icub simulator, which was commanded to execute the four throw tasks according to the verbal command issued.

The basic task is the same as that presented in Fig. 28.2; in this case the torso also moves to obey the verbal commands left and right.

Fig. 28.6 Examples of the icub simulator in action: pictures of the execution of the throw tasks (front, back, left, right)

The neural system that controls the robot is a three-layer RNN with the architecture proposed by Elman [28.66]. The Elman RNN adds to the input layer a set of context units, directly connected with the middle (hidden) layer with a weight of one (i.e., directly copied). At each time step, the input is propagated in a standard feedforward fashion, and then a learning rule is applied. The fixed back connections result in the context units always maintaining a copy of the previous values of the hidden units (since they propagate over the connections before the learning rule is applied). This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. To model mental imagery, the outputs related to the motor activities are redirected to the corresponding inputs. Similarly to the previous experimental study, after the learning phase, in which real data collected during simulator execution were used to train the RNN and for comparison with imagined data, we tested the ability of the RNN architecture to model mental imagery. As before, this was achieved by adding back connections from the motor outputs to the motor inputs; at the same time, connections from/to the joint encoders and motor controllers are deactivated. This setup is presented in Fig. 28.7, where red connections are the ones active only when the imagery mode is on, while green connections, including the motor controller, are deactivated. Specific neurons, one for each verbal instruction, were included in the input layer of the RNN in order for it to take these commands into account, while the sensorimotor information is directed to the rest of the neurons in the input layer. The RNN architecture implemented, as presented in Fig. 28.7, has 4 output units, 20 units in the hidden layer, and 27 units in the input layer; 7 of them encode the proprioceptive inputs from the robot's joints and 20 are the context units, that is, back links of the hidden units that simply copy the value from the output of the upper unit to the input of the lower unit. The learning algorithm and parameters are the same as in the first experimental study. As proprioceptive motor information, we take into account just the following three joints: shoulder pitch, torso yaw, and hand wrist pitch. In addition, in order to model the throw of an object, the primitive action to grab/release was also considered in the motor information fed to the network. Visual information was not computed, and speech input processing was based on standard speech recognition systems.

Fig. 28.7 Recurrent neural network architecture used in the second experimental study. Brown connections are active only in imagery mode, while light grey connections are deactivated

Using the icub simulator, we performed two experiments. The first experiment aimed to evaluate the ability of the RNN to model artificial mental imagery. It was divided into two phases: in the first phase, the network was trained to predict its own subsequent sensorimotor state. The task was to throw a small object, placed in the right hand of the robot, which is able to grab and release it, in different directions (forward, left, right, back).
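To illustrate how verbal instructions can enter such a network, the following Python sketch builds an input vector from normalized joint positions, a grab/release flag, and one unit per verbal command, with composed orders activating two command units at once. The exact unit layout and the command vocabulary are illustrative assumptions based on Fig. 28.7, not the authors' published encoding.

import numpy as np

COMMANDS = ("front", "back", "right", "left")

def encode_input(joints, grabbed, command=None):
    """One input vector for the Elman-style controller: normalized joint positions
    (shoulder pitch, torso yaw, wrist pitch), the grab/release flag, and one unit
    per verbal command; hypothetical layout for illustration only."""
    language = np.zeros(len(COMMANDS))
    if command is not None:
        for word in command.split():          # composed orders activate two units at once
            language[COMMANDS.index(word)] = 1.0
    return np.concatenate([np.asarray(joints, dtype=float),
                           [1.0 if grabbed else 0.0],
                           language])

# single command, as in the first test ...
x_front = encode_input(joints=[0.5, 0.5, 0.5], grabbed=True, command="front")
# ... and a composed order (two command units active at the same time), as a
# stand-in for the throw-plus-side commands used in the mental-training test
x_composed = encode_input(joints=[0.5, 0.5, 0.5], grabbed=True, command="front left")
print(x_front, x_composed)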

To this end, the RNN was trained using the proprioceptive information collected from the robot. This proprioceptive information consisted of sensorimotor data (i.e., joint positions) and of the verbal commands given to the robot according to the directions. In the second phase, we tested the ability of the RNN to model mental imagery by providing only the auditory stimulus (i.e., the verbal commands) and requiring the network to obtain the sensorimotor information from its own outputs. The goal of the second experiment was to test the ability of the RNN to imagine how to accomplish a new task. In this case we had three phases: in the first phase (real training), the RNN was trained to throw front and just to move left and right (with no throw). In the second phase (imagined action), the RNN was asked to imagine its own subsequent sensorimotor state when the throw command is issued together with a side command (left or right). In the final phase (mental training), the input/output series obtained were used for an additional mental training of the RNN. After the first and third phases, experimental tests with the icub simulator were made to measure the performance of the RNN in controlling the robot.

In the first experiment, we tested the ability of the RNN to recreate its own subsequent sensorimotor state in the absence of external stimuli. In Fig. 28.8, we present a comparison of training and imagined trajectories of learned movements according to the verbal command issued:

1. Results with the FRONT command
2. With the BACK command
3. With the RIGHT command
4. With the LEFT command.

Imagined trajectories are accurate with respect to the ones used to train the robot; only in Fig. 28.8b do we notice a slight difference between the imagined and training positions of the arm. This difference can be attributed to the fact that the BACK command is the only one that does not require the arm to stop early in throwing the object. In other words, the difference is related to the timing of the movement rather than to its accuracy. The results show that the RNN is able to recall the correct trajectories of the movement according to the verbal command issued. The trajectories are the sequences of joint positions adopted in the movements.

The second test was conducted to evaluate the ability of the RNN to build its own subsequent sensorimotor states when it is asked to accomplish new tasks not experienced before. In this case, the RNN was trained only to throw front and to move right and left (without throwing). To allow the RNN to generalize, training examples were created using an algorithm that randomly chose the joint positions not involved in the movement; that is, when throwing, the torso joint had a fixed position that was randomly chosen, and the same was true for the arm joints when moving right and left. The test cases, presented in Fig. 28.9, were composed using two commands (e.g., throw together with right or left). In our experiments, we tested two different approaches to the language processing. In this test, the two commands were processed at the same time, so that the input neurons associated with throw and right (or left) were fully activated at the same time with value 1. Results of the mental training experiment are presented in Fig. 28.10, which shows the error of the torso and arm joint positions with respect to the ideal ones.
The before mental training column presents the results of tests made without additional training, the after mental training column shows the results after the simulated mental training, and the imagined action only column refers to totally imagined data (i.e., when the RNN predicts its own subsequent input). Comparing the results before and after mental training, an improvement in the precision of dual command execution can be noticed; this should be attributed to the additional training, which helps the RNN to operate in a condition not experienced before. It should also be noticed that the throw right task shows worse performance than throw left in the icub simulations, but the same result is not observed in imagined-only action mode. This can mainly be explained by the influence of real proprioceptive information coming from the robot joints, which modifies the ideal behavior expected by the RNN, as evidenced by the comparison between the imagined-only and the real tests. In fact, we noticed that when a right command is issued, the robot torso initially moves to the left for a few time steps and then turns right. Since the total time of the movement is determined by the arm movement to throw, the initial error for the torso cannot be recovered.

Third Experimental Study: Spatial Imagery

For this experiment, the environment is a square portion of a soccer field, whose length and width are both 5 m. At one end a goal :94 m wide is placed, as can be seen in Fig. 28.11b; it is represented all in blue to contrast with the background and to be easily recognized. The robot can be positioned anywhere in this square and, as a starting position, the ball is placed in front of its left foot (Fig. 28.11a). The neural system that controls the robot is a fully connected RNN with 6 hidden units, 37 input units, and 6 output units.
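Before the input variables are detailed in the following paragraphs, a minimal Python sketch of how such a 37-dimensional input vector could be assembled and normalized into [0, 1] is given here: 3 joint angles, a 32-bit visual vector locating the goal, and the 2 polar coordinates of the robot with respect to the goal. The ordering of the units and the helper names are illustrative assumptions, not the authors' implementation.

import numpy as np

def normalize_angle(theta_deg, lo=-90.0, hi=90.0):
    # maps [-90 deg, 90 deg] to [0, 1]; 0 deg (goal straight ahead) -> 0.5
    return (theta_deg - lo) / (hi - lo)

def normalize_radius(r_m, r_max=5.0):
    # maps [0, r_max] metres to [0, 1]
    return r_m / r_max

def encode_spatial_input(joint_angles, visual_bits, r_m, theta_deg):
    """Input vector for the spatial-imagery RNN: 3 normalized joint angles
    (neck, torso, body rotation), a 32-bit visual vector, and the 2 polar
    coordinates (radius, angle) of the robot with respect to the goal."""
    x = np.concatenate([np.asarray(joint_angles, dtype=float),
                        np.asarray(visual_bits, dtype=float),
                        [normalize_radius(r_m), normalize_angle(theta_deg)]])
    assert x.shape == (37,)
    return x

x = encode_spatial_input(joint_angles=[0.5, 0.5, 0.5],
                         visual_bits=np.zeros(32),
                         r_m=2.5, theta_deg=0.0)
print(x.shape, x[-2:])   # (37,) with the goal straight ahead at half the maximum distance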

Fig. 28.8a-d First test: a comparison of training and imagined trajectories (joint position vs. time step) of the learned movements; the panels plot the training and imagined positions of the torso, arm, and hand

Fig. 28.9 Pictures presenting the execution of the two composed tasks: throw left and throw right. As test cases for autonomous learning with the icub simulator, the actions to throw left or right are now obtained by learning and then combining the basic actions of throw and move left or right

The main difference between a standard feedforward neural network and the RNN is that, in the latter case, the training set consists of a series of input-output sequences. The RNN architecture allows the robot to learn dynamical sequences of actions as they develop in time. The goal of the learning process is to find optimal values of the synaptic weights that minimize the error, defined as the error between the teaching sequences and the output sequences produced by the network.

Fig. 28.10 Second test: autonomous learning of new combined commands; error (%) of the torso and arm joint positions with respect to the ideal ones, before mental training, after mental training, and for the imagined action only

Figure 28.12 presents the neural system. The input variables are as follows:

- The angles of, respectively, the neck (left-right movement), the torso (left-right movement), and the body rotation joint (a joint attached to the body that allows the robot to rotate on its vertical axis).
- Visual information, provided through a vector of 32 bits, which encodes a simplified visual image obtained by the eyes and allows the robot to locate the goal in its visual field.
- The polar coordinates of the robot with respect to the goal (represented by the radius and the polar angle).

Fig. 28.11a,b icub robot simulator: kicking the ball (a); the goal (b)

The six outputs are, respectively, the polar coordinates of the robot with respect to the goal, the desired angles of the neck, the torso, and the body rotation joint, and an additional binary output that makes the robot kick the ball when its value is 1. In the learning phase, this output was set to 1 only when the motion ends and the robot must kick the ball toward the goal. All input and output variables are normalized in [0, 1]. The angle (θ) ranges from -90° to 90°, while the radius (r) can vary from 0 to 5 m. As the angle and radius values are normalized in [0, 1], the 0° angle (i.e., when the goal is in front of the robot) corresponds to 0.5.

Fig. 28.12 Neural network architecture for spatial position estimation. Dotted lines (recurrences from output to input) are active only when the network operates in imagery mode

For training the neural network, we used the backpropagation through time (BPTT) algorithm, which is typically used to train neural networks with recurrent nodes for time-related tasks. This algorithm allows a neural network to learn dynamical sequences of input-output patterns as they develop in time. Since we are interested in the dynamic and time-dependent processes of the robot-object interaction, an algorithm that allows dynamic events to be taken into account is more suitable than the standard backpropagation algorithm [28.68]. For a detailed description of the BPTT algorithm, see also [28.69]. The main difference between the standard backpropagation algorithm and BPTT is that in the latter case the training set consists of a series of input-output sequences, rather than a single input-output pattern. BPTT allows the robot to learn sequences of actions. The goal of the learning process is to find optimal values of the synaptic weights that minimize the error E, defined as the error between the teaching sequences and the output sequences produced by the network. The error function E is calculated as follows:

E = \sum_s \sum_t \sum_i \left[ \left( \bar{y}_{its} - y_{its} \right) y_{its} \left( 1 - y_{its} \right) \right]^2 ,    (28.5)

where ȳ_its is the desired activation value of the output unit i at time t for the sequence s, and y_its is the actual activation of the same unit produced by the neural network, calculated using (28.1) and (28.2). During the training phase, the synaptic weights at learning step n + 1 are updated using the error δ_i calculated at the previous learning step n, which in turn depends on the error E, according to (28.4).

To build the training set, we used the icub simulator primitives to position the robot in eight different locations of the field. For each position, the robot rotated to acquire the goal by means of a simple search algorithm. During the movement, motor and visual information were sampled in input-output sequences of 2 time steps. The command to kick the ball is issued after the acquisition of the target (the middle of the goal). The kicking movement was preprogrammed. Then, the neural network was trained to predict the next sensory state (excluding the visual input) by means of the backpropagation through time algorithm for 5 epochs, after which the mean squared error (MSE) in estimating the six output variables was :56. In a preliminary study, the robot was first controlled by the same algorithm used for collecting the training series (controlled condition); the neural network was not
Then, the neural network was trained to predict the next sensory state (excluding the visual input) by means of the backpropagation through time algorithm for 5 epochs, after which the mean squared error (MSE) in estimating the six output variables was 0.56. In a preliminary study, the robot was first controlled by the same algorithm used for collecting the training series (controlled condition); the neural network was not
