NEURAL SYSTEMS FOR INTEGRATING ROBOT BEHAVIOURS

Brett Browning & Gordon Wyeth
Department of Computer Science and Electrical Engineering, University of Queensland, St Lucia, Brisbane, Australia

Abstract

This paper compares and contrasts two approaches to integrating robot behaviours. Robot behaviours can be produced using neural networks, as illustrated by Braitenberg's vehicles [Braitenberg, 1984]. Experiments show that homogeneous integration suffers from a behaviour stability problem and a scalability problem that make it difficult to use for large-scale, complicated control problems. In contrast, it is shown that competitive integration permits incremental design and avoids the stability and scalability problems. The experiments are conducted in a maze environment using a real robot.

1. Introduction

There is growing interest in the research community in developing intelligent systems for mobile robots based upon connectionist and biologically plausible models [Pfeifer, 1996]. The resulting systems, which utilise Artificial Neural Networks (ANNs), have the potential to make intelligent agents smarter, and offer insight into cognitive science issues that explore the link between brain and behaviour. In previous work, we have shown that ANN systems are readily applicable to the generation of robot behaviours [Wyeth, 1997]. The integration of behaviours in that work was handled by assigning different weights to different behaviours. In this system, called homogeneous integration, behaviours connected with larger weights tended to subsume behaviours connected with smaller weights. Behaviours could therefore be prioritised and integrated in a meaningful manner. In this paper, it is shown that homogeneous integration is ill-suited to robot systems with multiple behaviours that share a single sensory resource. A new scheme for behaviour integration, called competitive integration, overcomes the problems associated with homogeneous integration.
The competitive integration approach produces more robust architectures and scales to more difficult problems. The improvements are illustrated on a small mobile robot in a maze environment.

1.1 Overview of the paper

Section 2 presents an overview of robot behaviour generation using neural components and describes the robot used for the experiments. Section 3 details the neural model that was used for the networks described in Sections 4 and 5. Section 4 details the homogeneous integration approach to the maze-traversing robot. Section 5 describes the competitive integration approach and its performance in the maze. The results of the two networks are compared and contrasted in the discussion section (Section 6).

2. Background

The techniques proposed have much in common with behaviour-based robotics [Brooks, 1990]. Behaviour-based robots are built on the principle that intelligence emerges from the many competing behaviours within an agent, rather than from a single intelligence-producing process. In this paper, a subset of behaviours is discussed. All behaviours described here are reactive; they do not rely on the memory of previous activity to perform their functions. For the purposes of this paper, reactive behaviours will be referred to as schemas. For a neural control system, schemas are implemented as neural networks that do not maintain an internal representation of the world. This implies that they have very limited state information and are mainly feedforward structures.

2.1 Braitenberg Vehicles

[Braitenberg, 1984] describes a series of vehicles that demonstrate how simple structures resembling neurons can create animat behaviour that appears intelligent. The first few of these vehicles became somewhat popularised [Dewdney, 1987] and came to be known as Braitenberg Vehicles.

Figure 1: The fundamental Braitenberg vehicles (Love, Fear, Hate, Curious). Each robot behaviour is produced by two connections.
Four of the fundamental examples are shown in Figure 1. The operation of these vehicles is, at once, both simple and profound. Consider these vehicles to be operating on a plain with randomly placed lights. The arcs at the front of each vehicle represent light sensors that produce a signal based on the intensity of nearby light sources. The boxes at the back represent propulsion units that drive the vehicle at a velocity proportional to the signal that the actuator receives. Between sensors and actuators are connections that may be inhibitory or excitatory. These simple components provide each vehicle with behaviour representative of the labelled emotions, which is readily understood by thinking out the expected reactions of each type of vehicle.

The connections found in Braitenberg vehicles bear resemblance to the weights used in Artificial Neural Network (ANN) research. Similarly, the units Braitenberg proposes for combining behaviours closely resemble the units used in ANN research. This paper explores the performance of the units proposed by Braitenberg for combining behaviour in the context of a real robot.

2.3 The Robot and its Environment

The experiments with neural control systems were developed using a real robot, CUQEE III. CUQEE III is a small, fully autonomous mobile robot that is used in micromouse competitions [Otten, 1990]. The robot is shown in Figure 2.

3. Neural Model

The artificial networks used in this paper are based on connectionist units that are common to ANN research. All artificial neurons have the same generic structure and perform the computation:

$$V_i = g_i\left(\mathbf{w}_i \cdot \boldsymbol{\xi} - \theta_i\right) = g_i\left(\sum_j w_{ij}\,\xi_j - \theta_i\right)$$

Here $V_i$ is the activation, $\mathbf{w}_i$ is the weight vector, $\theta_i$ is the threshold and $\boldsymbol{\xi}$ is the input vector for unit $i$. Note that the input vector may be the sensory input, or it may be the outputs of other units. The units can be classed according to the transfer function $g_i(\cdot)$ used for the unit. The simplest neurons are linear units, where the transfer function is a linear function; these are used for the motor units LV and RV. The speed of each motor is proportional to the activation.
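As an illustrative sketch (not the authors' implementation), the generic unit computation and the two transfer functions can be expressed as follows; the example weight and input vectors are assumptions chosen to resemble a "straight corridor" schema:

```python
import numpy as np

def unit_activation(weights, inputs, threshold):
    """Generic unit: V = w . xi - theta, before the transfer function."""
    return float(np.dot(weights, inputs)) - threshold

def linear(v):
    """Linear transfer, used for the motor units LV and RV."""
    return v

def piecewise_linear(v, saturation=1.0):
    """Zero for negative activation, linear up to a saturation point."""
    return min(max(v, 0.0), saturation)

# Hypothetical example: a schema unit tuned to the sensor vector (SL, SC, SR).
w = np.array([0.7, 0.0, 0.7])   # assumed prototype for a "straight" schema
s = np.array([0.5, 0.0, 0.5])   # assumed sensory input in a straight corridor
out = piecewise_linear(unit_activation(w, s, threshold=0.0))
```

A schema tuned this way responds most strongly when the sensory vector points in the same direction as its weight vector, which is the property exploited in Section 4.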
Negative activations cause the motors to drive in reverse of the normal direction. The majority of units in this paper are piece-wise linear units. In this case the transfer function $g_i(\cdot)$ is 0 for any negative activation and linear for any positive activation. For units with self-reinforcing connections it is necessary to limit the output of the unit. This is achieved by imposing a saturation point, above which the output remains constant. This is shown in Figure 3.

Figure 2. A picture of CUQEE III. The robot is fully autonomous and fits in the palm of the hand.

Figure 3. The piece-wise linear transfer function: zero output for negative input, linear up to saturation.

CUQEE III has three distance sensors located on the sensor arm. The sensors detect the distance to the walls directly in front of and to either side of the robot. The robot also has two drive wheels in a wheelchair arrangement, one on either side. A velocity control loop is implemented in software for each wheel. The robot's environment consists of a maze with walls arranged on an orthogonal grid. The grid is roughly twice the width of the robot. In a micromouse competition, the robot has to find its way to the centre of the maze through randomly placed walls. For the purposes of this paper, only reactive navigation processes such as corridor following are considered.

For the purposes of the neural control system, the sensor readings are converted to activations with values between 0 and 1. A wall that is closer produces a higher activation of the sensor. The sensor inputs are represented as the activations of three neurons: Sensor Left (SL), Sensor Centre (SC) and Sensor Right (SR) for the left, centre and right sensors respectively. The velocity of each drive wheel is controlled by the activation of two neurons: LV for the Left Velocity, and RV for the Right Velocity. The sensory input of the robot does not suffer greatly from noise.
However, due to the limited resolution of the sensory input combined with the movement of the robot through the maze, the sensory inputs can vary dramatically. This can have a drastic effect on the performance of the network, so it becomes necessary to augment the neural units with a short-term memory. This makes each unit less sensitive to short-term variation of its input, which is helpful in certain situations (such as turning a corner) where it is necessary to continue the behaviour for a short time after the sensory input has changed. To implement the short-term memory effect, a first-order decay is added to the output of the unit, effectively making the units leaky integrators. The new equation for the output is:

$$\tau_i \frac{dO_i(t)}{dt} = -O_i(t) + V_i(t)$$

Here $V_i(t)$ is given by the equation presented earlier, and $O_i(t)$ is the new output of the unit that is transmitted to the connecting neurons. It is important to

realise that the time constant, $\tau_i$, is directly related to the velocity of movement of the vehicle and controls the reaction rate of the network. Thus the time constant should be chosen carefully. A time constant of 40 ms is used throughout.

4. Homogeneous Integration

The schema integration discussed in this section was inspired by the schema integration used by Braitenberg for vehicle 3c [Braitenberg, 1984, p. 12]. The approach consists of summing the outputs from each schema and using the resultant sum to drive the robot. The strength, or weight, of each connection leads to a behaviour hierarchy, which in essence creates the personality of the agent. For a given scenario, changing the strength of the integration connections changes the behaviour of the robot. The connections from the schema units to the motor units are referred to as the motor association layer, and the connections from the sensory units to the schema units as the schema layer.

4.1 Schema Selection

It is helpful to visualise the sensory space of the robot to see how the problem can be partitioned. The sensory space of CUQEE III is three dimensional, formed by the orthogonal dimensions SL, SC and SR. Sensory input is represented as the vector s formed from the activations (SL, SC, SR). Similarly, the weight vectors for each of the schema units can be represented in sensory space.

Figure 4. Sensory Space: the cube that forms the sensory space of the robot and the partitioning of schemas. Here s is an example sensory input, in this case for a straight corridor.

There are five situations faced by the robot: straight sections, left and right corners, dead ends and empty areas. By covering the sensory space with the weight vectors of the schemas it is possible to generate schemas that logically represent these situations. The weight vectors for the schemas are shown in Figure 4 as: go left (L), go straight (S), go right (R) and U turn (U). Note that once the weight vectors are chosen, the network is virtually complete. The fifth situation, empty areas, is covered by ensuring that in the absence of sensory input the motor units maintain non-zero activity levels. This can be achieved with non-zero thresholds on each of the motor units. Since CUQEE's sensors have limited range, sensory input is often lost during cornering and U turns; lack of sensory input causes the robot to behave as if it were in an empty area.

By virtue of the neural constructs used, each schema is tuned to a particular sensory input: a schema becomes more active as the dot product of the sensory input vector and the schema vector increases. Since all schema vectors are normalised, the unit with the highest activation will be the one whose schema vector is closest to the sensory input.

Schema units, in turn, represent a particular behaviour of the motor outputs. This behaviour is generated by the weights chosen between the schema units and the motor units: the motor association layer. When the robot is in a corridor with no dead end, the sensory input is confined to the SL-SR plane; the S schema will be the most active, indicating that the robot should drive straight ahead. When there are no walls to the right of the robot, but walls on the left and in front are present, the sensory vector is confined to the SL-SC plane. In this situation the robot must turn to the right, hence the R schema vector is located in the SL-SC plane. Similarly, the L schema vector is located in the SR-SC plane. Finally, when the robot is in a dead end (walls in front and to either side) the sensory vector is approximately equiangular to the SL, SC and SR axes; the U prototype vector is oriented in this direction.

The selective tuning of schema units places constraints upon the motor association weights. For example, when faced with the situation that makes the L schema unit most active, the robot can only turn left, so the motor association weights must reflect this response. In this case the connection to the RV unit must be excitatory and stronger than the connection to the LV unit. It should be noted that the constraints are only relative to the schema under consideration; they do not affect any other schema-to-motor connections. Thus the relative strengths of different schema-to-motor connections can still be modified.

Figure 5. The homogeneous network. Schema weights are the normalised vectors in sensor space defined above; the U unit has an experimentally determined threshold of 0.58. Thick lines indicate strong connections, thin lines weak connections. Dashed lines indicate inhibitory synapses (negative weights) and solid lines excitatory synapses (positive weights).

4.3 Integration

Figure 5 shows the network developed using the homogeneous integration approach. As mentioned above, the schema layer places constraints upon the motor association weights, but the relative strengths of the weights are yet to be determined. The schema hierarchy is enforced by the combination of the motor association layer and the schema layer: the schema unit activations depend on the current sensory input, and combined with the relative strengths of the motor association weights they define the schema hierarchy for that input. Note that the behaviour of the robot is defined by the combination of all the elements in the schema hierarchy. For this robot, the motor association weights are ordered by relative strength (from strongest to weakest): U turn, left and right turns, followed closely by straight. In the absence of sensory input, activity in the network is maintained by the negative thresholds on the LV and RV units.

Due to the confines of the dead end, the U schema must control the robot quite precisely. To achieve precise control, the U schema must dominate the other behaviours completely whenever it activates. As a result, the U schema has much stronger motor association weights. A threshold term is added to the U schema to ensure that it does not activate unless SL, SC and SR are all sufficiently active.

There is a limit to how far the motor association weights can be increased. The strength of the weights represents not only the domination of a behaviour, but also the gain in the feedback loop created by the sensors, the actuators and the environment. As the gain in this loop is increased, stability decreases and the performance of the robot degrades sharply. This point is highlighted by the following results.
4.4 Results

Rather than attempting to test all possible situations, we show the results of the network in the two main scenarios faced by the robot. Figure 6 shows the robot performing a left and a right turn followed by a dead end.

Figure 6. The homogeneous network performing left and right hand turns, followed by a dead end. The robot started from the left of the picture. Note that the robot failed to perform the U turn correctly and crashed into the wall.

Clearly, the homogeneous integration network can perform left and right turns. The speed of the robot during a turn, the sharpness of the corner and the time it takes to re-centre itself in the corridor are a result of the shifts in activation amongst the schema units. Changes in the time constants chosen for each unit and in the relative strengths of the motor association weights affect the performance of the robot. The homogeneous network successfully slows the robot down as it enters each corner, which shows the effectiveness of the S schema. Similarly, the L and R schema units work effectively, as shown by the robot successfully negotiating the corners and remaining close to the centre of the corridor on straight sections.

However, the network fails to adequately control the robot in the dead-end section of the maze: the robot crashes halfway through executing a U turn. The problem occurs when the robot is too close to a wall and the sensors can no longer detect the wall correctly. Without the stimulus of the sensors, the U-turn schema cannot dominate the other schema units sufficiently. Simply increasing that schema's motor association weights would achieve the domination, but would compromise the stability of the schema's control. The method of homogeneous integration of neural control systems has failed to provide a solution to reactive navigation in a maze environment; no amount of tweaking provides a reliable solution in this situation.
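To make the summing scheme concrete, here is a minimal sketch of homogeneous integration. The schema prototypes and motor association weights are illustrative assumptions, not the paper's tuned values; only the U threshold of 0.58 comes from the text:

```python
import numpy as np

# Normalised schema prototypes in (SL, SC, SR) sensory space (assumed values).
SCHEMAS = {
    "S": np.array([1.0, 0.0, 1.0]) / np.sqrt(2),   # straight corridor
    "L": np.array([0.0, 1.0, 1.0]) / np.sqrt(2),   # walls ahead and right -> turn left
    "R": np.array([1.0, 1.0, 0.0]) / np.sqrt(2),   # walls ahead and left -> turn right
    "U": np.array([1.0, 1.0, 1.0]) / np.sqrt(3),   # dead end -> U turn
}

# Motor association weights (LV, RV) per schema; hypothetical magnitudes that
# respect the paper's ordering, with U given much stronger weights.
MOTOR_W = {
    "S": np.array([1.0, 1.0]),
    "L": np.array([-0.5, 1.5]),   # RV stronger and excitatory -> turn left
    "R": np.array([1.5, -0.5]),
    "U": np.array([-3.0, 3.0]),
}
U_THRESHOLD = 0.58  # experimentally determined threshold from the paper

def homogeneous_motor_output(s):
    """Sum every schema's contribution into the motor units (LV, RV)."""
    motors = np.zeros(2)
    for name, proto in SCHEMAS.items():
        v = float(np.dot(proto, s))
        if name == "U":
            v -= U_THRESHOLD
        v = max(v, 0.0)            # piece-wise linear transfer
        motors += v * MOTOR_W[name]
    return motors

lv, rv = homogeneous_motor_output(np.array([0.5, 0.0, 0.5]))  # straight corridor
```

With a symmetric corridor input both motor outputs are equal, so the robot drives straight; in a dead end the strong U weights dominate the sum, which is exactly the loop-gain increase the text identifies as the source of instability.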
The following section shows a method that is both stable and reliable.

5. Competitive Integration

The key to improving the results of the last section is a change in the approach to integrating schemas. In this section a new network is tested on the same problem using an approach the authors have termed competitive integration. In competitive integration, schemas are made to compete against each other, rather than cooperate, to generate behaviour. Schemas inhibit one another, with a strength proportional to their dominance in the hierarchy: a schema that is higher in the hierarchy has greater importance, and therefore its inhibitory weights to the other schemas are stronger. The main motivation for this approach is to overcome the problems associated with cooperative integration. By making schemas compete, only one schema wins and effectively gains complete control of the robot, without upsetting stability through excessive gain. When combined with leaky integration within the schema units, dominance shifts smoothly from one schema to the next. Furthermore, the noise tolerance of the network is improved, as fluctuations in sensory input do not cause similar fluctuations in the control of the robot.

5.1 The New Schema Integration

The major difficulty with the homogeneous integration network was the coordination between the U schema and the general maze traversal schemas (S, L and R). Using

competitive integration these will be the two superschemas that compete with each other for control. The new network is shown in Figure 7.

Figure 7. The competitive integration network. The S, L and R units are the same as before. The ST1 unit starts U firing and U's recurrent connection forces it into saturation. ST2 disables U at the end of the turn.

The U-turn schema has been modified so that it activates at the start of the turn and disables itself at the end of the turn. This is achieved by adding a recurrent connection to the U unit and using two new units, ST1 and ST2, to activate and deactivate the U unit respectively. The recurrent connection makes the U unit behave like a flip-flop: positive input causes the unit to saturate positively and then stay saturated; conversely, negative input causes the unit to saturate negatively and stay saturated until sufficient positive input is received. The units ST1 and ST2 act as switches that flip the U unit between these two states. ST1 is tuned to dead ends and ST2 is tuned to corridors; their weight vectors are the same as those used for the U schema unit and the S schema unit in the homogeneous integration, respectively. It is important to ensure that ST1 activates only in dead ends and that ST2 activates at the end of the turn and not during it. This is achieved by using high thresholds on these units, which ensure that the sensor vector has indeed come very close to one of the prototype schema vectors. Note that, in contrast to homogeneous integration, where the U unit had to fire for the duration of the turn, here ST1 only needs to fire at the start of the turn and ST2 at the end.

While the U schema is active it inhibits the other schemas through the competitive inhibitory connections, and so has sole control of the robot. Sole control allows the U schema to perform precise motor control without noise from the other schemas. Importantly, this domination of motor control is achieved without compromising stability.

5.2 Results

Figure 8 shows the performance of the network under the same conditions as for homogeneous integration. As the U-turn schema does not become active during normal maze traversal, and the other schemas are unchanged, the left and right turn experiment is the same as shown in Figure 6. As desired, the robot is now capable of performing the U-turn successfully. Furthermore, the U turn is performed at a controllable rate.

Figure 8. The competitive integration network performing left and right turns followed by a U-turn. The robot performed the task correctly and at a controllable pace.

6. Discussion

There are two main problems with homogeneous integration: stability and scalability. Stability of schema behaviour is a major issue, as each schema forms a closed control loop comprising the sensors, the schema, the motors and the real world. The motor association weights therefore form the gain of this control loop: too large a gain can result in instability, while too small a gain may not perform the desired task effectively. With schemas such as the U-turn schema, stability becomes a significant issue. Stability also becomes more difficult to maintain as the number of schemas increases. This means the networks are not scalable and design cannot be performed incrementally: the motor association weights must be changed every time new schemas are added. This is a serious problem for using this approach to integration in more complicated systems.

Clearly, the results show that competitive integration produces a more robust network than homogeneous integration, at least for this application.
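The flip-flop behaviour of the U unit can be sketched with the leaky-integrator dynamics of Section 3. The recurrent weight, switch weights, threshold and pulse lengths below are all assumed for illustration; only the 40 ms time constant comes from the paper:

```python
def leaky_step(o, v, tau=0.04, dt=0.001):
    """Euler step of the leaky integrator: tau * dO/dt = -O + V."""
    return o + (dt / tau) * (-o + v)

def clamp(x, lo=0.0, hi=1.0):
    """Piece-wise linear transfer with saturation."""
    return min(max(x, lo), hi)

def u_unit_step(o_u, st1, st2, w_rec=2.0, w_on=2.0, w_off=-2.5, theta=0.5):
    """U acts as a flip-flop: the recurrent weight holds it saturated once
    ST1 pushes it on; ST2's inhibition drives it back below threshold."""
    v = clamp(w_rec * o_u + w_on * st1 + w_off * st2 - theta)
    return clamp(leaky_step(o_u, v))

# ST1 fires briefly at the start of the dead end, ST2 at the end of the turn.
o = 0.0
for t in range(100):
    o = u_unit_step(o, st1=1.0 if t < 40 else 0.0, st2=0.0)
on_level = o           # U stays latched on after ST1 stops firing
for t in range(200):
    o = u_unit_step(o, st1=0.0, st2=1.0 if t < 40 else 0.0)
off_level = o          # U stays latched off after ST2 stops firing
```

Because the switches only need a brief pulse, ST1 fires at the start of the turn and ST2 at the end, while the recurrent connection maintains the U unit's state in between, mirroring the mechanism described above.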
Stability is no longer an issue, as only a small number of schemas are active at any one time. Incremental design is also possible: adding a schema is only a matter of setting the strengths of the inhibitory connections between schemas to reflect the new hierarchy.

The improvements in behaviour are not without cost. The addition of recurrent and competitive connections precludes the use of the many training algorithms designed for feedforward networks. Without conventional training algorithms, the neural robot designer must either choose weights by hand, or perhaps rely on some evolutionary method to select appropriate weight values.

7. Conclusions

In this paper, two possible approaches to schema integration have been described. The results have shown that, of the two, competitive integration provides improved autonomous behaviour. We have shown that homogeneous integration suffers from stability and scalability problems. These flaws severely

undermine the usefulness of the homogeneous approach for more complicated systems. The results have shown that competitive integration offers an improved design approach for the reactive schemas required for minimalist neural control of mobile robots.

8. Bibliography

[Braitenberg, 1984] Braitenberg, V. (1984). Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, MA.
[Brooks, 1990] Brooks, R.A. (1990). Elephants Don't Play Chess. Robotics and Autonomous Systems, vol. 6.
[Dewdney, 1987] Dewdney, A.K. (1987). Braitenberg memoirs: vehicles for probing behaviour roam a dark plain marked with lights. Scientific American, vol. 256, no. 3, March 1987.
[Otten, 1990] Otten, D. (1990). Building MITEE Mouse III. Circuit Cellar Ink.
[Pfeifer, 1996] Pfeifer, R. (1996). Building Fungus Eaters: Design Principles of Autonomous Agents. In From Animals to Animats 4, ed. Maes, P. et al., Cambridge, MA: MIT Press.
[Wyeth, 1997] Wyeth, G.F. (1997). Neural Mechanisms for Training Autonomous Robots. Mechatronics and Machine Vision in Practice, Toowoomba, Australia, IEEE Computer Society Press, September 1997.


Institute of Psychology C.N.R. - Rome. Using emergent modularity to develop control systems for mobile robots

Institute of Psychology C.N.R. - Rome. Using emergent modularity to develop control systems for mobile robots Institute of Psychology C.N.R. - Rome Using emergent modularity to develop control systems for mobile robots Stefano Nolfi Institute of Psychology, National Research Council, Rome, Italy. e-mail: stefano@kant.irmkant.rm.cnr.it

More information

Visual Categorization: How the Monkey Brain Does It

Visual Categorization: How the Monkey Brain Does It Visual Categorization: How the Monkey Brain Does It Ulf Knoblich 1, Maximilian Riesenhuber 1, David J. Freedman 2, Earl K. Miller 2, and Tomaso Poggio 1 1 Center for Biological and Computational Learning,

More information

Preparing More Effective Liquid State Machines Using Hebbian Learning

Preparing More Effective Liquid State Machines Using Hebbian Learning 2006 International Joint Conference on Neural Networks Sheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 2006 Preparing More Effective Liquid State Machines Using Hebbian Learning

More information

An Escalation Model of Consciousness

An Escalation Model of Consciousness Bailey!1 Ben Bailey Current Issues in Cognitive Science Mark Feinstein 2015-12-18 An Escalation Model of Consciousness Introduction The idea of consciousness has plagued humanity since its inception. Humans

More information

An Overview on Soft Computing in Behavior Based Robotics

An Overview on Soft Computing in Behavior Based Robotics An Overview on Soft Computing in Behavior Based Robotics Frank Hoffmann Fakultät Elektrotechnik und Informationstechnik Universität Dortmund D-44221 Dortmund (Germany) E-mail: hoffmann@esr.e-technik.uni-dortmund.de

More information

Time Experiencing by Robotic Agents

Time Experiencing by Robotic Agents Time Experiencing by Robotic Agents Michail Maniadakis 1 and Marc Wittmann 2 and Panos Trahanias 1 1- Foundation for Research and Technology - Hellas, ICS, Greece 2- Institute for Frontier Areas of Psychology

More information

A general error-based spike-timing dependent learning rule for the Neural Engineering Framework

A general error-based spike-timing dependent learning rule for the Neural Engineering Framework A general error-based spike-timing dependent learning rule for the Neural Engineering Framework Trevor Bekolay Monday, May 17, 2010 Abstract Previous attempts at integrating spike-timing dependent plasticity

More information

Observational Learning Based on Models of Overlapping Pathways

Observational Learning Based on Models of Overlapping Pathways Observational Learning Based on Models of Overlapping Pathways Emmanouil Hourdakis and Panos Trahanias Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH) Science and Technology

More information

Categories Formation in Self-Organizing Embodied Agents

Categories Formation in Self-Organizing Embodied Agents Categories Formation in Self-Organizing Embodied Agents Stefano Nolfi Institute of Cognitive Sciences and Technologies National Research Council (CNR) Viale Marx, 15, 00137, Rome, Italy s.nolfi@istc.cnr.it

More information

Perceptual Grouping in a Self-Organizing Map of Spiking Neurons

Perceptual Grouping in a Self-Organizing Map of Spiking Neurons Perceptual Grouping in a Self-Organizing Map of Spiking Neurons Yoonsuck Choe Department of Computer Sciences The University of Texas at Austin August 13, 2001 Perceptual Grouping Group Two! Longest Contour?

More information

The storage and recall of memories in the hippocampo-cortical system. Supplementary material. Edmund T Rolls

The storage and recall of memories in the hippocampo-cortical system. Supplementary material. Edmund T Rolls The storage and recall of memories in the hippocampo-cortical system Supplementary material Edmund T Rolls Oxford Centre for Computational Neuroscience, Oxford, England and University of Warwick, Department

More information

Ch.20 Dynamic Cue Combination in Distributional Population Code Networks. Ka Yeon Kim Biopsychology

Ch.20 Dynamic Cue Combination in Distributional Population Code Networks. Ka Yeon Kim Biopsychology Ch.20 Dynamic Cue Combination in Distributional Population Code Networks Ka Yeon Kim Biopsychology Applying the coding scheme to dynamic cue combination (Experiment, Kording&Wolpert,2004) Dynamic sensorymotor

More information

Lesson 6 Learning II Anders Lyhne Christensen, D6.05, INTRODUCTION TO AUTONOMOUS MOBILE ROBOTS

Lesson 6 Learning II Anders Lyhne Christensen, D6.05, INTRODUCTION TO AUTONOMOUS MOBILE ROBOTS Lesson 6 Learning II Anders Lyhne Christensen, D6.05, anders.christensen@iscte.pt INTRODUCTION TO AUTONOMOUS MOBILE ROBOTS First: Quick Background in Neural Nets Some of earliest work in neural networks

More information

Neural Cognitive Modelling: A Biologically Constrained Spiking Neuron Model of the Tower of Hanoi Task

Neural Cognitive Modelling: A Biologically Constrained Spiking Neuron Model of the Tower of Hanoi Task Neural Cognitive Modelling: A Biologically Constrained Spiking Neuron Model of the Tower of Hanoi Task Terrence C. Stewart (tcstewar@uwaterloo.ca) Chris Eliasmith (celiasmith@uwaterloo.ca) Centre for Theoretical

More information

Katsunari Shibata and Tomohiko Kawano

Katsunari Shibata and Tomohiko Kawano Learning of Action Generation from Raw Camera Images in a Real-World-Like Environment by Simple Coupling of Reinforcement Learning and a Neural Network Katsunari Shibata and Tomohiko Kawano Oita University,

More information

Module 1. Introduction. Version 1 CSE IIT, Kharagpur

Module 1. Introduction. Version 1 CSE IIT, Kharagpur Module 1 Introduction Lesson 2 Introduction to Agent 1.3.1 Introduction to Agents An agent acts in an environment. Percepts Agent Environment Actions An agent perceives its environment through sensors.

More information

Neural Cognitive Modelling: A Biologically Constrained Spiking Neuron Model of the Tower of Hanoi Task

Neural Cognitive Modelling: A Biologically Constrained Spiking Neuron Model of the Tower of Hanoi Task Neural Cognitive Modelling: A Biologically Constrained Spiking Neuron Model of the Tower of Hanoi Task Terrence C. Stewart (tcstewar@uwaterloo.ca) Chris Eliasmith (celiasmith@uwaterloo.ca) Centre for Theoretical

More information

Pavlovian, Skinner and other behaviourists contribution to AI

Pavlovian, Skinner and other behaviourists contribution to AI Pavlovian, Skinner and other behaviourists contribution to AI Witold KOSIŃSKI Dominika ZACZEK-CHRZANOWSKA Polish Japanese Institute of Information Technology, Research Center Polsko Japońska Wyższa Szko

More information

Learning in neural networks

Learning in neural networks http://ccnl.psy.unipd.it Learning in neural networks Marco Zorzi University of Padova M. Zorzi - European Diploma in Cognitive and Brain Sciences, Cognitive modeling", HWK 19-24/3/2006 1 Connectionist

More information

Evolution of Plastic Sensory-motor Coupling and Dynamic Categorization

Evolution of Plastic Sensory-motor Coupling and Dynamic Categorization Evolution of Plastic Sensory-motor Coupling and Dynamic Categorization Gentaro Morimoto and Takashi Ikegami Graduate School of Arts and Sciences The University of Tokyo 3-8-1 Komaba, Tokyo 153-8902, Japan

More information

ASSOCIATIVE MEMORY AND HIPPOCAMPAL PLACE CELLS

ASSOCIATIVE MEMORY AND HIPPOCAMPAL PLACE CELLS International Journal of Neural Systems, Vol. 6 (Supp. 1995) 81-86 Proceedings of the Neural Networks: From Biology to High Energy Physics @ World Scientific Publishing Company ASSOCIATIVE MEMORY AND HIPPOCAMPAL

More information

Neuromorphic computing

Neuromorphic computing Neuromorphic computing Robotics M.Sc. programme in Computer Science lorenzo.vannucci@santannapisa.it April 19th, 2018 Outline 1. Introduction 2. Fundamentals of neuroscience 3. Simulating the brain 4.

More information

Evolving Internal Memory for T-Maze Tasks in Noisy Environments

Evolving Internal Memory for T-Maze Tasks in Noisy Environments Evolving Internal Memory for T-Maze Tasks in Noisy Environments DaeEun Kim Cognitive Robotics Max Planck Institute for Human Cognitive and Brain Sciences Amalienstr. 33, Munich, D-80799 Germany daeeun@cbs.mpg.de

More information

Hierarchical dynamical models of motor function

Hierarchical dynamical models of motor function ARTICLE IN PRESS Neurocomputing 70 (7) 975 990 www.elsevier.com/locate/neucom Hierarchical dynamical models of motor function S.M. Stringer, E.T. Rolls Department of Experimental Psychology, Centre for

More information

A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl

A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl A Model of Visually Guided Plasticity of the Auditory Spatial Map in the Barn Owl Andrea Haessly andrea@cs.utexas.edu Joseph Sirosh sirosh@cs.utexas.edu Risto Miikkulainen risto@cs.utexas.edu Abstract

More information

(c) KSIS Politechnika Poznanska

(c) KSIS Politechnika Poznanska Fundamentals of Autonomous Systems Control architectures in robotics Dariusz Pazderski 1 1 Katedra Sterowania i In»ynierii Systemów, Politechnika Pozna«ska 9th March 2016 Introduction Robotic paradigms

More information

Thalamocortical Feedback and Coupled Oscillators

Thalamocortical Feedback and Coupled Oscillators Thalamocortical Feedback and Coupled Oscillators Balaji Sriram March 23, 2009 Abstract Feedback systems are ubiquitous in neural systems and are a subject of intense theoretical and experimental analysis.

More information

Gender Based Emotion Recognition using Speech Signals: A Review

Gender Based Emotion Recognition using Speech Signals: A Review 50 Gender Based Emotion Recognition using Speech Signals: A Review Parvinder Kaur 1, Mandeep Kaur 2 1 Department of Electronics and Communication Engineering, Punjabi University, Patiala, India 2 Department

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 7: Network models Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single neuron

More information

A model to explain the emergence of reward expectancy neurons using reinforcement learning and neural network $

A model to explain the emergence of reward expectancy neurons using reinforcement learning and neural network $ Neurocomputing 69 (26) 1327 1331 www.elsevier.com/locate/neucom A model to explain the emergence of reward expectancy neurons using reinforcement learning and neural network $ Shinya Ishii a,1, Munetaka

More information

An Artificial Synaptic Plasticity Mechanism for Classical Conditioning with Neural Networks

An Artificial Synaptic Plasticity Mechanism for Classical Conditioning with Neural Networks An Artificial Synaptic Plasticity Mechanism for Classical Conditioning with Neural Networks Caroline Rizzi Raymundo (B) and Colin Graeme Johnson School of Computing, University of Kent, Canterbury, Kent

More information

Figure 1: The rectilinear environment. Figure 3: The angle environment. degrees, and each turn is bounded by a straight passage. The dead end environm

Figure 1: The rectilinear environment. Figure 3: The angle environment. degrees, and each turn is bounded by a straight passage. The dead end environm Nature versus Nurture in Evolutionary Computation: Balancing the Roles of the Training Environment and the Fitness Function in Producing Behavior Jordan Wales, Jesse Wells and Lisa Meeden jwales1@swarthmore.edu,

More information

FUZZY LOGIC AND FUZZY SYSTEMS: RECENT DEVELOPMENTS AND FUTURE DIWCTIONS

FUZZY LOGIC AND FUZZY SYSTEMS: RECENT DEVELOPMENTS AND FUTURE DIWCTIONS FUZZY LOGIC AND FUZZY SYSTEMS: RECENT DEVELOPMENTS AND FUTURE DIWCTIONS Madan M. Gupta Intelligent Systems Research Laboratory College of Engineering University of Saskatchewan Saskatoon, Sask. Canada,

More information

Timing and the cerebellum (and the VOR) Neurophysiology of systems 2010

Timing and the cerebellum (and the VOR) Neurophysiology of systems 2010 Timing and the cerebellum (and the VOR) Neurophysiology of systems 2010 Asymmetry in learning in the reverse direction Full recovery from UP using DOWN: initial return to naïve values within 10 minutes,

More information

EEL-5840 Elements of {Artificial} Machine Intelligence

EEL-5840 Elements of {Artificial} Machine Intelligence Menu Introduction Syllabus Grading: Last 2 Yrs Class Average 3.55; {3.7 Fall 2012 w/24 students & 3.45 Fall 2013} General Comments Copyright Dr. A. Antonio Arroyo Page 2 vs. Artificial Intelligence? DEF:

More information

A brief comparison between the Subsumption Architecture and Motor Schema Theory in light of Autonomous Exploration by Behavior

A brief comparison between the Subsumption Architecture and Motor Schema Theory in light of Autonomous Exploration by Behavior A brief comparison between the Subsumption Architecture and Motor Schema Theory in light of Autonomous Exploration by Behavior Based Robots Dip N. Ray 1*, S. Mukhopadhyay 2, and S. Majumder 1 1 Surface

More information

CS 771 Artificial Intelligence. Intelligent Agents

CS 771 Artificial Intelligence. Intelligent Agents CS 771 Artificial Intelligence Intelligent Agents What is AI? Views of AI fall into four categories 1. Thinking humanly 2. Acting humanly 3. Thinking rationally 4. Acting rationally Acting/Thinking Humanly/Rationally

More information

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press.

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press. Digital Signal Processing and the Brain Is the brain a digital signal processor? Digital vs continuous signals Digital signals involve streams of binary encoded numbers The brain uses digital, all or none,

More information

Oxford Foundation for Theoretical Neuroscience and Artificial Intelligence

Oxford Foundation for Theoretical Neuroscience and Artificial Intelligence Oxford Foundation for Theoretical Neuroscience and Artificial Intelligence Oxford Foundation for Theoretical Neuroscience and Artificial Intelligence For over two millennia, philosophers and scientists

More information

Biceps Activity EMG Pattern Recognition Using Neural Networks

Biceps Activity EMG Pattern Recognition Using Neural Networks Biceps Activity EMG Pattern Recognition Using eural etworks K. Sundaraj University Malaysia Perlis (UniMAP) School of Mechatronic Engineering 0600 Jejawi - Perlis MALAYSIA kenneth@unimap.edu.my Abstract:

More information

Lecture 1: Neurons. Lecture 2: Coding with spikes. To gain a basic understanding of spike based neural codes

Lecture 1: Neurons. Lecture 2: Coding with spikes. To gain a basic understanding of spike based neural codes Lecture : Neurons Lecture 2: Coding with spikes Learning objectives: To gain a basic understanding of spike based neural codes McCulloch Pitts Neuron I w in Σ out Θ Examples: I = ; θ =.5; w=. - in = *.

More information

International Journal of Advanced Computer Technology (IJACT)

International Journal of Advanced Computer Technology (IJACT) Abstract An Introduction to Third Generation of Neural Networks for Edge Detection Being inspired by the structure and behavior of the human visual system the spiking neural networks for edge detection

More information

Combining associative learning and nonassociative learning to achieve robust reactive navigation in mobile robots

Combining associative learning and nonassociative learning to achieve robust reactive navigation in mobile robots Combining associative learning and nonassociative learning to achieve robust reactive navigation in mobile robots Carolina Chang cchang@ldc.usb.ve Grupo de Inteligencia Artificial, Departamento de Computación

More information

Grounding Ontologies in the External World

Grounding Ontologies in the External World Grounding Ontologies in the External World Antonio CHELLA University of Palermo and ICAR-CNR, Palermo antonio.chella@unipa.it Abstract. The paper discusses a case study of grounding an ontology in the

More information

CHAPTER I From Biological to Artificial Neuron Model

CHAPTER I From Biological to Artificial Neuron Model CHAPTER I From Biological to Artificial Neuron Model EE543 - ANN - CHAPTER 1 1 What you see in the picture? EE543 - ANN - CHAPTER 1 2 Is there any conventional computer at present with the capability of

More information

Neural Coding. Computing and the Brain. How Is Information Coded in Networks of Spiking Neurons?

Neural Coding. Computing and the Brain. How Is Information Coded in Networks of Spiking Neurons? Neural Coding Computing and the Brain How Is Information Coded in Networks of Spiking Neurons? Coding in spike (AP) sequences from individual neurons Coding in activity of a population of neurons Spring

More information

Recognition of English Characters Using Spiking Neural Networks

Recognition of English Characters Using Spiking Neural Networks Recognition of English Characters Using Spiking Neural Networks Amjad J. Humaidi #1, Thaer M. Kadhim *2 Control and System Engineering, University of Technology, Iraq, Baghdad 1 601116@uotechnology.edu.iq

More information

arxiv: v2 [cs.lg] 1 Jun 2018

arxiv: v2 [cs.lg] 1 Jun 2018 Shagun Sodhani 1 * Vardaan Pahuja 1 * arxiv:1805.11016v2 [cs.lg] 1 Jun 2018 Abstract Self-play (Sukhbaatar et al., 2017) is an unsupervised training procedure which enables the reinforcement learning agents

More information

Application of Artificial Neural Networks in Classification of Autism Diagnosis Based on Gene Expression Signatures

Application of Artificial Neural Networks in Classification of Autism Diagnosis Based on Gene Expression Signatures Application of Artificial Neural Networks in Classification of Autism Diagnosis Based on Gene Expression Signatures 1 2 3 4 5 Kathleen T Quach Department of Neuroscience University of California, San Diego

More information

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence Brain Theory and Artificial Intelligence Lecture 5:. Reading Assignments: None 1 Projection 2 Projection 3 Convention: Visual Angle Rather than reporting two numbers (size of object and distance to observer),

More information

ICS 606. Intelligent Autonomous Agents 1. Intelligent Autonomous Agents ICS 606 / EE 606 Fall Reactive Architectures

ICS 606. Intelligent Autonomous Agents 1. Intelligent Autonomous Agents ICS 606 / EE 606 Fall Reactive Architectures Intelligent Autonomous Agents ICS 606 / EE 606 Fall 2011 Nancy E. Reed nreed@hawaii.edu 1 Lecture #5 Reactive and Hybrid Agents Reactive Architectures Brooks and behaviors The subsumption architecture

More information

Computational Neuroscience. Instructor: Odelia Schwartz

Computational Neuroscience. Instructor: Odelia Schwartz Computational Neuroscience 2017 1 Instructor: Odelia Schwartz From the NIH web site: Committee report: Brain 2025: A Scientific Vision (from 2014) #1. Discovering diversity: Identify and provide experimental

More information

Lecture 5- Hybrid Agents 2015/2016

Lecture 5- Hybrid Agents 2015/2016 Lecture 5- Hybrid Agents 2015/2016 Ana Paiva * These slides are based on the book by Prof. M. Woodridge An Introduction to Multiagent Systems and the slides online compiled by Professor Jeffrey S. Rosenschein..

More information

Plasticity of Cerebral Cortex in Development

Plasticity of Cerebral Cortex in Development Plasticity of Cerebral Cortex in Development Jessica R. Newton and Mriganka Sur Department of Brain & Cognitive Sciences Picower Center for Learning & Memory Massachusetts Institute of Technology Cambridge,

More information

Artificial organisms that sleep

Artificial organisms that sleep Artificial organisms that sleep Marco Mirolli 1,2, Domenico Parisi 1 1 Institute of Cognitive Sciences and Technologies, National Research Council Viale Marx 15, 137, Rome, Italy parisi@ip.rm.cnr.it 2

More information

Cognitive Modelling Themes in Neural Computation. Tom Hartley

Cognitive Modelling Themes in Neural Computation. Tom Hartley Cognitive Modelling Themes in Neural Computation Tom Hartley t.hartley@psychology.york.ac.uk Typical Model Neuron x i w ij x j =f(σw ij x j ) w jk x k McCulloch & Pitts (1943), Rosenblatt (1957) Net input:

More information

REACTION TIME MEASUREMENT APPLIED TO MULTIMODAL HUMAN CONTROL MODELING

REACTION TIME MEASUREMENT APPLIED TO MULTIMODAL HUMAN CONTROL MODELING XIX IMEKO World Congress Fundamental and Applied Metrology September 6 11, 2009, Lisbon, Portugal REACTION TIME MEASUREMENT APPLIED TO MULTIMODAL HUMAN CONTROL MODELING Edwardo Arata Y. Murakami 1 1 Digital

More information

Morton-Style Factorial Coding of Color in Primary Visual Cortex

Morton-Style Factorial Coding of Color in Primary Visual Cortex Morton-Style Factorial Coding of Color in Primary Visual Cortex Javier R. Movellan Institute for Neural Computation University of California San Diego La Jolla, CA 92093-0515 movellan@inc.ucsd.edu Thomas

More information

Theta sequences are essential for internally generated hippocampal firing fields.

Theta sequences are essential for internally generated hippocampal firing fields. Theta sequences are essential for internally generated hippocampal firing fields. Yingxue Wang, Sandro Romani, Brian Lustig, Anthony Leonardo, Eva Pastalkova Supplementary Materials Supplementary Modeling

More information

ADAPTING COPYCAT TO CONTEXT-DEPENDENT VISUAL OBJECT RECOGNITION

ADAPTING COPYCAT TO CONTEXT-DEPENDENT VISUAL OBJECT RECOGNITION ADAPTING COPYCAT TO CONTEXT-DEPENDENT VISUAL OBJECT RECOGNITION SCOTT BOLLAND Department of Computer Science and Electrical Engineering The University of Queensland Brisbane, Queensland 4072 Australia

More information

International Journal of Scientific & Engineering Research Volume 4, Issue 2, February ISSN THINKING CIRCUIT

International Journal of Scientific & Engineering Research Volume 4, Issue 2, February ISSN THINKING CIRCUIT International Journal of Scientific & Engineering Research Volume 4, Issue 2, February-2013 1 THINKING CIRCUIT Mr.Mukesh Raju Bangar Intern at Govt. Dental College and hospital, Nagpur Email: Mukeshbangar008@gmail.com

More information

A method to define agricultural robot behaviours

A method to define agricultural robot behaviours A method to define agricultural robot behaviours Professor Simon Blackmore PhD candidate Spyros Fountas AgroTechnology The Royal Veterinary and Agricultural University Agrovej 10 DK-2630 Taastrup (Simon@unibots.com)

More information

Emotion Explained. Edmund T. Rolls

Emotion Explained. Edmund T. Rolls Emotion Explained Edmund T. Rolls Professor of Experimental Psychology, University of Oxford and Fellow and Tutor in Psychology, Corpus Christi College, Oxford OXPORD UNIVERSITY PRESS Contents 1 Introduction:

More information

MRI Image Processing Operations for Brain Tumor Detection

MRI Image Processing Operations for Brain Tumor Detection MRI Image Processing Operations for Brain Tumor Detection Prof. M.M. Bulhe 1, Shubhashini Pathak 2, Karan Parekh 3, Abhishek Jha 4 1Assistant Professor, Dept. of Electronics and Telecommunications Engineering,

More information

Fundamentals of Computational Neuroscience 2e

Fundamentals of Computational Neuroscience 2e Fundamentals of Computational Neuroscience 2e Thomas Trappenberg January 7, 2009 Chapter 1: Introduction What is Computational Neuroscience? What is Computational Neuroscience? Computational Neuroscience

More information

Applied Neuroscience. Conclusion of Science Honors Program Spring 2017

Applied Neuroscience. Conclusion of Science Honors Program Spring 2017 Applied Neuroscience Conclusion of Science Honors Program Spring 2017 Review Circle whichever is greater, A or B. If A = B, circle both: I. A. permeability of a neuronal membrane to Na + during the rise

More information

Application of distributed lighting control architecture in dementia-friendly smart homes

Application of distributed lighting control architecture in dementia-friendly smart homes Application of distributed lighting control architecture in dementia-friendly smart homes Atousa Zaeim School of CSE University of Salford Manchester United Kingdom Samia Nefti-Meziani School of CSE University

More information

T. R. Golub, D. K. Slonim & Others 1999

T. R. Golub, D. K. Slonim & Others 1999 T. R. Golub, D. K. Slonim & Others 1999 Big Picture in 1999 The Need for Cancer Classification Cancer classification very important for advances in cancer treatment. Cancers of Identical grade can have

More information

Increasing Motor Learning During Hand Rehabilitation Exercises Through the Use of Adaptive Games: A Pilot Study

Increasing Motor Learning During Hand Rehabilitation Exercises Through the Use of Adaptive Games: A Pilot Study Increasing Motor Learning During Hand Rehabilitation Exercises Through the Use of Adaptive Games: A Pilot Study Brittney A English Georgia Institute of Technology 85 5 th St. NW, Atlanta, GA 30308 brittney.english@gatech.edu

More information

Direct memory access using two cues: Finding the intersection of sets in a connectionist model

Direct memory access using two cues: Finding the intersection of sets in a connectionist model Direct memory access using two cues: Finding the intersection of sets in a connectionist model Janet Wiles, Michael S. Humphreys, John D. Bain and Simon Dennis Departments of Psychology and Computer Science

More information

University of Cambridge Engineering Part IB Information Engineering Elective

University of Cambridge Engineering Part IB Information Engineering Elective University of Cambridge Engineering Part IB Information Engineering Elective Paper 8: Image Searching and Modelling Using Machine Learning Handout 1: Introduction to Artificial Neural Networks Roberto

More information

Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells

Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells INSTITUTE OF PHYSICS PUBLISHING Network: Comput. Neural Syst. 13 (2002) 217 242 NETWORK: COMPUTATION IN NEURAL SYSTEMS PII: S0954-898X(02)36091-3 Self-organizing continuous attractor networks and path

More information

Intelligent Agents. Soleymani. Artificial Intelligence: A Modern Approach, Chapter 2

Intelligent Agents. Soleymani. Artificial Intelligence: A Modern Approach, Chapter 2 Intelligent Agents CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2016 Soleymani Artificial Intelligence: A Modern Approach, Chapter 2 Outline Agents and environments

More information

arxiv:adap-org/ v1 28 Jul 1997

arxiv:adap-org/ v1 28 Jul 1997 Learning from mistakes Dante R. Chialvo* and Per Bak* Niels Bohr Institute, Blegdamsvej 17, Copenhagen, Denmark. Division of Neural Systems, Memory and Aging, University of Arizona, Tucson, AZ 85724, USA.

More information