Realization of Visual Representation Task on a Humanoid Robot

Istanbul Technical University, Robot Intelligence Course

Emeç Erçelik

May 31, 2016

1 Introduction

It is thought that the human brain uses a distributed form of abstraction to model the external environment, to extract information from it, and to affect its state by taking actions. This abstraction may be provided by the visual system, which filters the information arriving at the retina. A visual stimulus, which may be an object or a scene, is captured by an internal representation. This internal representation is thought to be generated in a self-organizing way and with the help of reward [1]. In this project, the representation of an object is generated by a spiking neural network (SNN) in a self-organizing way. The generated representations are then used to actuate the humanoid robot Darwin-OP [2] to accomplish a task, so the robot's perception of an object is built in a biologically plausible way with a computational model. The model presented in this report can be used to test behavioral tasks or models in a real environment. For example, the representations of objects may serve as the input to decision-making models [3] instead of feeding such models with discrete values.

2 Task and Environment

In the considered task, the robot is expected to put a perceived object into a box that has the same color as the object. However, since Darwin-OP does not have a grasping ability, the task is simplified: the robot is expected to perceive an object and move to the color card most similar to it.

Figure 2.1: The environment in which the task is realized.

Figure 2.2: Color cards that Darwin-OP moves towards.

The task is realized in the environment shown in Figure 2.1. The object is presented to the robot in the middle of the white board, and the three colors at the bottom of the board are the color cards (Figure 2.2) that Darwin-OP moves towards according to their similarity with the object. There are three different objects for testing the model, shown in Figure 2.3. Before the task is realized on Darwin-OP, the model is tested with the unnatural figures shown in Figure 2.4.

Figure 2.3: The objects that Darwin-OP perceives.

Figure 2.4: The unnatural figures generated to test the model before using it on Darwin-OP.

3 Process

To realize the task, two programs run together: the code that moves Darwin-OP is written in C++, and the code that simulates the computational model of representation is written in Python, using the NEST library [8].

Figure 3.1: The process to realize the task.

The process is illustrated in Figure 3.1. At the beginning, the object and the color cards are presented to the robot (Figure 3.1-a). The robot takes a picture of the environment and sends it to the Python code (Figure 3.1-b). The Python part then splits the picture into images of the object and the color cards, and these images are turned into stimuli by calculating neuron currents from their pixel values (Figure 3.1-c). Afterwards, the computational model is simulated with these stimuli (Figure 3.1-d), and the representations of the presented images are generated from the activity of the neurons in the model (Figure 3.1-e). The representations are compared to each other to determine which color card is most similar to the object (Figure 3.1-f,g). The identity of the most similar color card is sent back to the C++ program, and the robot moves towards that color (Figure 3.1-h,i).
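The splitting of the captured picture into object and color-card images (Figure 3.1-c) amounts to cropping fixed regions out of the camera frame. A minimal sketch, assuming fractional crop regions; the region names and coordinates below are illustrative assumptions, not the project's actual board layout:

```python
import numpy as np

def split_scene(picture, regions):
    """Crop named sub-images out of the camera picture.

    picture : (height, width, 3) RGB array from the robot's camera.
    regions : dict mapping a name to fractional (top, bottom, left, right)
              bounds of the crop, each in [0, 1].
    """
    h, w = picture.shape[:2]
    crops = {}
    for name, (top, bottom, left, right) in regions.items():
        crops[name] = picture[int(top * h):int(bottom * h),
                              int(left * w):int(right * w)]
    return crops

# Hypothetical layout: object in the board's center, three cards at the bottom.
REGIONS = {
    "object":      (0.25, 0.60, 0.30, 0.70),
    "card_red":    (0.75, 0.95, 0.10, 0.35),
    "card_blue":   (0.75, 0.95, 0.40, 0.60),
    "card_yellow": (0.75, 0.95, 0.65, 0.90),
}
```

Each crop can then be passed independently to the current-encoding step.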

Figure 4.1: The computational model that generates representations.

4 Computational Model

The computational model consists of two kinds of spiking neuron groups, as seen in Figure 4.1: the input neurons on the left-hand side and the output neurons on the right-hand side. The neurons are modeled with the Izhikevich regular-spiking neuron model [4]. Each neuron is activated by an input current, and the input neurons are fed by currents calculated from the object presented to the robot; the stimulus of each input neuron is thus one of the currents shown at the left-most side of Figure 4.1. The activity of the output neurons generates a representation, such as the one at the right-most side of Figure 4.1. Each input neuron is connected to each output neuron with an excitatory connection whose weight is randomly assigned at the beginning. The output neurons are also connected all-to-all among themselves; at the beginning all of these connections are excitatory, with randomly selected weights. An excitatory connection is one through which the presynaptic neuron increases (excites) the activity of the postsynaptic neuron, while an inhibitory connection is one through which the presynaptic neuron reduces (inhibits) it. The connections between output neurons are updated with a learning rule similar to the Hebbian learning rule.

4.1 Learning Rule

The learning rule is inspired by the Hebbian learning rule [5] and the spike-timing-dependent plasticity (STDP) rule [6, 7]. The update process considers the timing of spikes: when two neurons spike within the same 50 ms window, the weight of the connection between them is increased by 0.01. If a neuron has no spiking activity in that window, the weights of the connections from the spiking neurons to that passive neuron are decreased by 0.04. This decrease can turn excitatory connections into inhibitory ones; thus, the neurons activated by a certain image come to inhibit the other neurons and thereby represent the image.
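The window update above can be sketched as follows. This is a minimal sketch, not the project's actual code: the matrix form and names are ours, and we assume spikes have already been binned into 50 ms windows, so the input is simply a boolean vector saying which neurons fired in the current window.

```python
import numpy as np

def update_weights(w, spiked, dw_plus=0.01, dw_minus=0.04):
    """Apply one 50 ms window of the learning rule to the output-layer
    weight matrix.

    w      : (n, n) weight matrix among the output neurons.
    spiked : boolean array of length n, True if the neuron fired
             in the current window.
    """
    active = np.where(spiked)[0]
    passive = np.where(~spiked)[0]
    # Neurons that spike in the same window strengthen their mutual weights.
    for i in active:
        for j in active:
            if i != j:
                w[i, j] += dw_plus
    # Connections from spiking neurons to passive neurons are weakened;
    # repeated weakening drives the weight negative, i.e. the excitatory
    # connection turns inhibitory.
    for i in active:
        for j in passive:
            w[i, j] -= dw_minus
    return w
```

Because the depression step (0.04) is four times the potentiation step (0.01), connections onto rarely co-active neurons turn inhibitory quickly, which matches the reported shift from mostly excitatory to mostly inhibitory connections.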

Figure 4.2: Calculation of the stimulus to stimulate the input neurons.

4.2 Method to generate stimulus

An image to be represented by the computational model is coded as currents that feed the input neurons. To calculate the currents, the image is separated into three images according to its R, G, and B color values (Figure 4.2-a). If we consider n input currents for n input neurons, each of the R, G, and B images is separated into n/3 columns, and the pixel values in each column are summed to determine the input current coming from that part of the image (Figure 4.2-b). The summed values are scaled by the number of columns and pixels. These values, ordered from the R image to the B image, constitute the vector of input currents seen in Figure 4.2-c.

4.3 Simulation of Neurons

The computational model is simulated using the NEST library [8]. Each input is presented to the model for 500 ms, and the neurons are stimulated by the currents calculated from the images, one image at a time and in order. The resulting activity of the input neurons is shown in Figure 4.3 for the images in Figure 2.4. The columns show the activity for each image; the magenta dots indicate spikes, the y-axis shows the neuron index, and the x-axis shows time. For example, neurons 12, 13, 14, and 15 are highly active for image 1. The output neurons are activated by the input neurons; their activity is shown in Figure 4.4, where the numbers at the bottom indicate the index of the presented image. Representations are generated from the activity of the output neurons: the number of spikes in each column is summed and normalized by the maximum spike count in one column, and the calculated value is colored to indicate the activation density. The representations of the images in Figure 2.4 can be seen in Figure 4.5.
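The current encoding of Section 4.2 and the representation construction of Section 4.3 can be sketched as below. The function and variable names are ours, and the exact scaling used in the project may differ; here each band's pixel sum is divided by its pixel count, following the text's "scaled by the number of columns and pixels".

```python
import numpy as np

def image_to_currents(img, n_inputs):
    """Encode an RGB image as n_inputs input currents (Section 4.2).

    img : (height, width, 3) array of pixel values.
    Each color plane is split into n_inputs/3 column bands; the pixel sum
    of each band, scaled by the band's size, gives one input current.
    """
    cols_per_plane = n_inputs // 3
    currents = []
    for plane in range(3):  # R, G, B, in order
        bands = np.array_split(img[:, :, plane], cols_per_plane, axis=1)
        for band in bands:
            currents.append(band.sum() / band.size)
    return np.array(currents)

def spikes_to_representation(spike_counts):
    """Turn per-neuron output spike counts into a representation
    (Section 4.3): normalize by the maximum count, giving values in [0, 1]."""
    m = spike_counts.max()
    return spike_counts / m if m > 0 else spike_counts
```

The resulting vector of normalized activations is what gets colored and displayed as the representation in Figure 4.5.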
Figure 4.3: Activity of the input neurons when the images in Figure 2.4 are presented in order.

Figure 4.4: Activity of the output neurons when the images in Figure 2.4 are presented in order.

Figure 4.5: The representations of the images in Figure 2.4 (in order).

Figure 4.6: The colored similarity matrix of the representations in Figure 4.5.

To realize the task, the similarity of the object to the color cards must be determined; that is why a similarity method is proposed. To determine the similarities, the activity of each output neuron for each image is calculated and normalized by the number of spikes. The calculated activations are then rounded to 1 or 0 according to a threshold. To compare two images, their binarized neuron activities are subtracted and the elements of the difference vector are summed. This value indicates how similar the two figures are: the lower the value, the more similar they are. The similarity matrix of the representations in Figure 4.5 is shown in Figure 4.6.

5 Results

The images and the object are presented to the model and their representations are obtained (Figure 5.1). As seen in Figure 5.1, the object is a yellow tea bag (1) and the other images are the color cards (2, 3, 4); the representation of each image is shown under it. According to these representations, the object differs from the red card by 5 similarity units, from the blue card by 8 units, and from the yellow card by 3 units. The yellow tea bag is therefore represented most similarly to the yellow card, and the robot moves towards the yellow card.

Figure 5.1: The presented images and their corresponding representations. The calculated differences between the object and the color cards are shown at the bottom.

To reach these representations, the connection types among the output neurons change during learning. At the beginning, all of the connections are excitatory; at the end of the simulation, very few of them remain excitatory and most have become inhibitory (Figure 5.2).

Figure 5.2: Change of connection types through iterations. Blue: number of excitatory connections; green: number of inhibitory connections.

6 Conclusion

In this project, a simple perception model is created for the humanoid robot Darwin-OP to provide its perception in a real environment. This is achieved with an SNN whose activity creates the representations. This setup can be a starting point for psychological experiments that aim at behavioral results. The model could be improved by incorporating a reward signal into the learning process. In addition, the robot should be able to adjust itself to proper lighting conditions, and its ability to adjust its own position to take better pictures of the environment could be studied. Also, a gripper could be added to the robot to realize more realistic tasks.

References

[1] Brohan, K., Gurney, K., & Dudek, P. (2010). Using reinforcement learning to guide the development of self-organised feature maps for visual orienting. In Artificial Neural Networks - ICANN 2010 (pp. 180-189). Springer Berlin Heidelberg.

[2] Ha, I., Tamura, Y., Asama, H., Han, J., & Hong, D. W. (2011, September). Development of open humanoid platform DARwIn-OP. In Proceedings of the SICE Annual Conference 2011 (pp. 2178-2181). IEEE.

[3] Ercelik, E., & Sengor, N. S. (2015, July). A neurocomputational model implemented on humanoid robot for learning action selection. In 2015 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.

[4] Izhikevich, E. M. (2003). Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6), 1569-1572.

[5] Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley & Sons.

[6] Markram, H., & Sakmann, B. (1995). Action potentials propagating back into dendrites trigger changes in efficacy. Soc. Neurosci. Abstr., 21.

[7] Bi, G. Q., & Poo, M. M. (1998). Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience, 18, 10464-10472.

[8] Gewaltig, M.-O., & Diesmann, M. (2007). NEST (Neural Simulation Tool). Scholarpedia, 2(4), 1430.