Artificial Neural Networks (Ref: Negnevitsky, M. Artificial Intelligence, Chapter 6)

BPNN in Practice, Week 3 Lecture Notes

The Hopfield Network

The Hopfield network was designed by analogy with the brain's memory, which works by association. For example, we can recognise a familiar face even in an unfamiliar environment within 100-200 ms. We can also recall a complete sensory experience, including sounds and scenes, when we hear only a few bars of music. The brain routinely associates one thing with another. Multilayer neural networks trained with the backpropagation algorithm are used for pattern recognition problems. To emulate the human memory's associative characteristics, we use a recurrent neural network.

A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network.

Single-layer n-neuron Hopfield network (figure)

The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element: a neuron outputs +1 when its net input is positive and -1 when it is negative (when the net input is exactly zero, the output stays unchanged). The current state of the Hopfield network is determined by the current outputs of all neurons, y_1, y_2, ..., y_n.
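To make the state update concrete, here is a minimal NumPy sketch (the function names, and the convention of keeping the previous output when the net input is exactly zero, are illustrative assumptions rather than part of the lecture notes):

```python
import numpy as np

def sign_activation(net, previous):
    # Hard-limiting sign function: +1 for positive net input, -1 for negative,
    # and keep the previous output when the net input is exactly zero.
    return np.where(net > 0, 1, np.where(net < 0, -1, previous))

def hopfield_recall(W, pattern, max_iters=100):
    """Repeatedly apply y <- sign(W y) until the state stops changing.
    W       : (n, n) weight matrix with zero diagonal
    pattern : (n,) bipolar (+1/-1) state used to initialise the network
    """
    y = pattern.copy()
    for _ in range(max_iters):
        y_new = sign_activation(W @ y, y)
        if np.array_equal(y_new, y):   # convergence: node outputs unchanged
            break
        y = y_new
    return y
```

This anticipates the convergence loop of the learning algorithm described next.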

Hopfield Learning Algorithm:

Step 1: Assign weights
o Assign random connection weights with values w_ij = +1 or w_ij = -1 for all i ≠ j, and w_ij = 0 for i = j.

Step 2: Initialisation
o Initialise the network with an unknown pattern:
O_i(0) = x_i,  0 ≤ i ≤ N-1
where O_i(k) is the output of node i at time t = k, and x_i, which is +1 or -1, is element i of the input pattern.

Step 3: Convergence
o Iterate until convergence is reached, using the relation:
O_j(k+1) = f( Σ_{i=0}^{N-1} w_ij O_i(k) ),  0 ≤ j ≤ N-1
where the function f(.) is a hard-limiting nonlinearity (the sign function).
o Repeat the process until the node outputs remain unchanged.

o The node outputs then represent the exemplar pattern that best matches the unknown input.

Step 4: Repeat for Next Pattern
o Go back to Step 2 and repeat for the next input pattern, and so on.

In this way the Hopfield network can act as an error-correcting network.

Types of Learning

Supervised Learning
o The input vectors and the corresponding output vectors are given.
o The ANN learns to approximate the function from the inputs to the outputs.

Reinforcement Learning
o The input vectors and a reinforcement signal are given.
o The reinforcement signal tells how good the network's output was.

Unsupervised Learning
o Only the input vectors are given.
o The ANN learns to form internal representations or codes for the input data that can then be used, e.g. for clustering.

From now on we will look at unsupervised learning in neural networks.

Hebbian Learning

In 1949, Donald Hebb proposed one of the key ideas in biological learning, commonly known as Hebb's Law. Hebb's Law states that if neuron i is near enough to excite neuron j and repeatedly participates in its activation, the synaptic connection between these two neurons is strengthened and neuron j becomes more sensitive to stimuli from neuron i.

Hebb's Law can be represented in the form of two rules:
o If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.
o If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.

Hebbian learning implies that weights can only increase, so they may grow without bound. To resolve this problem, we might impose a limit on the growth of synaptic weights. This can be implemented by introducing a non-linear forgetting factor into Hebb's Law:

Δw_ij(p) = α y_j(p) x_i(p) - φ y_j(p) w_ij(p)

where φ is the forgetting factor. The forgetting factor usually falls in the interval between 0 and 1, typically between 0.01 and 0.1, to allow only a little forgetting while limiting the weight growth.
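As a rough sketch of this rule in code (the variable names and the default values of α and φ are my own illustrative choices, not from the notes):

```python
import numpy as np

def hebbian_update(W, x, y, alpha=0.1, phi=0.05):
    """One Hebbian weight update with a forgetting term:
    delta_w_ij = alpha * y_j * x_i - phi * y_j * w_ij
    W : (n, m) weights from n inputs to m outputs, x : (n,) input, y : (m,) output.
    """
    delta_W = alpha * np.outer(x, y) - phi * W * y   # y broadcasts over the output axis
    return W + delta_W
```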

Hebbian Learning Algorithm:

Step 1: Initialisation
o Set initial synaptic weights and thresholds to small random values, say in an interval [0, 1].

Step 2: Activation
o Compute the neuron output at iteration p:
y_j(p) = Σ_{i=1}^{n} x_i(p) w_ij(p) - θ_j
where n is the number of neuron inputs, and θ_j is the threshold value of neuron j.

Step 3: Learning
o Update the weights in the network:
w_ij(p+1) = w_ij(p) + Δw_ij(p)
where Δw_ij(p) is the weight correction at iteration p.

o The weight correction is determined by the generalised activity product rule:
Δw_ij(p) = φ y_j(p) [ λ x_i(p) - w_ij(p) ]
where λ = α/φ.

Step 4: Iteration
o Increase iteration p by one and go back to Step 2.

Competitive Learning

In competitive learning, neurons compete among themselves to be activated. While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time. The output neuron that wins the competition is called the winner-takes-all neuron.

In the late 1980s, Kohonen introduced a special class of ANN called self-organising maps. These maps are based on competitive learning.

Self-organising Map

Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, etc.), each associated with different sensory inputs. Each sensory input is mapped into a corresponding area of the cerebral cortex: the cortex is a self-organising computational map in the human brain.

Feature-mapping Kohonen model (figure)

The Kohonen Network

The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output, or Kohonen, layer.

Training in the Kohonen network begins with a fairly large neighbourhood around the winner. Then, as training proceeds, the neighbourhood size gradually decreases.

The lateral connections are used to create competition between neurons. The neuron with the largest activation level among all neurons in the output layer becomes the winner. The winning neuron is the only neuron that produces an output signal; the activity of all other neurons is suppressed in the competition.

The lateral feedback connections produce excitatory or inhibitory effects, depending on the distance from the winning neuron. This is achieved by the use of a Mexican hat function, which describes the synaptic weights between neurons in the Kohonen layer.

In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbourhood are allowed to learn.
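One common way to model such a profile is a difference of Gaussians; the sketch below is an illustration under that assumption (the width and depth parameters are illustrative, not from the notes):

```python
import numpy as np

def mexican_hat(distance, sigma_excite=1.0, sigma_inhibit=3.0, inhibit_strength=0.5):
    """Lateral interaction strength versus distance from the winning neuron:
    short-range excitation minus weaker, broader long-range inhibition."""
    excitation = np.exp(-distance**2 / (2 * sigma_excite**2))
    inhibition = inhibit_strength * np.exp(-distance**2 / (2 * sigma_inhibit**2))
    return excitation - inhibition
```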

Competitive Learning Algorithm

Step 1: Initialisation
o Set initial synaptic weights to small random values, say in an interval [0, 1], and assign a small positive value to the learning rate parameter α.

Step 2: Activation and Similarity Matching
o Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best matching) neuron j_X at iteration p, using the minimum-distance Euclidean criterion:
j_X(p) = min_j || X - W_j(p) ||,  j = 1, 2, ..., m
where || X - W_j || = [ Σ_{i=1}^{n} (x_i - w_ij)^2 ]^(1/2), n is the number of neurons in the input layer, and m is the number of neurons in the Kohonen layer.

Step 3: Learning
o Update the synaptic weights:
w_ij(p+1) = w_ij(p) + Δw_ij(p)
where Δw_ij(p) is the weight correction at iteration p.
o The weight correction is determined by the competitive learning rule:
Δw_ij(p) = α [ x_i - w_ij(p) ]  if neuron j lies inside the neighbourhood Λ_j(p)
Δw_ij(p) = 0                    otherwise
where α is the learning rate parameter, and Λ_j(p) is the neighbourhood function centred around the winner-takes-all neuron j_X at iteration p.

Step 4: Iteration
o Increase iteration p by one, go back to Step 2 and continue until the minimum-distance Euclidean criterion is satisfied, or no noticeable changes occur in the feature map.
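Putting the steps together, here is a compact sketch of the algorithm for a one-dimensional Kohonen layer (the grid size, learning rate, neighbourhood radius and the hard-cutoff neighbourhood are illustrative assumptions):

```python
import numpy as np

def train_kohonen(data, m=10, alpha=0.1, radius=2, epochs=100, seed=0):
    """data : (num_samples, n) input vectors; returns the (m, n) trained weight matrix."""
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    W = rng.uniform(0.0, 1.0, (m, n))        # Step 1: small random weights in [0, 1]
    positions = np.arange(m)                 # neuron coordinates on the 1-D grid
    for _ in range(epochs):
        for x in data:
            # Step 2: winner-takes-all neuron by the minimum-distance Euclidean criterion
            j_x = np.argmin(np.linalg.norm(W - x, axis=1))
            # Neighbourhood centred on the winner (hard cutoff instead of a Mexican hat)
            hood = np.abs(positions - j_x) <= radius
            # Step 3: competitive learning rule, applied only inside the neighbourhood
            W[hood] += alpha * (x - W[hood])
    return W
```

In practice the learning rate and the neighbourhood radius would also be decreased gradually as training proceeds, as described above.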