
The storage and recall of memories in the hippocampo-cortical system

Supplementary material

Edmund T. Rolls
Oxford Centre for Computational Neuroscience, Oxford, England, and University of Warwick, Department of Computer Science, Coventry, England.
Correspondence: Edmund.Rolls@oxcns.org; http://www.oxcns.org

Supplementary material for Rolls, E. T. (2018) The storage and recall of memories in the hippocampo-cortical system. Cell and Tissue Research 373: 577-604.

Box 1. Summary of the architecture and operation of autoassociation or attractor networks

The prototypical architecture of an autoassociation memory is shown in Fig. S1. The external input e_i is applied to each neuron i by unmodifiable synapses, and produces firing y_i of each neuron. Each neuron is connected by a recurrent collateral synaptic connection to the other neurons in the network, via associatively modifiable connection weights w_ij. This architecture effectively enables the vector of output firing rates to be associated during learning with itself. Later, during recall, presentation of part of the external input forces some of the output neurons to fire, and through the recurrent collateral axons and the modified synapses the other neurons can then be brought into activity. This process can be repeated a number of times, and recall of the complete pattern may be perfect. In effect, a pattern can be recalled or recognized because of the associations formed between its parts. This of course requires distributed representations.

Fig. S1 The architecture of an autoassociative neural network. The recurrent collateral synaptic weights are excitatory. Inhibitory neurons (not illustrated) must be present to control the firing rates.

Training for each desired pattern occurs in a single trial. The firing of every output neuron i is forced to a value y_i determined by the external input e_i. Then a Hebb-like associative local learning rule is applied to the recurrent synapses in the network:

δw_ij = k · y_i · y_j    (1)

where k is a learning rate constant, y_i is the activation of the dendrite (the postsynaptic term), y_j is the presynaptic firing rate, and δw_ij is the change in the synaptic weight from axon j to neuron i. This learning algorithm is fast ('one-shot'), in that a single presentation of an input pattern is all that is needed to store that pattern. Note that in a fully connected network this rule produces a symmetric matrix of synaptic weights: the strength of the connection from neuron 1 to neuron 2 is the same as that from neuron 2 to neuron 1 (both implemented via recurrent collateral synapses).
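As a concrete illustration, here is a minimal NumPy sketch of this one-shot Hebbian storage (Eq. 1). The network size, pattern sparseness, and learning rate are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
k = 1.0  # learning rate constant (illustrative)

# Sparse binary firing-rate vectors y standing in for the patterns to be stored.
patterns = (rng.random((5, n_neurons)) < 0.2).astype(float)

# One-shot Hebbian storage (Eq. 1): each pattern is presented once, and
# delta_w[i, j] = k * y[i] * y[j] is added to the recurrent weights.
W = np.zeros((n_neurons, n_neurons))
for y in patterns:
    W += k * np.outer(y, y)
np.fill_diagonal(W, 0.0)  # no self-connections via the recurrent collaterals

# As noted above, the resulting weight matrix is symmetric.
assert np.allclose(W, W.T)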

During recall, an external input e is applied and produces output firing, operating through a nonlinear activation function. The firing is fed back by the recurrent collateral axons shown in Fig. S1 to produce activation of each output neuron through the modified synapses on that neuron. The activation h_i produced by the recurrent collateral effect on the i'th neuron is, in the standard way, the sum of the activations produced in proportion to the firing rate of each axon operating through each modified synapse w_ij, that is,

h_i = Σ_j y_j w_ij    (2)

where the sum is over the input axons to each neuron, indexed by j. The output firing y_i is a function of the activation produced by the recurrent collateral effect (internal recall) and by the external input e_i:

y_i = f(h_i + e_i)    (3)

The activation function f should be non-linear, and may be for example binary threshold, linear threshold, or sigmoid. A purely linear system would not produce any categorization of the input patterns it receives, and therefore could effect nothing more than a trivial (i.e. linear) form of completion and generalization.

During recall, a part of one of the originally learned stimuli can be presented as an external input. The resulting firing is allowed to iterate repeatedly round the recurrent collateral system, on each iteration recalling more and more of the originally learned pattern, so that completion occurs. If a pattern presented during recall is similar to one of the previously learned patterns, the network settles into a stable recall state in which the firing corresponds to that of the previously learned pattern; the network thus generalizes in its recall to the most similar previously learned pattern. Important results (cf. Section 3.3.2) characterize how many patterns can be stored in a network in this way without interference during recall. Details are provided in Appendix B of Rolls (2008, 2016).
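Continuing the sketch above, recall by iterated pattern completion (Eqs. 2 and 3) might look as follows. The binary-threshold activation function, the fixed threshold, and the number of iterations are illustrative assumptions; in the network described above, the threshold is set by the inhibitory neurons.

def recall(W, cue, theta=3.0, n_iter=10):
    """Iterate the recurrent collateral effect from a partial external cue.

    theta is an illustrative firing threshold standing in for the
    inhibition-controlled threshold of the activation function f.
    """
    y = cue.copy()
    for _ in range(n_iter):
        h = W @ y                              # Eq. 2: h_i = sum_j y_j w_ij
        y = (h + cue > theta).astype(float)    # Eq. 3 with a binary-threshold f
    return y

# Cue with only the first half of a stored pattern present.
cue = patterns[0].copy()
cue[n_neurons // 2:] = 0.0
completed = recall(W, cue)
print("fraction of stored pattern recalled:",
      completed @ patterns[0] / patterns[0].sum())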

Box 2. Summary of the architecture and operation of pattern association networks

Fig. S2 The architecture of a pattern association network.

An unconditioned stimulus (US) has activity or firing rate e_i for the i'th neuron, and produces firing y_i of the i'th neuron, which is an unconditioned response (UR). The conditioned stimuli (CS) have activity or firing rate x_j for the j'th axon. During learning, a CS is presented at the same time as a US, and the synaptic weights are modified by an associative synaptic learning rule

δw_ij = k · y_i · x_j

where k is a learning rate constant, and δw_ij is the change in the synaptic weight from axon j to neuron i.

During recall, only the CS is presented, and the activation h_i is calculated as a dot product between the input stimulus and the synaptic weight vector on a neuron

h_i = Σ_j x_j w_ij

where the sum is over the input axons to each neuron, indexed by j. The output firing y_i is a function f of the activation, y_i = f(h_i). The activation function f should be non-linear, and may be for example binary threshold, linear threshold, or sigmoid. Inhibitory neurons (not shown in the figure) are part of the way in which the threshold of the activation function is set. The non-linearity in the activation function enables interference from other pattern pairs stored in the network to be minimized.

The pattern association network thus enables a CS to retrieve a response, the conditioned response (CR), that was present as a UR during the learning. An important property is that if a distorted CS is presented, generalization occurs to the effects produced by the closest CS during training. Details are provided in Appendix B of Rolls (2008, 2016).
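A minimal, self-contained NumPy sketch of this learning and recall, in the same style as the sketches above; the pattern dimensions, sparseness, and the recall threshold (which stands in for the inhibition-set threshold) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, k = 80, 40, 1.0

cs = (rng.random((3, n_in)) < 0.2).astype(float)   # conditioned stimuli x
us = (rng.random((3, n_out)) < 0.2).astype(float)  # unconditioned responses y

# Learning: delta_w[i, j] = k * y[i] * x[j] for each paired CS and US.
W = np.zeros((n_out, n_in))
for x, y in zip(cs, us):
    W += k * np.outer(y, x)

# Recall from the CS alone: h_i = sum_j x_j w_ij, then a binary-threshold f.
h = W @ cs[0]
cr = (h >= 0.5 * h.max()).astype(float)  # illustrative threshold
print("recalled CR equals the stored UR:", np.array_equal(cr, us[0]))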

Box 3. Summary of the architecture and operation of competitive networks

Fig. S3 The architecture of a competitive network.

During training, an input stimulus is presented to the synaptic matrix, which has random initial weights w_ij, and produces activation h_i of the i'th neuron, calculated as a dot product between the input stimulus and the synaptic weight vector on a neuron

h_i = Σ_j x_j w_ij

where the sum is over the input axons to each neuron, indexed by j. The output firing y_i is a function f of the activation, y_i = f(h_i). The activation function f should be non-linear, and may be for example binary threshold, linear threshold, or sigmoid. Inhibitory neurons (not shown in the figure) are part of the way in which the threshold of the activation function is set, and here the threshold reflects the firing of all the output neurons. This or other competitive mechanisms result in a typically sparse set of output neurons firing after the competition.

Next, an associative synaptic modification rule is applied while the presynaptic input and the postsynaptic output are both present,

δw_ij = k · y_i · x_j

where k is a learning rate constant, and δw_ij is the change in the synaptic weight from axon j to neuron i. The synaptic weight vector on each neuron is then normalized, to ensure that no one neuron dominates the classification. This procedure is applied to every input pattern in a randomly permuted sequence, and the whole process is repeated for a small number of training epochs. After the training, each dendritic weight vector points towards a cluster of patterns in the input space. The competitive network has thus used self-organization, with no teacher, to categorize the inputs: patterns close together in the input space activate the same set of output neurons, and different clusters of inputs activate different sets of output neurons (a minimal sketch of this training procedure is given after the references below). Details are provided in Appendix B of Rolls (2008, 2016).

References

Rolls ET (2008) Memory, Attention, and Decision-Making: A Unifying Computational Neuroscience Approach. Oxford University Press, Oxford

Rolls ET (2016) Cerebral Cortex: Principles of Operation. Oxford University Press, Oxford
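As noted in Box 3 above, here is a minimal NumPy sketch of competitive-network training. The hard winner-take-all competition, learning rate, pattern statistics, and number of epochs are illustrative assumptions standing in for the inhibition-mediated competition described in the box.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, k = 50, 10, 0.1

# Random initial weights; each neuron's weight vector is normalized to unit length.
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

x_patterns = (rng.random((20, n_in)) < 0.2).astype(float)

for epoch in range(5):                                      # a small number of epochs
    for x in x_patterns[rng.permutation(len(x_patterns))]:  # randomly permuted order
        h = W @ x                        # h_i = sum_j x_j w_ij
        y = np.zeros(n_out)
        y[np.argmax(h)] = 1.0            # hard winner-take-all stands in for inhibition
        W += k * np.outer(y, x)          # delta_w[i, j] = k * y[i] * x[j]
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # renormalize each weight vector

# Each trained weight vector now points towards a cluster of input patterns,
# so nearby inputs activate the same output neuron.
print("category of pattern 0:", np.argmax(W @ x_patterns[0]))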