Computing with Spikes in Recurrent Neural Networks
Dezhe Jin, Department of Physics, The Pennsylvania State University
Presented at ICS Seminar Course, Penn State, Jan 9, 2006
Outline
- Introduction: neurons, neural networks, and neural computations with dynamical attractors
- Spike sequence attractors: exist for a large class of neural networks; fast convergence; rich structures
- Summary
Introduction
Brain & local neural networks
- Human brain: ~10^11 neurons
- Hierarchical, modular, interacting structures: cortical areas
- Local neural networks
Neuron: membrane potential & spikes
A neuron is like a leaky capacitor charged by ionic batteries.
[Figure, left: cell anatomy (dendrite, cell body, axon) and the membrane circuit: leak conductance, voltage-dependent conductances, an excitatory conductance driven by excitatory neurons, and an inhibitory conductance driven by inhibitory neurons; membrane potential V ≈ −70 mV relative to the outside at 0 mV.]
[Figure, right: input and output traces; when V rises from −70 mV to the threshold, the neuron emits a spike (width ~1 msec) and resets; spikes are transmitted to other neurons.]
Local networks: lateral excitation & global inhibition
Composition:
- Excitatory neurons: ~80% of the population; send output to other networks
- Inhibitory neurons (inter-neurons): ~20%; no output to other networks
Coupling between the excitatory neurons:
- Lateral excitation
- Global inhibition via the inhibitory neurons
[Figure: schematic with inputs from lower-area neurons and outputs to other local networks.]
Computing with dynamical attractors
[Figure: spikes from a local neural network of four neurons; depending on the input, the membrane potentials settle over time into different dynamical attractors, e.g., a "tiger" attractor or a "cow" attractor.]
Characterizing the attractors
Encoding capability:
- Is the convergence fast?
- Is the number of attractors large enough to encode a large number of external input patterns?
Spatial or spatiotemporal?
- Spatial: only spiking rates matter (Hopfield, PNAS, 1984).
Spatiotemporal patterns of spikes
Neurons of the local networks in the locust antennal lobe responding to odor presentation.
[Figure: membrane potential traces of Neuron 1 and Neuron 2 in two trials during odor presentation; scale bars: 200 msec, 40 mV.]
Stopfer & Laurent (Nature, 1999)
Spatiotemporal spike attractors
For a large class of neural networks, spatiotemporal spike patterns with precise timings are the dynamical attractors:
- Fast convergence, with only a few transient spikes
- Rich spatiotemporal structures
Simplifications:
- Simple models of the neurons and of the coupling between them
- No inter-neurons: direct excitation and inhibition between the neurons
- No noise, no spike transmission delay, ...
Roadmap:
- A special case: winner-take-all computation
- The general case
Winner-take-all computation
The structure of the network
[Figure: network schematic with excitatory connections (self-excitation), inhibitory connections (global inhibition), and external inputs.]
- No inhibitory inter-neurons
- Identical neurons, excitatory connection strengths, and inhibitory connection strengths
- External inputs constant in time but varying spatially
Neuron model: leaky integrate-and-fire neuron
Leaky integration of the membrane potential:
$$\tau \frac{dV}{dt} = E_R - V + I,$$
with leak time constant τ, resting membrane potential E_R, and external input I.
Fire (spike): if the membrane potential reaches the threshold V_th (< 0 mV), the neuron sends out a spike (the spike itself is not modeled) and the membrane potential is reset to V_r < V_th.
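Below is a minimal Euler-integration sketch of this neuron model. Apart from τ = 40 msec (quoted later in the talk), the parameter values are illustrative assumptions, not values from the talk.

```python
# Leaky integrate-and-fire neuron: tau * dV/dt = E_R - V + I,
# with a spike and reset when V reaches the threshold V_th.
tau, E_R = 40.0, -70.0     # leak time constant (msec), resting potential (mV)
V_th, V_r = -54.0, -75.0   # threshold and reset (mV), assumed values with V_r < V_th
dt = 0.1                   # Euler time step (msec)

def simulate_lif(I, T=500.0):
    """Integrate the LIF equation with a constant external input I (in mV units)."""
    V, spike_times = E_R, []
    for step in range(int(T / dt)):
        V += dt / tau * (E_R - V + I)   # leaky integration between spikes
        if V >= V_th:                   # threshold crossing: emit a spike...
            spike_times.append(step * dt)
            V = V_r                     # ...and reset (the spike shape is not modeled)
    return spike_times

print(simulate_lif(I=20.0)[:3])  # periodic spiking for a suprathreshold input
```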
δ-pulse coupling
$$\tau \frac{dV}{dt} = E_R - V + I - G_E\,\delta(t - t'_{\rm spike})\,V + G_I\,\delta(t - t_{\rm spike})\,(E_I - V)$$
- G_E: strength of the excitatory connection
- G_I: strength of the inhibitory connection
- E_I: inhibitory reversal potential, −75 mV
- t'_spike, t_spike: times of spike reception
[Figure: each received spike produces a δ-pulse of conductance at the spike time.]
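Integrating the equation across a δ-pulse gives a discontinuous jump of V toward the corresponding reversal potential (0 mV for excitation, E_I for inhibition). A sketch of the two event rules follows, assuming G_E and G_I are normalized exactly as written in the equation above:

```python
import math

tau, E_I = 40.0, -75.0   # leak time constant (msec); inhibitory reversal potential (mV)

def receive_excitatory(V, G_E):
    # Across the pulse, tau*dV = -G_E*delta(t)*V: V jumps toward 0 mV.
    return V * math.exp(-G_E / tau)

def receive_inhibitory(V, G_I):
    # Across the pulse, tau*d(V - E_I) = -G_I*delta(t)*(V - E_I): V jumps toward E_I.
    return E_I + (V - E_I) * math.exp(-G_I / tau)

V = -70.0
print(receive_excitatory(V, G_E=4.0))   # depolarized: moved toward 0 mV
print(receive_inhibitory(V, G_I=40.0))  # hyperpolarized: pushed toward -75 mV
```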
The winner-take-all attractor
[Figure: external inputs and membrane potentials over time; after a transient with no spikes, the neuron with the maximum input spikes periodically while all others stay silent.]
The attractor: only the neuron with the maximum input spikes, and it spikes periodically.
Fast winner-take-all computation
Computation: maximum input selection, i.e., peak detection in the external inputs.
Fast convergence: the computation is done as soon as the neuron with the maximum input spikes once; very few transient spikes are needed. (simulation)
Jin & Seung (PRE, 2002)
Intuitive picture
Two-stage dynamics: between spikes and at a spike.
- Between spikes: a race to spike, with each membrane potential relaxing toward its own steady state.
- At a spike: the membrane potentials jump discontinuously; with strong inhibition, spikes from the winner suppress the spiking of all other neurons.
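A runnable sketch of the full winner-take-all loop built from these two stages, using the δ-pulse jump rules above. Network size, connection strengths, and input ranges are illustrative assumptions; the point is that after at most a few transient spikes, only the maximum-input neuron keeps spiking.

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau, E_R, E_I = 8, 40.0, -70.0, -75.0
V_th, V_r, dt = -54.0, -75.0, 0.05
G_E, G_I = 1.0, 120.0                    # self-excitation; strong global inhibition
I = rng.uniform(18.0, 22.0, N)           # constant in time, varying across neurons

V = rng.uniform(-75.0, -60.0, N)         # random initial membrane potentials
spikes = []                              # (time, neuron) pairs
for step in range(int(2000.0 / dt)):
    V += dt / tau * (E_R - V + I)        # stage 1: race to threshold
    j = int(np.argmax(V))
    if V[j] >= V_th:                     # stage 2: neuron j spikes, potentials jump
        spikes.append((step * dt, j))
        V[j] = V_r * np.exp(-G_E / tau)  # reset, then self-excitation kicks in
        others = np.arange(N) != j
        V[others] = E_I + (V[others] - E_I) * np.exp(-G_I / tau)  # global inhibition

late = {j for t, j in spikes if t > 500.0}
print("max-input neuron:", int(np.argmax(I)), "| spiking after transient:", late)
```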
A mapping technique
The Γ-mapping
Spike time of neuron j without interaction, following the nth spike of the network (emitted by neuron k(n)):
$$T_{j,k(n)} = \tau \log\left(1 + \frac{V_{th} - V^+_{j,k(n)}}{I_j - I_{th}}\right) \equiv \tau \log\big(\Gamma_{j,k(n)}\big),$$
where V^+_{j,k(n)} is the membrane potential just after the nth spike, I_th is the threshold current, and Γ_{j,k(n)} is the pseudo-spike time.
Neuron of the next spike: the smallest pseudo-spike time,
$$\Gamma_{k(n+1),k(n)} = \min_{j=1,\dots,N} \Gamma_{j,k(n)}.$$
Mapping of the pseudo-spike times, relative to the next spike:
$$\Gamma_{j,k(n+1)} = \psi_j + \epsilon_j\,\frac{\Gamma_{j,k(n)}}{\Gamma_{k(n+1),k(n)}},$$
with constants ψ_j, ε_j depending on the external inputs and the connection strengths.
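A sketch of the Γ-mapping as one event-driven step. It uses the identification I_th = V_th − E_R implied by the closed-form LIF solution, and assumes all inputs are suprathreshold (I_j > I_th); parameter values are the same illustrative assumptions as above.

```python
import numpy as np

tau, E_R, V_th = 40.0, -70.0, -54.0
I_th = V_th - E_R            # threshold current: E_R + I_th = V_th

def pseudo_spike_times(V, I):
    """Gamma_j = 1 + (V_th - V_j)/(I_j - I_th); the free spike time is tau*log(Gamma_j)."""
    return 1.0 + (V_th - V) / (I - I_th)

def event_step(V, I):
    """Advance exactly to the next spike: the neuron with the smallest Gamma fires."""
    g = pseudo_spike_times(V, I)
    k = int(np.argmin(g))                            # neuron of the next spike
    t = tau * np.log(g[k])                           # time until that spike
    V = E_R + I + (V - E_R - I) * np.exp(-t / tau)   # exact evolution between spikes
    return k, t, V    # the reset and the delta-pulse jumps would be applied to V next
```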
Condition for winner-take-all
$$I_i - I_{th} > \eta(G_E, G_I)\,(I_j - I_{th}) \quad \text{for all } j \neq i$$
$$\Rightarrow\quad \Gamma_{i,\,k(n)=i} < \Gamma_{j,\,k(n)=i} \quad \text{for all } j \neq i:$$
after neuron i spikes once, no other neuron can spike.
Maximum input selection: when η(G_E, G_I) = 1, the condition reduces to I_i > I_j, so the neuron with the maximum input always wins.
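A direct transcription of this condition as a check. η is taken as a given number here; its closed form in terms of G_E and G_I is derived in Jin & Seung (PRE, 2002) and not reproduced on the slide.

```python
def wta_winner(I, I_th, eta):
    """Return the index i with I_i - I_th > eta*(I_j - I_th) for all j != i,
    or None if no neuron satisfies the winner-take-all condition."""
    i = max(range(len(I)), key=lambda j: I[j])   # only the maximum input can qualify
    if all(I[i] - I_th > eta * (I[j] - I_th) for j in range(len(I)) if j != i):
        return i
    return None

print(wta_winner([18.5, 21.0, 19.2], I_th=16.0, eta=1.0))   # -> 1: pure max selection
```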
Spatiotemporal spike attractors
A class of neural networks
Network structure:
- Strong global inhibition
- Arbitrary number of spiking neurons
- Arbitrary connectivity
- Arbitrary patterns of the external inputs
- Heterogeneity in neuron properties
[Figure: network schematic with excitatory connections, inhibitory connections, and external inputs.]
Simplifications:
- No inter-neurons
- Leaky integrate-and-fire neuron model
- Synaptic coupling: δ-pulse
- No noise, no spike transmission delay
- External inputs constant in time but distributed spatially
Spike sequence attractors
[Figure: rasters of spike sequences from different initial conditions converging onto the same spike sequence attractor.]
- All spike sequences flow into spike sequence attractors.
- The timings of the spikes in the attractor are precise.
- The convergence is fast when the inhibition is strong. (simulation)
Jin (PRL, 2002)
Description of the dynamics
- Between spikes: race to spike.
- At a spike: one neuron spikes; all membrane potentials jump discontinuously.
The Γ-mapping, general case
Neuron of the next spike: the smallest pseudo-spike time,
$$\Gamma_{k(n+1),k(n)} = \min_{j=1,\dots,N} \Gamma_{j,k(n)},$$
where k(n) is the neuron emitting the nth spike of the network.
Mapping of the pseudo-spike times, relative to the next spike:
$$\Gamma_{j,k(n+1)} = \psi_{j,k(n+1)} + \epsilon_{j,k(n+1)}\,\frac{\Gamma_{j,k(n)}}{\Gamma_{k(n+1),k(n)}},$$
with constants ψ_{j,k(n+1)}, ε_{j,k(n+1)} depending on the external inputs and the connection strengths.
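An abstract iteration of this mapping. The constants ψ and ε below are random placeholders standing in for the values a specific network would determine; ε < 1 is chosen so that the map contracts, mirroring the stability result on the next slide.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
psi = 1.0 + 0.5 * rng.random((N, N))   # psi[k, j]: constant for neuron j when k spikes
eps = 0.3 * rng.random((N, N))         # eps[k, j] < 1: contracting map (placeholder)

def iterate_gamma(g, n_spikes):
    """Iterate the Gamma-mapping; return the spike sequence (k(1), k(2), ...)."""
    seq = []
    for _ in range(n_spikes):
        k = int(np.argmin(g))             # the smallest pseudo-spike time fires next
        seq.append(k)
        g = psi[k] + eps[k] * g / g[k]    # update of the relative pseudo-spike times
    return seq

print(iterate_gamma(1.0 + rng.random(N), 30))  # after a transient, the tail repeats
```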
Stability of the mapping
Exponential damping of small perturbations.
[Figure: perturbed vs. unperturbed pseudo-spike times Γ of neurons 1–3 across the 1st, 2nd, and 3rd spikes.]
Define
$$\Delta_n \equiv \max_{l=1,\dots,N} \big|\Gamma_{l,k(n)} - \Gamma'_{l,k(n)}\big|.$$
Then
$$\Delta_n < \lambda^{n-2} D\,\Delta_1,$$
where λ < 1, and λ decreases exponentially with the minimum connection strength.
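A numerical check of the damping, reusing the same placeholder map (redefined here so the snippet is self-contained). For a perturbation small enough not to change the spike sequence, Δ_n shrinks geometrically.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6
psi = 1.0 + 0.5 * rng.random((N, N))   # placeholder constants, as in the previous sketch
eps = 0.3 * rng.random((N, N))

def step(g):
    k = int(np.argmin(g))
    return psi[k] + eps[k] * g / g[k]

g = 1.0 + rng.random(N)
gp = g + 1e-4 * rng.random(N)          # small perturbation of the pseudo-spike times
for n in range(1, 16):
    g, gp = step(g), step(gp)
    if n % 3 == 0:
        print(n, np.max(np.abs(g - gp)))   # Delta_n decays exponentially with n
```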
Trapping of spike sequences
Consider two spike sequences
S_1 = (..., i_1, i_2, ..., i_P, i_{P+1}, ...), S_2 = (..., j_1, j_2, ..., j_P, j_{P+1}, ...),
with i_n = j_n for n = 1, ..., P. There exists a finite P* such that if P > P*, then i_n = j_n for all n > P. Moreover, the spike timing difference decreases exponentially with P. Here P* ∝ 1/|log λ|.
Spike sequence attractors
- All spike sequences are eventually trapped in periodic patterns: the spike sequence attractors.
- Reason: in an infinite sequence over a finite number of neurons, subsequences of any finite length must reappear; once a subsequence longer than P* repeats, the sequence is trapped (see the sketch below).
Example for N = 2 and P* = 4:
S = (1,1,1,1,2,2,1,1,2,1,2,2,1,2,2,2,2,1,2,2,1,2,2,2,...)
The repeating tail (2,1,2,2,1,2,2,2) is the spike sequence attractor.
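A small sketch that finds the attractor by scanning the tail of a spike sequence for the shortest repeating period, demonstrated on the N = 2 example above. The window length is an arbitrary choice.

```python
def find_attractor(seq, max_period=10, window=16):
    """Return the shortest repeating pattern in the last `window` spikes of seq,
    or None if no period up to max_period fits."""
    tail = seq[-window:]
    for p in range(1, max_period + 1):
        if all(tail[i] == tail[i + p] for i in range(len(tail) - p)):
            return tail[-p:]
    return None

# The N = 2 example sequence from this slide (transient followed by the attractor):
S = [1,1,1,1,2,2,1,1,2,1,2,2,1,2,2,2,2,1,2,2,1,2,2,2]
print(find_attractor(S))   # -> [2, 1, 2, 2, 1, 2, 2, 2]: the period-8 attractor
```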
An example
- N = 1000
- 0.4 < inhibition strength < 0.6
- 0 < excitation strength < 0.05
- τ = 40 msec
- Random inputs
Fast convergence - statistics
[Figure: histogram of the number of transient spikes; length of the attractor sequence vs. number of transient spikes.]
Simulation: 2000 runs; for each run, the connections and the external inputs are set randomly, with the maximum of the external inputs and the range of the connection strengths fixed.
Results:
- The number of transient spikes follows a Poisson distribution.
- No relationship between the length of the spike sequence attractor and the number of transient spikes.
Rich structures - statistics
[Figure: number of attractors vs. number of neurons N, for spike sequence attractors and spatial pattern attractors.]
Simulation:
- Averaged over 20 random networks
- 10N sets of randomly selected inputs, with fixed maximum, for each network
- 10 random initial conditions for each network and each set of inputs
Results:
- Exponential growth of the number of spike sequence attractors with the network size
- On average, one attractor per set of external inputs
Summary
- Spike sequence attractors are the dynamical attractors for a large class of neural networks.
- These attractors have two characteristics favorable for neural computation: fast convergence and rich structures.