Neuromorphic Self-Organizing Map Design for Classification of Bioelectric-Timescale Signals


Johan Mes, Ester Stienstra, Xuefei You, Sumeet S. Kumar, Amir Zjajo, Carlo Galuzzi and Rene van Leuken
Circuits and Systems Group, Delft University of Technology, The Netherlands
Department of Data Science and Knowledge Engineering, Maastricht University, The Netherlands

Abstract — The Self-Organizing Map (SOM) is a recurrent neural network topology that realizes competitive learning for the unsupervised classification of data. In this paper, we investigate the design of a spiking neural network-based SOM for the classification of bioelectric-timescale signals. We present novel insights into the architectural design space, its inherent trade-offs, and the critical requirements for designing and configuring neurons, synapses and learning rules to achieve stable and accurate behaviour. We perform this exploration using high-level architectural simulations and, additionally, through the full-custom implementation of components.

I. INTRODUCTION

Artificial Neural Networks (ANN) are an effective means of recognizing and classifying patterns within real-time signals. Conventional ANNs rely on arrays of neurons, each comprising a non-linear summing function, an activation level and an activation function. Neurons are interconnected through synapses, which act as weights, effectively magnifying or suppressing the effect of the presynaptic neuron on the postsynaptic neuron. When the aggregated inputs to a neuron match its activation level, the neuron fires and generates an output using its activation function. Spiking Neural Networks (SNN) are a class of ANNs that rely on the precise timing of neuron activation to encode information [1]. When presented with an input signal, SNN neurons respond with a specific temporal sequence of voltage spikes, or spike trains. By adapting synaptic weights, neurons in subsequent layers become selective to specific temporal sequences, enabling classification and even transformation.

Although SNNs have been extensively studied in the past, the challenges related to their use in the classification and processing of real-time signals are less well understood. The efficacy and accuracy of classification are impacted by the choice of neuron model, synapse architecture, learning rule and input encoding, which together form a complex, non-trivial design space. Prior art in this domain presents limited insights, often in a stand-alone manner for each design choice. For instance, neurons, synapses, interconnects and learning rules are examined independently in [2]–[4]. Various neuron models target different design goals, ranging from biophysical accuracy [5] to simplicity [6] and computational tractability [7], with a number of component [8] [9] and system-level implementations [10]. However, configuring them within a practical classification use-case is far from trivial. Similarly for synapses, the choice of learning rules [11]–[13] is often determined by the ability to configure and deploy them in a stable manner within the SNN architecture. A few notable papers provide insight into the tuning of specific parameters, such as the learning window size needed to achieve competitive learning in networks [14]. However, since application requirements drive architectural design decisions, these parameters are also impacted by the type of learning behaviour realized by the network topology [15]–[17].
In this paper, we explore a practical design trajectory for an SNN-based system for the processing and classification of bioelectric-timescale signals. The proposed system implements the Self-Organizing Map (SOM) topology [17] to realize competitive unsupervised learning within the SNN. We present novel insights into the architectural trade-offs inherent in the design of such SOM classifiers and the critical requirements for neurons, synapses and learning rules to achieve stable and accurate classification behaviour, and detail practical experiences from the training and testing of such a system. These insights are delivered on the basis of high-level architectural explorations and a subsequent full-custom UMC 65 nm implementation.

II. ARCHITECTURE

The design space of SNNs is evaluated in the context of a classifier for bioelectric-timescale signals. These signals have a frequency range of 0.1 Hz to 300 Hz, small amplitudes (3 mV), and can be noisy, as in the case of electrocardiograms (ECG) and electroencephalograms (EEG). Sampled analog signal values are input to the encoding stage for conversion into spike trains. Each spike train stimulates one or more neurons in the hidden layers of the SNN, corresponding to an output class. The classifier is illustrated in Figure 1. The characteristics of the output spike train are influenced by a number of factors: SNN topology, neuron model, synapses, learning rules and input encoding. In this section, we examine each of these factors.

A. Self-Organizing Map (SOM) Topology

The behaviour of SNNs is determined primarily by the topology in which neurons are interconnected. In addition, the relative population of excitatory and inhibitory connections also impacts the stability and convergence of the learned synaptic weights. The SOM topology [17] is a variant of the k-means clustering algorithm, and yields spatially distinct classification of inputs through its intrinsic Winner-Takes-All (WTA) behaviour [18] [19]. The topology of connections between excitatory and inhibitory neurons produces competition in the SOM, resulting in unsupervised learning within the network. Figure 2 illustrates the synaptic weight matrix of a trained SOM, where excitatory connections are light coloured and inhibitory connections are dark coloured. The feedback loops in the topology are evident from the pattern of facilitation and depression observed in the matrix. Although similar behaviour can be elicited even from a feed-forward network with lateral inhibition, our experience suggests that achieving stability there is non-trivial. This is primarily due to the criticality of the excitation-inhibition ratio, and its dependence on the topology and the initial weight distributions.

Network delays also play an important role across topologies. In the SOM, synaptic delays facilitate the formation of polychronous groups, i.e. distinct clusters of neurons that fire together in response to a specific input stimulus [20]. Polychronous behaviour is a function of input spike timing and the delay patterns in the network. In other topologies using recurrent connections, the network delay influences the precise time at which a spiking neuron causally stimulates itself. Used as an additional parameter, the delay can yield configurable variations in spiking behaviour from a homogeneous array of neurons.

Fig. 1. Architecture of the SNN-based classifier. Real-time analog inputs are converted to m spike trains by the input encoder. These m spike trains stimulate m input neurons in the hidden SNN layer. Also illustrated is the grid of n × n neurons organized as an SOM. The output of this network is the spike timing and frequency information of the population. Grid neurons are fully connected through synaptic weights scaled according to the Mexican-hat function.

Fig. 2. Synaptic weight matrix of a trained SOM. Colours lighter than grey denote an excitatory connection, while colours darker than grey denote an inhibitory connection. The closer to pure black or pure white, the higher the absolute conductivity. The input neurons are connected with plastic synapses to the grid neurons. The result of learning is clearly seen in the specific pattern of potentiated connections linking the input neurons to the SOM grid. These connections were initialized with a uniform random distribution before training.
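As an illustration of the lateral connectivity described above and in the caption of Fig. 1, the following sketch builds a Mexican-hat weight matrix for an n × n toroidal grid: near neighbours excite, farther neighbours inhibit. This is a minimal Python sketch under assumed parameters; the grid size, Gaussian widths and amplitudes are illustrative choices, not the paper's values.

```python
import numpy as np

def toroidal_distance(i, j, n):
    """Euclidean distance between grid cells i and j on an n x n torus."""
    yi, xi = divmod(i, n)
    yj, xj = divmod(j, n)
    dy = min(abs(yi - yj), n - abs(yi - yj))
    dx = min(abs(xi - xj), n - abs(xi - xj))
    return np.hypot(dy, dx)

def mexican_hat_weights(n, sigma_exc=1.0, sigma_inh=2.5, a_exc=1.0, a_inh=0.6):
    """Difference-of-Gaussians ("Mexican hat") lateral weight matrix.

    Positive entries are excitatory, negative entries inhibitory, mirroring
    the light/dark bands of the trained weight matrix in Fig. 2."""
    w = np.zeros((n * n, n * n))
    for i in range(n * n):
        for j in range(n * n):
            d = toroidal_distance(i, j, n)
            w[i, j] = (a_exc * np.exp(-d**2 / (2 * sigma_exc**2))
                       - a_inh * np.exp(-d**2 / (2 * sigma_inh**2)))
    np.fill_diagonal(w, 0.0)          # no self-connections
    return w

w = mexican_hat_weights(n=8)
print(w.shape, w.max(), w.min())      # (64, 64), excitatory peak, inhibitory trough
```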
B. Neuron model

The activation behaviour of SNN neurons can be described using a number of models from the literature. Two effective models from a hardware-design perspective are the Integrate-and-Fire (IF) model and the Izhikevich model [7]. The IF model consists of an integrator, accumulating synaptic outputs and generating a spike when the aggregated value exceeds a certain threshold. Following a spike event, the model causes the neuron to hyperpolarize for a short duration known as the refractory period. This prevents the neuron from spiking again, in response to identical synaptic outputs, for the duration of refraction. This behaviour introduces non-linearity into the spiking response of neurons, and is critical in realizing network transfer functions that fit non-linear inputs. However, refraction limits the maximum spiking rate: a refractory period of 1 ms yields a saturating spike rate of 1 kHz, which is sufficient for the processing of bioelectric-timescale signals such as ECGs. Neuron implementations in CMOS can use techniques such as [21] to enable configurable refractory periods, as well as configurable voltage thresholds for spiking neurons. An additional means of realizing a non-linear response is spike-frequency adaptation, which can be achieved by using output spikes as negative feedback on the membrane capacitance [10], or by adapting voltage thresholds in response to output spikes [22].

Izhikevich, on the other hand, is a mathematical model that reproduces the complex spiking behaviour of biologically accurate neuron models such as Hodgkin-Huxley [5]. The response of the Izhikevich model varies with the encoding type. When used with rate coding (average firing rate), the model exhibits a linear input-to-output relationship. However, with temporal coding (precise spike timing), the model exhibits a non-linear relationship between firing rate and spike timing. Thus, for rate coding with Izhikevich neurons, non-linear transfer functions can be realized only if the requisite non-linearity is introduced through the topology or in the synapses.
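To make the IF behaviour concrete, here is a minimal discrete-time sketch; all parameter values are illustrative assumptions, not the implemented cell's values. The hard refractory period caps the firing rate at 1/t_refr, the 1 kHz ceiling mentioned above for t_refr = 1 ms.

```python
import numpy as np

def simulate_lif(i_in, dt=1e-4, tau=20e-3, r=1e8, v_th=1.0, v_reset=0.0,
                 t_refr=1e-3):
    """Leaky integrate-and-fire neuron with a hard refractory period.

    i_in: input current per time step [A]. Returns spike times [s]."""
    v, refr_left, spikes = v_reset, 0.0, []
    for k, i_k in enumerate(i_in):
        if refr_left > 0.0:                # held in reset: inputs are ignored
            refr_left -= dt
            continue
        v += (dt / tau) * (-(v - v_reset) + r * i_k)   # leaky integration
        if v >= v_th:                      # threshold crossing -> output spike
            spikes.append(k * dt)
            v = v_reset
            refr_left = t_refr             # start the refractory period
    return spikes

# A strongly suprathreshold input drives the rate towards the 1/t_refr ceiling.
spikes = simulate_lif(np.full(20000, 2e-6))   # 2 s of stimulation
print(f"rate = {len(spikes) / 2.0:.0f} Hz")   # ~0.9 kHz, capped by refraction
```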

C. Synapses and Learning rules

Learning in SNNs refers to the modification of synaptic weights in response to inputs. Consequently, learning requires two elements: synapses, and an algorithm to update their weights. In this paper, we evaluate two learning algorithms, Spike-Timing-Dependent Plasticity (STDP) and Triplet STDP (TSTDP), both variants of the classical Hebbian learning rule [23]. STDP uses the difference in firing times of pre- and postsynaptic neurons to determine the extent to which the synaptic weight is modified. The weight change $\Delta w$ is given as:

$\Delta w^{+} = f_{+}(w)\,A_{+}\exp(-\Delta t/\tau_{+}), \quad \Delta t > 0 \qquad (1)$

$\Delta w^{-} = -f_{-}(w)\,A_{-}\exp(\Delta t/\tau_{-}), \quad \Delta t < 0 \qquad (2)$

The parameters $A_{+}$ and $A_{-}$ define the absolute amplitudes, and $\tau_{+}$ and $\tau_{-}$ the widths, of the potentiation and depression STDP learning windows, respectively. $\Delta t$ is the time difference between the postsynaptic spike and the presynaptic spike along a synaptic connection, defined as positive when the postsynaptic spike occurs later than the presynaptic spike. The function $f(w)$ relates the weight change to the current synaptic weight, yielding a non-linear weight update function. Despite this, STDP suffers from the performance-limiting ping-pong effect: when the interspike interval approaches the duration of the STDP learning window, simultaneous potentiation and depression occur in the synapse, which leads to unreliable learning behaviour for high-frequency spike trains. This effect is mitigated in TSTDP, which relies on two spike pairs per postsynaptic spike in determining the difference in spike time [13]. TSTDP specifies synaptic weight changes as:

$\Delta w^{+} = f_{+}(w)\left(A_{2+}\exp(-\Delta t/\tau_{+}) + A_{3+}\exp(-\Delta t_{2}/\tau_{y})\right), \quad \Delta t > 0 \qquad (3)$

$\Delta w^{-} = -f_{-}(w)\left(A_{2-}\exp(\Delta t/\tau_{-}) + A_{3-}\exp(-\Delta t_{3}/\tau_{x})\right), \quad \Delta t < 0 \qquad (4)$

As in basic STDP, $\Delta t$ in these equations is positive when the postsynaptic spike occurs after a presynaptic spike. The parameters $A_{2+}$ and $A_{2-}$ are the absolute amplitudes of the synaptic weight changes. An extra term takes a second spike pair into account: for potentiation this is $\Delta t_{2}$, while for depression it is a different pair, $\Delta t_{3}$, as defined in [13]. The amplitudes of these triplet terms are multiplied by $A_{3+}$ and $A_{3-}$, respectively, while their widths are governed by $\tau_{y}$ and $\tau_{x}$. Note that setting $A_{3+}$ and $A_{3-}$ to zero reduces this set of equations to basic STDP. The combination of two pair terms allows more reliable learning even at high spike frequencies, and is especially useful in rate-coded systems and rate-temporal hybrids, where spike-frequency constraints limit system behaviour. CMOS implementations of TSTDP exhibit additional non-linearity in the weight update function [24], and this is reported as being biologically accurate [11] compared to pair-based STDP. However, TSTDP also has higher hardware costs, requiring the storage of two spike times as opposed to one for STDP.
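For reference, Eqs. (1)–(4) transcribe directly into code. The sketch below is illustrative only: the saturating form of f±(w), the amplitudes and the time constants are our assumptions (the triplet time constants loosely follow [13]), and setting the A3 amplitudes to zero recovers pair-based STDP, as noted above.

```python
import numpy as np

W_MAX = 2.0   # maximum conductance (arbitrary units)

def f_plus(w):
    """Saturating weight dependence: potentiation weakens as w nears W_MAX."""
    return W_MAX - w

def f_minus(w):
    """Depression weakens as w nears zero."""
    return w

def stdp_dw(w, dt, a_plus=0.01, a_minus=0.012, tau_plus=20e-3, tau_minus=20e-3):
    """Pair-based STDP, Eqs. (1)-(2). dt = t_post - t_pre in seconds."""
    if dt > 0:
        return f_plus(w) * a_plus * np.exp(-dt / tau_plus)
    return -f_minus(w) * a_minus * np.exp(dt / tau_minus)

def tstdp_dw(w, dt, dt2, dt3, a2_plus=0.005, a2_minus=0.007,
             a3_plus=0.006, a3_minus=0.002, tau_plus=17e-3, tau_minus=34e-3,
             tau_y=114e-3, tau_x=101e-3):
    """Triplet STDP, Eqs. (3)-(4). dt2 (dt3) is the extra spike-pair interval
    used for potentiation (depression); see [13] for the pairing scheme."""
    if dt > 0:
        return f_plus(w) * (a2_plus * np.exp(-dt / tau_plus)
                            + a3_plus * np.exp(-dt2 / tau_y))
    return -f_minus(w) * (a2_minus * np.exp(dt / tau_minus)
                          + a3_minus * np.exp(-dt3 / tau_x))

print(stdp_dw(1.0, +5e-3), stdp_dw(1.0, -5e-3))      # potentiation, depression
print(tstdp_dw(1.0, +5e-3, dt2=20e-3, dt3=20e-3))    # triplet term adds on top
```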
At the circuit level, synaptic weights are converted into synaptic currents through an integrator. For large arrays, the overheads imposed by the integrator can limit the achievable integration density. The differential-pair integrator (DPI) circuit [3] supports linear filtering, allowing the linear summation of multiple currents from identical synapses. The use of the DPI yields area savings and enables higher integration densities for SNNs, while the use of sub-threshold operating regions for the transistors yields currents in the picoampere range. Drive issues are overcome through the use of scaling blocks for the charge-phase response amplitude, eliminating the need for extra pulse-extender circuits.

Multiple options exist for the storage of the actual synaptic weights. Traditional capacitive storage employs bulky capacitors to lower the effect of leakage. While this is mitigated by the use of digital memories [25], these require current-mode ADCs and DACs for each neuron, adding complexity. Similarly, floating-gate memories [26] offer an effective means of long-term synaptic weight storage due to their non-volatility. However, the precise programming of synaptic weights with these is challenging. In addition, the synapse can also incorporate further mechanisms for information storage. One of these is bistability [27]–[29], which, due to its low area and low power consumption, is a comparably efficient storage medium. During spiking events, synaptic weights drift towards one of two voltage rails, depending on their relative value compared to the bistability threshold. In the absence of spikes, weights are held constant. These dynamics lend great robustness to the state stored in synapses against stochastic background events [12]. Furthermore, neural networks with two states have been shown to be effective for pattern classification tasks [30].

D. Input Encoding

Information presented to the SNN can be coded either as a firing rate (rate coding) or in the precise timing of spikes (temporal coding). Rate coding varies the frequency of spike trains based on the magnitude of the input signal. Except for the initial spike in a train, the timing of individual spikes carries no information. Consequently, this form of coding is incompatible with STDP learning rules that rely on spike timing. Temporal coding, on the other hand, relies on the precise timing of spikes to encode information. The latency of spike generation from the onset of an input stimulus varies inversely with the magnitude of the input signal. Consequently, the higher the input value, the sooner the spike. For temporally coded systems, in addition to STDP, network delay modifications can also be used as a form of learning [31].
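The two encodings reduce to a few lines each. In this hypothetical sketch, the spike-window length, frequency range and maximum latency are arbitrary assumptions; the point is that rate coding maps amplitude to frequency, while temporal coding maps amplitude inversely to first-spike latency.

```python
import numpy as np

def rate_encode(value, t_window=0.1, f_min=10.0, f_max=1000.0):
    """Rate coding: map a normalized value in [0, 1] to a regular spike train
    whose frequency grows with the value. Returns spike times in seconds."""
    f = f_min + value * (f_max - f_min)
    return np.arange(0.0, t_window, 1.0 / f)

def temporal_encode(value, t_max=10e-3):
    """Temporal coding: map a normalized value in [0, 1] to a single spike
    whose latency shrinks as the value grows (higher value -> earlier spike)."""
    return np.array([t_max * (1.0 - value)])

print(len(rate_encode(0.2)), len(rate_encode(0.9)))   # more spikes for larger input
print(temporal_encode(0.2), temporal_encode(0.9))     # earlier spike for larger input
```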

Fig. 3. Example of population temporal coding in an SOM. The input neurons encode a value using temporal coding, and need only a few spikes to encode any transmitted value. The grid neurons respond to this input after a certain number of input neurons have fired. The number of input-neuron spikes needed for activation to occur determines the latency of the network, and can be modified by potentiation or depression of the synapses linking the input and grid neurons.

Fig. 4. Synaptic weight change over time for all connections from one input neuron to the SOM grid neurons. The initial weight distribution is uniform between 0.8 pS and 1.2 pS, and the maximum allowed conductivity is 2 pS. Due to the use of a saturating weight-dependence function f(w), synaptic weights drift towards one of two limit values, of either low or high conductivity. Note that all synaptic connections eventually participate in learning, due to the correct selection of the initial weights.

TABLE I
COMPARISON OF ENCODING SCHEMES

Property   | Spikes/Word | Energy/Word | WC Link Delay
Rate       | n           | n · E_spike | 2 / min(f_f(value))
Temporal   | 1           | E_spike     | max(f_φ(value))

As in the case of the SOM, the delay influences which neurons respond to the spike train resulting from a presented input stimulus. The population of neurons that spikes first forms a population temporal code, as illustrated in Figure 3. A short comparative summary of both encoding schemes is presented in Table I. In terms of efficiency, temporal codes offer a higher information capacity than rate codes for a given amount of energy.

E. Training and Testing

The SNN is operated in two modes: training and testing. In the training mode, synaptic plasticity is enabled, allowing the network to adapt its synaptic weights according to the input signal. During the training phase, the SNN is exposed to data representative of what it could encounter in the testing phase. In the latter phase, the SNN is used for the classification of data, with synaptic plasticity disabled. During training, it is essential that the whole set of training data is presented to the classifier at least once; multiple iterations over this data improve the accuracy of the training. The ordering of the training data also impacts the learned function. For instance, to prevent the classifier from learning correlations between consecutive training data items, the training data sequence can be randomized across each iteration. The presentation of data to the SNN and the classification occur in two distinct stages: active and pause. During the active stage, input data is presented to generate representative spike trains. During the pause stage, these spikes are allowed to propagate through the network and thus generate a classification output. The duration of these stages is determined by the topology and, thus, the latency of the network.
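The training and testing modes, and the active/pause staging, can be organized as in the following scaffold. Everything here is a hypothetical skeleton (the Network stub, stage durations and epoch count are our assumptions), intended only to show where plasticity is toggled, where the pause is inserted, and where the per-iteration shuffle happens.

```python
import random
import numpy as np

class Network:
    """Minimal stand-in for the SNN; the real neuron and synapse dynamics
    are omitted for brevity."""
    def __init__(self):
        self.plasticity = False
    def step(self, dt, inputs):
        pass  # integrate neurons, deliver spikes, apply STDP if enabled

def run_item(net, item, encode, t_active=0.1, t_pause=0.05, dt=1e-4):
    """Active stage: drive the network with the encoded item. Pause stage:
    inject nothing, letting spikes finish propagating before the next item."""
    spikes_in = encode(item)
    for t in np.arange(0.0, t_active, dt):
        net.step(dt, inputs=spikes_in[(spikes_in >= t) & (spikes_in < t + dt)])
    for _ in np.arange(0.0, t_pause, dt):
        net.step(dt, inputs=np.array([]))

def train(net, data, epochs=5, **kw):
    net.plasticity = True            # training mode: plasticity enabled
    for _ in range(epochs):
        random.shuffle(data)         # re-randomize ordering each iteration
        for item in data:
            run_item(net, item, **kw)

def test(net, data, **kw):
    net.plasticity = False           # testing mode: weights frozen
    for item in data:
        run_item(net, item, **kw)

net = Network()
train(net, data=[0.1, 0.5, 0.9], encode=lambda v: np.array([0.01 * (1.0 - v)]))
```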
III. EXPERIMENTAL RESULTS

This section reports experimental results for an SNN-based signal classifier, and presents insights gained from its design and configuration. We used a MATLAB simulation setup for high-level architectural exploration, and subsequently designed the relevant components in the UMC 65 nm low-leakage technology node.

A. Network Topology

The initial and maximum weights of the synapses determine the ability of the network to adapt to input data. Especially in the case of the synapses between the input and grid neurons in Figure 1, a skewed distribution of weights results in the network evolving such that multiple output neurons respond to the same class of inputs. It is essential, therefore, for this distribution to be sufficiently spread out. The range of initial weights in the distribution also impacts the final outcome. In general, we observe that the lower limit of the weight range must be high enough to facilitate the stimulation of a sufficient number of neurons, yet prevent the activation of connections that are never re-used later; this behaviour is shown in Figure 4. Similarly, the upper limit of the range should be sufficiently low to prevent the over-training of synapses on the early training data.
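A concrete initialization consistent with this guidance might look as follows; the 0.8–1.2 pS uniform range and 2 pS ceiling echo the setup reported in Fig. 4, and treating them (and the array sizes) as defaults is our assumption.

```python
import numpy as np

def init_input_weights(n_inputs, n_grid, w_low=0.8e-12, w_high=1.2e-12, seed=0):
    """Uniform random initial conductances for the plastic input->grid synapses.
    w_low must be high enough that enough grid neurons are stimulated from the
    start; w_high low enough that early items cannot over-train specific paths."""
    rng = np.random.default_rng(seed)
    return rng.uniform(w_low, w_high, size=(n_inputs, n_grid))

def clip_weights(w, w_max=2.0e-12):
    """Hard conductance ceiling, applied after every plasticity update."""
    return np.clip(w, 0.0, w_max)

w = init_input_weights(n_inputs=10, n_grid=64)
print(w.min(), w.max())   # all initial weights inside [0.8 pS, 1.2 pS]
```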

Fig. 5. (a) Simulation of the Integrate-and-Fire (IF) model, where the output spike rate and time to first spike are recorded for each input current. This simulation clearly shows the non-linear rate-to-rate and rate-to-timing transfer functions. The refractory period was set to 1 ms, enforcing a hard upper bound of 1 kHz on the spike frequency. (b) Simulation of the Izhikevich RS model, where the output spike rate and time to first spike are recorded for each input. This simulation clearly shows the linear rate-to-rate and the non-linear rate-to-timing transfer functions. Seven other Izhikevich models [7] showed comparable results. The x-axis spans a dimensionless abstraction of input current, relevant to the Izhikevich model.

Fig. 6. (a) Configurable refractory periods of the neuron cell for different settings of V_rfr. (b) Non-linear spiking behaviour.

B. Neurons

According to the Universal Approximation Theorem, non-linearity is a critical requirement for neuron models to be able to approximate arbitrary input functions using a network of those neurons [32]. Figures 5(a)-(b) report the transfer functions of the IF [6] and Izhikevich [7] model neurons. For rate-coded systems, simulations show that the Izhikevich model [7] has a linear input-to-output relationship across all configurations. The IF model with a refractory period, on the other hand, exhibits a non-linear, saturating exponential input-to-output relationship for both rate-to-rate and rate-to-timing transformations. For rate-to-timing transformations, the Izhikevich model can also be used, due to its non-linear timing-to-frequency relationship.

We implemented a full-custom neuron cell based on a variant of the IF model [10] in UMC 65 nm, exhibiting non-linear behaviour through its use of leak conductances, as well as spike-frequency adaptation through a negative feedback loop. The refractory period of the cell is configured through the voltage V_rfr, which controls the leakage current in the reset stage. Increasing the leakage current shortens the refractory period, as illustrated in Figure 6(a). The simulated spiking behaviour of the extracted layout is shown in Figure 6(b). The non-linearity in spike timing is 68%, with spike complexes ranging in duration from 0.86 ms to 1.26 ms.
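The Izhikevich regular-spiking (RS) behaviour summarized in Fig. 5(b) can be reproduced with the published model equations [7]; the sweep below (simple Euler integration, arbitrary input values) records the firing rate and time-to-first-spike per input, mirroring the two plotted transfer functions.

```python
import numpy as np

def izhikevich_rs(i_in, t_sim=1.0, dt=0.5e-3):
    """Izhikevich model with regular-spiking parameters (a, b, c, d) from [7].
    Returns (firing rate in Hz, time of first spike in s or None)."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = c, b * c
    dt_ms = dt * 1e3                  # the model is specified per millisecond
    spikes = []
    for k in range(int(t_sim / dt)):
        v += dt_ms * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
        u += dt_ms * a * (b * v - u)
        if v >= 30.0:                 # spike cutoff: record and reset
            spikes.append(k * dt)
            v, u = c, u + d
    return len(spikes) / t_sim, (spikes[0] if spikes else None)

for i_in in (4.0, 8.0, 16.0):         # dimensionless input, as in Fig. 5(b)
    print(i_in, izhikevich_rs(i_in))  # rate grows ~linearly; latency shrinks
```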

C. Synapses and Learning rules

Fig. 7. Comparison of weight-change behaviour as a function of input frequency, for an IF neuron with STDP and TSTDP synapses. For this simulation, the first neuron produces spikes for one second at each input frequency, and the resulting weight change over the simulated time step is recorded. This result clearly shows the unreliable frequency dependence of STDP caused by the ping-pong effect above 20 Hz, due to the learning window period. The sharp drop-off is caused by the 1 ms refractory period of the IF neurons, which causes the second neuron to ignore subsequent input spikes once it has fired.

Fig. 8. Pair-based learning window of the TSTDP circuit. The triangular marks are extracted from the TSTDP circuit, while the red curve shows an exponential fit of these data points. The generated curve matches the classic learning window reported in [11].

Figure 7 contrasts the synaptic weight changes of STDP and TSTDP synapses. The ping-pong effect of STDP is observed as the small inverted spikes in the weight-change trace at higher input frequencies. Predictable synaptic weight changes are observed only up to a peak input frequency of 20 Hz; beyond this, weight changes flatten out due to the simultaneous potentiation and depression of the synapses. TSTDP, on the other hand, does not suffer from these pathologies. For pure temporal coding, basic STDP is a viable solution, as long as the maximum spike frequencies remain within the limits imposed by the learning window period.

An important factor in the stability of any STDP approach is the configuration of the maximum facilitation amplitudes $A_{+}$ and $A_{-}$. In Figure 8, these parameters impact the areas of the weight-update curves during potentiation and depression. We observe that stable learning is realized when the aggregate area of depression exceeds that of potentiation in the weight-update function, as shown in the figure. On the contrary, weaker depression results in the extreme potentiation of synaptic weights and the eventual shorting of outputs to inputs. This behaviour prevents the realization of any practical network transfer function. These indications hold true even for TSTDP.

TSTDP synapses, although well suited to temporal coding, can additionally support rate-coded systems due to the wide frequency range over which they produce weight changes, as observed in Figure 8. Basic STDP, on the other hand, constrains the maximum frequency range to under 20 Hz for the used learning window. In addition to the individual synaptic parameters, the general learning rate of the network also impacts stability. Within the supported range, a low learning rate yields weight stabilization at sub-optimal minima, while a high learning rate causes over-learning of the early presented data and, again, the shorting of outputs to inputs. For network design, it is important to ensure the stability of the network function over time. Although evidence on the relation between limit stability and weight dependence can be contradictory [33], simulations such as the one in Figure 4 show that saturating weight dependencies quickly generate strong connections, quickly depress unwanted connections, and then stabilize. Another method of ensuring stability over time is the use of bistability mechanisms [12].
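The stability condition discussed above can be stated compactly. Integrating Eq. (1) over all positive spike-time differences gives the total potentiation area, and Eq. (2) the depression area (holding $f_{\pm}(w)$ fixed for simplicity; this is our back-of-the-envelope reading of the area argument, not a derivation from the paper):

$\int_{0}^{\infty} A_{+}\exp(-\Delta t/\tau_{+})\,\mathrm{d}(\Delta t) = A_{+}\tau_{+}, \qquad \int_{-\infty}^{0} A_{-}\exp(\Delta t/\tau_{-})\,\mathrm{d}(\Delta t) = A_{-}\tau_{-}.$

Stable learning then corresponds to the condition $A_{-}\tau_{-} > A_{+}\tau_{+}$; with equal time constants, this reduces to $A_{-} > A_{+}$.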
Fig. 9. Synaptic weight change due to the temporal difference between the post-spikes in a triplet. Third-order spike interactions are observed when the temporal difference is below 30 ms. Thereafter, the weight modification is dominated by the spike pairs (held at a constant 20 ms throughout the experiment).

Figure 9 reports the weight change induced in the synapse as a function of the temporal difference between the two post-spikes in a triplet. The influence of the spike pairs is negated in this analysis by using a fixed 20 ms temporal difference in all experimental runs. As observed, the closer together the two post-spikes are, the larger the potentiation effected in the synapse. Such third-order spike interactions can be observed for temporal differences under 30 ms. Beyond this, the impact of third-order spike interactions wanes, leaving the base potentiation caused by the 20 ms spike pairs, shown as the flat portion of the curve.

Fig. 10. (a) SOM temporal response for an input signal varying in amplitude from 0 to 1 V in steps of 0.1 V (Items 1 to 11). The output grid depicts the time to spike from the onset of the input stimulus. Black denotes no activity, while brighter values denote shorter spike times, i.e. faster spike generation. (b) SOM rate-coded response for the same signal. Brighter values denote higher spiking frequencies. Note that valuable information is encoded both in the initial spike time and in the spike rate.

D. Training and Testing

The non-zero response latency of the SNN means that each input to the classifier must be succeeded by a pause interval. In this interval, neurons are silenced and the network is allowed to present its full response to the input. For high-latency, multi-layer or feedback SNNs, it is essential that spiking activity has finished propagating through the entire network before a new training item is presented. Otherwise, the interaction between the spike responses to new and old data in the network results in the incorrect modification of synaptic weights. In order to prevent the learning of such correlations, one can fully randomize the order of the input data; this ensures that no effect of previous data lingers in the system when the next input is provided. Alternatively, a hard reset can be executed after each exposure of a training item, erasing all membrane voltages and learning windows but preserving synaptic weights and connectivity. This, however, tends to suppress spiking behaviour in the network.

The exposure time itself plays an important role in determining the extent of synaptic facilitation. The presentation of inputs for extended durations results in over-learning of that particular input. This manifests as the maximization of synaptic weights along specific paths, essentially creating an all-pass function from input to output. Training is generally performed until the aggregate weight change of all synaptic connections in the network stabilizes. We observe three types of aggregate weight-change behaviour over time: convergent decrease, convergent stable, and random movement. In Figure 4, the aggregate weight change of all synaptic connections is high at the start of the simulation, decreasing steadily as training progresses and the weights stabilize. In the remaining two cases, the network fails to find a stable end point of graph connectivity.

We varied the total training time in two ways that produced similar network responses: increasing the exposure (and, if needed, pause) time per item, and increasing the number of iterations over the training set. For temporal coding, varying the training time per item means repeating the generation of the temporal sequence an integer number of times, while for rate coding it means varying the duration for which a spike train of a certain frequency is applied. As with ordered exposure of training items, over-learning and short-circuiting have been observed to occur for extended exposures.
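The stopping criterion described above, training until the aggregate weight change settles, can be monitored with a few lines; the window length and tolerance below are arbitrary assumptions.

```python
import numpy as np

def aggregate_weight_change(w_prev, w_now):
    """Sum of absolute synaptic weight changes over the whole network."""
    return float(np.abs(w_now - w_prev).sum())

def has_converged(history, window=10, tol=1e-14):
    """True once the aggregate change has stayed below tol for `window`
    consecutive items: the 'convergent decrease' case above. 'Convergent
    stable' and 'random movement' runs never satisfy this criterion."""
    return len(history) >= window and max(history[-window:]) < tol

# inside the training loop:
#   history.append(aggregate_weight_change(w_prev, w))
#   if has_converged(history): break
```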
E. SOM Response

Figure 10 illustrates the temporal and rate-coded responses of the SOM classifier realized in this work, showing the distinct response for each input and the grouping behaviour with neighbouring values. The developed SOM classifies the input signal, based on amplitude, into one of 11 output classes. Figure 10(a) depicts the temporally coded spatial response for each output class. Despite its latency benefit, this type of coding provides little basis to discriminate between spatially similar responses, as observed for items 3, 10 and 11. Figure 10(b), on the other hand, provides a distinct rate code corresponding to the dominant output class, alongside the spatial response. This allows even spatially similar responses to be distinguished from one another. Clearly observed in this figure is the initial simultaneous spiking of the response neurons to a specific input, and their subsequent feedback activity causing a more widespread activation of the neurons around them.
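A simple readout consistent with Fig. 10 is to classify by the dominant output class in the rate map and use first-spike times only to break ties between spatially similar responses; the grid shape and the tie-breaking rule in this sketch are our assumptions.

```python
import numpy as np

def classify(rate_map, first_spike_map):
    """rate_map: (n, n) spike counts per grid neuron; first_spike_map: (n, n)
    first-spike times in seconds (np.inf where a neuron stayed silent).
    The winner is the neuron with the highest rate; among equal-rate winners,
    the earliest spiker wins, combining the two codes of Fig. 10."""
    winners = rate_map == rate_map.max()
    masked = np.where(winners, first_spike_map, np.inf)
    return np.unravel_index(np.argmin(masked), rate_map.shape)

rates = np.array([[3, 9, 9], [1, 0, 2], [0, 0, 0]])
times = np.array([[5e-3, 2e-3, 4e-3], [6e-3, np.inf, 7e-3], [np.inf] * 3])
print(classify(rates, times))   # (0, 1): highest rate, earliest among ties
```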

IV. CONCLUSIONS

In this paper, we explored a practical design trajectory for a spiking neural network-based SOM applied to the classification of bioelectric-timescale signals. We examined the architectural trade-offs in the selection of neuron models, learning rules and synapse architectures, and highlighted the role of non-linearity, synaptic weight distributions and the configuration of critical parameters on stability. Furthermore, we provided insights into common performance pathologies, and detailed strategies to mitigate them during the configuration and training of the architecture. These insights were developed on the basis of high-level architectural explorations and the subsequent full-custom implementation of components.

REFERENCES

[1] W. Maass, "Networks of spiking neurons: The third generation of neural network models," Neural Networks, vol. 10, no. 9, 1997.
[2] G. Indiveri, B. Linares-Barranco, T. J. Hamilton, A. van Schaik, R. Etienne-Cummings, T. Delbruck, S.-C. Liu, P. Dudek, P. Häfliger, S. Renaud et al., "Neuromorphic silicon neuron circuits," Frontiers in Neuroscience, vol. 5, p. 73, 2011.
[3] C. Bartolozzi and G. Indiveri, "Synaptic dynamics in analog VLSI," Neural Computation, vol. 19, no. 10, 2007.
[4] M. R. Azghadi, N. Iannella, S. F. Al-Sarawi, G. Indiveri, and D. Abbott, "Spike-based synaptic plasticity in silicon: design, implementation, application, and challenges," Proceedings of the IEEE, vol. 102, no. 5, 2014.
[5] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, 1952.
[6] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002, Section 4.1.1.
[7] E. Izhikevich, "Simple model of spiking neurons," IEEE Transactions on Neural Networks, vol. 14, no. 6, 2003.
[8] R. Wang et al., "A compact aVLSI conductance-based silicon neuron," in IEEE Biomedical Circuits and Systems Conference (BioCAS), Atlanta, GA, October 2015, pp. 1–4.
[9] J. Wijekoon and P. Dudek, "Compact silicon neuron circuit with spiking and bursting behavior," Neural Networks, vol. 21, 2008.
[10] N. Qiao et al., "A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128k synapses," Frontiers in Neuroscience, vol. 9, no. 141, 2015.
[11] G. Q. Bi and M. M. Poo, "Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type," The Journal of Neuroscience, vol. 18, no. 24, December 1998.
[12] J. M. Brader, W. Senn, and S. Fusi, "Learning real-world stimuli in a neural network with spike-driven synaptic dynamics," Neural Computation, vol. 19, 2007.
[13] J.-P. Pfister and W. Gerstner, "Triplets of spikes in a model of spike timing-dependent plasticity," The Journal of Neuroscience, vol. 26, no. 38, September 2006.
[14] S. Song, K. Miller, and L. Abbott, "Competitive Hebbian learning through spike-timing-dependent synaptic plasticity," Nature Neuroscience, 2000.
[15] M. Oster and S.-C. Liu, "A winner-take-all spiking network with spiking inputs," in Proceedings of the 2004 11th IEEE International Conference on Electronics, Circuits and Systems (ICECS 2004), December 2004.
[16] W. Maass, T. Natschläger, and H. Markram, "Real-time computing without stable states: A new framework for neural computation based on perturbations," Neural Computation, 2002.
[17] T. Kohonen, "Self-organized formation of topologically correct feature maps," Biological Cybernetics, vol. 43, no. 1, 1982.
[18] T. Rumbell, S. L. Denham, and T. Wennekers, "A spiking self-organizing map combining STDP, oscillations, and continuous learning," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 5, May 2014.
[19] B. Ruf and M. Schmitt, Self-Organizing Maps of Spiking Neurons Using Temporal Coding. Boston, MA: Springer US, 1998.
[20] E. M. Izhikevich, "Polychronization: computation with spikes," Neural Computation, vol. 18, no. 2, 2006.
[21] A. van Schaik, "Building blocks for electronic spiking neural networks," Neural Networks, vol. 14, 2001.
[22] S. Mihalas and E. Niebur, "A generalized linear integrate-and-fire neural model produces diverse spiking behaviors," Neural Computation, vol. 21, no. 3, March 2009.
[23] D. Hebb, The Organization of Behavior. New York: Wiley & Sons, 1949.
[24] M. R. Azghadi, S. Al-Sarawi, D. Abbott, and N. Iannella, "A neuromorphic VLSI design for spike timing and rate based synaptic plasticity," Neural Networks, vol. 45, pp. 70–82, 2013.
[25] S. Moradi and G. Indiveri, "An event-based neural network architecture with an asynchronous programmable synaptic memory," IEEE Transactions on Biomedical Circuits and Systems, vol. 8, no. 1, pp. 98–107, 2014.
[26] P. Hasler, "Continuous-time feedback in floating-gate MOS circuits," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 48, no. 1, 2001.
[27] G. Indiveri, E. Chicca, and R. Douglas, "A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity," IEEE Transactions on Neural Networks, vol. 17, no. 1, pp. 211–221, January 2006.
[28] S. Mitra, S. Fusi, and G. Indiveri, "Real-time classification of complex patterns using spike-based learning in neuromorphic VLSI," IEEE Transactions on Biomedical Circuits and Systems, vol. 3, no. 1, 2009.
[29] E. Chicca, D. Badoni, V. Dante, M. D'Andreagiovanni, G. Salina, L. Carota, S. Fusi, and P. Del Giudice, "A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory," IEEE Transactions on Neural Networks, vol. 14, no. 5, pp. 1297–1307, 2003.
[30] D. J. Amit and S. Fusi, "Learning in neural networks with material synapses," Neural Computation, vol. 6, no. 5, 1994.
[31] E. Geoffrois, J.-M. Edeline, and J.-F. Vibert, Learning by Delay Modifications. Boston, MA: Springer US, 1994.
[32] K. Hornik, "Approximation capabilities of multilayer feedforward networks," Neural Networks, vol. 4, no. 2, 1991.
[33] A. Morrison, M. Diesmann, and W. Gerstner, "Phenomenological models of synaptic plasticity based on spike timing," Biological Cybernetics, vol. 98, no. 6, 2008.


More information

Designing Behaviour in Bio-inspired Robots Using Associative Topologies of Spiking-Neural-Networks

Designing Behaviour in Bio-inspired Robots Using Associative Topologies of Spiking-Neural-Networks Designing Behaviour in Bio-inspired Robots Using Associative Topologies of Spiking-Neural-Networks arxiv:1509.07035v2 [cs.ro] 24 Sep 2015 Cristian Jimenez-Romero The Open University MK7 6AA, United Kingdom

More information

Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway

Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway Richard Kempter* Institut fur Theoretische Physik Physik-Department der TU Munchen D-85748 Garching bei Munchen J. Leo van

More information

Intelligent Control Systems

Intelligent Control Systems Lecture Notes in 4 th Class in the Control and Systems Engineering Department University of Technology CCE-CN432 Edited By: Dr. Mohammed Y. Hassan, Ph. D. Fourth Year. CCE-CN432 Syllabus Theoretical: 2

More information

Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons

Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons Peter Osseward, Uri Magaram Department of Neuroscience University of California, San Diego La Jolla, CA 92092 possewar@ucsd.edu

More information

The control of spiking by synaptic input in striatal and pallidal neurons

The control of spiking by synaptic input in striatal and pallidal neurons The control of spiking by synaptic input in striatal and pallidal neurons Dieter Jaeger Department of Biology, Emory University, Atlanta, GA 30322 Key words: Abstract: rat, slice, whole cell, dynamic current

More information

Information Processing During Transient Responses in the Crayfish Visual System

Information Processing During Transient Responses in the Crayfish Visual System Information Processing During Transient Responses in the Crayfish Visual System Christopher J. Rozell, Don. H. Johnson and Raymon M. Glantz Department of Electrical & Computer Engineering Department of

More information

Temporally Asymmetric Hebbian Learning, Spike Timing and Neuronal Response Variability

Temporally Asymmetric Hebbian Learning, Spike Timing and Neuronal Response Variability Temporally Asymmetric Hebbian Learning, Spike Timing and Neuronal Response Variability L.F. Abbott and Sen Song Volen Center and Department of Biology Brandeis University Waltham MA 02454 Abstract Recent

More information

Cell Responses in V4 Sparse Distributed Representation

Cell Responses in V4 Sparse Distributed Representation Part 4B: Real Neurons Functions of Layers Input layer 4 from sensation or other areas 3. Neocortical Dynamics Hidden layers 2 & 3 Output layers 5 & 6 to motor systems or other areas 1 2 Hierarchical Categorical

More information

AND BIOMEDICAL SYSTEMS Rahul Sarpeshkar

AND BIOMEDICAL SYSTEMS Rahul Sarpeshkar ULTRA-LOW-POWER LOW BIO-INSPIRED INSPIRED AND BIOMEDICAL SYSTEMS Rahul Sarpeshkar Research Lab of Electronics Massachusetts Institute of Technology Electrical Engineering and Computer Science FOE Talk

More information

Racing to Learn: Statistical Inference and Learning in a Single Spiking Neuron with Adaptive Kernels

Racing to Learn: Statistical Inference and Learning in a Single Spiking Neuron with Adaptive Kernels Racing to Learn: Statistical Inference and Learning in a Single Spiking Neuron with Adaptive Kernels Saeed Afshar 1, Libin George 2, Jonathan Tapson 1, André van Schaik 1, Tara Julia Hamilton 1,2 1 Bioelectronics

More information

Learning real world stimuli in a neural network with spike-driven synaptic dynamics

Learning real world stimuli in a neural network with spike-driven synaptic dynamics Learning real world stimuli in a neural network with spike-driven synaptic dynamics Joseph M. Brader, Walter Senn, Stefano Fusi Institute of Physiology, University of Bern, Bühlplatz 5, 314 Bern Abstract

More information

Temporal Adaptation. In a Silicon Auditory Nerve. John Lazzaro. CS Division UC Berkeley 571 Evans Hall Berkeley, CA

Temporal Adaptation. In a Silicon Auditory Nerve. John Lazzaro. CS Division UC Berkeley 571 Evans Hall Berkeley, CA Temporal Adaptation In a Silicon Auditory Nerve John Lazzaro CS Division UC Berkeley 571 Evans Hall Berkeley, CA 94720 Abstract Many auditory theorists consider the temporal adaptation of the auditory

More information

Biomimetic Cortical Nanocircuits: The BioRC Project. Alice C. Parker NSF Emerging Models of Technology Meeting July 24, 2008

Biomimetic Cortical Nanocircuits: The BioRC Project. Alice C. Parker NSF Emerging Models of Technology Meeting July 24, 2008 Biomimetic Cortical Nanocircuits: The BioRC Project Alice C. Parker NSF Emerging Models of Technology Meeting July 24, 2008 The BioRC Project Team and Support Alice Parker, PI and Chongwu Zhou, Co-PI Graduate

More information

Spiking Neural Networks: Neuron Models, Plasticity, and Graph Applications

Spiking Neural Networks: Neuron Models, Plasticity, and Graph Applications Virginia Commonwealth University VCU Scholars Compass Theses and Dissertations Graduate School 2015 Spiking Neural Networks: Neuron Models, Plasticity, and Graph Applications Shaun Donachy Virginia Commonwealth

More information

Evolution of Spiking Neural Controllers for Autonomous Vision-Based Robots

Evolution of Spiking Neural Controllers for Autonomous Vision-Based Robots Evolution of Spiking Neural Controllers for Autonomous Vision-Based Robots Dario Floreano and Claudio Mattiussi Evolutionary & Adaptive Systems, Institute of Robotics Swiss Federal Institute of Technology,

More information

Toward Silicon-Based Cognitive Neuromorphic ICs A Survey

Toward Silicon-Based Cognitive Neuromorphic ICs A Survey Toward Silicon-Based Cognitive Neuromorphic ICs A Survey Georgios Volanis, Angelos Antonopoulos, and Yiorgos Makris The University of Texas at Dallas Alkis A. Hatzopoulos Aristotle University of Thessaloniki

More information

Electrophysiology. General Neurophysiology. Action Potentials

Electrophysiology. General Neurophysiology. Action Potentials 5 Electrophysiology Cochlear implants should aim to reproduce the coding of sound in the auditory system as closely as possible, for best sound perception. The cochlear implant is in part the result of

More information

Spike Timing-Dependent Plasticity in the Address Domain

Spike Timing-Dependent Plasticity in the Address Domain Spike Timing-Dependent Plasticity in the Address Domain R. Jacob Vogelstein, Francesco Tenore, Ralf Philipp, Miriam S. Adlerstein, David H. Goldberg and Gert Cauwenberghs Department of Biomedical Engineering

More information

A software hardware selective attention system

A software hardware selective attention system Neurocomputing 58 60 (2004) 647 653 www.elsevier.com/locate/neucom A software hardware selective attention system L. Carota a;, G. Indiveri b, V. Dante c a Physics Department, L Aquila University, Italy

More information

Computing with Spikes in Recurrent Neural Networks

Computing with Spikes in Recurrent Neural Networks Computing with Spikes in Recurrent Neural Networks Dezhe Jin Department of Physics The Pennsylvania State University Presented at ICS Seminar Course, Penn State Jan 9, 2006 Outline Introduction Neurons,

More information

Neural response time integration subserves. perceptual decisions - K-F Wong and X-J Wang s. reduced model

Neural response time integration subserves. perceptual decisions - K-F Wong and X-J Wang s. reduced model Neural response time integration subserves perceptual decisions - K-F Wong and X-J Wang s reduced model Chris Ayers and Narayanan Krishnamurthy December 15, 2008 Abstract A neural network describing the

More information

Analog-digital simulations of full conductance-based networks of spiking neurons with spike timing dependent plasticity

Analog-digital simulations of full conductance-based networks of spiking neurons with spike timing dependent plasticity Network: Computation in Neural Systems September 2006; 17(3): 211 233 Analog-digital simulations of full conductance-based networks of spiking neurons with spike timing dependent plasticity QUAN ZOU, 1

More information

A SUPERVISED LEARNING

A SUPERVISED LEARNING R E S E A R C H R E P O R T I D I A P A SUPERVISED LEARNING APPROACH BASED ON STDP AND POLYCHRONIZATION IN SPIKING NEURON NETWORKS Hélène Paugam-Moisy 1,2, Régis Martinez 1 and Samy Bengio 2 IDIAP RR 6-54

More information

Effects of Inhibitory Synaptic Current Parameters on Thalamocortical Oscillations

Effects of Inhibitory Synaptic Current Parameters on Thalamocortical Oscillations Effects of Inhibitory Synaptic Current Parameters on Thalamocortical Oscillations 1 2 3 4 5 Scott Cole Richard Gao Neurosciences Graduate Program Department of Cognitive Science University of California,

More information

Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli

Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli Kevin A. Archie Neuroscience Program University of Southern California Los Angeles, CA 90089-2520

More information

Design of Low-Power CMOS Cell Structures Using Subthreshold Conduction Region

Design of Low-Power CMOS Cell Structures Using Subthreshold Conduction Region International Journal of Scientific & Engineering Research, Volume 2, Issue 2, February-2011 1 Design of Low-Power CMOS Cell Structures Using Subthreshold Conduction Region Vishal Sharma, Sanjay Kumar

More information

A Model of Spike-Timing Dependent Plasticity: One or Two Coincidence Detectors?

A Model of Spike-Timing Dependent Plasticity: One or Two Coincidence Detectors? RAPID COMMUNICATION J Neurophysiol 88: 507 513, 2002; 10.1152/jn.00909.2001. A Model of Spike-Timing Dependent Plasticity: One or Two Coincidence Detectors? UMA R. KARMARKAR AND DEAN V. BUONOMANO Departments

More information

FIR filter bank design for Audiogram Matching

FIR filter bank design for Audiogram Matching FIR filter bank design for Audiogram Matching Shobhit Kumar Nema, Mr. Amit Pathak,Professor M.Tech, Digital communication,srist,jabalpur,india, shobhit.nema@gmail.com Dept.of Electronics & communication,srist,jabalpur,india,

More information

Lateral Inhibition Explains Savings in Conditioning and Extinction

Lateral Inhibition Explains Savings in Conditioning and Extinction Lateral Inhibition Explains Savings in Conditioning and Extinction Ashish Gupta & David C. Noelle ({ashish.gupta, david.noelle}@vanderbilt.edu) Department of Electrical Engineering and Computer Science

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 6: Single neuron models Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis I 5 Data analysis II 6 Single

More information

Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification and Spike-Shifting

Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification and Spike-Shifting (Accepted for publication in Neural Computation) Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification and Spike-Shifting Filip Ponulak 1,2, Andrzej Kasiński 1 1

More information

Signal Processing by Multiplexing and Demultiplexing in Neurons

Signal Processing by Multiplexing and Demultiplexing in Neurons Signal Processing by Multiplexing and Demultiplexing in Neurons DavidC. Tam Division of Neuroscience Baylor College of Medicine Houston, TX 77030 dtam@next-cns.neusc.bcm.tmc.edu Abstract Signal processing

More information

5. Low-Power OpAmps. Francesc Serra Graells

5. Low-Power OpAmps. Francesc Serra Graells Intro Subthreshold Class-AB Rail-to-Rail Inverter-Based 1/43 5. Low-Power OpAmps Francesc Serra Graells francesc.serra.graells@uab.cat Departament de Microelectrònica i Sistemes Electrònics Universitat

More information

Spike-timing-dependent synaptic plasticity can form zero lag links for cortical oscillations.

Spike-timing-dependent synaptic plasticity can form zero lag links for cortical oscillations. Neurocomputing 58 6 (24) 85 9 www.elsevier.com/locate/neucom Spike-timing-dependent synaptic plasticity can form zero lag links for cortical oscillations. Andreas Knoblauch a;, Friedrich T. Sommer b a

More information

Imperfect Synapses in Artificial Spiking Neural Networks

Imperfect Synapses in Artificial Spiking Neural Networks Imperfect Synapses in Artificial Spiking Neural Networks A thesis submitted in partial fulfilment of the requirements for the Degree of Master of Computer Science by Hayden Jackson University of Canterbury

More information

Mixed Signal VLSI Circuit Implementation of the Cortical Microcircuit Models

Mixed Signal VLSI Circuit Implementation of the Cortical Microcircuit Models Mixed Signal VLSI Circuit Implementation of the Cortical Microcircuit Models A thesis submitted to the University of Manchester for the degree of Doctor of Philosophy in the Faculty of Engineering and

More information

Different inhibitory effects by dopaminergic modulation and global suppression of activity

Different inhibitory effects by dopaminergic modulation and global suppression of activity Different inhibitory effects by dopaminergic modulation and global suppression of activity Takuji Hayashi Department of Applied Physics Tokyo University of Science Osamu Araki Department of Applied Physics

More information