Developmental Self-Construction and -Configuration of Functional Neocortical Neuronal Networks


Roman Bauer 1,2*, Frédéric Zubler 1,3, Sabina Pfister 1, Andreas Hauri 1, Michael Pfeiffer 1, Dylan R. Muir 1,4, Rodney J. Douglas 1

1 Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland; 2 School of Computing Science, Newcastle University, Newcastle upon Tyne, United Kingdom; 3 Department of Neurology, Inselspital Bern, Bern University Hospital, University of Bern, Bern, Switzerland; 4 Biozentrum, University of Basel, Basel, Switzerland

Abstract

The prenatal development of neural circuits must provide sufficient configuration to support at least a set of core postnatal behaviors. Although knowledge of various genetic and cellular aspects of development is accumulating rapidly, there is less systematic understanding of how these various processes play together in order to construct such functional networks. Here we make some steps toward such understanding by demonstrating through detailed simulations how a competitive co-operative ("winner-take-all", WTA) network architecture can arise by development from a single precursor cell. This precursor is granted a simplified gene regulatory network that directs cell mitosis, differentiation, migration, neurite outgrowth and synaptogenesis. Once initial axonal connection patterns are established, their synaptic weights undergo homeostatic unsupervised learning that is shaped by wave-like input patterns. We demonstrate how this autonomous, genetically directed developmental sequence can give rise to self-calibrated WTA networks, and compare our simulation results with biological data.

Citation: Bauer R, Zubler F, Pfister S, Hauri A, Pfeiffer M, et al. (2014) Developmental Self-Construction and -Configuration of Functional Neocortical Neuronal Networks. PLoS Comput Biol 10(12).

Editor: Olaf Sporns, Indiana University, United States of America

Received July 24, 2014; Accepted October 9, 2014; Published December 4, 2014

Copyright: © 2014 Bauer et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The authors confirm that all data underlying the findings are fully available without restriction. All relevant data are within the paper and its Supporting Information files.

Funding: This work was supported by the EU grant SECO. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

* roman.bauer@newcastle.ac.uk

Introduction

In this paper we address the question of how progenitor cells of the neocortical subplate can give rise to large functional neuronal sub-networks in the developed cortex. We choose winner-take-all (WTA) [1,2] connectivity as the target of this self-construction and -configuration process because these sub-networks are consistent with the observed physiology [3,4] and connectivity [5,6] of neurons in the superficial layers of neocortex, and because they are powerful elements of computation [7,8]. WTA networks actively select the strongest of multiple input signals while suppressing the weaker ones. This fundamental characteristic is applicable in various contexts, and so many studies modeling cortical function are based on WTA modules [8–15].
The idealized WTA network architecture is shown in Fig. 1A. Excitatory neurons are recurrently connected to each other and also to one or more inhibitory neurons, which project back to the excitatory neurons. This architecture does not in itself guarantee WTA functionality. The degree of recurrent excitation, excitation of inhibitory neurons, and inhibition of excitatory neurons must all lie within preferred ranges [8] in order for the network to exhibit effective WTA behavior. The appropriate neural architecture must be grown, and then the weights of the many synapses must be tuned to fall within the necessary ranges. Such neuronal growth and synapse formation are subject to variability (Fig. 1B,C), for which the homeostatic learning mechanisms must compensate.

The behavior of a WTA network depends on the ratios of the effects of its various excitatory and inhibitory connection paths. In its high excitatory gain regime a WTA network will report only the strongest of its feed-forward inputs, and suppress the remainder of the excitatory neurons, which are weakly activated. In a more relaxed regime (soft-WTA, sWTA) the network will return a pattern of winners that best conforms to its input. In this sense the sWTA performs a pattern-based signal restoration, which is a crucial mechanism for resisting degradation of processing in neural systems across their many computational steps. In this paper we choose to have the developmental process grow and tune these sWTA networks.

Our goal is to demonstrate how plausible genetic developmental mechanisms can combine with homeostatic synaptic tuning to bring networks of neurons into sWTA functionality (Fig. 1). Our demonstration is based on simulations of the development and growth of neural tissue in 3D physical space using Cx3D [16]. The simulation begins with a single precursor cell. This cell encodes gene-like instructions that are sequentially and conditionally expressed through a gene regulatory network (GRN). By controlling the expression of different genes, this GRN gives rise to pools of differentiated excitatory and inhibitory neurons. These neurons, which are placed randomly in 3D space, extend axons and dendrites and make synapses according to a proximity rule. This process results in a synaptically connected network that matches experimentally obtained connectivity statistics well.

Author Summary

Models of learning in artificial neural networks generally assume that the neurons and approximate network are given, and then learning tunes the synaptic weights. By contrast, we address the question of how an entire functional neuronal network containing many differentiated neurons and connections can develop from only a single progenitor cell. We chose a winner-take-all network as the developmental target, because it is a computationally powerful circuit, and a candidate motif of neocortical networks. The key aspect of this challenge is that the developmental mechanisms must be locally autonomous, as in biology: they cannot depend on global knowledge or supervision. We have explored this developmental process by simulating in physical detail the fundamental biological behaviors, such as cell proliferation, neurite growth and synapse formation, that give rise to the structural connectivity observed in the superficial layers of the neocortex. These differentiated, approximately connected neurons then adapt their synaptic weights homeostatically to obtain a uniform electrical signaling activity before going on to organize themselves according to the fundamental correlations embedded in a noisy wave-like input signal. In this way the precursor expands itself through development and unsupervised learning into winner-take-all functionality and orientation selectivity in a biologically plausible manner.

During this neurite outgrowth, the synaptic weights calibrate themselves homeostatically using experimentally established synaptic scaling [17] and BCM learning rules [18]. This synaptic learning is conditioned by coarsely patterned neuronal activity similar to that of retinal waves or cortico-thalamic loops [19–23]. We compare these grown networks with biological data, and demonstrate WTA functionality. This comparison is also done in the context of cortical functionality, such as orientation selectivity. Importantly, the overall behavior stems solely from local processes, which are instantiated from internally encoded developmental primitives [24]. Hence, we provide a model that explains the developmental self-construction and -configuration of a neocortical WTA network in a biologically plausible way.

Results

Development of Differentiated Neurons Based on a Gene Regulatory Network

Cell proliferation and differentiation into different cell types is specified implicitly in the genetic code of a single precursor cell. This code determines how a given number of excitatory and inhibitory neurons is produced. During the unfolding of this code, each cell contains the same genetic code, but because of its local environment can follow different developmental trajectories.

We model the molecular mechanisms that regulate cell differentiation by a dynamical gene regulatory network (GRN). This GRN is defined by a set of 5 variables (G0, G1, G2, GE, and GI) that represent substance concentrations, where each substance is the expression of a gene. Importantly, all cells have their own instantiations of these variables. The secretion, interaction, and decay of substances are regulated by the laws of kinetics. The differential equations specifying these dynamics are shown in Methods. During the evolution of the substance concentrations, cell growth and division are also simulated.
Figure 1. Winner-take-all architecture. (A) Architecture of an idealised winner-take-all network. Several excitatory neurons (red) excite a single shared inhibitory neuron, or a shared population of inhibitory neurons (blue). Each excitatory neuron receives inhibitory feedback in proportion to the average activity of the excitatory population. (B) The WTA architecture is embedded in the field of recurrent connections between a population of excitatory and inhibitory neurons. (C) Once the WTA architecture has formed, coarsely structured synaptic input drives synaptic refinement of the recurrent connections within the network.

The cell cycle time and model parameters of the differential equations are fixed and independent of the substance dynamics. Initially, all concentrations are set to zero. At this stage, only the starter substance G0 is produced, which reaches high concentration levels in the first time step and triggers the production of a second gene, G1. G1 is produced according to a prespecified intrinsic production constant a. This value determines how many cell divisions will occur until the concentration of G1 reaches a value of 0.99. When this threshold is reached, a probabilistic decision is induced: GE or GI, responsible for activating the excitatory and inhibitory cell phenotypes, are triggered with probability p_E = 0.8 or p_I = 0.2, respectively.

Such a GRN configuration would enable us to generate 2^n cells, where n is the number of symmetric divisions. However, the target number of cells might not be a power of 2. Therefore, we have introduced a second gene, G2, that is (probabilistically) activated by high concentrations of G1, and that leads to a second round of symmetric division. As for G1, G2 activates GE or GI in a probabilistic manner. The probability of entering this secondary cell cycle is given by p_2, which is computed based on the target number of cells. The evolution of the GRN across cell types is depicted in Fig. 2. By setting the production rate constant a of gene G1 and the probabilistic activation of G2, we can control the final number of cells produced. The equations for computing the probabilities of differentiating into neurons by G1 induction (p_1) or by G2 induction (p_2), depending on the target number of cells, are shown in Methods. Overall, the GRN is designed so that a desired total number of cells is reached, and so that the distribution of excitatory vs. inhibitory cells follows the approximate 4:1 ratio observed in cortex [25–27] (S1 Figure).
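To make this division-and-differentiation logic concrete, the following is a minimal sketch of the lineage process described above. The saturating form of the G1 production term, the numerical values of a (ALPHA) and p_2 (P2), and the Cell bookkeeping are hypothetical stand-ins; the actual kinetic equations are given in Methods.

```python
import random

P_E = 0.8            # probability of differentiating into an excitatory neuron (GE)
G1_THRESHOLD = 0.99  # G1 concentration at which differentiation is triggered
ALPHA = 0.5          # hypothetical intrinsic production constant a for G1
P2 = 0.3             # hypothetical probability p_2 of a second division via G2

class Cell:
    def __init__(self, g1=0.0):
        self.g1 = g1     # every cell carries its own instance of the GRN state
        self.type = None

def differentiate(cell):
    # Probabilistic activation of GE or GI.
    cell.type = 'E' if random.random() < P_E else 'I'

def develop(cell, cells):
    # Symmetric divisions continue while G1 is below threshold.
    while cell.g1 < G1_THRESHOLD:
        cell.g1 += ALPHA * (1.0 - cell.g1)  # hypothetical saturating G1 production
        sibling = Cell(cell.g1)             # division: both daughters inherit the GRN state
        cells.append(sibling)
        develop(sibling, cells)
    # G2 may trigger one extra round of symmetric division before differentiation.
    if random.random() < P2:
        sibling = Cell(G1_THRESHOLD)
        cells.append(sibling)
        differentiate(sibling)
    differentiate(cell)

cells = [Cell()]
develop(cells[0], cells)
print(len(cells), sum(c.type == 'E' for c in cells) / len(cells))  # roughly 4:1 E:I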

Figure 2. Gene Regulatory Network. (A) Schematic representation of the GRN, composed of five interacting genes that give rise to excitatory and inhibitory neurons. The identity of a neuron is determined by the genes GE and GI for excitatory or inhibitory neurons, respectively. Arrows indicate a positive effect on gene expression. (B) Lineage tree. Nodes indicate cells; boxes indicate gene expression patterns. G0 triggers the expression of G1, which characterizes the undifferentiated state of progenitor cells. After a series of symmetric divisions, G1 reaches a concentration threshold. According to fixed probabilities, G1 can then activate the differentiation toward excitatory (red) or inhibitory (blue) neurons. Alternatively, a small proportion of cells probabilistically undergoes a second round of cell division and activates gene G2, which again promotes the differentiation toward excitatory or inhibitory neurons by the expression of GE or GI. The probabilistic activation of inhibitory or excitatory genes is a simplification, but guarantees the production of a homogeneously mixed population of neurons.

Fig. 3(A–D) shows the evolution of an initial cell giving rise to a number of cells, which eventually grow out neurites based solely on their genetic encoding.

Neurite Growth and Synapse Formation

Neurite growth and arborization is caused by growth cone traction and bifurcation. The growth cone is able to sense the presence and gradient of morphogens and other signal molecules, and also to actively explore the local extracellular space. Importantly, neurite growth is steered via a growth cone model instantiated at the tip of the axon or dendrite, and so is a local process.

Diffusible signal molecules are secreted by the cell somata. In these simulations excitatory and inhibitory neurons secrete two characteristic signals that enable excitatory and inhibitory axons to find inhibitory and excitatory neurons, respectively. The axonal growth cones initially grow out of the somata in random directions. However, they retract whenever the concentration they sense falls below a threshold. The retraction stops and growth recommences when a second, higher threshold is exceeded. In this way the axons remain close to substance-secreting sources. Retraction is an efficient strategy for establishing connections, because axons grow only into regions containing a potential target, and it is commonly observed in developing neurons [28–31]. A video of a developing neural network with axonal retraction (simulated in Cx3D) is included in the Supporting Information (S1 Video) and on YouTube.

Axons deploy boutons. Whenever these boutons are sufficiently close to a potential post-synaptic site on a dendrite, a synapse is created between them. Consequently, the final synaptic network connectivity depends on the nearly stochastic arrangement of regions of spatial proximity of the outgrowing axons and dendrites. We adapted the parameters of the neurite outgrowth (see Table 1) so that the connectivity of the simulated neuronal growth matched our experimental observations in layers II/III of cat visual cortex [5,32] (see Fig. 4A). Overall, we found that connectivity was robust to reasonable variation of the growth parameters and the random location of somata.
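The two-threshold retraction rule can be sketched as a simple hysteresis on the locally sensed concentration. The threshold and speed values below are taken from Table 1 (excitatory axons); the GrowthCone object and the Gaussian concentration field are hypothetical stand-ins for Cx3D's physics engine.

```python
from dataclasses import dataclass
import numpy as np

H1, H2 = 1e-8, 0.036             # retraction / resume thresholds (Table 1)
V_GROW, V_RETRACT = 100.0, 5.0   # growth and retraction speeds (Table 1)

@dataclass
class GrowthCone:
    position: np.ndarray
    direction: np.ndarray
    retracting: bool = False

def concentration_at(pos, source=np.zeros(3)):
    # Hypothetical stand-in for the diffused guidance cue around a target soma.
    return np.exp(-np.linalg.norm(pos - source))

def step(cone, dt=0.01):
    c = concentration_at(cone.position)
    if cone.retracting and c > H2:        # second, higher threshold: growth resumes
        cone.retracting = False
    elif not cone.retracting and c < H1:  # cue lost: retract toward the source region
        cone.retracting = True
    v = -V_RETRACT if cone.retracting else V_GROW
    cone.position = cone.position + v * dt * cone.direction

cone = GrowthCone(np.zeros(3), np.array([1.0, 0.0, 0.0]))
for _ in range(100):
    step(cone)
```

Because the resume threshold H2 is far above the retraction threshold H1, the cone overshoots outward, retracts slowly once the cue becomes undetectable, and settles near the cue-secreting source, which is the behavior the text describes.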
Figure 3. Developmental process for building a competitive network. (A) A single precursor cell contains the genetic code specifying the entire developmental process. (B) The precursor cell first undergoes repeated division to increase the pool of neuronal precursors (black). (C) Precursor neurons then differentiate into excitatory and inhibitory cell classes. (D) Neurite outgrowth begins to provide a scaffold for synaptic connections. (E) A network of differentiated neurons (grey) after neurite outgrowth has finished. For better visualization, examples of excitatory and inhibitory neurons are colored in red and blue, respectively. (F) Synapses (black rectangles) can form at appositions between axons and dendrites.

The absolute numbers of synapses simulated here are smaller than observed in biology, due to constraints on computational resources. However, there is no inherent restriction on scalability using our methods, and so we expect that realistic numbers of cells and synapses could, if necessary, be simulated using supercomputers.

Fig. 4B shows the distribution of the percentage of excitatory input synapses to the neurons, across the whole population. The average percentage of excitatory inputs to a neuron in this network is 84%, which is in good agreement with the experimental data. This result is consistent with observations across species and cortical areas that some 15% of all synapses are GABAergic [5,33–35], irrespective of neuronal densities. Importantly, this good agreement arises naturally out of the growth model, and did not require extensive tuning of the model parameters.

Electrophysiology

The self-configuration of electrophysiological processing depends on the tuning of network synaptic weights and neuronal activity. In order to simulate this aspect of the developing networks, we must also model the electrical activity of neurons.

Table 1. Parameters for simulating axonal and dendritic growth. Values are given as Ex. Axon / Inh. Axon / Ex. Dendrite / Inh. Dendrite.

d_minimal (minimal diameter): 0.2 / 0.2 / 0.3 / 0.3
r_move (diameter reduction when moving): 0.004 / 0.012 / 0.02 / 0.042
r_fork (diameter reduction when bifurcating): 0.12 / 0.105 / 0.14 / 0.12
p_base (baseline probability of bifurcation): 0.05 / 0.08 / 0.04 / 0.05
p_substance (substance-dependent probability of bifurcation): 0.005 / 0.05 / 0.0 / 0.0
speed of growth: 100 / 100 / 100 / 100
speed of retraction: 5 / 5 / (no retraction) / (no retraction)
h_1 (concentration threshold triggering retraction): 1e-8 / 1e-8 / (no retraction) / (no retraction)
h_2 (concentration threshold stopping retraction): 0.036 / 0.036 / (no retraction) / (no retraction)
w_prev (weight of previous growth direction): 0.75 / 0.75 / 0.75 / 0.75
w_noise (weight of random direction): 0.25 / 0.25 / 0.25 / 0.25
s_max (neurite discretization size): 7 / 7 / 7 / 7

Growth parameters depend on the type of the neurite as well as the neuron type. In order to qualitatively match biological observations, we modeled axons to be longer than dendrites, and inhibitory (basket) cells to have a smaller spatial extent than excitatory neurons [56]. In our model, axons direct their growth based on extracellular substance concentrations.

However, the time scales of morphological growth and electrophysiological dynamics differ by many orders of magnitude, and this difference makes for substantial technical problems in simulation. For simplicity, and to minimize computational demands, we have used a rate-based approach to modeling neuronal activity. We approximate the neuronal activation by a linear-threshold function [36] that describes the output action potential discharge rate of the neuron as a function of its input. This type of neuronal activation function is a good approximation to experimental observations of the adapted current-discharge relation of neurons [37,38] and has been used in a wide range of modeling work [8,39–41]. The linear-threshold activation dynamics are:

\tau \frac{dx_i}{dt} = -x_i + \max\left( s_i + I_i + \sum_j w_{ij} x_j - T,\ 0 \right)   (1)

where x_i denotes the firing rate of the neuron with index i, τ is the neuronal time constant, s_i is the spontaneous activity, I_i is the feed-forward input to neuron i, w_ij is the weight of the connection from neuron j to neuron i (positive or negative, depending on the presynaptic neuron's type), and T is the neuron's threshold. For simplicity, τ and T are set to 1 and 0. Exploratory simulations where τ_inh ≠ τ_ex yielded very similar results.

For computational efficiency, the electrophysiology simulator is implemented as a global process that acts on the total weight matrix of the neuronal network, rather than performing these frequent computations locally. We chose this global methodology because it leads to a significant speed-up compared with a local version that had been used initially. The total weight matrix is obtained by summing the weights of all synapses in the Cx3D simulation. Using these connection weights, neuronal activity is computed as described in Eq. 1. Connection weight changes resulting from the learning and adaptation (explained below) are computed based on this summed weight matrix and the activities of the two respective connected neurons, which are saved at each electrophysiology time step.
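As an illustration of Eq. 1, a minimal sketch that integrates the rate dynamics over a summed weight matrix. The toy network, its weights, inputs and integration step are hypothetical; τ = 1 and T = 0 as in the text.

```python
import numpy as np

def simulate_rates(W, I, s=0.0, tau=1.0, T=0.0, dt=0.01, steps=2000):
    """Euler integration of Eq. 1: tau dx/dt = -x + max(s + I + W x - T, 0).

    W[i, j] is the summed connection weight from neuron j to neuron i
    (positive for excitatory, negative for inhibitory presynaptic neurons).
    """
    x = np.zeros(W.shape[0])
    for _ in range(steps):
        drive = s + I + W @ x - T
        x += dt / tau * (-x + np.maximum(drive, 0.0))
    return x

# Hypothetical toy circuit: 4 excitatory units plus 1 shared inhibitory unit.
rng = np.random.default_rng(0)
W = np.zeros((5, 5))
W[:4, :4] = 0.1 * rng.random((4, 4))  # recurrent excitation
W[4, :4] = 0.5                        # excitation of the inhibitory unit
W[:4, 4] = -0.6                       # inhibitory feedback onto excitatory units
np.fill_diagonal(W, 0.0)
I = np.array([1.0, 0.6, 0.4, 0.2, 0.0])   # feed-forward input
print(simulate_rates(W, I).round(3))       # the strongest input dominates
```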
The same connection weights (and neuronal activities) would be computed if only local processes at the synapses were simulated, because the synaptic learning and adaptation dynamics (Eqs. 2 and 3) depend on the (locally available) neuronal activities, and linearly on the synaptic weight. Hence, the dynamics of the summed synaptic weights match the sum of the individual synapse weight changes.

For reasons of biological plausibility, the electrophysiology simulator incorporates a maximum connection weight. This maximum weight for the functional connection strength between two neurons is determined by counting the number of synapses involved. This number, multiplied by the maximal weight of a single synapse, defines the maximum of the total connection weight. Hence, neurons that are connected by few synapses cannot establish a strong functional link.

In our model, self-configuration of the weights towards sWTA functionality occurs during sequential developmental phases. Sequential phases of electrical adaptation and learning during development have been observed experimentally [42,43], and have also been applied in previous models [44,45]. During the first, homeostatic phase, neurons adapt the synaptic weights of their own input in order to maintain a target output activity. The effect of this phase is to bring the neuronal firing rates into a balanced regime, and so allow for reliable synaptic learning without interference by unresponsive neurons or runaway excitation. During the second, specification phase, the neurons structure their individual responses by correlation-based learning on their inputs.

Homeostatic phase. During this first phase of activity-dependent adaptation, neurite outgrowth, synapse formation and homeostatic adaptation of neuronal activity occur simultaneously. Neurons implement the synaptic scaling rule [17,46,47], whereby they scale their synaptic input weights to achieve a preferred average output firing rate. Thus, when their average output activity exceeds a given target, neurons scale their excitatory and inhibitory inputs down and up, respectively. The opposite occurs when the average activity has fallen below the target. Since there is no correlation-based learning during this phase, the population of neurons can converge towards stable average levels of activity, but there is no input learning.

Figure 5. Homeostatic adaptation of neuronal firing rates during establishment of synaptic connectivity. (A) Synaptic scaling during neurite outgrowth leads to robust average activities of both excitatory (red) and inhibitory (blue) neurons. The network consists of 250 neurons that are randomly arranged in 3D space. The horizontal axis indicates the estimated real time, taking into account that the time constant of synaptic scaling is on the order of several hours [17]. At t = 50 (dashed line), the neurite outgrowth begins. Average firing rates of layer II/III pyramidal neurons have been shown to be smaller than 1 Hz in vivo [128,129]. Experimental data indicate that inhibitory neurons have higher activities A_i (Eq. 2) than excitatory neurons [68,100,130,131]. In this simulation there are not yet any input projections, so the activity originates solely from internally generated and random activity. (B) Total (excitatory and inhibitory) number of synapses in the network during development. New synapses are formed even after the neurons reach the target average activities, without disrupting the homeostatic adaptation process or bringing the network out of balance. These simulation results demonstrate the robustness of the synaptic scaling process during network growth.

Figure 4. Connectivity after simulated neurite outgrowth. (A) Comparison of connectivity statistics from Cx3D simulations (blue) with experimental data (red) from [5]. Indicated on the vertical axis are the numbers (normalized with respect to the first bar) of synapses onto a single neuron. The individual bars show the values for the different pre- and postsynaptic neuron pairs (excitatory or inhibitory synapses onto an excitatory or inhibitory postsynaptic neuron). The numbers match in proportion, while the absolute quantities are higher in the biological data (approximately 155 excitatory synapses onto a single excitatory neuron in the simulated connectivity, versus far more in the biological connectivity). This particular simulation consists of 250 neurons (200 excitatory and 50 inhibitory), which are randomly arranged in 3D space. (B) Histogram of the percentage of excitatory input synapses across the simulated network from (A). Each bar indicates the number of neurons that have a particular percentage of excitatory input synapses (after neurite growth and synapse formation have ended). The final distribution has a mean of 84%, which is in line with experimental assessments [5,33–35].

The equations for synaptic scaling are given by [48,49]:

\tau_{SS} \frac{dw_{ij}}{dt} = w_{ij} \left( A_i - \langle x_i \rangle \right)   (2)

where w_ij is the connection strength from neuron j to neuron i, τ_SS is the time constant of the learning rule (usually hours or days), A_i is the desired average activity of postsynaptic neuron i, and ⟨x_i⟩ is the actual average activity of neuron i. Fig. 5 shows that this synaptic scaling permits the simulated network to reach a stable state with robust excitatory and inhibitory firing rates.

Post-synaptic scaling is not the only mechanism that can be used for neuronal activity homeostasis. For example, [49] has described an extended version of synaptic scaling: the presynaptic-dependent synaptic scaling (PSD) rule. We also implemented the PSD rule, but obtained results which differed only slightly from those of traditional synaptic scaling.
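A minimal sketch of one Euler step of the scaling rule of Eq. 2. One assumption to flag: weights are stored here as magnitudes, with the presynaptic sign applied in Eq. 1, so the "excitatory down, inhibitory up" behavior described above is implemented by flipping the sign of the error term for inhibitory inputs; the time step and time constant are hypothetical.

```python
import numpy as np

def scaling_step(W_mag, is_exc_pre, x_avg, A, tau_ss=3600.0, dt=1.0):
    """One Euler step of synaptic scaling (Eq. 2).

    W_mag[i, j] holds input-weight magnitudes of neuron i; is_exc_pre[j]
    marks excitatory presynaptic neurons; A[i] is the target average rate
    and x_avg[i] the running-average rate of neuron i. A too-active neuron
    (x_avg > A) scales excitatory inputs down and inhibitory inputs up,
    and vice versa.
    """
    err = (A - x_avg)[:, None]                       # rate error per postsynaptic neuron
    sgn = np.where(is_exc_pre[None, :], 1.0, -1.0)   # flip the rule for inhibitory inputs
    return W_mag * (1.0 + (dt / tau_ss) * sgn * err)

# Hypothetical usage: 5 neurons, the last one inhibitory and currently too active.
rng = np.random.default_rng(1)
W = rng.random((5, 5)) * 0.1
is_exc = np.array([True, True, True, True, False])
x_avg = np.array([0.9, 1.2, 1.0, 0.8, 2.1])
A = np.array([1.0, 1.0, 1.0, 1.0, 2.0])
W = scaling_step(W, is_exc, x_avg, A)
```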
In the later stages of this first phase, input neurons (that are not part of the growing network) are added to the model (see Fig. 1C). These input neurons could correspond, for example, to thalamic or cortical layer IV neurons. They are initially fully connected to neurons of the grown network, and their projection efficacies are randomly drawn from a uniform distribution. Importantly, there is a neighborhood relationship amongst the input neurons: input populations can be topologically close to, or distant from, one another. The input neurons provide coarsely patterned input activity to the grown network. We chose hill-shaped patterns of activity centered on a given population, and decaying with topological distance from its center. The centers of these patterns move periodically in a noisy, wave-like fashion (see Methods). This patterning of the electrical activity in the input layer can be interpreted as, for example, the retinal waves of early development [19–23], which can induce correlations within the activities of downstream neural subpopulations.
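A minimal sketch of such a moving-hill input generator. The Gaussian profile, its width, the drift speed and the noise level are hypothetical choices; the exact pattern used in the simulations is specified in Methods.

```python
import numpy as np

def hill_input(n_inputs, center, width=3.0, amplitude=10.0):
    """Hill-shaped activity centered on one input population and decaying
    with topological distance (hypothetical Gaussian profile)."""
    d = np.arange(n_inputs) - center
    return amplitude * np.exp(-d**2 / (2.0 * width**2))

def wave_stimulus(n_inputs=30, steps=200, speed=0.3, noise=0.5, seed=0):
    """Hill centers drift periodically across the input layer, with added
    jitter, mimicking retinal-wave-like activity."""
    rng = np.random.default_rng(seed)
    for t in range(steps):
        center = (speed * t) % n_inputs        # periodic, wave-like motion
        yield hill_input(n_inputs, center + rng.normal(0.0, noise))

for pattern in wave_stimulus():
    pass  # feed each pattern to the network as the feed-forward input I
```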

By the end of this homeostatic phase, neurons and synapses have reached their final structural configuration. Overall, this phase prepares the network for the next phase of correlation-based learning of input stimuli.

Specification phase. In this phase, synapses onto excitatory postsynaptic neurons obey the Bienenstock-Cooper-Munro (BCM) learning rule [18,50,51], rather than synaptic scaling. The BCM rule is composed of a Hebbian term and a homeostatic term which determines whether the Hebbian synapse grows stronger or weaker. The BCM learning rule is:

\tau_{BCM} \frac{dw_{ij}}{dt} = x_i \, x_j \, (x_i - \theta_i)   (3)

where x_i, x_j denote the discharge rates of post- and pre-synaptic neurons i, j; θ_i is the averaged square of neuron i's firing rate, multiplied by a constant (θ_i = c_i ⟨x_i²⟩). The constant c_i determines the average firing rate that the neuron converges towards in the stationary state; the condition dw_ij/dt = 0 is met in the non-trivial case where x_i = θ_i. Let A_ex and A_inh denote the target average firing rates of excitatory and inhibitory neurons, respectively. Then, in order for the neurons to converge to these firing rates, c_i is set to 1/A_ex if neuron i is excitatory, or to 1/A_inh if it is inhibitory.

All synapses (excitatory and inhibitory) made onto excitatory neurons follow the BCM learning rule, while those onto inhibitory neurons follow the synaptic scaling rule (Eq. 2). While learning is commonly attributed to excitatory synapses, inhibitory synapses can also undergo long-term potentiation (LTP) as well as long-term depression (LTD) [52–55]. The lack of a correlative term for synapses onto inhibitory postsynaptic neurons is, as shown below, necessary to match experimental data on orientation selectivity of excitatory and inhibitory neurons in mouse visual cortex. We therefore hypothesize that basket cells in the superficial layers of cortex homeostatically adapt their input synapses, in contrast to pyramidal neurons, which also use correlational information. We have also explored the case in which the same learning rule is used by all synapses; this case also yields WTA functionality (see below). Given that there are many different classes of inhibitory neurons [56], which differ also in their developmental characteristics [57], it is possible that different interneuron types follow different learning rules.

Functional Properties

Self-organization of WTA functionality. As a consequence of the synaptic learning in the second developmental phase, the network learns the topology of its inputs. Those neurons which are excited by a common input become more strongly connected with one another. Because of the competition that is inherent to the BCM learning rule, excitatory neurons become progressively more connected to only particular input neurons (those which evoke their strongest response), while decreasing their affinity to the others. Fig. 6A shows that the final functional connectivity of excitatory neurons indeed exhibits a strong neighborhood relationship: the connection weights are stronger around the diagonal, so that the neurons are close to or distant from one another in weight space. This connection topology reflects the (1-dimensional) topology of the input patterns. The inhibitory neurons do not integrate into this topology, because the synapses onto inhibitory neurons follow the non-Hebbian synaptic scaling rule, and so their input correlations cannot be learned.
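A minimal sketch of one Euler step of the BCM rule of Eq. 3, restricted to synapses onto excitatory neurons as described above; the time step, time constant and the running-average bookkeeping for ⟨x_i²⟩ are hypothetical.

```python
import numpy as np

def bcm_step(W, x, x_sq_avg, is_exc_post, c, tau_bcm=100.0, dt=0.1):
    """One Euler step of Eq. 3: tau_BCM dw_ij/dt = x_i x_j (x_i - theta_i),
    with sliding threshold theta_i = c_i <x_i^2>. Applied only to synapses
    onto excitatory neurons; synapses onto inhibitory neurons keep
    following the scaling rule of Eq. 2 (not repeated here)."""
    theta = c * x_sq_avg                  # sliding modification threshold
    dW = np.outer(x * (x - theta), x)     # Hebbian term gated by (x_i - theta_i)
    dW[~is_exc_post, :] = 0.0             # no BCM for inhibitory postsynaptic cells
    return W + (dt / tau_bcm) * dW

# Hypothetical usage for a 3-neuron example (neuron 2 is inhibitory).
rng = np.random.default_rng(2)
W = rng.random((3, 3)) * 0.1
x = np.array([1.2, 0.4, 0.9])
x_sq_avg = np.array([1.1, 0.3, 0.9])      # running average of x_i^2
c = np.array([1.0, 1.0, 0.5])             # c_i = 1/A_ex or 1/A_inh
W = bcm_step(W, x, x_sq_avg, np.array([True, True, False]), c)
```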
Fig. 6B,D show examples of the final soft-WTA functionality, after the network has learned the input topology. The excitatory neurons receiving the largest input are predominantly enhanced due to recurrent excitation with one another. The inhibitory neurons reflect the overall activity, and reduce the losing neurons' activity more than they are enhanced by excitatory inputs. From a functional point of view, this active selection of the winning population improves the signal-to-noise ratio, and confirms the network's sWTA properties.

Unsupervised clustering. WTA networks are able to perform pattern recognition and classification, i.e., neurons cluster functionally and respond to patterns in a discriminative and classifying manner. We explored whether this property can arise in a biological setting, as captured by our developmental model. To do this, the processes of connectivity establishment and synaptic homeostasis were simulated as described before. However, during the learning phase, input patterns consisting of discrete bars of different position and orientation (Fig. 7A) were presented to the network. In this input regime there are no continuous orderings between individual patterns (as there are for the retinal-wave-like activation patterns). Learning the discrete input stimuli causes the population to partition into sub-populations, or assemblies, as shown in Fig. 7C. We demonstrated the generality of this learning by simulating the clustering in response to presentation of only 4 input stimuli, using the same network and simulation parameters as in the case with a full range of stimuli (S2A Figure). We also examined the scenario in which all synapses (including those onto inhibitory neurons) follow the same BCM learning rule. As anticipated, this case yielded networks in which inhibitory neurons cluster along with the excitatory populations (compare S2B Figure with Fig. 7C).

The clustered functional connectivity allows the network to decorrelate its inputs, so that even noisy signals can be reliably differentiated. We quantified this ability by testing the network response to a particular pattern U, in comparison with a pattern V. This comparison was measured using the scalar product between the activities in the network after presentation of the different input patterns. Let a_U and a_V be the n-dimensional vectors of the neuronal activities in a network of n neurons, in response to the stimuli U and V, respectively. The scalar product s = a_U · a_V then quantifies whether the responses to the two stimuli U and V are very different (s close to 0) or correlated (s close to 1).

To demonstrate that the results are valid under more biologically plausible conditions, noisy stimuli were used. A noisy input stimulus is defined as:

I_{k,U} = \begin{cases} M + E & \text{if input population } k \text{ is active in pattern } U, \\ E & \text{otherwise,} \end{cases}   (4)

where k is the index of the input population and U is the stimulus identifier. M is the amplitude of the active populations in the input (which we set to 10 Hz in this case), and E is uniformly distributed noise in the range [0, 0.3·M]. Fig. 7D,E shows the correlations of the network's responses for 8 different input stimuli before (D) and after (E) learning, under noisy vs. noise-free (E = 0) conditions. The off-diagonal elements in the correlation matrix are much lower after learning than before. These results demonstrate the decorrelation of the network's activity, and its robustness to input noise.
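A minimal sketch of the noisy stimulus of Eq. 4 and of the response-similarity measure. Normalizing the response vectors is an assumption made here so that s lies between 0 and 1, as in the text.

```python
import numpy as np

def noisy_stimulus(active, n_inputs, M=10.0, rng=None):
    """Eq. 4: populations active in pattern U fire at M + E, the rest at E,
    with E drawn uniformly from [0, 0.3 * M] per input population."""
    rng = rng or np.random.default_rng()
    I = rng.uniform(0.0, 0.3 * M, size=n_inputs)   # noise floor E
    I[active] += M                                  # active populations add M
    return I

def response_similarity(a_U, a_V):
    """Scalar product s = a_U . a_V of the (here normalized) response
    vectors: s near 0 for decorrelated responses, near 1 for similar ones."""
    return float(a_U @ a_V / (np.linalg.norm(a_U) * np.linalg.norm(a_V)))

stim = noisy_stimulus(active=[3, 4, 5], n_inputs=30)
```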

Figure 6. Winner-take-all functionality. (A) Weight matrix of 117 excitatory neurons in a WTA network. After learning, the network exhibits a 1-dimensional neighborhood topology, as shown by the predominantly strong weights around the diagonal. This topology mirrors the neighborhood relationship of the input stimuli, which are continuously and periodically moving hills of activity. Only the excitatory connections are shown here, because the inhibitory neurons do not integrate into the neighborhood topology (see text). (B) Demonstration of WTA functionality on the network connectivity shown in (A). Neurons are ordered here such that adjacent neurons connect most strongly. The input to the network (x_in; top row) has a hill shape, with added noise. The network response (x_out; middle row) is a de-noised version of the input with the bump in the same location. The neuronal gain (x_out/x_in; bottom row) is high for neurons receiving the strongest input, and low (or zero) for neurons distant from the main input to the network. The dashed horizontal line indicates a gain of 1. (C) Activity of a winning neuron (blue, solid) during presentation of its feedforward input (blue, dashed), in the same simulation as shown in (B). Recurrent connectivity amplifies the response of the neuron for the duration of the stimulus (t ∈ [0, 0.5 s]). In contrast, a losing neuron (green, solid) receives non-zero feedforward input (green, dashed), but is suppressed due to the WTA functionality of the network. (D) Response of the same network to a different feedforward input. The recovery of a bump-shaped activity can occur anywhere in the network topology.

Competition between states. In addition to the decorrelation of responses, clustering provides competition between inputs. This property is computationally interesting because it forces the network to make a decision based on its input (Fig. 8A). We demonstrated this competition by presenting two competing patterns simultaneously (after the network had learned 4 different patterns). The relative proportion of these patterns in the input was gradually varied between the first and second pattern. The results show that the stronger stimulus non-linearly dominates responses in the WTA network, so that the masked stimulus evokes an activity pattern that resembles that evoked by the strong stimulus alone. These results are in accord with experimental studies in visual cortex [58–60] and auditory cortex [61].

The nature of the competition between the states depends on the functional connectivity. Strong recurrent excitation (i.e., a high gain) yields strong inhibition, which results in a marked switching behavior between the different populations. This is because the competition is strong, and so the switch from one state to the other is more evident. A high slope of the transition reflects a functionality similar to a bistable switch. More specifically, the slope of the transition (middle part of the interpolation in Fig. 8A) increases with the gain of the WTA network. This gain can be adjusted via the homeostatic average activities: higher target activities lead to more recurrent excitation, which increases the gain. Such differently graded competition is seen in Fig. 8B. Bistability is also interesting from a computational point of view, because discrete states can be represented reliably. This kind of reliability is useful for performing computation with states based on analog elements [8,62].
Competition also develops when synapses onto excitatory as well as inhibitory neurons follow the BCM learning rule, as shown in S3 Figure.

Correspondence with Orientation Selectivity of Excitatory and Inhibitory Neurons

We investigated whether our developmental model can account for experimental findings on orientation selectivity in visual cortex; for example, differences in tuning between excitatory and inhibitory neurons. In order to address this question, we assumed that the hills of activity in the input layer correspond to oriented stimuli (e.g., bars), smoothly and periodically rotating between 0 and 180 degrees.

Figure 7. Clustering and decorrelation of representations. (A–C) Discrete input patterns give rise to clusters in the functional connectivity of the WTA network. (A) Input stimuli used in the learning process. Filled and empty spheres indicate strongly and weakly active populations, respectively. (B,C) Visualization of the network structure before and after learning. Strongly-coupled neurons are drawn close together; excitatory synaptic connections are indicated by grey links. Excitatory neurons are coloured according to their preferred input pattern (colours in A); inhibitory neurons (squares) are drawn in yellow. (B) Before learning, no clustering of synaptic connections is present. (C) After learning, neurons with the same preferred stimulus are strongly interconnected. See S2 Video. (D) Before learning, the response of the network is similar across all stimuli. Shown is the scalar product between the vectors of neuronal responses to pairs of stimuli ⟨x_i, x_j + noise⟩. The noise was added in order to assess the sensitivity of the network's activity to a perturbation of the input signal (see text). The high values and uniformity of the scalar products in (D) indicate that network responses poorly distinguish between stimuli. (E) After learning, responses to noisy presentations of the same stimulus are highly similar (high values of the scalar product; black diagonal), whereas responses to different stimuli are decorrelated (low values of the scalar product; light shading).

As anticipated from the previous results, excitatory neurons become highly orientation selective (Fig. 9), in contrast to inhibitory neurons. These results are in line with biological data. For example, [63] analyzed orientation selectivity of excitatory and inhibitory neurons in mouse visual cortex, and reported inhibitory neurons to be more broadly tuned, and hence less selective, than excitatory pyramidal neurons. Similar findings were reported by [64–68]. We also quantified the orientation tuning based on the orientation selectivity index (OSI), which specifies the degree to which a neuron is selective for orientation. The value of this index lies between 0 (non-selective) and 1 (selective to a single, specific orientation). Fig. 9B shows the distribution of the OSI for excitatory and inhibitory neurons in a WTA network, demonstrating the discrepancy of orientation selectivity also at the population level. We conducted additional simulations, which demonstrated that when inhibitory neurons follow the same learning rule as excitatory neurons, they exhibit more narrowly tuned orientation selectivity (Fig. 9C). Hence, experimental findings of orientation-selective inhibitory neurons in cat visual cortex [69–72] can also be accounted for by our model.

Inhibition of Excitatory Neurons

We have analyzed the consequences of our model for the nature of the inhibition of excitatory neurons. As mentioned above, inhibitory synapses onto excitatory neurons are subject to the BCM learning rule (Eq. 3). The competition between excitatory neurons depends on the common input that they all receive from inhibitory neurons. This common input must reflect the overall activity of the network, so that the competition is suitably normalized. However, the inhibition of the excitatory neurons stems from multiple inhibitory neurons, which should partition their common inhibitory task amongst each other in a self-organizing way.

Figure 8. Stimuli are represented by competing subpopulations. (A) Competition for representation of a mixture of 2 concurrent stimuli. Shown is the normalized average activity of two subpopulations, in response to mixtures of the preferred stimuli of the two populations. For mixtures containing predominantly one stimulus (mixture proportions close to 0 and 1), the populations are strongly in competition, and the network represents exclusively the stronger of the two stimuli (responses near 0 and 1). For intermediate mixture proportions, competition causes a rapid shift between representations of the two stimuli (deviation from the diagonal reference line). (B) Increasing the gain x_out/x_in of the network (black line: 1.3, blue: 1.5, red: 1.8) increases the stability of representations, and increases the rate of switching between representations due to stronger competition.

We investigated this partitioning, and how an excitatory neuron is inhibited during stimulation. In order to quantify the impact of a neuron j on another neuron i for a given stimulus, we calculate a value that we call the recursively effective exertion (REE). It is obtained by multiplying the activity of neuron j (under a given stimulus U) by the total connection weight w_ij from neuron j to i:

REE_{ij}(U) = x_{j,U} \cdot w_{ij}   (5)

The REE value is therefore stimulus-dependent, and dependent on the recurrent network connectivity. Fig. 10 shows that inhibition is distributed non-uniformly: a few inhibitory neurons dominate the suppression of an excitatory neuron. This dominance is due to the BCM learning by inhibitory synapses: strongly and weakly correlated inhibitory connections to excitatory neurons are strengthened or weakened, respectively. These inhibitory connection strengths converge because of the homeostatic activity regulation, which is part of the BCM learning rule.

The nature of inhibition of excitatory neurons is interesting in the context of the anatomy of inhibitory basket cells. These neurons predominantly target locations close to the soma or the proximal dendrites, where they can strongly influence the excitatory neuron [73]. Therefore, it is plausible that the recruitment of a small number of inhibitory neurons is sufficient to inhibit an excitatory neuron. Electrophysiological experiments could in principle validate this hypothesis by showing that only a small proportion of the inhibitory neurons projecting to a pyramidal neuron are predominantly responsible for its suppression.
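A minimal sketch of Eq. 5 applied to a whole weight matrix; the toy rates and weights below are hypothetical.

```python
import numpy as np

def ree(x_U, W):
    """Eq. 5: REE_ij(U) = x_{j,U} * w_ij, the stimulus-dependent impact of
    presynaptic neuron j on neuron i, given the network responses x_U to
    stimulus U (W[i, j] is the total connection weight from j to i)."""
    return W * x_U[None, :]

# Hypothetical check of the non-uniformity of inhibition: for excitatory
# neuron 0, rank the REE of its inhibitory afferents under one stimulus.
x_U = np.array([0.2, 1.5, 0.1, 2.0])        # rates under stimulus U
W = np.array([[0.0, -0.3, 0.0, -0.05],      # row i lists the inputs to neuron i
              [0.1,  0.0, 0.2,  0.0 ],
              [0.0, -0.1, 0.0, -0.6 ],
              [0.3,  0.0, 0.1,  0.0 ]])
print(np.sort(ree(x_U, W)[0]))               # a few inhibitory inputs dominate
```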
Discussion

In this paper we have demonstrated, by simulation of physical development in 3D space, how an autonomous gene regulatory network can orchestrate the self-construction and -calibration of a field of soft-WTA neural networks able to perform pattern restoration and classification on their input signals. The importance of this result is that it demonstrates in a systematic and principled way how genetic information contained in a single precursor cell can unfold into a functional network of neurons with highly organized connections and synaptic weights. The principles of morphological and functional development captured in our model are necessarily simplified with respect to the boundless detail of biology. Nevertheless, these principles are both strongly supported by experimental data, and sufficiently rich in their collective expression to explain coherently the complex process of expansion of a genotype to a functional phenotypic neuronal circuit.

In this way our work offers a significant advance over previous biological and modeling studies, which have focused either on elements of neuronal development, or on learning in networks whose initial connectivity is given. Therefore we expect that methods and results of the kind reported here will be of interest both to developmental biologists seeking a modeling approach to exploring system-level processes, and to neuronal learning theorists, who usually neglect the genetic-developmental and homeostatic aspects of detailed learning in favor of an initial network that serves as a basic scaffold for subsequent learning [74–76].

It is relatively easy to express a well-characterized biological process through an explicit simulation; that is, one in which the simulation simply recapitulates the process by expanding some data through a simple model, without regard for physical and mechanistic constraints. By contrast, the simulation methods [16] that we have used here are strictly committed to physical realities such as 3D space, forces, diffusion, gene-expression networks, cellular growth mechanisms, etc. Our methods are also committed to local agency: all active processes are localized to cells, can only have local actions, and have access to only local signals. There is no global controller with global knowledge, able simply to paint the developmental picture into a 3D space. Instead, the ability of a precursor cell to expand into a functional network is the result of collective interaction between localized cellular processes. And overall, the developmental process is the expression of an organization that is encoded only implicitly, rather than explicitly, in the GRN of the precursor cell. Thus, our GRN encodes constraints and methods rather than explicit behaviors.

In previous work [77,78] we have shown how this approach can be used to explain the development of neocortical lamination and connectivity. In that case we did not also consider the electrophysiological signaling between cells, and so the self-configuration of their computational roles, as we have done here. However, the incorporation of electrophysiological signaling into the growth model brings substantial technical difficulties, such as those arising out of the large differences in spatio-temporal scales between cellular developmental and electrophysiological signaling processes, as well as the supply and management of sufficient computational resources. Therefore we have chosen to keep these problems tractable in this first functional study by restricting our question to a sub-domain of cortical development: how neuronal precursors could expand into functional circuits at all. Even then, we must be satisfied for the moment with a rate-based model of neuronal activity, rather than a fully spiking one.

The emphasis of this paper is on the process whereby a precursor expands into some useful network function. The particular function is less relevant, and in any case the functional/computational details of cortical circuits are as yet not fully understood. We have chosen to induce WTA-like function because our previous work has been focused on the likely similarity between the WTA motif and the neuronal types and their interconnectivity in the superficial layers of cortex [6].

Figure 9. Excitatory neurons are strongly tuned; inhibitory neurons are poorly tuned. Tuning properties of excitatory and inhibitory neurons. (A) Representative tuning curves for 3 excitatory (red, 1-3) and 3 inhibitory (blue, 4-6) neurons in a WTA network after the learning process. Excitatory neurons exhibit strong and narrowly tuned preference for certain inputs, in contrast to inhibitory neurons. (B) Distribution of the orientation selectivity index (OSI) across all excitatory and inhibitory neurons in a WTA network, demonstrating the discrepancy of tuning at the population level. (C) Simulation of the same learning rule for synapses onto excitatory as well as inhibitory neurons yields orientation-tuned neurons in both populations.

Moreover, these WTA networks are intriguing from both the biological and the computational perspective [3,6–15,41]. The strong recurrent excitation available in the superficial layers of cortex, and their critical dependence on feedback inhibition, has been clearly


More information

Hierarchical dynamical models of motor function

Hierarchical dynamical models of motor function ARTICLE IN PRESS Neurocomputing 70 (7) 975 990 www.elsevier.com/locate/neucom Hierarchical dynamical models of motor function S.M. Stringer, E.T. Rolls Department of Experimental Psychology, Centre for

More information

Axon initial segment position changes CA1 pyramidal neuron excitability

Axon initial segment position changes CA1 pyramidal neuron excitability Axon initial segment position changes CA1 pyramidal neuron excitability Cristina Nigro and Jason Pipkin UCSD Neurosciences Graduate Program Abstract The axon initial segment (AIS) is the portion of the

More information

Basics of Computational Neuroscience: Neurons and Synapses to Networks

Basics of Computational Neuroscience: Neurons and Synapses to Networks Basics of Computational Neuroscience: Neurons and Synapses to Networks Bruce Graham Mathematics School of Natural Sciences University of Stirling Scotland, U.K. Useful Book Authors: David Sterratt, Bruce

More information

CHAPTER I From Biological to Artificial Neuron Model

CHAPTER I From Biological to Artificial Neuron Model CHAPTER I From Biological to Artificial Neuron Model EE543 - ANN - CHAPTER 1 1 What you see in the picture? EE543 - ANN - CHAPTER 1 2 Is there any conventional computer at present with the capability of

More information

Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons

Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons Peter Osseward, Uri Magaram Department of Neuroscience University of California, San Diego La Jolla, CA 92092 possewar@ucsd.edu

More information

Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System

Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System Visual Nonclassical Receptive Field Effects Emerge from Sparse Coding in a Dynamical System Mengchen Zhu 1, Christopher J. Rozell 2 * 1 Wallace H. Coulter Department of Biomedical Engineering, Georgia

More information

Self-Organization and Segmentation with Laterally Connected Spiking Neurons

Self-Organization and Segmentation with Laterally Connected Spiking Neurons Self-Organization and Segmentation with Laterally Connected Spiking Neurons Yoonsuck Choe Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 USA Risto Miikkulainen Department

More information

Dynamic Stochastic Synapses as Computational Units

Dynamic Stochastic Synapses as Computational Units Dynamic Stochastic Synapses as Computational Units Wolfgang Maass Institute for Theoretical Computer Science Technische Universitat Graz A-B01O Graz Austria. email: maass@igi.tu-graz.ac.at Anthony M. Zador

More information

You submitted this quiz on Sun 19 May :32 PM IST (UTC +0530). You got a score of out of

You submitted this quiz on Sun 19 May :32 PM IST (UTC +0530). You got a score of out of Feedback Ex6 You submitted this quiz on Sun 19 May 2013 9:32 PM IST (UTC +0530). You got a score of 10.00 out of 10.00. Question 1 What is common to Parkinson, Alzheimer and Autism? Electrical (deep brain)

More information

Discrimination and Generalization in Pattern Categorization: A Case for Elemental Associative Learning

Discrimination and Generalization in Pattern Categorization: A Case for Elemental Associative Learning Discrimination and Generalization in Pattern Categorization: A Case for Elemental Associative Learning E. J. Livesey (el253@cam.ac.uk) P. J. C. Broadhurst (pjcb3@cam.ac.uk) I. P. L. McLaren (iplm2@cam.ac.uk)

More information

STRUCTURAL ELEMENTS OF THE NERVOUS SYSTEM

STRUCTURAL ELEMENTS OF THE NERVOUS SYSTEM STRUCTURAL ELEMENTS OF THE NERVOUS SYSTEM STRUCTURE AND MAINTENANCE OF NEURONS (a) (b) Dendrites Cell body Initial segment collateral terminals (a) Diagrammatic representation of a neuron. The break in

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information

Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model

Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model Nobuhiko Wagatsuma 1,2 *, Tobias C. Potjans 3,4,5, Markus Diesmann 2,4, Ko Sakai 6, Tomoki Fukai 2,4,7 1 Zanvyl Krieger Mind/Brain

More information

Predictive Features of Persistent Activity Emergence in Regular Spiking and Intrinsic Bursting Model Neurons

Predictive Features of Persistent Activity Emergence in Regular Spiking and Intrinsic Bursting Model Neurons Emergence in Regular Spiking and Intrinsic Bursting Model Neurons Kyriaki Sidiropoulou, Panayiota Poirazi* Institute of Molecular Biology and Biotechnology (IMBB), Foundation for Research and Technology-Hellas

More information

Theta sequences are essential for internally generated hippocampal firing fields.

Theta sequences are essential for internally generated hippocampal firing fields. Theta sequences are essential for internally generated hippocampal firing fields. Yingxue Wang, Sandro Romani, Brian Lustig, Anthony Leonardo, Eva Pastalkova Supplementary Materials Supplementary Modeling

More information

Temporally asymmetric Hebbian learning and neuronal response variability

Temporally asymmetric Hebbian learning and neuronal response variability Neurocomputing 32}33 (2000) 523}528 Temporally asymmetric Hebbian learning and neuronal response variability Sen Song*, L.F. Abbott Volen Center for Complex Systems and Department of Biology, Brandeis

More information

Relative contributions of cortical and thalamic feedforward inputs to V2

Relative contributions of cortical and thalamic feedforward inputs to V2 Relative contributions of cortical and thalamic feedforward inputs to V2 1 2 3 4 5 Rachel M. Cassidy Neuroscience Graduate Program University of California, San Diego La Jolla, CA 92093 rcassidy@ucsd.edu

More information

Same or Different? A Neural Circuit Mechanism of Similarity-Based Pattern Match Decision Making

Same or Different? A Neural Circuit Mechanism of Similarity-Based Pattern Match Decision Making 6982 The Journal of Neuroscience, May 11, 2011 31(19):6982 6996 Behavioral/Systems/Cognitive Same or Different? A Neural Circuit Mechanism of Similarity-Based Pattern Match Decision Making Tatiana A. Engel

More information

LEARNING ARBITRARY FUNCTIONS WITH SPIKE-TIMING DEPENDENT PLASTICITY LEARNING RULE

LEARNING ARBITRARY FUNCTIONS WITH SPIKE-TIMING DEPENDENT PLASTICITY LEARNING RULE LEARNING ARBITRARY FUNCTIONS WITH SPIKE-TIMING DEPENDENT PLASTICITY LEARNING RULE Yefei Peng Department of Information Science and Telecommunications University of Pittsburgh Pittsburgh, PA 15260 ypeng@mail.sis.pitt.edu

More information

TNS Journal Club: Interneurons of the Hippocampus, Freund and Buzsaki

TNS Journal Club: Interneurons of the Hippocampus, Freund and Buzsaki TNS Journal Club: Interneurons of the Hippocampus, Freund and Buzsaki Rich Turner (turner@gatsby.ucl.ac.uk) Gatsby Unit, 22/04/2005 Rich T. Introduction Interneuron def = GABAergic non-principal cell Usually

More information

Cell Responses in V4 Sparse Distributed Representation

Cell Responses in V4 Sparse Distributed Representation Part 4B: Real Neurons Functions of Layers Input layer 4 from sensation or other areas 3. Neocortical Dynamics Hidden layers 2 & 3 Output layers 5 & 6 to motor systems or other areas 1 2 Hierarchical Categorical

More information

Supplementary figure 1: LII/III GIN-cells show morphological characteristics of MC

Supplementary figure 1: LII/III GIN-cells show morphological characteristics of MC 1 2 1 3 Supplementary figure 1: LII/III GIN-cells show morphological characteristics of MC 4 5 6 7 (a) Reconstructions of LII/III GIN-cells with somato-dendritic compartments in orange and axonal arborizations

More information

Spike-timing-dependent synaptic plasticity can form zero lag links for cortical oscillations.

Spike-timing-dependent synaptic plasticity can form zero lag links for cortical oscillations. Neurocomputing 58 6 (24) 85 9 www.elsevier.com/locate/neucom Spike-timing-dependent synaptic plasticity can form zero lag links for cortical oscillations. Andreas Knoblauch a;, Friedrich T. Sommer b a

More information

arxiv: v1 [q-bio.nc] 25 Apr 2017

arxiv: v1 [q-bio.nc] 25 Apr 2017 Neurogenesis and multiple plasticity mechanisms enhance associative memory retrieval in a spiking network model of the hippocampus arxiv:1704.07526v1 [q-bio.nc] 25 Apr 2017 Yansong, Chua and Cheston, Tan

More information

Emergence of Metastable State Dynamics in Interconnected Cortical Networks with Propagation Delays

Emergence of Metastable State Dynamics in Interconnected Cortical Networks with Propagation Delays Emergence of Metastable State Dynamics in Interconnected Cortical Networks with Propagation Delays Katrina M. Kutchko 1,2, Flavio Fröhlich 1,2,3,4,5 * 1 Department of Psychiatry, University of North Carolina

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction Artificial neural networks are mathematical inventions inspired by observations made in the study of biological systems, though loosely based on the actual biology. An artificial

More information

Model-Based Reinforcement Learning by Pyramidal Neurons: Robustness of the Learning Rule

Model-Based Reinforcement Learning by Pyramidal Neurons: Robustness of the Learning Rule 4th Joint Symposium on Neural Computation Proceedings 83 1997 Model-Based Reinforcement Learning by Pyramidal Neurons: Robustness of the Learning Rule Michael Eisele & Terry Sejnowski Howard Hughes Medical

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 5: Data analysis II Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single

More information

Theme 2: Cellular mechanisms in the Cochlear Nucleus

Theme 2: Cellular mechanisms in the Cochlear Nucleus Theme 2: Cellular mechanisms in the Cochlear Nucleus The Cochlear Nucleus (CN) presents a unique opportunity for quantitatively studying input-output transformations by neurons because it gives rise to

More information

SUPPLEMENTARY INFORMATION. Supplementary Figure 1

SUPPLEMENTARY INFORMATION. Supplementary Figure 1 SUPPLEMENTARY INFORMATION Supplementary Figure 1 The supralinear events evoked in CA3 pyramidal cells fulfill the criteria for NMDA spikes, exhibiting a threshold, sensitivity to NMDAR blockade, and all-or-none

More information

Cellular Bioelectricity

Cellular Bioelectricity ELEC ENG 3BB3: Cellular Bioelectricity Notes for Lecture 24 Thursday, March 6, 2014 8. NEURAL ELECTROPHYSIOLOGY We will look at: Structure of the nervous system Sensory transducers and neurons Neural coding

More information

Improving Associative Memory in a Network of Spiking

Improving Associative Memory in a Network of Spiking Improving Associative Memory in a Network of Spiking Neurons Computing Science and Mathematics School of Natural Sciences University of Stirling Scotland FK9 4LA Thesis submitted for the degree of Doctor

More information

Observational Learning Based on Models of Overlapping Pathways

Observational Learning Based on Models of Overlapping Pathways Observational Learning Based on Models of Overlapping Pathways Emmanouil Hourdakis and Panos Trahanias Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH) Science and Technology

More information

Sum of Neurally Distinct Stimulus- and Task-Related Components.

Sum of Neurally Distinct Stimulus- and Task-Related Components. SUPPLEMENTARY MATERIAL for Cardoso et al. 22 The Neuroimaging Signal is a Linear Sum of Neurally Distinct Stimulus- and Task-Related Components. : Appendix: Homogeneous Linear ( Null ) and Modified Linear

More information

The evolution of cooperative turn-taking in animal conflict

The evolution of cooperative turn-taking in animal conflict RESEARCH ARTICLE Open Access The evolution of cooperative turn-taking in animal conflict Mathias Franz 1*, Daniel van der Post 1,2,3, Oliver Schülke 1 and Julia Ostner 1 Abstract Background: A fundamental

More information

A Dynamic Field Theory of Visual Recognition in Infant Looking Tasks

A Dynamic Field Theory of Visual Recognition in Infant Looking Tasks A Dynamic Field Theory of Visual Recognition in Infant Looking Tasks Sammy Perone (sammy-perone@uiowa.edu) and John P. Spencer (john-spencer@uiowa.edu) Department of Psychology, 11 Seashore Hall East Iowa

More information

Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System

Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System Patrick D. Roberts and Curtis C. Bell Neurological Sciences Institute, OHSU 505 N.W. 185 th Avenue, Beaverton, OR

More information

Thalamocortical Feedback and Coupled Oscillators

Thalamocortical Feedback and Coupled Oscillators Thalamocortical Feedback and Coupled Oscillators Balaji Sriram March 23, 2009 Abstract Feedback systems are ubiquitous in neural systems and are a subject of intense theoretical and experimental analysis.

More information

Reading Neuronal Synchrony with Depressing Synapses

Reading Neuronal Synchrony with Depressing Synapses NOTE Communicated by Laurence Abbott Reading Neuronal Synchrony with Depressing Synapses W. Senn Department of Neurobiology, Hebrew University, Jerusalem 4, Israel, Department of Physiology, University

More information

Synaptic Plasticity and the NMDA Receptor

Synaptic Plasticity and the NMDA Receptor Synaptic Plasticity and the NMDA Receptor Lecture 4.2 David S. Touretzky November, 2015 Long Term Synaptic Plasticity Long Term Potentiation (LTP) Reversal of LTP Long Term Depression (LTD) Reversal of

More information

Balancing speed and accuracy of polyclonal T cell activation: A role for extracellular feedback

Balancing speed and accuracy of polyclonal T cell activation: A role for extracellular feedback Balancing speed and accuracy of polyclonal T cell activation: A role for extracellular feedback The Harvard community has made this article openly available. Please share how this access benefits you.

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION doi:1.138/nature1216 Supplementary Methods M.1 Definition and properties of dimensionality Consider an experiment with a discrete set of c different experimental conditions, corresponding to all distinct

More information

CYTOARCHITECTURE OF CEREBRAL CORTEX

CYTOARCHITECTURE OF CEREBRAL CORTEX BASICS OF NEUROBIOLOGY CYTOARCHITECTURE OF CEREBRAL CORTEX ZSOLT LIPOSITS 1 CELLULAR COMPOSITION OF THE CEREBRAL CORTEX THE CEREBRAL CORTEX CONSISTS OF THE ARCHICORTEX (HIPPOCAMPAL FORMA- TION), PALEOCORTEX

More information

19th AWCBR (Australian Winter Conference on Brain Research), 2001, Queenstown, AU

19th AWCBR (Australian Winter Conference on Brain Research), 2001, Queenstown, AU 19th AWCBR (Australian Winter Conference on Brain Research), 21, Queenstown, AU https://www.otago.ac.nz/awcbr/proceedings/otago394614.pdf Do local modification rules allow efficient learning about distributed

More information

NEURAL SYSTEMS FOR INTEGRATING ROBOT BEHAVIOURS

NEURAL SYSTEMS FOR INTEGRATING ROBOT BEHAVIOURS NEURAL SYSTEMS FOR INTEGRATING ROBOT BEHAVIOURS Brett Browning & Gordon Wyeth University of Queensland Computer Science and Electrical Engineering Department Email: browning@elec.uq.edu.au & wyeth@elec.uq.edu.au

More information

Sleep-Wake Cycle I Brain Rhythms. Reading: BCP Chapter 19

Sleep-Wake Cycle I Brain Rhythms. Reading: BCP Chapter 19 Sleep-Wake Cycle I Brain Rhythms Reading: BCP Chapter 19 Brain Rhythms and Sleep Earth has a rhythmic environment. For example, day and night cycle back and forth, tides ebb and flow and temperature varies

More information

Structural basis for the role of inhibition in facilitating adult brain plasticity

Structural basis for the role of inhibition in facilitating adult brain plasticity Structural basis for the role of inhibition in facilitating adult brain plasticity Jerry L. Chen, Walter C. Lin, Jae Won Cha, Peter T. So, Yoshiyuki Kubota & Elly Nedivi SUPPLEMENTARY FIGURES 1-6 a b M

More information

Learning Contrast-Invariant Cancellation of Redundant Signals in Neural Systems

Learning Contrast-Invariant Cancellation of Redundant Signals in Neural Systems Learning Contrast-Invariant Cancellation of Redundant Signals in Neural Systems Jorge F. Mejias 1. *, Gary Marsat 2,3., Kieran Bol 1, Leonard Maler 2,4, André Longtin 1,4 1 Department of Physics, University

More information

VISUAL CORTICAL PLASTICITY

VISUAL CORTICAL PLASTICITY VISUAL CORTICAL PLASTICITY OCULAR DOMINANCE AND OTHER VARIETIES REVIEW OF HIPPOCAMPAL LTP 1 when an axon of cell A is near enough to excite a cell B and repeatedly and consistently takes part in firing

More information

M Cells. Why parallel pathways? P Cells. Where from the retina? Cortical visual processing. Announcements. Main visual pathway from retina to V1

M Cells. Why parallel pathways? P Cells. Where from the retina? Cortical visual processing. Announcements. Main visual pathway from retina to V1 Announcements exam 1 this Thursday! review session: Wednesday, 5:00-6:30pm, Meliora 203 Bryce s office hours: Wednesday, 3:30-5:30pm, Gleason https://www.youtube.com/watch?v=zdw7pvgz0um M Cells M cells

More information

Neurons. Pyramidal neurons in mouse cerebral cortex expressing green fluorescent protein. The red staining indicates GABAergic interneurons.

Neurons. Pyramidal neurons in mouse cerebral cortex expressing green fluorescent protein. The red staining indicates GABAergic interneurons. Neurons Pyramidal neurons in mouse cerebral cortex expressing green fluorescent protein. The red staining indicates GABAergic interneurons. MBL, Woods Hole R Cheung MSc Bioelectronics: PGEE11106 1 Neuron

More information

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD How has Computational Neuroscience been useful? 1 Virginia R. de Sa Department of Cognitive Science UCSD What is considered Computational Neuroscience? 2 What is considered Computational Neuroscience?

More information

Timing and the cerebellum (and the VOR) Neurophysiology of systems 2010

Timing and the cerebellum (and the VOR) Neurophysiology of systems 2010 Timing and the cerebellum (and the VOR) Neurophysiology of systems 2010 Asymmetry in learning in the reverse direction Full recovery from UP using DOWN: initial return to naïve values within 10 minutes,

More information

OPTO 5320 VISION SCIENCE I

OPTO 5320 VISION SCIENCE I OPTO 5320 VISION SCIENCE I Monocular Sensory Processes of Vision: Color Vision Mechanisms of Color Processing . Neural Mechanisms of Color Processing A. Parallel processing - M- & P- pathways B. Second

More information

Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli

Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli Dendritic compartmentalization could underlie competition and attentional biasing of simultaneous visual stimuli Kevin A. Archie Neuroscience Program University of Southern California Los Angeles, CA 90089-2520

More information

Continuous transformation learning of translation invariant representations

Continuous transformation learning of translation invariant representations Exp Brain Res (21) 24:255 27 DOI 1.17/s221-1-239- RESEARCH ARTICLE Continuous transformation learning of translation invariant representations G. Perry E. T. Rolls S. M. Stringer Received: 4 February 29

More information

A computational account for the ontogeny of mirror neurons via Hebbian learning

A computational account for the ontogeny of mirror neurons via Hebbian learning A computational account for the ontogeny of mirror neurons via Hebbian learning Graduation Project Bachelor Artificial Intelligence Credits: 18 EC Author Lotte Weerts 10423303 University of Amsterdam Faculty

More information

Bursting dynamics in the brain. Jaeseung Jeong, Department of Biosystems, KAIST

Bursting dynamics in the brain. Jaeseung Jeong, Department of Biosystems, KAIST Bursting dynamics in the brain Jaeseung Jeong, Department of Biosystems, KAIST Tonic and phasic activity A neuron is said to exhibit a tonic activity when it fires a series of single action potentials

More information

T. R. Golub, D. K. Slonim & Others 1999

T. R. Golub, D. K. Slonim & Others 1999 T. R. Golub, D. K. Slonim & Others 1999 Big Picture in 1999 The Need for Cancer Classification Cancer classification very important for advances in cancer treatment. Cancers of Identical grade can have

More information

Why do we have a hippocampus? Short-term memory and consolidation

Why do we have a hippocampus? Short-term memory and consolidation Why do we have a hippocampus? Short-term memory and consolidation So far we have talked about the hippocampus and: -coding of spatial locations in rats -declarative (explicit) memory -experimental evidence

More information

Biomimetic Cortical Nanocircuits: The BioRC Project. Alice C. Parker NSF Emerging Models of Technology Meeting July 24, 2008

Biomimetic Cortical Nanocircuits: The BioRC Project. Alice C. Parker NSF Emerging Models of Technology Meeting July 24, 2008 Biomimetic Cortical Nanocircuits: The BioRC Project Alice C. Parker NSF Emerging Models of Technology Meeting July 24, 2008 The BioRC Project Team and Support Alice Parker, PI and Chongwu Zhou, Co-PI Graduate

More information

Representing Where along with What Information in a Model of a Cortical Patch

Representing Where along with What Information in a Model of a Cortical Patch Representing Where along with What Information in a Model of a Cortical Patch Yasser Roudi 1,2 *, Alessandro Treves 2,3 1 Gatsby Computational Neuroscience Unit, UCL, United Kingdom, 2 Cognitive Neuroscience

More information

What is Anatomy and Physiology?

What is Anatomy and Physiology? Introduction BI 212 BI 213 BI 211 Ecosystems Organs / organ systems Cells Organelles Communities Tissues Molecules Populations Organisms Campbell et al. Figure 1.4 Introduction What is Anatomy and Physiology?

More information

Noise in attractor networks in the brain produced by graded firing rate representations

Noise in attractor networks in the brain produced by graded firing rate representations Noise in attractor networks in the brain produced by graded firing rate representations Tristan J. Webb, University of Warwick, Complexity Science, Coventry CV4 7AL, UK Edmund T. Rolls, Oxford Centre for

More information

File name: Supplementary Information Description: Supplementary Figures, Supplementary Table and Supplementary References

File name: Supplementary Information Description: Supplementary Figures, Supplementary Table and Supplementary References File name: Supplementary Information Description: Supplementary Figures, Supplementary Table and Supplementary References File name: Supplementary Data 1 Description: Summary datasheets showing the spatial

More information

Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells

Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells INSTITUTE OF PHYSICS PUBLISHING Network: Comput. Neural Syst. 13 (2002) 217 242 NETWORK: COMPUTATION IN NEURAL SYSTEMS PII: S0954-898X(02)36091-3 Self-organizing continuous attractor networks and path

More information

Free recall and recognition in a network model of the hippocampus: simulating effects of scopolamine on human memory function

Free recall and recognition in a network model of the hippocampus: simulating effects of scopolamine on human memory function Behavioural Brain Research 89 (1997) 1 34 Review article Free recall and recognition in a network model of the hippocampus: simulating effects of scopolamine on human memory function Michael E. Hasselmo

More information

ASSOCIATIVE MEMORY AND HIPPOCAMPAL PLACE CELLS

ASSOCIATIVE MEMORY AND HIPPOCAMPAL PLACE CELLS International Journal of Neural Systems, Vol. 6 (Supp. 1995) 81-86 Proceedings of the Neural Networks: From Biology to High Energy Physics @ World Scientific Publishing Company ASSOCIATIVE MEMORY AND HIPPOCAMPAL

More information

Supplementary materials for: Executive control processes underlying multi- item working memory

Supplementary materials for: Executive control processes underlying multi- item working memory Supplementary materials for: Executive control processes underlying multi- item working memory Antonio H. Lara & Jonathan D. Wallis Supplementary Figure 1 Supplementary Figure 1. Behavioral measures of

More information

University of California Postprints

University of California Postprints University of California Postprints Year 2006 Paper 2444 Analysis of early hypoxia EEG based on a novel chaotic neural network M Hu G Li J J. Li Walter J. Freeman University of California, Berkeley M Hu,

More information

Dynamics of Hodgkin and Huxley Model with Conductance based Synaptic Input

Dynamics of Hodgkin and Huxley Model with Conductance based Synaptic Input Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013 Dynamics of Hodgkin and Huxley Model with Conductance based Synaptic Input Priyanka Bajaj and Akhil

More information

Intelligent Control Systems

Intelligent Control Systems Lecture Notes in 4 th Class in the Control and Systems Engineering Department University of Technology CCE-CN432 Edited By: Dr. Mohammed Y. Hassan, Ph. D. Fourth Year. CCE-CN432 Syllabus Theoretical: 2

More information

Electrophysiology. General Neurophysiology. Action Potentials

Electrophysiology. General Neurophysiology. Action Potentials 5 Electrophysiology Cochlear implants should aim to reproduce the coding of sound in the auditory system as closely as possible, for best sound perception. The cochlear implant is in part the result of

More information