A computational account for the ontogeny of mirror neurons via Hebbian learning


Graduation Project Bachelor Artificial Intelligence
Credits: 18 EC

Author
Lotte Weerts
University of Amsterdam, Faculty of Science
Science Park XH Amsterdam

Supervisors
dr. Sander Bohté, Centrum Wiskunde & Informatica, Life Sciences Group, Science Park XG Amsterdam
dr. Rajat Mani Thomas, Netherlands Institute for Neuroscience, Social Brain Lab, Meibergdreef BA Amsterdam
prof. dr. Max Welling, University of Amsterdam, Faculty of Science, Science Park XH Amsterdam

June 26, 2015

Abstract

It has been proposed that Hebbian learning could be responsible for the ontogeny of predictive mirror neurons (Keysers and Gazzola, 2014). Predictive mirror neurons fire both when an individual performs a certain action and when the action they encode is most likely to follow a concurrently observed action. Here, we show that a variation of Oja's rule (an implementation of Hebbian learning) is sufficient for the emergence of mirror neurons. An artificial neural network has been created that simulates the interactions between the premotor cortex (PM) and the superior temporal sulcus (STS). The PM coordinates self-performed actions, whereas the STS is a region known to respond to the sight of body movements and the sound of actions. A quantitative method for analyzing (predictive) mirror neuron activity in the artificial neural network is presented. A parameter space analysis shows that a variation of Oja's rule that uses a threshold produces mirror neuron behavior, given that the bounds for the weights of inhibitory synapses are lower than those of excitatory synapses. By extension, this work provides positive evidence for the association hypothesis, which states that mirror-like behavior of neurons in the motor cortex arises due to associations established between sensory and motor stimuli.

Acknowledgements

My sincere thanks go to Sander Bohté, for sharing his expertise and for providing me with excellent guidance and encouragement throughout the course of this project. Furthermore, I am extremely grateful to Rajat Thomas, for the illuminating insights and expertise he has shared with me with great enthusiasm. I would also like to thank Max Welling, who took the effort to be my university supervisor regardless of his busy schedule. Finally, I want to thank my parents and sisters for their continuous support and motivation.

Contents

1 Introduction
2 Theoretical foundation
  2.1 Interpretations of associative learning
    2.1.1 Hebbian learning
    2.1.2 Rescorla-Wagner
  2.2 Hebbian learning rules
    2.2.1 Basic Hebb rule
    2.2.2 Covariance rule
    2.2.3 BCM
    2.2.4 Oja's rule
  2.3 Summary
3 Computational model
  3.1 A motivation for the structure of the model
  3.2 Modeling a single neuron
  3.3 Modeling action execution and observation
  3.4 Learning rules
  3.5 Summary
4 Analysis procedure
  4.1 A visualization of mirror neuron behavior
  4.2 Analysis procedure
  4.3 Summary
5 Results
  5.1 Comparison of learning rules
  5.2 Parameter space analysis of Oja's rule
  5.3 Summary
6 Conclusion
7 Discussion and future work
  7.1 Limitations of this study
  7.2 Future work

1 Introduction

In the early 1990s, mirror neurons were discovered in the ventral premotor cortex of the macaque monkey (Di Pellegrino et al., 1992). These neurons fired both when the monkeys grabbed an object and when they watched another primate grab that same object. Mirror neuron-like activity has since been observed in monkeys, birds and humans (Catmur, 2013). More recently, evidence has started to emerge that suggests the existence of mirror neurons with predictive properties (Keysers and Gazzola, 2014). These neurons fire when the action they encode is the action most likely to happen next, rather than the action that is concurrently being observed.

A critical question concerns how (predictive) mirror neurons have developed to behave the way they do. In other words: what is the ontogeny of mirror neurons? Two distinct views have emerged with regard to the origin of mirror neurons: the adaptation hypothesis and the association hypothesis (Heyes, 2010). The adaptation hypothesis refers to the idea that mirror neurons are an adaptation for action understanding and thus a result of evolution. According to this hypothesis, humans are born with mirror neurons and experience plays a relatively minor role in their development. Alternatively, one can view mirror neurons as a product of associative learning, which suggests that these neurons emerge through the correlated experience of observing and executing the same action. The mechanism responsible for associative learning is itself a product of evolution, but associative learning did not evolve for the purpose of producing mirror neurons. The important difference between these two theories is the role of sensorimotor experience, which is crucial for the association hypothesis but plays a less prevalent role in the adaptation hypothesis.

In this work, the focus lies on further developing the association hypothesis. While both the association and adaptation hypotheses can account for the ontogeny of mirror neurons, the association hypothesis has several advantages. First, it is consistent with evidence indicating that mirror neurons contribute to a range of social cognitive functions beyond action understanding, such as emotions (Heyes, 2010). Second, research suggests that the perceptual motor system is optimized to provide the brain with input for associative learning (Giudice et al., 2009). For example, infants have a marked visual preference for hands. Such a preference could provide a rich set of input stimuli for mirror neurons that are active in manual actions.

Keysers and Gazzola (2014) have proposed a mechanistic perspective on how mirror neurons in the premotor cortex could emerge due to associative learning, or more precisely, Hebbian learning. Hebbian learning was first proposed by Donald Hebb in 1949 as an explanation for synaptic plasticity, which refers to the ability of the connections (or synapses) between neurons to become either weaker or stronger (Cooper et al., 2013b). Thus, Hebbian learning explains learning on a local level between single neurons. Hebb postulated the principle as follows: "when an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased".

To investigate whether Hebbian learning is sufficient for the emergence of mirror neurons, we present a computational approach that implements the mechanics described by Keysers and Gazzola (2014). This involves the use of an artificial neural network (ANN) to simulate activity in the premotor cortex (PM) and the superior temporal sulcus (STS). The PM coordinates self-performed actions, whereas the STS is a region known to respond to the sight of body movements and the sound of actions. Action execution and observation are simulated by exciting neurons in these two areas. The aim of this simulation is to find out whether associative learning, in particular Hebbian learning, is sufficient to lead to the development of mirror neurons. Specifically, the problem investigated in this work can be defined as follows: can artificial neural networks that evolve via a local Hebbian-like learning rule, when exposed to action execution and observation, be sufficient to lead to the emergence of predictive mirror neurons? Here, we show that Oja's rule, an implementation of Hebbian learning, is sufficient to impose predictive mirror-like behavior.

First, the proposed research question is considered in relation to other studies on the ontogeny of mirror neurons (chapter 2). This includes a review of four implementations of Hebbian learning: basic Hebb, the covariance learning rule, BCM, and Oja's rule. Subsequently, the computational model that describes the artificial neural network is introduced in chapter 3; additionally, a variation of Oja's rule is presented. The procedure for the quantitative analysis of mirror neuron-like behavior, explained in chapter 4, has been applied to the recorded activity of ANNs that evolve via several different learning rules. The results of this analysis are described in chapter 5. Finally, it is argued in chapter 6 that an artificial neural network that evolves via a variation of Oja's rule is sufficient to explain the emergence of mirror neurons. Related problems that should be considered in future work are dealt with in chapter 7, the final chapter.

2 Theoretical foundation

In this chapter, research that preceded the current work is discussed. First, two computational accounts for the ontogeny of mirror neurons via the association hypothesis are discussed. Subsequently, several implementations of Hebbian learning that have been proposed in the literature are reviewed.

2.1 Interpretations of associative learning

The association hypothesis states that mirror neurons emerge due to associative learning. This term refers to any learning process in which a new response becomes associated with a particular stimulus (Terry, 2006). As a refinement of the association hypothesis, two different implementations of associative learning have been proposed, namely Hebbian learning (Keysers and Gazzola, 2014) and Rescorla-Wagner (Cooper et al., 2013b).

2.1.1 Hebbian learning

Keysers and Gazzola (2014) have proposed a model in which Hebbian learning could account for the emergence of mirror neurons. Whereas there are several brain regions that contain mirror neurons, they illustrate their idea with connections between the premotor cortex (PM), the inferior posterior parietal cortex (area PF/PFG) and the superior temporal sulcus (STS). Both the PM and PF/PFG play a role in action coordination, whereas the STS is a region known to respond to the sight of body movements and the sound of actions.

More recently, evidence has started to emerge that some mirror neurons in the PM have predictive properties (Keysers and Gazzola, 2014). When an action is observed, these neurons behave as predictors for the action that is most likely to follow next. Consider the actions relevant to grabbing a cup of coffee, which consist of a sequence of reaching, grabbing the cup and bringing the cup to the mouth. If one observes another person reaching out an arm, this induces activity in the STS. Additionally, PM neurons that encode grabbing, the action that is likely to be performed next, fire. Such predictive properties are of great importance because they allow us to anticipate the actions of others.

The proposed model explains the emergence of such neurons as a result of sensorimotor input from self-performed actions. When one performs a certain action, this has been coordinated by activity in the PM. Subsequently, one observes him- or herself performing that particular action; this is called reafference. By the time the observation of the action has reached the STS, neurons in the PM that encode the next action have already become active. If synaptic plasticity caused by Hebbian learning is assumed, the simultaneous activity in the STS for the observed action and in the PM for the next action causes an association between neurons in both regions. Keysers and Gazzola (2014) suggest that such an increase in synaptic strength could account for the emergence of predictive mirror neurons.

2.1.2 Rescorla-Wagner

Cooper et al. (2013b) used a computational model to investigate the development of mirror neurons. Two implementations of associative learning were considered: Rescorla-Wagner, a supervised learning rule that relies on reducing a prediction error, and an implementation of Hebbian learning. Their computational model simulates the results of an experimental study (Cook et al., 2010) that examined automatic imitation. Automatic imitation is a stimulus-response effect in which task-irrelevant movement features facilitate similar responses but interfere with dissimilar responses. In this study, participants were asked to open and close their hands in response to a color stimulus (the task-relevant stimulus) whilst watching a video of an opening or closing hand (the task-irrelevant stimulus). Research has shown that magnetic stimulation of the inferior frontal gyrus (an area where mirror neurons have been found) selectively disrupts automatic imitation; therefore, automatic imitation is thought to be an index of mirror neuron activity. Cook et al. (2010) showed that automatic imitation is reduced more by contingent and signalled training than by non-contingent sensorimotor training. The results of this study were successfully simulated by Cooper et al. (2013b) using a model based on the Rescorla-Wagner learning rule. The Hebbian learning rule, however, could not explain the reported data of Cook et al. (2010). Therefore, Cooper et al. (2013b) argue that prediction error-based learning rules, not Hebbian learning, can account for the development of mirror neurons.

The study of Cooper et al. (2013b) suggests that Hebbian learning is not sufficient to impose mirror neuron behavior. However, the implementation of Hebbian learning they used suffers from a computational problem: the weights of the connections between neurons imposed by this learning rule are in principle unbounded, which makes the learning rule unstable. More advanced types of Hebbian learning that adhere to weight constraints, for example Oja's rule (Oja, 1982), were not considered. Thus, even though Cooper et al. (2013b) have shown that supervised learning is sufficient to explain automatic imitation data, Hebbian learning should not be fully dismissed as a possible explanation for the emergence of mirror neurons.

2.2 Hebbian learning rules

At the time Hebbian learning was introduced by Donald Hebb (1949), the field of neuroscience, let alone the field of neuroinformatics, was still in its infancy. Hebb's postulate was introduced in a general, qualitative manner. It implies that learning occurs due to changes in synaptic strength, and that these changes are caused by a correlation between the activity of a pre- and postsynaptic neuron. These changes have later been defined as long term potentiation (LTP) and long term depression (LTD), which refer to an increase and decrease of synaptic efficacy respectively (Kandel et al., 2000). However, Hebb's qualitative description leaves much room for interpretation, and over time many computational implementations of his postulate have been proposed. Here, four of these will be discussed: the basic Hebb rule, the covariance rule, BCM and Oja's rule.

2.2.1 Basic Hebb rule

A simple rule that uses the presynaptic activity a_i and postsynaptic activity a_j to change the synaptic strength of the connection w_{j,i} is the basic Hebb rule (Hebb, 1949):

    Δw_{j,i} = α ⟨a_i a_j⟩    (1)

Here, α is the learning rate, which determines how quickly the synaptic weights change. The brackets ⟨ ⟩ denote that Δw_{j,i} is the average of all synaptic changes over a particular time, which articulates the idea that synaptic modification is a slow process. The rationale behind the formula is that if both neurons are highly active, their synaptic connection will increase substantially, whereas if one or both are less active, the synaptic strength will increase less.

The major disadvantage of the basic Hebb rule is that the weights can only increase. Consequently, the basic Hebb rule accounts for LTP but does not allow LTD. Since LTD has been recorded in neurons, a rule that fails to account for this phenomenon is biologically implausible; intuitively, it prevents the system from being able to unlearn what it has learned. Moreover, the basic Hebb rule is inherently unbounded, since weights can increase indefinitely, an effect that does not occur in biological systems due to the physical constraints on the growth of synapses. Additionally, the basic Hebb rule does not take baseline activity into account. In general, neurons always have some baseline of noisy activity, and since the rule increases the weights even when the recorded activity is mere baseline activity, the weight updates can easily become distorted.

2.2.2 Covariance rule

The covariance rule is an alternative to the basic Hebb rule that implements both LTP and LTD (Dayan and Abbott, 2001). It is defined as follows:

    Δw_{j,i} = α cov(a_i, a_j)    (2)

    cov(x, y) = 1/(n−1) Σ_{i=0}^{n−1} (x_i − x̄)(y_i − ȳ)    (3)

As in the basic Hebb rule, α refers to the learning rate. As suggested by its name, the covariance rule calculates the covariance of the activities of the pre- and postsynaptic neurons and uses this to determine the weight change. The covariance of two activity sequences is positive if they are correlated and negative if they are anti-correlated. Therefore, the covariance rule allows for negative weight updates and thus accounts for both LTP and LTD. By subtracting the average activity over a certain time period, the covariance rule naturally takes the baseline activity into account. However, similar to the basic Hebb rule, the covariance rule is still unbounded.
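Before turning to the unboundedness problem in more detail, the following sketch makes the contrast between the two rules concrete. It is an illustration, not code from this thesis; the activity traces, the learning rate and all names are hypothetical.

    import numpy as np

    def basic_hebb_update(w, a_pre, a_post, alpha=0.05):
        # Basic Hebb rule (formula 1): the update is the average of the
        # product of pre- and postsynaptic activity, so it is never negative.
        return w + alpha * np.mean(a_pre * a_post)

    def covariance_update(w, a_pre, a_post, alpha=0.05):
        # Covariance rule (formulas 2 and 3): mean activity is subtracted
        # first, so anti-correlated activity produces LTD (negative updates).
        n = len(a_pre)
        cov = np.sum((a_pre - a_pre.mean()) * (a_post - a_post.mean())) / (n - 1)
        return w + alpha * cov

    # Toy traces around a noisy baseline of 0.5: basic Hebb strengthens the
    # synapse regardless, the covariance rule only reacts to co-variation.
    rng = np.random.default_rng(0)
    a_i = 0.5 + 0.05 * rng.standard_normal(1000)   # presynaptic baseline
    a_j = 0.5 + 0.05 * rng.standard_normal(1000)   # independent postsynaptic
    print(basic_hebb_update(1.0, a_i, a_j))        # grows despite no correlation
    print(covariance_update(1.0, a_i, a_j))        # stays near 1.0

The toy traces also illustrate the baseline problem discussed above: uncorrelated noise around a positive baseline still drives basic Hebb upward.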

In theory, the weights can grow infinitely if the activation sequences keep being correlated. In biology this is not a problem, because the physical properties of synapses naturally prevent them from growing indefinitely. In a computational model, however, such constraints have to be incorporated explicitly, which is the case for neither the covariance rule nor the basic Hebb rule. Another problem with the covariance rule is that Δw_{j,i} will always be the same as Δw_{i,j}, since the formula does not differentiate between the pre- and postsynaptic neuron. Separating the development of two reciprocal connections would make the learning rule more biologically plausible.

2.2.3 BCM

BCM is a modification of the covariance rule that imposes a bound on the synaptic strengths. The rule, named after its introducers Bienenstock, Cooper and Munro (1982), is defined as follows:

    Δw_{j,i} = a_i a_j (a_j − θ_v)    (4)

Here, θ_v acts as a threshold that determines whether synapses are strengthened or weakened. To stabilize synaptic plasticity, BCM reduces the synaptic weights of the postsynaptic neuron if it becomes too active. This can be achieved by implementing θ_v as a sliding threshold:

    Δθ_v = a_j² − θ_v    (5)

The sliding threshold induces competition between the synapses: strengthening some synapses increases a_j and thus increases θ_v, which in effect makes it more difficult for other synapses to increase afterwards. One disadvantage of BCM, however, is that it does not take the baseline activity of the presynaptic neurons into account.

2.2.4 Oja's rule

Oja's rule, introduced by Oja (1982), is an alternative learning rule that imposes bounds on the weights:

    Δw_{j,i} = a_i a_j − α a_j² w_{j,i}    (6)

Oja's rule imposes an online re-normalization by constraining the sum of squares of the synaptic weights. More precisely, the bound on the weights is inversely proportional to α; that is, ‖w‖² will relax to 1/α (Oja, 1982). By constraining the weight vector to be of constant length, Oja's rule imposes competition between different synapses. However, similar to BCM, Oja's rule does not take the baseline activity of the presynaptic neuron into account.
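The two bounded rules can be sketched in the same style. The code below is again illustrative: the time constant of the sliding threshold and the toy activities are assumptions, not values from the literature.

    import numpy as np

    def bcm_update(w, a_pre, a_post, theta, tau=0.1):
        # BCM (formulas 4 and 5): synapses strengthen when postsynaptic
        # activity exceeds the threshold theta and weaken below it; theta
        # itself slides towards the square of the postsynaptic activity.
        dw = a_pre * a_post * (a_post - theta)
        theta = theta + tau * (a_post ** 2 - theta)   # tau is an assumption
        return w + dw, theta

    def oja_update(w, a_pre, a_post, alpha=0.1):
        # Oja's rule (formula 6): the decay term -alpha * a_post^2 * w
        # bounds the weights; for a linear neuron the squared weight norm
        # relaxes to 1/alpha.
        return w + a_pre * a_post - alpha * (a_post ** 2) * w

    # With constant activities, Oja's weight settles at a fixed point
    # instead of growing without bound as basic Hebb would.
    w = 1.0
    for _ in range(500):
        w = oja_update(w, 0.8, 0.8)
    print(w)   # converges to a_pre / (alpha * a_post) = 10.0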

2.3 Summary

In sum, the field of neuroscience is currently witnessing a debate about the ontogeny of mirror neurons. The association hypothesis, which assumes that mirror neurons emerge due to associative learning, seems to be a promising explanation. Several interpretations of associative learning have been proposed, namely Rescorla-Wagner and Hebbian learning. Research has shown that supervised learning rules such as Rescorla-Wagner are sufficient to explain results found in studies on automatic imitation. A different model that explains the emergence of predictive mirror neurons by means of Hebbian learning has been proposed, but a proof of concept has yet to be created. The chapter concluded with a review of several implementations of Hebbian learning: basic Hebb, the covariance rule, BCM and Oja's rule.

3 Computational model

In this chapter we introduce the computational model that was used to simulate the emergence of mirror neurons. First, a motivation for the structure of the artificial neural network is presented. Then, the model for a single neuron is presented. Finally, it is shown how this model can simulate action executions and observations.

3.1 A motivation for the structure of the model

There are several regions in the brains of monkeys where mirror neurons have been found. Here, as in Keysers and Gazzola (2014), the focus lies on a set of three interconnected regions: the premotor cortex (PM, area F5), the inferior posterior parietal cortex (PF/PFG) and the superior temporal sulcus (STS). Both the PM and the PF/PFG are known to contain mirror neurons. Their main function is the coordination of actions, whereas the STS is an area in the temporal cortex that responds to the sight of body movements and the sound of actions. Neurons in the PM and PF/PFG are reciprocally connected, which means that connections run both from the PM to the PF/PFG and the other way around. Similarly, there are reciprocal connections between the STS and PF/PFG. Figure 1 shows the structure of the model used to simulate these areas.

In order to make a simulation of the dynamics between these different brain regions feasible, three simplifications have been introduced. First, the intermediate step via the PF/PFG is omitted. This results in a model that has only reciprocal connections between the STS and PM; the artificial neural network thus consists of two layers of neurons.

Second, it is assumed that the PM and STS have similar encodings for the same action phases. For example, the action "drink a cup of coffee" could consist of an action phase sequence of reaching, grabbing the cup and bringing the hand to the mouth. In this example, the PM and STS will each have three neurons that encode these three action phases. It is not claimed that a single action phase is actually encoded by a single neuron in the brain. As will be described in the next section, neurons are modeled by activities that can take values between 0 and 1, whereas in reality a single neuron can only send a binary signal, the action potential, to postsynaptic neurons. Therefore, the artificial neurons presented here represent the average spiking rate of a group of neurons rather than the activity of a single neuron.

Third, the connections from the PM to the STS are restricted to inhibitory connections between neurons that encode the same action. These connections are inhibitory because the activity from PM to STS is thought to be net inhibitory (Hietanen and Perrett, 1993). However, it is biologically implausible to have as many inhibitory connections as excitatory connections, since inhibitory synapses comprise only a small fraction of the total number of synapses in the brain (Kandel et al., 2000). Computationally, too many inhibitory connections can destabilize the network, since they allow the PM to completely suppress all STS activity. For these reasons, only inhibitory connections from PM to STS neurons that encode the same action are modeled.

Figure 1: The setup used for the simulation. It consists of a two-layered neural network that represents the PM (red nodes) and STS (blue nodes). Each action phase, e.g. "reach" or "grasp", is encoded by one neuron in both the PM and STS. All STS neurons are connected to all PM neurons via excitatory connections (blue arrows), whereas PM neurons are connected via inhibitory synapses (red arrows) to the STS neurons that encode the same action. The weight matrices store the strengths of the connections. In the training phase, PM neurons are sequentially activated to simulate the execution of an action sequence, "reach, grasp, bring to mouth". The order of excitation of the PM neurons is determined by a transition matrix, which contains the probabilities that certain actions are followed by others. Here, the sequence "reach (A), grasp (B), bring to mouth (C)" is encoded, since the probabilities of the transitions A→B and B→C are both 1. With a delay of 200 ms, the STS is excited in the same order. During the testing phase, action observation is modeled: only the STS is activated, whilst still adhering to the probabilities encoded in the transition matrix.

3.2 Modeling a single neuron

The activity of a single neuron is represented by a_i, which can take a value between 0 and 1. The activity, as in Cooper et al. (2013a), is calculated as follows:

    a_i(t + 1) = ρ a_i(t) + (1 − ρ) σ(I_i(t))    (7)

    I_j(t) = Σ_i (w_{j,i} a_i(t − 1)) + E_j + B_j + N(0, µ²)    (8)

Here, ρ is a value between 0 and 1 that determines to what extent previous activity persists over time. I_j(t) refers to the input to the neuron, which is transformed to a value between 0 and 1 by the sigmoid function σ. The value of I_j(t) depends, among other factors, on the weighted sum of the activities of all neurons.

Figure 2: Parameters used in the model (a) and an example of a transition matrix (b).
(a) The parameters used in modeling neurons, drawn from Cooper et al. (2013b):

    Parameter                 Value
    Persistence ρ
    External excitation E_j   4.0
    Bias B_j                  2.0
    Gaussian noise µ          2.0

(b) Example of a transition matrix that encodes the actions A-C-E and B-D-F. The matrix contains the probability that one action phase (row) is followed by another (column).

The weight matrix w contains the weights of the connections; when the artificial network is trained, w is altered. Additionally, the value of I_j(t) depends on three factors: E_j, B_j and µ. E_j refers to the amount of stimulation caused by sources other than neurons within the network and can be either on or off. External stimulation of PM neurons represents choosing the next action that will be executed; in the case of STS neurons, E_j represents the observation of a certain action in the real world. The input signal further consists of the activation bias B_j, which represents the baseline activity of the neuron, and Gaussian noise determined by µ. Each time step t represents 1 ms of activity.

The parameters ρ, E_j, B_j and µ are constant in this model. The values used in our simulations (figure 2a) were derived from previous work (Cooper et al., 2013b). All weight matrix entries are initialized to 1 before training commences. Additionally, the network is first subjected to a burn-in phase, in which the weight matrix is held constant and the neurons are allowed to reach a steady-state activity.
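A minimal sketch of this update in numpy follows. The persistence value is not legible in figure 2a, so the 0.9 below is purely an assumption; the negative sign on the bias is likewise an assumption, chosen so that baseline activity stays low, as the threshold analysis in later chapters presumes.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def update_activities(a, a_lag, w, external, rho=0.9, excitation=4.0,
                          bias=-2.0, mu=2.0, rng=np.random.default_rng()):
        # One 1-ms step of formulas 7 and 8. `a` holds the current
        # activities, `a_lag` the activities one step earlier (the
        # presynaptic input in formula 8), and `external` is a 0/1 vector
        # marking which neurons receive the external excitation E_j.
        # rho and the sign of bias are assumptions, not thesis values.
        noise = rng.normal(0.0, mu, size=a.shape)
        inputs = w @ a_lag + excitation * external + bias + noise   # formula 8
        return rho * a + (1.0 - rho) * sigmoid(inputs)              # formula 7

    # Burn-in: run the network with constant weights until activity settles.
    n = 6                               # e.g. 3 PM and 3 STS neurons
    w = np.ones((n, n))                 # all weights initialized to 1
    a = np.zeros(n)
    a_lag = np.zeros(n)
    ext = np.zeros(n)                   # no external excitation during burn-in
    for _ in range(1000):
        a, a_lag = update_activities(a, a_lag, w, ext), a
    print(a)                            # steady-state baseline activity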

Figure 3: Examples of trials of the PM and STS during the training phase.
(a) An example excitation sequence of the PM neurons:

    t (ms)    [PM-A, PM-B, PM-C]
    0         [1, 0, 0]
    300       [0, 1, 0]
    600       [0, 0, 1]

If 1, E_j is added to the input I_j (see formula 8); if 0, nothing is added. Note that the external excitation persists until a transition is made to a new action phase, which happens after 300 ms.
(b) An example excitation sequence of the STS neurons:

    t (ms)    [STS-A, STS-B, STS-C]
    200       [1, 0, 0]
    500       [0, 1, 0]
    800       [0, 0, 1]

Note that this is simply the action sequence in (a), delayed by 200 ms.

3.3 Modeling action execution and observation

Keysers and Gazzola (2014) propose that mirror neuron activity emerges due to synaptic changes that result from simultaneous action execution and reafference (the observation of self-performed actions). This implies that the simulation of these mechanics should consist of two separate phases. In the first phase, the training phase, action execution and reafference are simulated; during this phase, the weights are updated. The second phase, the testing phase, consists of the mere observation of actions. If the correct weight changes have been made during the training phase, the PM neurons should start behaving as mirror neurons. The remainder of this section explains how the training and testing phases have been simulated.

In the artificial neural network, PM activity represents action execution, whereas STS activity represents action observation. In essence, the observation of self-performed actions is a delayed version of the action execution signal: it takes approximately 100 ms for a signal in the PM to lead to an actual action execution, and after another 100 ms the observation of the action reaches the STS, giving a total delay of 200 ms.

The transitions from one action phase to the next are coordinated by a motor planner, which chooses the next action based on a probability distribution encoded in a transition matrix. A simple example of such a transition matrix is shown in figure 2b. It encodes two actions, A-C-E and B-D-F. The first sequence can be derived from the transition matrix since the probabilities of the transitions A→C and C→E are both 1. It is assumed that the probability of each entire action sequence is the same; therefore, each last action phase in a sequence has a probability of 1/n of transitioning to the first action phase of a new sequence, with n being the total number of action sequences.

Initially, the motor planner chooses one of the first action phases as a starting point. Once an action phase is selected, the PM neuron that encodes this particular action is externally stimulated (value 1 in figure 3a). The complete list of excitations at one particular time step is referred to as a trial. The length of one action phase is assumed to be 300 ms, as in Keysers and Gazzola (2014). After 300 ms, the next action phase is chosen at random based on the probabilities encoded in the transition matrix, which results in a new trial. Action observation entails the same trial, only with a delay of 200 ms (figure 3b). During the training phase, both the PM and STS trials are active; during the testing phase, only the STS neurons are excited.
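The following sketch shows one way the motor planner and the 200 ms reafference delay could be implemented; `plan_actions` and its defaults are illustrative, not code from the thesis.

    import numpy as np

    def plan_actions(transition, n_steps, phase_ms=300, delay_ms=200,
                     rng=np.random.default_rng()):
        # Sample a sequence of action phases from the transition matrix
        # and return per-millisecond PM and STS excitation arrays; the
        # STS trace is the PM trace delayed by `delay_ms` (reafference).
        n = transition.shape[0]
        pm = np.zeros((n_steps * phase_ms, n))
        phase = rng.integers(n)                  # arbitrary starting phase
        for step in range(n_steps):
            t = step * phase_ms
            pm[t:t + phase_ms, phase] = 1.0      # excite for one 300 ms phase
            phase = rng.choice(n, p=transition[phase])
        sts = np.zeros_like(pm)
        sts[delay_ms:] = pm[:-delay_ms]          # 200 ms observation delay
        return pm, sts

    # Transition matrix for the single sequence A -> B -> C of figure 1;
    # the last phase returns to the first with probability 1 (n = 1).
    T = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])
    pm_trace, sts_trace = plan_actions(T, n_steps=10)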

3.4 Learning rules

During the training phase, the weight matrix w is updated according to a learning rule. Each of the Hebbian learning rules presented in chapter 2 has been implemented. However, since Oja's rule does not naturally handle baseline activity well, we propose an alternative to it. The weight updates are the same as in the original Oja's rule (formula 6), but a threshold is imposed on all neuron activities (a sketch of the resulting update follows the chapter summary below):

    a_k = a_k  if a_k ≥ θ,  and 0 otherwise    (9)

This threshold prevents insubstantial activities from changing the weights. Note that the addition of the threshold cannot make Oja's rule unbounded, since the bound of the original rule depends only on α. Moreover, because the threshold introduces zero correlation for certain periods of time, the weights will actually be lower than the weights imposed by the original learning rule.

3.5 Summary

In this chapter the computational model of the artificial network that models connections between the premotor cortex (PM) and the superior temporal sulcus (STS) has been presented. The general structure of connectivity in the network is the result of three simplifying assumptions. First, the intermediate step via the PF/PFG has been omitted. Second, each neuron represents only one action phase; consequently, the neural network consists of two equally sized layers. Third, the inhibitory PM→STS connections only exist between PM and STS neurons that encode the same action. Additionally, the formula to calculate the activity of a single neuron has been presented, together with an elaboration on how a transition matrix can be used to simulate action execution and observation. Finally, a variation of Oja's rule that uses a threshold has been introduced.
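As forecast in section 3.4, here is a vectorized sketch of one step of the thresholded rule. The function name is illustrative, and passing a separate α per weight matrix is one way to realize the distinct inhibitory and excitatory bounds used in chapter 5.

    import numpy as np

    def thresholded_oja(w, a_pre, a_post, alpha, theta=0.5):
        # Thresholded Oja update (formulas 6 and 9): activities below
        # theta are zeroed first, so baseline noise drives no weight
        # change; the decay term then bounds the weights as in formula 6.
        a_pre = np.where(a_pre >= theta, a_pre, 0.0)
        a_post = np.where(a_post >= theta, a_post, 0.0)
        return w + np.outer(a_post, a_pre) - alpha * (a_post ** 2)[:, None] * w

    # One excitatory (STS -> PM) update with alpha_e = 0.1, i.e. a bound
    # of 1/alpha_e = 10 on the squared weight norm:
    w_e = np.ones((3, 3))
    w_e = thresholded_oja(w_e, a_pre=np.array([0.9, 0.1, 0.2]),
                          a_post=np.array([0.1, 0.8, 0.1]), alpha=0.1)

During training this update would be applied at every time step, once to the excitatory STS→PM matrix and once to the inhibitory PM→STS matrix, each with its own α.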

4 Analysis procedure

In this chapter an analysis procedure that can detect mirror neuron behavior is introduced. In contrast to the work of Cooper et al. (2013b), the simulation presented here does not reproduce one particular experiment, so it cannot be compared with experimental data in a supervised manner. This requires a new way of quantifying mirror neuron behavior. First, it is discussed what mirror neuron behavior would visually look like in the simulation. Subsequently, a procedure for calculating a quantitative measure of the level of mirror neuron behavior in a sequence of neuron activities is proposed.

4.1 A visualization of mirror neuron behavior

Before an evaluation metric can be designed, it must be clear what is considered mirror-like behavior. Figure 4a shows activity that we deem typical of mirror neuron behavior: the STS has been excited externally in a particular sequence and the PM mimics the behavior of the STS neurons. Figure 4b shows predictive mirror neuron behavior: the activity of STS-A induces activity in PM-B, which encodes the action most likely to follow next. Figure 4c shows a combination of the two, in which the activity of STS-A induces activity in PM-B that, in contrast to figure 4b, persists until STS-B has stopped being active. The combined activity can explain both normal and predictive mirror neuron behavior. Note that these kinds of behavior are expected to emerge only during the testing phase; during the training phase, PM activity is caused not by STS activity but by external excitation.

4.2 Analysis procedure

A procedure to measure mirror neuron behavior was designed for the simulations presented in this work. All three types of mirror neuron behavior follow approximately the same sequence of peaks (except that predictive mirror neurons skip the first action of a sequence). Therefore, the analysis procedure consists of the creation of a transition matrix that describes sequences of activity in both the PM and the STS. By comparing the transition matrices, it can be determined to what extent the PM and STS activity is similar, and thus to what extent mirror neuron behavior has emerged.

The following procedure is used to calculate a transition matrix from recorded activity (visualized in figure 5). First, a low-pass filter is applied to the activity signals of all neurons (figure 5a) to smoothen them (figure 5b). Then, significant peaks are extracted by setting a threshold equal to the mean plus one standard deviation (figure 5c). Subsequently, for each peak the neuron whose peak is the closest follow-up is determined. For a peak at time t this is done as follows. First, the thresholded peaks of all neurons from time t to t + 3000 ms are collected (figure 5d); if a signal contains multiple peaks, all but the first are removed. Then, a cross correlation analysis is performed between the original signal and each of the other signals, from which the delays with the highest cross correlation can be determined (figure 5e). If the delay of a certain neuron is very low (−25 to 25 ms), that neuron is considered to be simultaneous with the peak under investigation. The neuron i whose signal has the lowest positive delay d_i is selected as the closest successor, and t_{j,i} in the transition matrix is increased by one (figure 5f). If the delay d_n of another neuron n is very close to the closest delay (d_n − d_i < 50 ms), it is also selected as a closest successor; in that case, the total probability is evenly distributed over the corresponding entries of the transition matrix. As a final step, the rows of the transition matrix are normalized.
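The sketch below implements a simplified version of this procedure. It replaces the low-pass filter with a moving average and the cross-correlation step with first-onset counting, and all parameter choices are assumptions; it is meant to make the bookkeeping concrete, not to reproduce the exact analysis.

    import numpy as np

    def build_transition_matrix(activity, window=50, horizon=3000):
        # `activity` has shape (time in ms, neurons). A moving average
        # stands in for the low-pass filter; thresholding keeps only
        # activity above mean + 1 std (figure 5c); first onsets within
        # `horizon` ms stand in for the cross-correlation step.
        n_t, n = activity.shape
        kernel = np.ones(window) / window
        smooth = np.column_stack([np.convolve(activity[:, j], kernel, 'same')
                                  for j in range(n)])
        above = smooth >= smooth.mean(axis=0) + smooth.std(axis=0)
        onsets = [np.flatnonzero(np.diff(above[:, j].astype(int)) == 1) + 1
                  for j in range(n)]
        trans = np.zeros((n, n))
        for j in range(n):                       # each peak of neuron j ...
            for t in onsets[j]:
                followers = []
                for i in range(n):               # ... followed first by i?
                    if i == j:
                        continue
                    later = onsets[i][(onsets[i] > t) & (onsets[i] <= t + horizon)]
                    if later.size:
                        followers.append((later[0] - t, i))
                if followers:
                    trans[j, min(followers)[1]] += 1   # closest follow-up
        sums = trans.sum(axis=1, keepdims=True)
        return np.divide(trans, sums, out=np.zeros_like(trans), where=sums > 0)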

Figure 4: Examples of three types of mirror neuron-like behavior.
(a) Normal mirror neuron behavior: the activity of the PM neurons exactly mirrors the activation of the STS.
(b) Predictive mirror neuron behavior: the activity of the PM neurons matches the activity of the STS neurons that will be active next in the action sequence. For example, the green PM neuron, which usually precedes a yellow neuron, responds to the yellow STS neuron.
(c) Combination of normal and predictive mirror neuron behavior: PM neurons respond both to the same action phase and to the action phase that usually precedes the action phase they encode.

The above procedure can be applied to both STS and PM activity, which results in two matrices, t^sts and t^pm. In order to determine the similarity of the two matrices, one can calculate the Frobenius norm of their difference matrix, which gives the error ε:

    ε = √( Σ_{i=1}^{n} Σ_{j=1}^{n} (t^sts_{j,i} − t^pm_{j,i})² )    (10)

A low error ε indicates a high level of mirror neuron-like behavior. To determine whether a low error is significant, it is compared to the distribution of random errors.

Figure 5: A visualization of the procedure used to create a transition matrix from recorded activity.
(a) The recorded activity of the neurons.
(b) The activity after application of the low-pass filter.
(c) All activities lower than the threshold are set to zero.
(d) The activities within 3000 ms after the start of the first peak of neuron B.
(e) The delays are determined by calculating the maximal cross correlation.
(f) The transition matrix is updated for the transition from neuron B to neuron C, the neuron with the lowest positive delay.
Steps (d), (e) and (f) are repeated for all peaks detected in steps (a), (b) and (c). Afterwards, the rows of the transition matrix are normalized.

The distribution of random errors is approximated by calculating the difference between a set of stochastic matrices generated uniformly at random and the STS matrix. If the retrieved error ε is lower than 2% of the errors obtained from these random matrices, it is deemed significant.

In the case of predictive mirror neuron behavior (figure 4b), the transition from one action sequence to the next will always skip the first action. Therefore, this methodology works best for normal and combined mirror behavior (figures 4a and 4c). Note that the method presented here cannot distinguish between the different types of mirror neuron behavior.

4.3 Summary

In this chapter a quantitative measure for the detection of mirror neuron behavior has been presented. First, three types of mirror neuron behavior (normal, predictive and combined) were identified. Second, the analysis procedure to detect mirror neuron behavior was described. This procedure consists of the generation of a transition matrix for both the STS and the PM neurons; these transition matrices are then compared using the Frobenius norm of their difference matrix to obtain the error ε. The analysis procedure can detect all presented types of mirror neuron behavior (normal, predictive and combined), though it will generally return a higher error for predictive mirror neuron behavior.
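To close the chapter, a sketch of the final comparison step, under the reading that "stochastic matrices generated uniformly at random" means row-normalized uniform draws; that reading, and the helper names, are assumptions.

    import numpy as np

    def transition_error(t_sts, t_pm):
        # Formula 10: Frobenius norm of the difference matrix.
        return np.linalg.norm(t_sts - t_pm)

    def is_significant(t_sts, t_pm, n_random=100_000, percentile=2.0,
                       rng=np.random.default_rng()):
        # Approximate the distribution of random errors with uniformly
        # drawn, row-normalized stochastic matrices and test whether the
        # retrieved error falls below the given percentile of that
        # distribution (here the lowest 2%).
        n = t_sts.shape[0]
        random_mats = rng.uniform(size=(n_random, n, n))
        random_mats /= random_mats.sum(axis=2, keepdims=True)
        random_errors = np.linalg.norm(random_mats - t_sts, axis=(1, 2))
        return transition_error(t_sts, t_pm) < np.percentile(random_errors,
                                                             percentile)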

5 Results

In this chapter two types of results are discussed. First, a comparative analysis of the different learning rules shows how well each can account for mirror neuron behavior. Second, a parameter space analysis of the thresholded Oja's rule shows to what extent different parameter settings affect the level of mirror neuron behavior that can be observed.

5.1 Comparison of learning rules

Figure 6 shows the mean and standard deviation of the error ε of the transition matrices for 10 simulations of basic Hebb, the covariance rule, BCM, Oja's rule and the thresholded Oja's rule, after a training phase of ms for one action sequence consisting of four action phases. Additionally, a Hinton graph of the weight matrices is presented in figure 7, and figure 8 shows the recorded activities in the testing phase for all rules.

As visualized in figure 8a, applying the basic Hebb rule does not impose mirror neuron behavior. The weight matrix (figure 7a) shows that the weights for all neurons are approximately the same, which explains why the activities in figure 8a are also very similar. This effect is caused by the baseline activity that is present in all neurons, which influences the weight changes imposed by Hebb's rule.

Figure 6: The average and standard deviation of the error ε over 10 simulations of the five learning rules. Each simulation consisted of ms in both the training and testing phase, for a single action sequence of 4 action phases. The blue line indicates the value under which the error is deemed significant (lower than 2% of the errors of randomly generated matrices).

The covariance rule, shown in figure 8b, naturally takes the baseline activity into account. Its weight matrix is more diverse than that of the basic Hebb rule (figure 7b). This causes its average error to be lower than that of basic Hebb, but still not significant.

Figure 7: Hinton graphs showing the relative differences of the weights in the weight matrices of all rules. The lower left corner shows all weights from STS to PM neurons, whereas the upper right corner shows all weights from PM to STS neurons. Blue and red squares denote inhibitory and excitatory connections respectively. The weights are normalized by the maximum value of the weight matrix, which is denoted beneath each panel by m.
(a) Basic Hebb (α = 0.044), m = 2.7
(b) Covariance (α = 0.044), m = 4.8
(c) BCM (θ_init = 0), m = 20.3
(d) Oja's rule (θ = 0.0, α_e = 10⁻¹, α_i = 1⁻¹), m = 3.79
(e) Oja's rule (θ = 0.5, α_e = 10⁻¹, α_i = 1⁻¹), m = 3.59

Moreover, as depicted by the large variance of the covariance rule in figure 6, its results are unstable.

Figure 8c shows the activities after applying BCM, a modification of the covariance rule that imposes a bound on the synaptic strengths. STS activity is completely suppressed during the training phase. Theoretically, BCM should bound the weights through competition between the synapses, but in this case the bounds are still too high, and the strong inhibitory weights from PM neurons suppress STS activity during the training phase. As a consequence, BCM performs approximately as well as basic Hebb, as shown in figure 6.

An alternative to BCM is Oja's rule, in which the bounds of the weights can be affected directly via the α parameter. To account for the differences between the bounds of inhibitory and excitatory synapses that occur in nature, different bounds were used; recall that α⁻¹ is proportional to the bound of ‖w‖². Here, α_inhibitory was set to 1⁻¹ and α_excitatory to 10⁻¹. Figure 7d shows that, if Oja's rule is applied directly, all the PM neurons behave similarly.

Figure 8: A visualization of the performance of all learning rules. Each color denotes the same action phase. The lines above the graph of STS activity show the time during which each neuron was externally excited. Shown are the last 4000 ms of a simulation with a training and test phase of ms each, for four neurons that encode one action sequence (a-b-c-d).
(a) Basic Hebb, α = 0.044
(b) Covariance, α = 0.044
(c) BCM, θ_init = 0
(d) Oja's rule, θ = 0, α_e = 10⁻¹, α_i = 1⁻¹
(e) Oja's rule, θ = 0.5, α_e = 10⁻¹, α_i = 1⁻¹

Figure 9: Results of the quantitative parameter space analysis. Each panel depicts the results for a different threshold θ: (a) θ = 0.3, (b) θ = 0.5, (c) θ = 0.7. The x-axis shows the bounds for inhibitory synapses (α_i⁻¹), whereas the y-axis shows the bounds for excitatory synapses (α_e⁻¹). The lower the error (denoted by a color range from dark red to light yellow), the better the performance for that parameter setting. The areas delineated by the blue line contain all retrieved errors that are lower than the errors of 98% of 10 million randomly generated matrices; for these parameter settings, mirror neuron behavior emerges.

Since no mirror behavior occurs, the average error is high. To account for the baseline activity of the neurons, the learning rule was altered as proposed in section 3.4, with the threshold θ set to 0.5. As shown in figure 8e, this results in a combination of normal and predictive mirror neuron behavior, and in an error that is approximately 10 times lower than that of the other learning rules (figure 6).

5.2 Parameter space analysis of Oja's rule

The thresholded Oja's rule has three parameters, namely α_inhibitory, α_excitatory and the threshold θ. A parameter space analysis was performed to see to what extent different parameter settings change the outcome of applying the rule. The thresholded Oja's rule was applied in 500 simulations with different parameter settings. Both the training phase and the testing phase consisted of ms. 19 action phases were modeled, divided over 4 action sequences of length 5, giving a total network size of 38 nodes. Since modeling 4 action sequences of length 5 requires 20 action phases and only 19 are modeled, two sequences partly overlap.

The procedure described in chapter 4 was applied to the activities in the testing phase to determine the error ε between the PM and STS transition matrices. Additionally, the errors of random matrices were calculated; these were used to approximate the p-values at which a low error is deemed significant.

The results of the parameter space analysis are shown in figure 9. The axes show the inverse of α, since this is approximately equal to the bound on the weight matrix. Each panel depicts the retrieved error for a different threshold θ. The blue line delineates the region in which the error is lower than 2% of randomly obtained errors. The analysis shows that if the threshold is relatively low (0.3, figure 9a), Oja's rule does not impose mirror neuron behavior at all. Figure 9b shows that a threshold of 0.5 does impose mirror behavior if, in general, the bound for excitatory connections is higher than the bound for inhibitory connections; if both parameters are higher than approximately 15, no mirror behavior emerges. If the threshold is very high (0.7, figure 9c), mirror neuron behavior is imposed, but to a lesser extent than with a threshold of 0.5; here, the excitatory bounds must exceed the inhibitory bounds by a larger margin for mirror neuron behavior to emerge.

5.3 Summary

Five different learning rules have been implemented. An analysis of simulations of one simple action sequence shows that, of the rules considered, only the thresholded Oja's rule is able to produce mirror neuron behavior; more precisely, it can account for both normal and predictive mirror neuron behavior. Furthermore, a parameter space analysis shows that if the threshold is set too low, no mirror neuron behavior is imposed, whereas a threshold of 0.5 yields a large region of parameter values in which mirror neuron behavior emerges. In general, the bounds for inhibitory synapses have to be strictly smaller than those for excitatory synapses to impose mirror neuron behavior; if the threshold becomes too high, the excitatory bounds must be about twice as high as the inhibitory bounds.

6 Conclusion

In previous work it has been proposed that Hebbian learning could be responsible for the ontogeny of predictive mirror neurons (Keysers and Gazzola, 2014). Here, we have shown that a variation of Oja's rule (an implementation of Hebbian learning) is sufficient to explain the emergence of mirror neurons. An artificial neural network that simulates the interactions between the premotor cortex (PM) and the superior temporal sulcus (STS) has been created. Different implementations of Hebbian learning have been compared on a simple action sequence. Additionally, a parameter space analysis has been performed on the proposed thresholded Oja's rule to determine the sensitivity of its performance to the parameters.

We have identified that, of the learning rules considered, only the thresholded Oja's rule is sufficient to impose mirror neuron behavior. The other learning rules (basic Hebb, covariance, BCM and the original Oja's rule) are subject to at least one of the following three limitations. First, both the covariance rule and the basic Hebb rule are unbounded, which makes them biologically implausible. Second, all learning rules except the covariance rule are sensitive to baseline activity; the weight changes caused by baseline activity overshadow the associations between significant STS and PM activity, which prevents mirror neuron behavior from emerging. Third, if the inhibitory weights are not bounded tightly enough, as in BCM and the original Oja's rule, PM inhibition causes a complete suppression of STS activity, which prevents any association between the PM and STS from occurring. A variation of Oja's rule that uses a threshold and different bounds for inhibitory and excitatory synapses has been shown to overcome each of these limitations.

The parameter space analysis has indicated that the parameters of the proposed thresholded Oja's rule must adhere to two constraints for mirror neuron behavior to emerge. First, the threshold value must be higher than the baseline activity, but lower than the highest peaks measured. If the threshold is too low, small variations in baseline activity impose weight changes that prevent the correct associations from being made. Choosing the proper threshold value can be viewed as intrinsic homeostatic plasticity, which refers to the capacity of neurons to regulate their own excitability relative to network activity (Turrigiano and Nelson, 2004). Second, mirror neuron behavior can only be imposed if the bounds for the excitatory synapses are higher than those for the inhibitory synapses. This is necessary because otherwise inhibitory PM neurons will completely suppress STS activity, which prevents associations from being learned.

In conclusion, we have shown that a thresholded Oja's rule is sufficient to account for the emergence of mirror neurons in an artificial neural network that simulates interactions between the PM and STS. Therefore, this work provides positive evidence for the proposal that Hebbian learning is sufficient to account for the emergence of mirror neurons. In a broader sense, this work promotes the idea that statistical sensorimotor contingencies suffice to explain the ontogeny of mirror neurons; it can therefore be regarded as positive evidence for the association hypothesis.


More information

Lateral Inhibition Explains Savings in Conditioning and Extinction

Lateral Inhibition Explains Savings in Conditioning and Extinction Lateral Inhibition Explains Savings in Conditioning and Extinction Ashish Gupta & David C. Noelle ({ashish.gupta, david.noelle}@vanderbilt.edu) Department of Electrical Engineering and Computer Science

More information

Chapter 5: Learning and Behavior Learning How Learning is Studied Ivan Pavlov Edward Thorndike eliciting stimulus emitted

Chapter 5: Learning and Behavior Learning How Learning is Studied Ivan Pavlov Edward Thorndike eliciting stimulus emitted Chapter 5: Learning and Behavior A. Learning-long lasting changes in the environmental guidance of behavior as a result of experience B. Learning emphasizes the fact that individual environments also play

More information

Information Processing During Transient Responses in the Crayfish Visual System

Information Processing During Transient Responses in the Crayfish Visual System Information Processing During Transient Responses in the Crayfish Visual System Christopher J. Rozell, Don. H. Johnson and Raymon M. Glantz Department of Electrical & Computer Engineering Department of

More information

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD

How has Computational Neuroscience been useful? Virginia R. de Sa Department of Cognitive Science UCSD How has Computational Neuroscience been useful? 1 Virginia R. de Sa Department of Cognitive Science UCSD What is considered Computational Neuroscience? 2 What is considered Computational Neuroscience?

More information

Plasticity of Cerebral Cortex in Development

Plasticity of Cerebral Cortex in Development Plasticity of Cerebral Cortex in Development Jessica R. Newton and Mriganka Sur Department of Brain & Cognitive Sciences Picower Center for Learning & Memory Massachusetts Institute of Technology Cambridge,

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION doi:10.1038/nature10776 Supplementary Information 1: Influence of inhibition among blns on STDP of KC-bLN synapses (simulations and schematics). Unconstrained STDP drives network activity to saturation

More information

Evolving Imitating Agents and the Emergence of a Neural Mirror System

Evolving Imitating Agents and the Emergence of a Neural Mirror System Evolving Imitating Agents and the Emergence of a Neural Mirror System Elhanan Borenstein and Eytan Ruppin,2 School of Computer Science, Tel Aviv University, Tel-Aviv 6998, Israel 2 School of Medicine,

More information

A Dynamic Model for Action Understanding and Goal-Directed Imitation

A Dynamic Model for Action Understanding and Goal-Directed Imitation * Manuscript-title pg, abst, fig lgnd, refs, tbls Brain Research 1083 (2006) pp.174-188 A Dynamic Model for Action Understanding and Goal-Directed Imitation Wolfram Erlhagen 1, Albert Mukovskiy 1, Estela

More information

Basics of Computational Neuroscience: Neurons and Synapses to Networks

Basics of Computational Neuroscience: Neurons and Synapses to Networks Basics of Computational Neuroscience: Neurons and Synapses to Networks Bruce Graham Mathematics School of Natural Sciences University of Stirling Scotland, U.K. Useful Book Authors: David Sterratt, Bruce

More information

Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence

Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence Cognitive Neuroscience History of Neural Networks in Artificial Intelligence The concept of neural network in artificial intelligence To understand the network paradigm also requires examining the history

More information

Cell Responses in V4 Sparse Distributed Representation

Cell Responses in V4 Sparse Distributed Representation Part 4B: Real Neurons Functions of Layers Input layer 4 from sensation or other areas 3. Neocortical Dynamics Hidden layers 2 & 3 Output layers 5 & 6 to motor systems or other areas 1 2 Hierarchical Categorical

More information

VS : Systemische Physiologie - Animalische Physiologie für Bioinformatiker. Neuronenmodelle III. Modelle synaptischer Kurz- und Langzeitplastizität

VS : Systemische Physiologie - Animalische Physiologie für Bioinformatiker. Neuronenmodelle III. Modelle synaptischer Kurz- und Langzeitplastizität Bachelor Program Bioinformatics, FU Berlin VS : Systemische Physiologie - Animalische Physiologie für Bioinformatiker Synaptische Übertragung Neuronenmodelle III Modelle synaptischer Kurz- und Langzeitplastizität

More information

Synaptic Plasticity and Connectivity Requirements to Produce Stimulus-Pair Specific Responses in Recurrent Networks of Spiking Neurons

Synaptic Plasticity and Connectivity Requirements to Produce Stimulus-Pair Specific Responses in Recurrent Networks of Spiking Neurons Synaptic Plasticity and Connectivity Requirements to Produce Stimulus-Pair Specific Responses in Recurrent Networks of Spiking Neurons Mark A. Bourjaily, Paul Miller* Department of Biology and Neuroscience

More information

VISUAL CORTICAL PLASTICITY

VISUAL CORTICAL PLASTICITY VISUAL CORTICAL PLASTICITY OCULAR DOMINANCE AND OTHER VARIETIES REVIEW OF HIPPOCAMPAL LTP 1 when an axon of cell A is near enough to excite a cell B and repeatedly and consistently takes part in firing

More information

A neural circuit model of decision making!

A neural circuit model of decision making! A neural circuit model of decision making! Xiao-Jing Wang! Department of Neurobiology & Kavli Institute for Neuroscience! Yale University School of Medicine! Three basic questions on decision computations!!

More information

Dynamic Stochastic Synapses as Computational Units

Dynamic Stochastic Synapses as Computational Units Dynamic Stochastic Synapses as Computational Units Wolfgang Maass Institute for Theoretical Computer Science Technische Universitat Graz A-B01O Graz Austria. email: maass@igi.tu-graz.ac.at Anthony M. Zador

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information

Memory Systems II How Stored: Engram and LTP. Reading: BCP Chapter 25

Memory Systems II How Stored: Engram and LTP. Reading: BCP Chapter 25 Memory Systems II How Stored: Engram and LTP Reading: BCP Chapter 25 Memory Systems Learning is the acquisition of new knowledge or skills. Memory is the retention of learned information. Many different

More information

The Integration of Features in Visual Awareness : The Binding Problem. By Andrew Laguna, S.J.

The Integration of Features in Visual Awareness : The Binding Problem. By Andrew Laguna, S.J. The Integration of Features in Visual Awareness : The Binding Problem By Andrew Laguna, S.J. Outline I. Introduction II. The Visual System III. What is the Binding Problem? IV. Possible Theoretical Solutions

More information

Evaluating the Effect of Spiking Network Parameters on Polychronization

Evaluating the Effect of Spiking Network Parameters on Polychronization Evaluating the Effect of Spiking Network Parameters on Polychronization Panagiotis Ioannou, Matthew Casey and André Grüning Department of Computing, University of Surrey, Guildford, Surrey, GU2 7XH, UK

More information

LEARNING ARBITRARY FUNCTIONS WITH SPIKE-TIMING DEPENDENT PLASTICITY LEARNING RULE

LEARNING ARBITRARY FUNCTIONS WITH SPIKE-TIMING DEPENDENT PLASTICITY LEARNING RULE LEARNING ARBITRARY FUNCTIONS WITH SPIKE-TIMING DEPENDENT PLASTICITY LEARNING RULE Yefei Peng Department of Information Science and Telecommunications University of Pittsburgh Pittsburgh, PA 15260 ypeng@mail.sis.pitt.edu

More information

Different inhibitory effects by dopaminergic modulation and global suppression of activity

Different inhibitory effects by dopaminergic modulation and global suppression of activity Different inhibitory effects by dopaminergic modulation and global suppression of activity Takuji Hayashi Department of Applied Physics Tokyo University of Science Osamu Araki Department of Applied Physics

More information

Activity-Dependent Development II April 25, 2007 Mu-ming Poo

Activity-Dependent Development II April 25, 2007 Mu-ming Poo Activity-Dependent Development II April 25, 2007 Mu-ming Poo 1. The neurotrophin hypothesis 2. Maps in somatic sensory and motor cortices 3. Development of retinotopic map 4. Reorganization of cortical

More information

Stable Hebbian Learning from Spike Timing-Dependent Plasticity

Stable Hebbian Learning from Spike Timing-Dependent Plasticity The Journal of Neuroscience, December 1, 2000, 20(23):8812 8821 Stable Hebbian Learning from Spike Timing-Dependent Plasticity M. C. W. van Rossum, 1 G. Q. Bi, 2 and G. G. Turrigiano 1 1 Brandeis University,

More information

Computational cognitive neuroscience: 2. Neuron. Lubica Beňušková Centre for Cognitive Science, FMFI Comenius University in Bratislava

Computational cognitive neuroscience: 2. Neuron. Lubica Beňušková Centre for Cognitive Science, FMFI Comenius University in Bratislava 1 Computational cognitive neuroscience: 2. Neuron Lubica Beňušková Centre for Cognitive Science, FMFI Comenius University in Bratislava 2 Neurons communicate via electric signals In neurons it is important

More information

Free recall and recognition in a network model of the hippocampus: simulating effects of scopolamine on human memory function

Free recall and recognition in a network model of the hippocampus: simulating effects of scopolamine on human memory function Behavioural Brain Research 89 (1997) 1 34 Review article Free recall and recognition in a network model of the hippocampus: simulating effects of scopolamine on human memory function Michael E. Hasselmo

More information

Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System

Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System Active Control of Spike-Timing Dependent Synaptic Plasticity in an Electrosensory System Patrick D. Roberts and Curtis C. Bell Neurological Sciences Institute, OHSU 505 N.W. 185 th Avenue, Beaverton, OR

More information

CS/NEUR125 Brains, Minds, and Machines. Due: Friday, April 14

CS/NEUR125 Brains, Minds, and Machines. Due: Friday, April 14 CS/NEUR125 Brains, Minds, and Machines Assignment 5: Neural mechanisms of object-based attention Due: Friday, April 14 This Assignment is a guided reading of the 2014 paper, Neural Mechanisms of Object-Based

More information

Self-Organization and Segmentation with Laterally Connected Spiking Neurons

Self-Organization and Segmentation with Laterally Connected Spiking Neurons Self-Organization and Segmentation with Laterally Connected Spiking Neurons Yoonsuck Choe Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 USA Risto Miikkulainen Department

More information

Modulators of Spike Timing-Dependent Plasticity

Modulators of Spike Timing-Dependent Plasticity Modulators of Spike Timing-Dependent Plasticity 1 2 3 4 5 Francisco Madamba Department of Biology University of California, San Diego La Jolla, California fmadamba@ucsd.edu 6 7 8 9 10 11 12 13 14 15 16

More information

Social Cognition and the Mirror Neuron System of the Brain

Social Cognition and the Mirror Neuron System of the Brain Motivating Questions Social Cognition and the Mirror Neuron System of the Brain Jaime A. Pineda, Ph.D. Cognitive Neuroscience Laboratory COGS1 class How do our brains perceive the mental states of others

More information

Computational Cognitive Neuroscience

Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience *Computer vision, *Pattern recognition, *Classification, *Picking the relevant information

More information

A Dynamic Neural Network Model of Sensorimotor Transformations in the Leech

A Dynamic Neural Network Model of Sensorimotor Transformations in the Leech Communicated by Richard Andersen 1 A Dynamic Neural Network Model of Sensorimotor Transformations in the Leech Shawn R. Lockery Yan Fang Terrence J. Sejnowski Computational Neurobiological Laboratory,

More information

Part 11: Mechanisms of Learning

Part 11: Mechanisms of Learning Neurophysiology and Information: Theory of Brain Function Christopher Fiorillo BiS 527, Spring 2012 042 350 4326, fiorillo@kaist.ac.kr Part 11: Mechanisms of Learning Reading: Bear, Connors, and Paradiso,

More information

2Lesson. Outline 3.3. Lesson Plan. The OVERVIEW. Lesson 3.3 Why does applying pressure relieve pain? LESSON. Unit1.2

2Lesson. Outline 3.3. Lesson Plan. The OVERVIEW. Lesson 3.3 Why does applying pressure relieve pain? LESSON. Unit1.2 Outline 2Lesson Unit1.2 OVERVIEW Rationale: This lesson introduces students to inhibitory synapses. To review synaptic transmission, the class involves a student model pathway to act out synaptic transmission.

More information

Pairwise Analysis Can Account for Network Structures Arising from Spike-Timing Dependent Plasticity

Pairwise Analysis Can Account for Network Structures Arising from Spike-Timing Dependent Plasticity Pairwise Analysis Can Account for Network Structures Arising from Spike-Timing Dependent Plasticity Baktash Babadi 1,2 *, L. F. Abbott 1 1 Center for Theoretical Neuroscience, Department of Neuroscience,

More information

Hierarchical dynamical models of motor function

Hierarchical dynamical models of motor function ARTICLE IN PRESS Neurocomputing 70 (7) 975 990 www.elsevier.com/locate/neucom Hierarchical dynamical models of motor function S.M. Stringer, E.T. Rolls Department of Experimental Psychology, Centre for

More information

Synfire chains with conductance-based neurons: internal timing and coordination with timed input

Synfire chains with conductance-based neurons: internal timing and coordination with timed input Neurocomputing 5 (5) 9 5 www.elsevier.com/locate/neucom Synfire chains with conductance-based neurons: internal timing and coordination with timed input Friedrich T. Sommer a,, Thomas Wennekers b a Redwood

More information

Supplemental Information: Adaptation can explain evidence for encoding of probabilistic. information in macaque inferior temporal cortex

Supplemental Information: Adaptation can explain evidence for encoding of probabilistic. information in macaque inferior temporal cortex Supplemental Information: Adaptation can explain evidence for encoding of probabilistic information in macaque inferior temporal cortex Kasper Vinken and Rufin Vogels Supplemental Experimental Procedures

More information

Synaptic plasticityhippocampus. Neur 8790 Topics in Neuroscience: Neuroplasticity. Outline. Synaptic plasticity hypothesis

Synaptic plasticityhippocampus. Neur 8790 Topics in Neuroscience: Neuroplasticity. Outline. Synaptic plasticity hypothesis Synaptic plasticityhippocampus Neur 8790 Topics in Neuroscience: Neuroplasticity Outline Synaptic plasticity hypothesis Long term potentiation in the hippocampus How it s measured What it looks like Mechanisms

More information

Mirror Neurons in Primates, Humans, and Implications for Neuropsychiatric Disorders

Mirror Neurons in Primates, Humans, and Implications for Neuropsychiatric Disorders Mirror Neurons in Primates, Humans, and Implications for Neuropsychiatric Disorders Fiza Singh, M.D. H.S. Assistant Clinical Professor of Psychiatry UCSD School of Medicine VA San Diego Healthcare System

More information

Modelling the Stroop Effect: Dynamics in Inhibition of Automatic Stimuli Processing

Modelling the Stroop Effect: Dynamics in Inhibition of Automatic Stimuli Processing Modelling the Stroop Effect: Dynamics in Inhibition of Automatic Stimuli Processing Nooraini Yusoff 1, André Grüning 1 and Antony Browne 1 1 Department of Computing, Faculty of Engineering and Physical

More information

Sum of Neurally Distinct Stimulus- and Task-Related Components.

Sum of Neurally Distinct Stimulus- and Task-Related Components. SUPPLEMENTARY MATERIAL for Cardoso et al. 22 The Neuroimaging Signal is a Linear Sum of Neurally Distinct Stimulus- and Task-Related Components. : Appendix: Homogeneous Linear ( Null ) and Modified Linear

More information

AU B. Sc.(Hon's) (Fifth Semester) Esamination, Introduction to Artificial Neural Networks-IV. Paper : -PCSC-504

AU B. Sc.(Hon's) (Fifth Semester) Esamination, Introduction to Artificial Neural Networks-IV. Paper : -PCSC-504 AU-6919 B. Sc.(Hon's) (Fifth Semester) Esamination, 2014 Introduction to Artificial Neural Networks-IV Paper : -PCSC-504 Section-A 1. (I) Synaptic Weights provides the... method for the design of neural

More information

The case for quantum entanglement in the brain Charles R. Legéndy September 26, 2017

The case for quantum entanglement in the brain Charles R. Legéndy September 26, 2017 The case for quantum entanglement in the brain Charles R. Legéndy September 26, 2017 Introduction Many-neuron cooperative events The challenge of reviving a cell assembly after it has been overwritten

More information

Learning. Learning: Problems. Chapter 6: Learning

Learning. Learning: Problems. Chapter 6: Learning Chapter 6: Learning 1 Learning 1. In perception we studied that we are responsive to stimuli in the external world. Although some of these stimulus-response associations are innate many are learnt. 2.

More information

Neural response time integration subserves. perceptual decisions - K-F Wong and X-J Wang s. reduced model

Neural response time integration subserves. perceptual decisions - K-F Wong and X-J Wang s. reduced model Neural response time integration subserves perceptual decisions - K-F Wong and X-J Wang s reduced model Chris Ayers and Narayanan Krishnamurthy December 15, 2008 Abstract A neural network describing the

More information

Neuron Phase Response

Neuron Phase Response BioE332A Lab 4, 2007 1 Lab 4 February 2, 2007 Neuron Phase Response In this lab, we study the effect of one neuron s spikes on another s, combined synapse and neuron behavior. In Lab 2, we characterized

More information

Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells

Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells INSTITUTE OF PHYSICS PUBLISHING Network: Comput. Neural Syst. 13 (2002) 217 242 NETWORK: COMPUTATION IN NEURAL SYSTEMS PII: S0954-898X(02)36091-3 Self-organizing continuous attractor networks and path

More information

Learning = an enduring change in behavior, resulting from experience.

Learning = an enduring change in behavior, resulting from experience. Chapter 6: Learning Learning = an enduring change in behavior, resulting from experience. Conditioning = a process in which environmental stimuli and behavioral processes become connected Two types of

More information

Same or Different? A Neural Circuit Mechanism of Similarity-Based Pattern Match Decision Making

Same or Different? A Neural Circuit Mechanism of Similarity-Based Pattern Match Decision Making 6982 The Journal of Neuroscience, May 11, 2011 31(19):6982 6996 Behavioral/Systems/Cognitive Same or Different? A Neural Circuit Mechanism of Similarity-Based Pattern Match Decision Making Tatiana A. Engel

More information

Hebbian Learning in a Random Network Captures Selectivity Properties of Prefrontal Cortex

Hebbian Learning in a Random Network Captures Selectivity Properties of Prefrontal Cortex Hebbian Learning in a Random Network Captures Selectivity Properties of Prefrontal Cortex Grace W. Lindsay a,e,g, Mattia Rigotti a,b, Melissa R. Warden c, Earl K. Miller d, Stefano Fusi a,e,f a Center

More information

Attention Response Functions: Characterizing Brain Areas Using fmri Activation during Parametric Variations of Attentional Load

Attention Response Functions: Characterizing Brain Areas Using fmri Activation during Parametric Variations of Attentional Load Attention Response Functions: Characterizing Brain Areas Using fmri Activation during Parametric Variations of Attentional Load Intro Examine attention response functions Compare an attention-demanding

More information

Resistance to forgetting associated with hippocampus-mediated. reactivation during new learning

Resistance to forgetting associated with hippocampus-mediated. reactivation during new learning Resistance to Forgetting 1 Resistance to forgetting associated with hippocampus-mediated reactivation during new learning Brice A. Kuhl, Arpeet T. Shah, Sarah DuBrow, & Anthony D. Wagner Resistance to

More information

Input-speci"c adaptation in complex cells through synaptic depression

Input-specic adaptation in complex cells through synaptic depression 0 0 0 0 Neurocomputing }0 (00) } Input-speci"c adaptation in complex cells through synaptic depression Frances S. Chance*, L.F. Abbott Volen Center for Complex Systems and Department of Biology, Brandeis

More information

Sensory Adaptation within a Bayesian Framework for Perception

Sensory Adaptation within a Bayesian Framework for Perception presented at: NIPS-05, Vancouver BC Canada, Dec 2005. published in: Advances in Neural Information Processing Systems eds. Y. Weiss, B. Schölkopf, and J. Platt volume 18, pages 1291-1298, May 2006 MIT

More information

Minimizing Uncertainty in Property Casualty Loss Reserve Estimates Chris G. Gross, ACAS, MAAA

Minimizing Uncertainty in Property Casualty Loss Reserve Estimates Chris G. Gross, ACAS, MAAA Minimizing Uncertainty in Property Casualty Loss Reserve Estimates Chris G. Gross, ACAS, MAAA The uncertain nature of property casualty loss reserves Property Casualty loss reserves are inherently uncertain.

More information

CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System

CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System CS294-6 (Fall 2004) Recognizing People, Objects and Actions Lecture: January 27, 2004 Human Visual System Lecturer: Jitendra Malik Scribe: Ryan White (Slide: layout of the brain) Facts about the brain:

More information

Spiking neural network simulator: User s Guide

Spiking neural network simulator: User s Guide Spiking neural network simulator: User s Guide Version 0.55: December 7 2004 Leslie S. Smith, Department of Computing Science and Mathematics University of Stirling, Stirling FK9 4LA, Scotland lss@cs.stir.ac.uk

More information

Introduction to Physiological Psychology Review

Introduction to Physiological Psychology Review Introduction to Physiological Psychology Review ksweeney@cogsci.ucsd.edu www.cogsci.ucsd.edu/~ksweeney/psy260.html n Learning and Memory n Human Communication n Emotion 1 What is memory? n Working Memory:

More information

Human Cognitive Developmental Neuroscience. Jan 27

Human Cognitive Developmental Neuroscience. Jan 27 Human Cognitive Developmental Neuroscience Jan 27 Wiki Definition Developmental cognitive neuroscience is an interdisciplinary scientific field that is situated at the boundaries of Neuroscience Psychology

More information

TMS Disruption of Time Encoding in Human Primary Visual Cortex Molly Bryan Beauchamp Lab

TMS Disruption of Time Encoding in Human Primary Visual Cortex Molly Bryan Beauchamp Lab TMS Disruption of Time Encoding in Human Primary Visual Cortex Molly Bryan Beauchamp Lab This report details my summer research project for the REU Theoretical and Computational Neuroscience program as

More information

The perirhinal cortex and long-term familiarity memory

The perirhinal cortex and long-term familiarity memory THE QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY 2005, 58B (3/4), 234 245 The perirhinal cortex and long-term familiarity memory E. T. Rolls, L. Franco, and S. M. Stringer University of Oxford, UK To analyse

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 7: Network models Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single neuron

More information

A Model of Perceptual Change by Domain Integration

A Model of Perceptual Change by Domain Integration A Model of Perceptual Change by Domain Integration Gert Westermann (gert@csl.sony.fr) Sony Computer Science Laboratory 6 rue Amyot 755 Paris, France Abstract A neural network model is presented that shows

More information

The evolution of cooperative turn-taking in animal conflict

The evolution of cooperative turn-taking in animal conflict RESEARCH ARTICLE Open Access The evolution of cooperative turn-taking in animal conflict Mathias Franz 1*, Daniel van der Post 1,2,3, Oliver Schülke 1 and Julia Ostner 1 Abstract Background: A fundamental

More information

Neurobiology: The nerve cell. Principle and task To use a nerve function model to study the following aspects of a nerve cell:

Neurobiology: The nerve cell. Principle and task To use a nerve function model to study the following aspects of a nerve cell: Principle and task To use a nerve function model to study the following aspects of a nerve cell: INTRACELLULAR POTENTIAL AND ACTION POTENTIAL Comparison between low and high threshold levels Comparison

More information

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing

SUPPLEMENTARY INFORMATION. Table 1 Patient characteristics Preoperative. language testing Categorical Speech Representation in the Human Superior Temporal Gyrus Edward F. Chang, Jochem W. Rieger, Keith D. Johnson, Mitchel S. Berger, Nicholas M. Barbaro, Robert T. Knight SUPPLEMENTARY INFORMATION

More information

Neuroplasticity. Jake Kurczek 9/19/11. Cognitive Communication Disorders

Neuroplasticity. Jake Kurczek 9/19/11. Cognitive Communication Disorders Jake Kurczek 9/19/11 Activity Therapy Be creative Try new things Be prepared to fail Learn from past experiences Be flexible Participants begin working/communicating not good As they work together more

More information

An Artificial Synaptic Plasticity Mechanism for Classical Conditioning with Neural Networks

An Artificial Synaptic Plasticity Mechanism for Classical Conditioning with Neural Networks An Artificial Synaptic Plasticity Mechanism for Classical Conditioning with Neural Networks Caroline Rizzi Raymundo (B) and Colin Graeme Johnson School of Computing, University of Kent, Canterbury, Kent

More information

lateral organization: maps

lateral organization: maps lateral organization Lateral organization & computation cont d Why the organization? The level of abstraction? Keep similar features together for feedforward integration. Lateral computations to group

More information

Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons

Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons Modeling Depolarization Induced Suppression of Inhibition in Pyramidal Neurons Peter Osseward, Uri Magaram Department of Neuroscience University of California, San Diego La Jolla, CA 92092 possewar@ucsd.edu

More information