Computational Explorations in Cognitive Neuroscience Chapter 3


3.2 General Structure of Cortical Networks

The cerebral cortex contains a mix of excitatory and inhibitory cells. Histologically, it appears as a six-layered sheet. In sensory areas of primate cortex, the middle layer (layer 4) is the main target of inputs from the thalamus, so there is some motivation for considering it an input layer. The deep layers (5 and 6) project to subcortical structures, so there is some motivation for considering them output layers. The superficial layers (2 and 3) receive from and project to other layers, both locally and in other areas. Thus, there is a rough correspondence between cortical layers and the three functional layers of a typical ANN (artificial neural network): input, hidden, and output.

A major problem with this correspondence is that in the ANN, each unit is restricted to a single layer. In the cortex, neurons extend across layers, and it is often unclear whether the layer designation is functionally relevant or merely a descriptive convenience. For example, axonal inputs to layers 5 and 2 could terminate on different parts of the same deep pyramidal cell. Another problem is that the laminar pattern can differ greatly across cortical areas. Nonetheless, Figure 3.5 captures something of the connectivity patterns of the cortex:
1) sensory areas have prominent input layers
2) motor areas have prominent output layers
3) most layers have within-layer excitation and inhibition
4) long-range projections are excitatory, although they may synapse on inhibitory cells in the target area (not shown in the figure)
5) excitatory connections are largely bidirectional

3.3 Unidirectional Excitatory Interactions: Transformations

We now consider a number of detector units operating in parallel. Instead of the single binary output of a single detector, we must consider a pattern of output over a layer of detectors, and the relation between input and output patterns: the network transforms input patterns into output patterns. We will call the receiving layer the hidden layer, because we will be adding a separate output layer later. The input space consists of all possible input patterns and can be thought of as being divided into different regions or categories. In the digit example, each digit represents a different category, and there may be many (noisy) patterns within each category.

The similarity of patterns can be quantified with various distance measures. A simple measure is Euclidean distance:

d = \sqrt{\sum_i (x_i - y_i)^2}

Patterns that are more similar are closer in the input space (lower distance), and those that are more different are further apart (larger distance). A cluster plot groups different patterns based on their relative distances. To be useful in pattern detection, the hidden layer should emphasize the distinctions between different categories, while de-emphasizing the distinctions between different patterns in the same category.

Fig. 3.8: Cluster plots for the digit example, for input patterns and hidden layer patterns.
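As a concrete illustration, the Euclidean distance between two activation patterns can be computed directly (a minimal sketch; the patterns here are made up for illustration, not taken from the digit simulation):

```python
import numpy as np

def euclidean_distance(x, y):
    # Euclidean distance: d = sqrt(sum_i (x_i - y_i)^2); lower = more similar
    return float(np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

# Illustrative binary activation patterns
a = [1, 1, 0, 0]
b = [1, 0, 1, 0]  # shares one active unit with a
c = [0, 0, 1, 1]  # shares no active units with a

print(euclidean_distance(a, b))  # 1.4142135623730951
print(euclidean_distance(a, c))  # 2.0
```

A cluster plot is essentially a visualization of the matrix of such pairwise distances.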

3.3.1 Exploration of Transformations

Bias Weights. Because the input patterns have different numbers of active units, they produce different levels of activation. We would like the activation level to reflect the match between the input pattern and the weight pattern, not the overall strength of the input. Bias weights are used to compensate for the differences in overall activity level coming from different inputs.

Question 3.1: By turning off the bias weights, we find that no activation is produced for certain input patterns (0, 1, 7) and that others are very weak (4). These differences reflect the differing overall strengths of the input patterns: these patterns have fewer active units and need the bias weights to bring their activations up to a level comparable to the others.

Question 3.2: The correct hidden units have maximal net input in each case; they just need the bias weight to push the membrane potential above threshold. The role of the bias weights is to normalize the strengths of the net inputs so that they all exceed threshold.
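The normalizing role of a bias weight can be sketched with made-up numbers (the patterns, weights, and bias values below are illustrative, not the simulator's actual values):

```python
import numpy as np

def net_input(x, w, bias):
    # Net input = weighted sum of inputs plus the unit's bias weight
    return float(np.dot(w, x) + bias)

# A detector for a sparse digit (few active pixels) and one for a dense digit
w_sparse = np.array([0, 1, 1, 0, 0, 0])
x_sparse = np.array([0, 1, 1, 0, 0, 0])
w_dense  = np.array([1, 1, 1, 1, 1, 0])
x_dense  = np.array([1, 1, 1, 1, 1, 0])

# Without bias, a perfect match to the sparse pattern yields a much weaker
# net input than a perfect match to the dense pattern
print(net_input(x_sparse, w_sparse, bias=0.0))  # 2.0
print(net_input(x_dense,  w_dense,  bias=0.0))  # 5.0

# A positive bias on the sparse detector equates the two perfect matches,
# so both can exceed the same firing threshold
print(net_input(x_sparse, w_sparse, bias=3.0))  # 5.0
```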

Cluster Plots. Similarity in the cluster plot of the digit images is related to the degree of overlap between their activated pixels. These similarity relations are lost in the cluster plot of the digit categories: the network collapses the distinctions between different noisy versions of the same digit, while emphasizing the distinctions between different digit categories.

Selectivity and Leak. In the detector example in the previous chapter (Question 2.9), we saw that manipulating the leak conductance alters the selectivity of the receiving unit.

Question 3.3: (a) When g_bar_l is reduced from 6 to 5, the hidden units become more excitable and their membrane potentials are higher. As a result, more units cross threshold, and the activation levels of those that do are higher. The hidden units now respond in a graded fashion; they are less selective. (b) This fragments the cluster plot of hidden unit activities. (i) Different digits are clustered together: 2, 0, and 6 are clustered together; 4, 1, 9, 7, and 3 are clustered together; and 5 and 8 are each in their own cluster. (ii) Different noisy versions of some digits are now separately clustered. This is true for 2, 5, 6, and 9. (iii) For one digit (6), each noisy version is grouped separately.

(c) When g_bar_l is further reduced to 4, the hidden units become even more excitable, more units cross threshold for a single input, and their activation levels are even higher. The cluster plot of hidden unit activities now makes serious categorical errors, with different digits in the same groups: (i) 3 and 8; (ii) 2, 3, and 5. Also, noisy versions of the same digit can now appear at different distances, e.g. one version of digit 0 is closer to digit 4 than to the other two versions of 0. (d) Raising the units' excitability by lowering the leak current interferes with the network's goal of having the same hidden representation for each version of the same digit, and different representations for different digits. This goal is best achieved with a range of membrane potential values that just allows the hidden unit with the highest value (the "tip of the iceberg") to cross threshold. This situation provides the greatest selectivity, or discriminability, of input patterns. As more units cross threshold, selectivity declines and the network's ability to reach this goal is compromised.

Letter Inputs. Here we see that the digit units do not respond informatively to the letter stimuli.

Question 3.4: (a) Lowering g_bar_l to 4 causes a greater number of activated hidden units for each letter input. The cluster plot now shows more structure, but the similarities are NOT the same as for the input letter images. (b) This hidden representation is NOT a good one for conveying letter identity: the network is discriminating the letter images based on features that are not useful for their identification. This is an important observation because it shows that the ability to discriminate categories of input depends on the input weights being properly set (here by hand, but also by learning) for that discrimination. Weights set to make one type of discrimination will not necessarily work for another. (c) There is no setting of g_bar_l that gives a satisfactory hidden representation of letter information. As stated above, the weights of this network were set to discriminate digits. This weight pattern is not optimized for discriminating letters, and changing the excitability of the hidden units will not change this fact.

3.3.2 Localist versus Distributed Representations

The combination of bias, leak, and threshold in the exercises in Chapter 2 produced output patterns with single activated units representing the detection of categories of inputs. This form of representation is called localist. Representation in the cerebral cortex, however, is thought to be distributed. Neurons have tuning curves, i.e. they respond to a range of inputs rather than to only a single input, and information is thought to be coarse coded: the representation of an input is distributed over an entire layer rather than localized in a single unit. Distributed representations have the same capacity to implement transformations of input patterns as localist representations.

Advantages of distributed representations in a layer:
1. Efficiency: a smaller total number of units in the layer is required to represent the input space
2. Similarity: different patterns in the layer can be grouped by their degree of overlap
3. Generalization: the network can respond appropriately to novel input patterns because there is greater flexibility than with a localist representation
4. Robustness: this flexibility also makes the network less vulnerable to noise
5. Accuracy: since the output space is larger, continuous gradation over the input space can be reflected more accurately in the corresponding network patterns
6. Learning: distributed representations fit naturally with learning algorithms such as backprop
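The efficiency advantage can be made concrete with a simple count (illustrative numbers, not from the book): a localist layer of n units can represent at most n categories, while a distributed layer whose patterns are combinations of k active units can represent C(n, k) of them.

```python
from math import comb

n = 10  # units in the layer

# Localist: one dedicated unit per category
localist_capacity = n

# Distributed: each category is a unique combination of k active units
k = 3
distributed_capacity = comb(n, k)

print(localist_capacity)     # 10
print(distributed_capacity)  # 120
```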

3.3.3 Exploration of Distributed Representations

Here we compare localist and distributed networks using the same digit example. The distributed network differs from the localist network in that in the distributed network:
1. the hidden layer has fewer units
2. input weights represent digit features rather than whole digits
3. representation is by combinations of hidden units rather than by single hidden units
4. each digit activates a unique pattern of those hidden units that best match its features
5. hidden units serve multiple roles, i.e. they participate in representing multiple digits
6. noisy input sometimes causes the network to collapse noisy versions of different digits together, e.g. 0 and 4, or 2 and 5 (see Fig. 3.13)
7. noisy input also sometimes fails to collapse all of the different versions of the same digit together, e.g. 7 or 9 (see Fig. 3.13)
The reason for these problems with noisy input is that noisy versions of one digit can share more features with a different digit. These problems will be lessened when the weights are set by learning rather than by hand.

8. there is residual similarity structure across different digit representations in the hidden layer: digit categories are not equally separated elements of a single cluster group, e.g. 0, 4, and 6 are closer together and further from 1, 7, and 9 (see Fig. 3.13)

Question 3.5: The distributed network achieves a useful representation of the digits using only half the number of hidden units of the localist network. This efficiency is achieved because the distributed network takes advantage of representation by combination: each hidden unit is selective for a feature, and each digit is represented by a unique combination of features.

3.4 Bidirectional Excitatory Interactions

In the cerebral cortex, bidirectional excitatory connections are ubiquitous, and they occur over a range of spatial scales, from nearby neighboring cells to ensembles in widely separated areas. In network models, bidirectional connections can be within-layer (lateral) or between-layer (top-down and bottom-up). Lateral connections allow pattern completion: one part of a pattern activates other parts in the same layer. Top-down connections allow a categorical representation in a higher layer to activate, amplify, or complete a lower-layer pattern.

3.4.1 Bidirectional Transformations

Consider a bidirectional version of the localist digit network. The input units receive from the hidden units, as well as the hidden units receiving from the input units. The sending and receiving weights of a hidden unit are symmetric: the unit activates the same pattern that activates it.

The figure shows the grid log resulting when 0 and 1 were presented to the input layer (DIGITS env_type), and when 2, 3, 4, and 5 were presented to the hidden layer (CATEGS env_type). In the first case, the input layer activations were hard clamped; in the second case, the hidden layer activations were clamped. The overall effect of clamping the input layer or the hidden layer is almost the same. However, there is a difference: when the input layer is clamped (0 and 1), the hidden layer activations can vary in strength; when the hidden layer is clamped (2, 3, 4, 5), the input layer activations can vary in strength.

Question 3.6: (a) When digit categories 7 and 3 are activated together, the input layer activations are a combination of the 7 and 3 input patterns. Parts of the combined pattern where the two digit patterns overlap are at a higher activation level than parts where they do not. (b) Since both categories are activated, both input patterns are activated in a top-down manner. Input units that are part of both the 7 and 3 patterns receive greater top-down activation (combined activation) than those that are part of only one pattern or the other (single activation). (c) Raising g_bar_l makes the input units more leaky, which enhances the difference between input units that are part of both patterns and those that are part of only one. As we have seen previously, a more leaky unit is less excitable, so the activation level of all input units is lowered. Since the combined-activation units are on the upper end of the sigmoid nonlinearity, their activation level changes little. The single-activation units are on the middle part of the sigmoid nonlinearity, so their activation level noticeably decreases. Thus the difference between the two groups is enhanced. (d) This kind of enhancement of differences might be generally useful in cognition to emphasize different aspects of an input pattern based on prior experience, expectations, ongoing behavior, etc. In other words, for selective attention.

3.4.2 Bidirectional Pattern Completion

Bidirectional lateral connectivity was characteristic of Hebb's nerve cell assembly. It is useful for pattern completion, in which part of a learned pattern is presented as input and the network activation fills in the remaining parts. Pattern completion is related to the phenomenon of cued recall in memory. In the pattern completion simulation, when a subset of a pattern that is consistent with the network weights is activated, the bidirectional connectivity fills in the remaining portion of the pattern. Previously, we used hard clamping, which fixes the activation levels. We now introduce soft clamping, where the inputs from the event pattern enter as additional excitatory inputs to the units, rather than directly setting the activations to particular values. Soft clamping allows units to update their activation values as a function of their weights, and thereby produce pattern completion.
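A minimal sketch of soft clamping, using a toy layer of four mutually connected units and a simple thresholded-linear activation (the weights and leak value here are made up for illustration, not the simulator's):

```python
import numpy as np

# Four units that all belong to one stored pattern: symmetric lateral
# weights of 0.5 between every pair (zero self-connections)
W = 0.5 * (np.ones((4, 4)) - np.eye(4))

def soft_clamp_settle(external, steps=10, leak=0.2):
    """Soft clamp: the event pattern enters as extra excitatory input,
    so units can still update from their lateral weights."""
    act = np.zeros(4)
    for _ in range(steps):
        net = W @ act + external              # lateral excitation + clamped input
        act = np.clip(net - leak, 0.0, 1.0)   # thresholded-linear activation
    return act

partial = np.array([1.0, 1.0, 0.0, 0.0])  # only half the pattern is presented

# Hard clamping would simply fix the activations to the partial pattern.
# Soft clamping lets the lateral weights fill in the missing units:
print(soft_clamp_settle(partial))  # [1. 1. 1. 1.]
```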

In the simulation, all of the units belonging to the image of the digit 8 are interconnected with weights of 1, and all other weights are 0. We can see pattern completion by presenting the network with part of the digit 8 image.

(Figures: network activation at steps 2, 7, and 12 after presentation of the partial input pattern.)

The phenomenon of resonance (mutual support) refers to the extra activation in a set of units with excitatory interconnections that occurs when some of them are activated. The excitatory connections produce positive feedback that tends to amplify the activity if unchecked by other factors. Next, consider the digit 8 input pattern with one additional activated unit that is not part of the digit pattern. This unit has no connections with the digit units.

With g_bar_l = 3, the entire input pattern is completed, including the extra unit. With g_bar_l = 7.5, the digit pattern is still completed because of mutual support, but the extra unit now has low activation, because its excitation is not strong enough to overcome the strong leak current.
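The filtering role of the leak current can be sketched in the same toy style (the weights and leak values are illustrative, not the simulator's g_bar_l values): units 0-3 form the stored pattern and support each other, while unit 4 is the extra, unconnected unit.

```python
import numpy as np

# Units 0-3 are mutually connected (weight 0.5); unit 4 is unconnected
W = np.zeros((5, 5))
W[:4, :4] = 0.5 * (np.ones((4, 4)) - np.eye(4))

def settle(leak, steps=10):
    act = np.zeros(5)
    external = np.ones(5)  # all five units receive direct input
    for _ in range(steps):
        act = np.clip(W @ act + external - leak, 0.0, 1.0)
    return act

# Low leak: everything stays active, including the spurious extra unit
print(np.round(settle(leak=0.2), 2))  # [1.  1.  1.  1.  0.8]

# High leak: mutual support keeps the pattern units fully active, but the
# unsupported extra unit is largely suppressed
print(np.round(settle(leak=0.9), 2))  # [1.  1.  1.  1.  0.1]
```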

Question 3.7: (a) Given the pattern of weights, the minimal number of units that must be clamped to produce completion of the full 8 is 7. (b) The g_bar_l parameter can be lowered to reduce this minimal value. The value that allows completion with only one input active is 0.25 for partial completion and 0.18 for proper completion.

Question 3.8: (a) If you activate only inputs that are not part of the 8 pattern, then only the units receiving input become active; in other words, there is no pattern completion. Completion of the 8 is not possible because none of its inputs are activated, and the pattern of weights was not designed for this input pattern. (b) Yes, the weights in this layer could be configured to support the representation of another pattern in addition to the 8, and the new pattern could be distinctly activated by a partial input. It would, however, matter how similar the new pattern was to the 8 pattern, because too much overlap would prevent discrimination between the patterns.

3.4.3 Bidirectional Amplification

Amplification is the enhancement of activation strength as a result of positive excitatory feedback. Amplification is necessary for various aspects of cognitive function, but it can be a problem if it is not controlled; epilepsy is a condition in which amplification in the brain gets out of control. One advantage of amplification is that a weak input can be amplified to achieve a fully active pattern.

Exploration of Simple Top-Down Amplification

A simple example of a bidirectionally connected network is shown in the accompanying figure. Initial activation of the hidden layer 1 unit is relatively weak. With the leak current of the hidden layer 1 unit set to 3.5, the unit has an activation level of 0.35, which is not strong enough to activate the hidden layer 2 unit. Since the hidden layer 2 unit does not become active, it does not feed back excitation to hidden layer 1. With the leak current set to 3.4, the hidden layer 1 unit is more excitable, and its activation level (0.55) is now strong enough to activate the hidden layer 2 unit, which then feeds back excitation to hidden layer 1. This top-down excitation boosts, or amplifies, the activity in hidden layer 1: its activation does not stay at 0.55 but jumps to 1.0, as does that of the layer 2 unit.
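The two-unit amplification effect can be reproduced with a toy rate model (the leak values 0.65 and 0.45 below are illustrative stand-ins for the simulator's 3.5 and 3.4, and thresholded-linear units replace the full point-neuron equations):

```python
import numpy as np

def run(leak, steps=20):
    """hidden1 receives bottom-up input 1.0 plus top-down excitation from
    hidden2; hidden2 fires only if hidden1 clears its threshold of 0.4."""
    h1 = h2 = 0.0
    for _ in range(steps):
        h1 = float(np.clip(1.0 + h2 - leak, 0.0, 1.0))   # input + top-down - leak
        h2 = float(np.clip(2.0 * (h1 - 0.4), 0.0, 1.0))  # gain 2 above threshold
    return h1, h2

print(run(leak=0.65))  # (0.35, 0.0)  -- too weak to ignite hidden2
print(run(leak=0.45))  # (1.0, 1.0)   -- top-down feedback amplifies both to 1.0
```

The key point is the discontinuity: a small change in leak flips the network from a weak, feedforward-only state to a fully amplified bidirectional state.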

(Figures: grid logs with leak = 3.4 and leak = 3.5.)

Exploration of Amplification with Distributed Representations

We now look at a potential problem that occurs with amplification: runaway excitation. As we have discussed, mutual excitation (e-e connectivity) has potential benefits, but it must be kept in check. As discussed in the previous chapter, two types of current counterbalance the excitatory currents in the neuron: leak and inhibition. So far in this chapter we have only explored the leak current in this role, and, as we have seen, too much leak current can prevent the potential benefits of excitatory amplification. In the next section, we will begin to explore how inhibition can serve as a counterbalance to excitation, as it does in the nervous system. In the present section, the simulation has three feature units in hidden layer 1 and two category units in hidden layer 2. (a) The left and right feature units are unique to specific category units, whereas the center feature unit is shared by both category units. (b) Each feature unit feeds excitation forward to its corresponding category unit(s). (c) Each category unit feeds excitation back to its corresponding feature units. This connectivity pattern is interpreted as representing three separable features in the input and hidden1 layers, and two objects in the hidden2 layer.

(1) Unique input: With unique input to the left feature unit and the leak current set to 1.7, there is runaway excitation, with all hidden units becoming activated. With the same input and the leak current increased to 1.7364, the excitation is contained: the correct category unit is activated, and it correctly activates its corresponding feature units (pattern completion).

Question 3.9: (a) List the values of g_bar_l where the network's behavior exhibited a qualitative transition in what was activated at the end of settling, and describe these network states:
- (value missing): all 3 hidden1 units activated
- 1.736: 2 hidden1 units (CRT & Speakers) activated, plus both hidden2 units
- (value missing): 2 hidden1 units (CRT & Speakers) activated, plus hidden2 unit TV
- 1.737: 1 hidden1 unit (CRT) activated, plus hidden2 unit TV weakly activated
- 1.742: only hidden1 unit CRT activated, and no hidden2 units
(b) Using the value of g_bar_l that activated only the desired two hidden units at the end of settling, try increasing dt_vm from .03 to .04, which causes the network to settle faster in the same number of cycles by increasing the rate at which the membrane potential is updated on each cycle. Are you still able to activate only the left two hidden feature units? What does this tell you about your previous results? No: all of the hidden units are now turned on. By raising the rate of membrane potential updating, activation spreads faster and reaches all hidden units. The previous results depended on a particular rate of membrane potential updating.

(2) Ambiguous input: With ambiguous input (to the center feature unit) and the leak current set to 1.737, both category units are weakly activated, which is an appropriate response to ambiguous input. However, when the leak current is slightly decreased to 1.736, there is runaway excitation again. So the pattern recognition behavior is not robust in the face of small perturbations of the leak current.

(3) Full input: With full input (to the left and center feature units) and leak = 1.79, there is runaway excitation and all feature and category units are activated. When the leak current is slightly increased to 1.8, the input to hidden layer 2 is insufficient to activate the category units, and there is no top-down excitation. So, with full input, activation either spreads unacceptably or the network fails to become appropriately activated. Again, pattern recognition behavior is non-robust: it can change qualitatively with a small parameter change.

(Figures: grid logs for the unique input with leak = 1.7 vs. 1.7364; the ambiguous input with leak = 1.737 vs. 1.736; and the full input with leak = 1.79 vs. 1.8.)

3.4.4 Attractor Dynamics

We have seen that networks with bidirectional excitation tend to have nonlinear (bimodal) behavior: their output can switch from one state to another with small parameter changes. (The output of a linear system changes in a graded fashion with such parametric changes.) This nonlinear behavior derives from the nonlinear sigmoidal activation functions of the individual units. The concept of an attractor is useful for understanding the behavior of networks with bidirectional excitatory connectivity.

A (stable) attractor is a stable activation state into which the network settles from a variety of different starting states. It is convenient to conceptualize attractors in terms of an energy landscape. The state of the network is viewed as a point that can move along the surface of the landscape; as the network state evolves, the point traces out a trajectory over the surface. The attractor is a low point (minimum) in this landscape, and the surrounding region from which initial states lead to the attractor is the attractor basin. The concept of attractor dynamics will be developed further in later sections. Here, we note that the process of pattern completion examined above may be thought of as the network state trajectory being attracted to the attractor state (the completed pattern) from any of a number of different (partial pattern) initial states in the attractor basin. Also, the slope of the surface corresponds to the rate at which the trajectory moves toward the attractor. We saw that this rate can change when there is amplification; amplification may be thought of as corresponding to a steeper slope of the attractor basin immediately surrounding the attractor.
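The energy-landscape picture can be made concrete with a Hopfield-style energy function (a standard construction, not specific to this simulator; the weights and patterns are illustrative): symmetric excitatory weights make the completed pattern the low-energy state, and partial patterns sit higher up in its basin.

```python
import numpy as np

def energy(act, W):
    # Hopfield-style energy: E = -1/2 * sum_ij w_ij a_i a_j
    # Settling moves the network state downhill on this surface
    return float(-0.5 * act @ W @ act)

# Symmetric weights mutually supporting one 4-unit pattern
W = np.ones((4, 4)) - np.eye(4)

partial  = np.array([1.0, 1.0, 0.0, 0.0])  # a state inside the basin
complete = np.array([1.0, 1.0, 1.0, 1.0])  # the attractor (completed pattern)

print(energy(partial, W))   # -1.0
print(energy(complete, W))  # -6.0: the attractor is the energy minimum
```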

3.5 Inhibitory Interactions

In the previous examples, we have seen some of the effects of e-e connectivity, with the leak current as the only counterweight to the excesses of excitation. We saw that by itself the leak current cannot efficiently control runaway excitation, because it is set to a constant value. Our model of the neuron has an additional counterweight, namely inhibition. Inhibition can be a more effective counterweight because it acts dynamically: its strength can rise and fall as needed to keep excitation in check. This set point behavior is analogous to the operation of a thermostat.

Inhibition can be feedforward, in which case excitatory input feeds directly to inhibitory units within a layer. This form of inhibition can counterbalance excitation as it comes into a layer from other layers. Inhibition can also be feedback, in which case excitatory input feeds onto excitatory units in a layer, the receiving units excite inhibitory interneurons within the layer, and these inhibitory units in turn feed inhibition back onto the excitatory units within the layer.

Feedback inhibition within a layer produces a negative feedback loop, which tends to dynamically counterbalance excitation in the layer: the inhibitory units react to excitation in the layer and act to prevent runaway excitation.

Although inhibition is necessary for network function, its computational implementation as individual inhibitory units is very costly. It is therefore desirable to have a computational shortcut that provides the advantages of inhibition in network function without the inefficiency of continually updating individual inhibitory units.

The effects of inhibitory interneurons can be summarized in simulations by computing an inhibition function, which is a direct function of the level of excitation in a layer, without explicitly simulating the inhibitory units themselves. The k-winners-take-all (kWTA) function is a simple and effective inhibition function. It implements set point (thermostat) behavior by ensuring that only a subset (k) of the total number (n) of units in a layer are allowed to be strongly active.
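A minimal sketch of the kWTA idea (a simplified version: the book's full kWTA computation works with inhibitory conductances rather than thresholding raw net inputs directly): a single shared inhibition level is placed between the k-th and (k+1)-th strongest net inputs, so at most k units can end up above it.

```python
import numpy as np

def kwta_threshold(net, k):
    """Place a shared inhibition level between the k-th and (k+1)-th
    strongest net inputs, so at most k units end up above it."""
    top = np.sort(net)[::-1]           # net inputs, strongest first
    return 0.5 * (top[k - 1] + top[k])

net = np.array([0.9, 0.2, 0.8, 0.4, 0.6])
theta = kwta_threshold(net, k=2)
active = net > theta

print(round(theta, 2))    # 0.7
print(int(active.sum()))  # 2 winners, regardless of overall excitation level
```

Because the threshold adapts to the current net inputs, scaling all the inputs up or down still leaves exactly k winners, which is the set-point behavior described above.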

3.5.1 General Functional Benefits of Inhibition

1) Competition: excitatory units compete to become active, because only the most active survive the effects of inhibition. This competition can serve as a mechanism for selection, e.g. in determining the most appropriate units to represent an input pattern.
2) Sparse distributed representations: inputs are represented by activation of a relatively small fraction of the units in a layer, allowing the representation of many different categories.
3) Balance between competition and cooperation: flexibility is achieved by representations that lie somewhere between localist (complete competition: single-unit representation) and fully distributed (complete cooperation: all-unit representation).

3.5.2 Exploration of Feedforward and Feedback Inhibition

Inhibition is introduced in the example network shown in the accompanying figure. The inhibitory units are drawn as a distinct layer, but they should be considered the inhibitory interneurons for the excitatory units of the hidden layer. The ratio of inhibitory to excitatory units is roughly 15%, as reported for the cortex. The hidden layer excitatory units receive excitation from the input units and inhibition from the inhibitory units. The inhibitory units receive excitation from the input units and the excitatory hidden units, and inhibition from the other inhibitory units. Thus all four connection types are present in this example:
1) e-e connections (input to hidden)
2) e-i connections (hidden to inhibitory, and input to inhibitory)
3) i-e connections (inhibitory to hidden)
4) i-i connections (inhibitory to inhibitory)

The input unit → inhibitory unit → hidden unit pathways provide feedforward inhibition. The hidden unit → inhibitory unit → hidden unit pathways provide feedback inhibition. When an input pattern is transmitted to the hidden layer, both forms of inhibition counterbalance excitation from the input layer and prevent runaway excitation in the hidden layer. The leak current is very small and does not contribute appreciably to the counterbalancing.
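The set-point behavior of the combined feedforward/feedback loop can be sketched with a two-unit rate model (all scaling factors, update rates, and step counts below are made-up illustrative values, loosely echoing the simulation's scale.ff and scale.fb parameters):

```python
import numpy as np

def settle(ff_scale=0.35, fb_scale=1.0, steps=500):
    """One excitatory unit with self-excitation, one inhibitory unit driven
    feedforward (from the input) and feedback (from the excitatory unit)."""
    e = i = 0.0
    inp = 1.0
    for _ in range(steps):
        i_target = np.clip(ff_scale * inp + fb_scale * e, 0.0, 1.0)
        i += 0.1 * (i_target - i)                    # inhibition tracks excitation
        e_target = np.clip(inp + 1.5 * e - 2.0 * i, 0.0, 1.0)
        e += 0.1 * (e_target - e)                    # excitation minus inhibition
    return float(e), float(i)

# With feedback inhibition, activity settles at a moderate level
print(tuple(round(v, 2) for v in settle()))              # (0.2, 0.55)
# Without it, self-excitation drives the excitatory unit to ceiling
print(tuple(round(v, 2) for v in settle(fb_scale=0.0)))  # (1.0, 0.35)
```

The contrast anticipates Question 3.11 below: feedforward inhibition alone cannot contain strong recurrent excitation, but the feedback loop holds activity at a stable level.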

Strength of Inhibitory Conductances. Most of the weights are random, except those from the inhibitory units, which are fixed at 0.5. The hidden layer excitatory units receive from the input and inhibitory units. The inhibitory units receive: (1) feedforward connections from the input layer; (2) feedback connections from the excitatory hidden units; and (3) inhibitory connections from themselves. We start by manipulating the maximal conductance for the inhibitory current into the excitatory units (g_bar_i.hidden): first decreasing it from 5 to 3, then increasing it to 7.

Question 3.10: (a) What effect does decreasing g_bar_i.hidden have on the average level of excitation of the hidden units and of the inhibitory units? The average activation of both the excitatory and the inhibitory units increases. (b) What effect does increasing g_bar_i.hidden (to 7) have? The activation levels of both decrease.

(c) Explain this pattern of results. Decreasing the inhibitory conductance lowers the inhibition strength, which allows greater activation of the (excitatory) hidden layer units. They feed enhanced excitation back to the inhibitory units, thus raising the average inhibitory unit activation level. Raising the inhibitory conductance raises the inhibition strength, which causes the opposite effect: less activation of the hidden units and lowered activation of the inhibitory units.

Pages 96-97: Manipulate the maximal conductance for the inhibition coming into the inhibitory units (g_bar_i.inhib), comparing values of 3, 4, and 5.

(Figures: grid logs for g_bar_i.inhib = 3, 4, and 5.)

In raising the value from 3 to 4 to 5, we see that: (1) the excitatory activation level (red) increases modestly; and (2) the inhibitory level (orange) decreases slightly. Comparing values of 3, 4, and 5 for g_bar_i.inhib (representing increasing levels of mutual inhibition), the final steady-state levels of avg_act_inhi are in the opposite order to those of avg_act_hidd.

When g_bar_i.inhib is higher, the inhibitory units are more inhibited by each other when input excites the inhibitory layer. Consider the sequence of events following input when g_bar_i.inhib is increased: the inhibitory units receive the same level of input, but inhibitory activity is lower → the excitatory units are inhibited less → excitatory activity is higher.

But the differences are small. On a coarse scale, the initial effect of changing the maximal conductance for the inhibitory current into the inhibitory units is counterbalanced through the excitatory units: in a negative feedback loop, any change in the level of inhibitory unit activity is counterbalanced by a change in the activity of the hidden excitatory units. Excitation and inhibition each serve to counterbalance the other. Whatever the level of i-i interaction, the i-e-i interactions compensate for it, and the level of activity of the inhibitory units remains roughly the same.

On a finer scale, as g_bar_i.inhib goes from 3 → 4 → 5, avg_act_inhi decreases and avg_act_hidd increases (i.e. they change in opposite order): greater mutual inhibition following receipt of input → slightly lowered inhibitory activity → slightly elevated excitatory activity.

51 A possible explanation for why the range of change for the inhibitory units is less than for the excitatory units is that the inhibitory units may be operating near the saturating level of their sigmoid curves (for the conversion of V_m to activation), whereas the excitatory units may be operating more in the linear range. This would explain why the inhibitory units show a smaller change in activation than the excitatory units for the same change in V_m. This argument, however, is only an approximate explanation. To accurately predict the behavior of this dynamic system requires a dynamical analysis. Such an analysis minimally requires examination of the derivatives of activations as well as the activations themselves. This is because in a negative feedback loop, each small change in excitatory or inhibitory activity immediately causes a change in the other. In such a scenario, the rates of change (derivatives) play a key role.
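The counterbalancing described above can be sketched with a toy two-population rate model. This is purely illustrative: the coupling constants, the rectified-linear drive, and the use of one scalar per population are all assumptions, not the simulator's actual equations; the i-to-i strength simply plays the role of g_bar_i.inhib.

```python
def ei_model(g_i_inhib, cycles=500, dt=0.1):
    """Toy excitatory/inhibitory rate model (illustrative parameters only).
    e: average hidden (excitatory) activity; i: average inhibitory activity.
    g_i_inhib scales how strongly the inhibitory units inhibit each other."""
    e = i = 0.0
    for _ in range(cycles):
        de = -e + max(0.0, 1.0 - 2.0 * i)            # fixed input of 1, inhibited by i
        di = -i + max(0.0, 2.0 * e - g_i_inhib * i)  # driven by e, self-inhibited
        e += dt * de
        i += dt * di
    return e, i
```

Solving the fixed point shows the pattern from the simulation: raising the i-to-i strength slightly lowers the inhibitory steady state and slightly raises the excitatory one, while the feedback loop keeps both within a narrow range.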

52 [Plots: avg_act_hidd and avg_act_inhi as a function of g_bar_i.inhib]

53 Roles of Feedforward and Feedback Inhibition Question 3.11 (a) Set scale.ff to 0, effectively eliminating the feedforward excitatory inputs to the inhibitory neurons from the input layer. Removing all feedforward inhibition in the simulation causes the excitatory & inhibitory units to oscillate wildly before settling. (b) Feedforward inhibition normally serves to anticipate a rapid excitatory rise and dampen it before it runs away. When this is shut off, the excitatory unit activation rises quickly before the inhibitory units can dampen it. It takes a number of e-i feedback cycles to lower and stabilize the level of excitation. Even after 100 cycles, there are still small oscillations. [Figure: scale.ff = 0]

54 (c) Removing all feedback inhibition causes runaway excitation. The feedforward inhibition is insufficient to counter it. [Figure: scale.fb = 0] (d) With scale.ff set to 0.75 (and scale.fb = 0), the excitatory activation goes to approximately the same level as with the default settings (scale.ff = 0.35; scale.fb = 1.0), but it arrives there much more slowly and with much greater damping.

55 (e) Feedforward inhibition isn't sensitive to the changing level of excitation, so a greater level of inhibition is needed to prevent runaway excitation. This greater level contains the excitation, but at the expense of a longer time to settle. By using a combination of both types of inhibition, excitation can rise quickly without running away.

56 Time Constants and Feedforward Anticipation The default time constant for the inhibitory units is set to be faster (0.15) than for the excitatory units (0.04), reflecting neurophysiological evidence. If the inhibitory time constant is slowed down (from 0.15 to 0.04), excitation runs away until the inhibitory units have time to dampen it. Then there is a precipitous fall in excitation, with eventual settling.

57 When both time constants are made faster (the excitatory time constant from 0.04 to 0.10 and the inhibitory time constant from 0.15 to 0.20), the system oscillates wildly without settling. These oscillations can be prevented by finer time-scale updating: when the excitatory and inhibitory units update in smaller steps, both are able to react more smoothly.
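The effect of the relative time constants can be illustrated with a minimal sketch (the gains and targets are hypothetical; dt_e and dt_i stand in for the excitatory and inhibitory step sizes):

```python
def run(dt_e, dt_i, cycles=2000):
    """Each unit steps a fraction (its time constant) toward its driving
    value per cycle. Faster inhibition (dt_i > dt_e) anticipates and clips
    the excitatory rise; slower inhibition lets excitation overshoot."""
    e = i = peak = 0.0
    for _ in range(cycles):
        e_target = max(0.0, min(1.0, 1.0 - 1.5 * i))  # input of 1, inhibited by i
        i_target = max(0.0, min(1.0, 2.0 * e))        # driven by e
        e, i = e + dt_e * (e_target - e), i + dt_i * (i_target - i)
        peak = max(peak, e)
    return e, i, peak
```

With the default-like setting (dt_e = 0.04, dt_i = 0.15) the excitatory rise is clipped early; slowing inhibition to match excitation (dt_i = 0.04) lets excitation climb well past its final level before falling back, as in the simulation.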

58 Effects of Learning With learning, weight values change. Hidden layer units then typically see greater variance in the excitatory inputs they receive from the input patterns (some patterns producing stronger excitation of a hidden layer unit than others). Inhibition must be able to adequately compensate. The default weight strength has mean = 0.25, variance = 0.2.

59 Using Trained as the weight type increases the variability of excitatory inputs to the hidden layer units by changing the variance of the weights from 0.2 to 0.7. This results in higher levels of activation of hidden layer units, as well as greater variability in their activation (the activation levels oscillate).

60 By increasing g_bar_i.hidden to 8, the system can keep the hidden units at a lower level and also prevent the oscillations.

61 Bidirectional Excitation We now examine a network with two bidirectionally connected hidden layers (Fig 3.22). Now, top-down excitatory connections from hidden layer 2 are to inhibitory as well as to excitatory units in hidden layer 1. When hidden layer 2 is activated, it feeds back excitation to layer 1 inhibitory and excitatory units. They both become more active. But the overall levels of activity are not very different from the previous simulation (with only hidden layer 1) (See Fig 3.23).

62 Lowering g_bar_i.hidden to 3 reduces the amount of inhibition on the excitatory units. This has a small effect on the initial run-up, but when hidden layer 2 kicks in, there is runaway excitation. The inhibitory units try to compensate, but they are too weak.

63 Set Point Behavior What happens when different overall levels of activity are presented to the network? In the simulation, compare activation levels with the percent of active input units at 15, 20 & 25. Notice that the activation levels of hidden excitatory & inhibitory units rise with increasing input levels, but not dramatically. Also, note that there is not a great change in the initial activity level of excitatory units when the hidden layer 2 units send top-down excitatory input. In general, the system has a set point behavior, meaning that it equilibrates in a certain range of activation regardless of the size of the inputs. This is largely due to the balance between excitation & feedback inhibition.

64 Question 3.12 Explain in general terms why the system exhibits this set point behavior. The set point behavior results from the fundamental nature of the combination of feedforward and feedback inhibition. Since the inhibitory feedforward connections are faster than the excitatory counterparts, increases in excitation are counterbalanced by increases in feedforward inhibition. Any further increases in the level of excitation are then counterbalanced by the feedback inhibition, which further keeps the excitation in check. The result is that the levels of excitation and inhibition are both kept within a narrow range, around a set point.

65 3.5.3 The k-winners-take-all Inhibitory Functions We would like to utilize an inhibitory function that captures the dynamic role of a population of inhibitory units without the computational burden of explicitly modeling all the individual inhibitory units. k-winners-take-all (kwta) functions serve this purpose. The basic idea is that, out of n excitatory units in a layer, only some smaller number, k, can be active at any given time. k represents the maximum number of units that can be active, and these will be the most strongly driven units. The inhibitory mechanism suppresses the activation of units that are weaker than the strongest k units. At any given time, no more than k units can be active, though fewer than k may be. Note that the kwta function can prevent runaway excitation since it limits the overall level of excitation in the layer. Also notice that by acting in this way, the kwta function promotes the development of sparse distributed representations, which we have noted have some desirable properties for pattern recognition.

66 kwta Function Implementation The purpose of kwta is to promote the survival of the k most active units, i.e. the ones receiving the greatest excitatory input (g_e). So, to implement the kwta function: 1. at any given time, the units are put in order of g_e; 2. a single inhibitory conductance (g_i) is computed that will be applied across the whole layer; it is computed to be just strong enough to allow only the top k units to keep their membrane potentials above the steady-state threshold level where I_net = 0; 3. this inhibitory conductance is included in the input of every unit.

67 From Eqn 2.9, setting the equilibrium membrane potential equal to the threshold:

V_m = Θ = (g_e g_bar_e E_e + g_i g_bar_i E_i + g_l g_bar_l E_l) / (g_e g_bar_e + g_i g_bar_i + g_l g_bar_l)

g_i g_bar_i (Θ - E_i) = g_e g_bar_e (E_e - Θ) + g_l g_bar_l (E_l - Θ)

we get Eqn 3.2, which tells how to compute the g_i that is needed to put the membrane potential (V_m) of a unit exactly at threshold (Θ) given its present excitatory input (as well as the leak and bias inputs):

g_i^Θ = (g_e* g_bar_e (E_e - Θ) + g_l g_bar_l (E_l - Θ)) / (Θ - E_i)   (3.2)

where g_e* is the excitatory input minus the bias weight contribution, and g_bar_i = 1. After ordering the units by g_e, g_i^Θ can be computed for each unit. In basic kwta, g_i is set to a value between the values of g_i^Θ computed for the k-th and (k+1)-th most active units. When this value is applied to every unit, it is guaranteed that every unit from the (k+1)-th and below will remain below threshold, while every unit from the k-th and above will be above threshold.

68 Equation 3.3 tells how to select the value of g_i that will cut between the k-th and the (k+1)-th most active unit:

g_i = g_i^Θ(k+1) + q (g_i^Θ(k) - g_i^Θ(k+1))   (3.3)

where q is a constant (typically 0.25) that determines where exactly between the k-th and (k+1)-th units the inhibition is placed. Eqn 3.3 gives a value that is low enough to allow the k-th unit to remain active while being high enough to keep the (k+1)-th unit below threshold. In understanding the effect of applying kwta functions, we must be aware of the distribution of the level of excitation across the units in a layer. The values of g_i^Θ for these units are monotonically related to g_e.
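Eqns 3.2-3.3 can be sketched in a few lines. The conductance and reversal-potential constants below are illustrative stand-ins, not the simulator's defaults, and the bias contribution is omitted.

```python
import numpy as np

# Illustrative constants (assumed values, not the simulator's defaults).
G_BAR_E, G_BAR_L = 1.0, 0.1       # maximal excitatory / leak conductances
E_E, E_L, E_I = 1.0, 0.15, 0.15   # reversal potentials
THETA = 0.25                      # firing threshold

def g_i_theta(g_e):
    """Eqn 3.2: the inhibitory conductance that holds a unit with
    excitatory input g_e exactly at threshold (with g_bar_i = 1)."""
    return (g_e * G_BAR_E * (E_E - THETA)
            + G_BAR_L * (E_L - THETA)) / (THETA - E_I)

def basic_kwta(g_e_layer, k, q=0.25):
    """Eqn 3.3: layer-wide g_i placed between the k-th and (k+1)-th
    most excited units' threshold conductances."""
    g = np.sort([g_i_theta(x) for x in g_e_layer])[::-1]
    return g[k] + q * (g[k - 1] - g[k])   # g[k-1] is the k-th strongest
```

With this g_i applied uniformly, exactly the k most excited units sit above threshold.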

69 The shape of the activation distribution of all the units determines the activation strength of those units that survive (Fig. 3.24). (The brackets indicate the distance between the most active unit and the inhibitory threshold.) If the strongest k units are clearly separate from all the others (large differential, Fig. 3.24c), then those k will be well above threshold and their activation will be strong. However, if they are not clearly separate (small differential, Fig. 3.24b), then those k will not be very much above threshold and their activation will be weak. This resulting activation level could be used as a confidence measure. Some of the top k units may be only weakly active, or actually not activated at all, depending on their level of bias & leak currents.

70 Average-based kwta function One can guarantee greater levels of activation using the average-based kwta function. In this version, the level of inhibition is based on the distribution of excitation across the entire layer, rather than on only the k-th and (k+1)-th units. Using the average-based kwta function, there will not necessarily be exactly k active units, since the cutoff is not guaranteed to be exactly between the k-th and (k+1)-th unit. Depending on the overall distribution of activity, a greater or lesser number than k of the most active units will be over threshold. This can have the advantage of greater flexibility of representation in the layer during learning. The average of g_i^Θ is computed for the top k units as:

⟨g_i^Θ⟩_k = (1/k) Σ_{i=1..k} g_i^Θ(i)   (3.4)

Then the average of g_i^Θ is computed for the remaining n-k units as:

⟨g_i^Θ⟩_{n-k} = (1/(n-k)) Σ_{i=k+1..n} g_i^Θ(i)   (3.5)

71 The layer-wide inhibitory conductance g_i is placed between these two averages as:

g_i = ⟨g_i^Θ⟩_{n-k} + q (⟨g_i^Θ⟩_k - ⟨g_i^Θ⟩_{n-k})   (3.6)

The average-based kwta inhibitory function is illustrated in Fig. 3.25. If the strongest k units are clearly separate from all the others (large differential, Fig. 3.25c), then the average-based kwta gives a higher level of inhibition than basic kwta, and fewer than k units become active. If they are not clearly separate (small differential, Fig. 3.25b), then the average-based kwta gives a lower level of inhibition than basic kwta, and more than k units become active. Since there is typically more spread between the two averages in the average-based version than there was between the two units (k and k+1) in the basic version, the value taken by the parameter q is more important. This value is typically adjusted based on the overall activity level of the layer (0.5 for activity levels around 25%, and higher for lower levels of activity). The flexibility that this gives can also be viewed as a lack of control, which can be a problem when the activity levels are sparse. As a rule, average-based kwta is useful for layers having greater than 15% activity, and basic kwta for layers that are sparser than that.
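The contrast between the two versions can be sketched directly on a list of pre-computed g_i^Θ values (the numbers used below are illustrative):

```python
import numpy as np

def basic_kwta(g_theta, k, q=0.25):
    """Eqn 3.3 applied to pre-computed g_i^Theta values."""
    g = np.sort(np.asarray(g_theta, float))[::-1]
    return g[k] + q * (g[k - 1] - g[k])

def avg_kwta(g_theta, k, q=0.5):
    """Eqns 3.4-3.6: inhibition placed between the mean g_i^Theta of
    the top k units and the mean of the remaining n-k units."""
    g = np.sort(np.asarray(g_theta, float))[::-1]
    top_k, rest = g[:k].mean(), g[k:].mean()   # Eqns 3.4 and 3.5
    return rest + q * (top_k - rest)           # Eqn 3.6
```

When the top k are clearly separated, the average-based version yields more inhibition than the basic version; when the k-th and (k+1)-th units are nearly tied, it yields less, letting more than k units stay active.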

72 3.5.4 Exploration of kwta Inhibition We can see how the basic form of kwta inhibition works in the simulation inhib.proj by setting inhib_type to kwta_inhib. [Panels: inhib_type = unit_inhib; inhib_type = kwta_inhib] 1) We find that there is no activity in the inhibitory units because we are using the kwta function. 2) The activity function appears smoother because the kwta function is doing a perfect job of anticipating the level of inhibition needed to balance the excitation. 3) Activation of the excitatory units begins earlier because a faster time constant (dt.hidden = 0.2 instead of 0.04) is being used. 4) More than 15 units are active, even though kwta_pct = [Note that on page 104, the text states that the k value specifies the maximum number of hidden units that can be strongly active.] The larger number is due to the noisy XX1 output activation function (page 104).

74 Now set inhib_type to kwta_avg_inhib. The hidden layer activation settles on an activation level of around 11 percent regardless of the number of active units in the input layer. This is an example of set point behavior. [Figure: inhib_type = kwta_avg_inhib]

75 Question 3.13 (a) The fastest value of the update parameter (dt.hidden) that does not result in significant oscillatory behavior is 1.0. This value is much higher than the corresponding value for unit-based inhibition (0.04). (b) kwta can use such a fast update time because the hidden units are the only ones actually being integrated. That is, the hidden units do not need to wait for the inhibitory feedback from the inhibitory units to counterbalance their excitation level; the kwta function takes care of this immediately.

76 k-or-less property of the basic kwta function Using the basic kwta function, increase g_bar_l to find a value that prevents excitation from an input_pct of 10 or less from activating any of the hidden units, but allows excitation of 20 or more to activate both layers. This is supposed to illustrate the k-or-less property, although more than k (15) units in the hidden layers are observed to be active. It appears that there are 15 strongly active units, though. [Answer: g_bar_l = 0.6]

77 3.5.5 Digits Revisited with kwta Inhibition Set network to LOCALIST_NETWORK (make sure biases are turned off). The basic kwta inhibition function is in effect, with the hidden_k parameter specifying the maximum number of hidden units that can be strongly active. [Panels: hidden_k = 1, 2, 3] Increasing hidden_k causes increasingly distributed hidden layer patterns. Roughly, the value of k determines the number of strongly active units. However, there is variation due to the effect of ties and the noise in the noisy XX1 activation function. Notice that the level of activation can be controlled by either kwta (hidden_k) or g_bar_l. Using kwta works better because of its set point behavior. That is, once it is set, it is effective regardless of changes in many other parameters.

78 Now set network to DISTRIBUTED_NETWORK. Roughly, there is only one active unit per input pattern (except when there is a tie).

79 Question 3.14 (a) Using one feature, i.e. k=1, gives a similarity structure having only 5 clusters for the 10 digits. Thus, it doesn't do very well. (b) With k=2, we get 7 clusters; with k=3, we get 9 clusters. Finally, with k=4, we get 10 clusters, i.e. all the digits are individually represented and there are no collapsed distinctions. (c) Only one hidden pattern, for digit 8, actually has 4 units active. (d) Not every hidden pattern actually has 4 active units, because the basic kwta function is in effect. It allows some flexibility in the number of active units, between 0 and k. This is why it is called k-or-less WTA. In other words, 4 is just an upper limit on the number of active units. (e) With the leak current reduced to 1, the activations are stronger and more units are activated. We get only 9 clusters, with a collapsed distinction between 5 and 2. When it is reduced to 0, the activations are even stronger and even more units are activated. Now we get only 7 clusters, with additional collapsed distinctions between 9 and 7, and between 0 and 4.

80 3.5.6 Other Simple Inhibition Functions Historically, kwta derives from the single WTA function, used by Grossberg (1976) and Rumelhart & Zipser (1986) in competitive learning algorithms. In the single WTA function, the single most excited unit's activity is set to 1 and all the others are set to 0. Nowlan (1990) introduced a softer version of this algorithm based on Bayesian statistics, in which units are activated to the extent that their likelihood of predicting the input pattern is greater than that of other units. The likelihood, written as P(data | hypothesis), is the probability of the input data occurring given that the hypothesis is true. The activity of each unit (y_j) depends on the extent to which its likelihood is greater than that of the other units. It is determined by a ratio where: (a) the numerator is the likelihood of that unit; (b) the denominator is the sum of the likelihoods of all the units. Even though the Nowlan model uses graded soft activation values, it is still based on a localist representation, where each unit solely represents a different input category.
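The Nowlan-style soft competition amounts to normalizing the likelihoods across the layer (a minimal sketch; the likelihood values in the test are hypothetical):

```python
import numpy as np

def soft_wta(likelihoods):
    """Each unit's activity y_j = P(data | h_j) / sum_k P(data | h_k):
    graded competition rather than a single hard winner."""
    L = np.asarray(likelihoods, dtype=float)
    return L / L.sum()
```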

81 In a localist representation, all the hypotheses of the different units are exhaustive, meaning that together they represent all possible input patterns. They are also mutually exclusive, meaning that each unit represents a different set of input patterns. Hence, there is no overlap in their representations, and they do not work cooperatively. Because the mutual exclusivity of single WTA networks precludes representation by multiple cooperating units, their usefulness is limited. However, they are valuable because they are simple enough to allow mathematical analysis of the entire network. The kwta function is too complex for that.

82 The Kohonen network uses an activation function similar to basic WTA. A single winner is selected and then a neighborhood of units around the winner is also activated. The activation trails off as a function of distance from the winner. The Kohonen network suffers from a rigidity of representation because it does not have the full power of distributed representations. Nonetheless, the neighborhood bias has some useful properties and can be implemented in the lateral connectivity of a network within the kwta framework. The interactive activation and competition (IAC) model of McClelland & Rumelhart (1981) has units that transmit both excitation and inhibition directly to other units. It also uses bidirectional excitatory connectivity between layers (bottom-up and top-down excitation), as do the ART models of Grossberg. In the IAC model, it is difficult to maintain a balance between excitation and inhibition that keeps two or more units active at the same time. This limits the usefulness of the IAC model for distributed representation. In general, distributed representations are more stable when inhibition is kept separate from the excitatory units, either as separate inhibitory units or as an inhibition function.

83 3.6 Constraint Satisfaction Constraint satisfaction provides a computational perspective on network function in which the network is viewed as simultaneously satisfying constraints imposed on it by external inputs from the environment and by the internal weights and activation states of the network itself. Hopfield studied networks in which units are mutually connected with symmetrical, bidirectional connections and have sigmoidal activation functions. He showed mathematically, using principles from physics, that these networks maximize the degree of constraint satisfaction. The Hopfield network is a descendant of the Hebbian nerve cell assembly. The mathematical analysis of constraint satisfaction uses the energy function concept discussed previously. As we saw before, the analogy of energy can be useful in understanding network dynamics. The symmetry of the connections in the Hopfield network is important for its ability to settle into an energy minimum. We assume that the network is initially presented with an input pattern, represented by an initial point on the energy landscape. The process of constraint satisfaction in the Hopfield network is likened to the system energy going to lower states as the unit activities are updated. The activities reach a steady state as the system energy settles into an energy well (minimum).

84 The network energy is defined as:

E = -(1/2) Σ_i Σ_j x_i w_ij x_j   (3.9)

We see that the energy depends on the activity levels of each pair of units, as well as the weight between them. [It is often convenient for the activity levels to be either 0 or 1, and the weights to be fractions between 0 and 1.] When all three terms in the product (the activities of both units and their weight) are large, a large product results. Larger products will tend to produce lower (more negative) energy values, which are more stable. Constraints are represented by the extent to which the activity is consistent with the weights. The more consistent they are, the lower the energy. When the energy reaches a minimum, we can say that the network has found a consistent state that maximally satisfies the constraints. The negative of the energy is called the harmony. Greater harmony occurs when two units are strongly active and they are connected by a large weight. Thus the lowest energy well is most stable and has the greatest harmony.
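Eqn 3.9 and the settling process can be sketched for a small binary network. The 4-unit weight matrix below is chosen by hand (an illustrative example, not from the text) so that the pattern [1,1,0,0] and its complement are the two energy wells:

```python
import numpy as np

def energy(x, W):
    """Eqn 3.9: E = -1/2 * sum_ij x_i W_ij x_j."""
    return -0.5 * x @ W @ x

def settle(x, W, sweeps=10):
    """Asynchronous updates: each unit turns on iff its net input is
    non-negative. With symmetric weights, each flip can only lower
    (or preserve) the energy, so the state falls into a well."""
    x = x.copy()
    for _ in range(sweeps):
        for j in range(len(x)):
            x[j] = 1.0 if W[j] @ x >= 0 else 0.0
    return x

# Hand-built symmetric weights: units {0,1} support each other,
# units {2,3} support each other, and the two pairs inhibit each other.
W = np.array([[ 0,  1, -1, -1],
              [ 1,  0, -1, -1],
              [-1, -1,  0,  1],
              [-1, -1,  1,  0]], dtype=float)
```

Cueing the network with a corrupted pattern completes it while the energy drops, illustrating settling into a well.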

85 3.6.1 Attractors Again In the Hopfield network, activation updating leads to lower energy (or higher harmony), and the network converges to the lowest energy (highest harmony) state. The minima of the energy landscape (maxima of the harmony landscape) represent attractors of the dynamics of the system. 3.6.2 The Role of Noise A generic problem in updating neural networks is that the best solution to a problem is represented by a global minimum in the energy landscape. It is desirable for the state of the system to settle into this global minimum. However, there are often local minima in the energy landscape in which the system can become trapped. A local minimum is a suboptimal state that may fail to satisfy many of the desired constraints. Noise can actually help overcome this problem. It can shake the system out of shallow local minima and allow it to seek deeper energy wells. The use of noise does not guarantee that the overall global minimum will be reached, but it can be useful nonetheless. In simulations, noise can be added to the activation function in those situations where it is needed.

86 Simulated annealing is a technique that uses noise in a special way. The name comes from an analogy to the gradual cooling used to anneal metals. Noise enters the system at a certain level at the start of processing, but as the network settles, the noise is gradually reduced. The greater noise levels at the start of processing allow the network to explore a range of different activation states. As the network activation state moves down the energy landscape, the noise is gradually tapered off to allow the network to settle.
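A schematic version of this on a Hopfield-style energy: units update stochastically with probability sigmoid(net input / T), and the temperature T (the noise level) is tapered each cycle. The update rule, schedule, and weight matrix here are illustrative assumptions, not taken from the text.

```python
import numpy as np

def anneal(W, cycles=200, t0=1.0, decay=0.97, seed=0):
    """Simulated annealing sketch: stochastic unit updates whose noise
    level (temperature T) is gradually reduced as the network settles."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, len(W)).astype(float)  # random start state
    T = t0
    for _ in range(cycles):
        for j in range(len(x)):
            z = np.clip((W[j] @ x) / T, -50.0, 50.0)  # avoid exp overflow
            p = 1.0 / (1.0 + np.exp(-z))              # P(unit j turns on)
            x[j] = 1.0 if rng.random() < p else 0.0
        T *= decay   # gradually reduce the noise
    return x

# Same illustrative weights as the energy example: two mutually
# supportive pairs of units that inhibit each other.
W = np.array([[ 0,  1, -1, -1],
              [ 1,  0, -1, -1],
              [-1, -1,  0,  1],
              [-1, -1,  1,  0]], dtype=float)
```

Early cycles wander across states; as T shrinks, the updates become nearly deterministic and the state freezes into one of the energy wells.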

87 3.6.3 The Role of Inhibition What is most desirable for constraint satisfaction with neural networks is that the system should follow the steepest path into the deepest energy well in the least amount of time. What is least desirable is that it should wander around aimlessly in high energy terrain. Inhibition can play a role in restricting the roaming around through different activation states and cause the system to achieve faster, more effective constraint satisfaction.

88 3.6.4 Explorations of Constraint Satisfaction: Cats and Dogs This simulation examines a semantic network: a set of relationships among different features is used to represent a set of entities in the world. The inputs are all soft-clamped onto the network units. When we present the network with a single feature (an individual's name), it recalls all the information about that individual. This is an example of pattern completion. It may be thought of as bottom-up, since it is like recognizing an individual in the environment. Case 1: When we present the network with the cat category, it activates all the units whose features are typical of the category. This is also an example of pattern completion, but it can be viewed as top-down, since it is like activating a semantic category. This represents the imposition of a single constraint. We find that the network harmony increases monotonically as the network settles into the solution. This indicates that the network is increasingly satisfying the imposed constraints as the activations are updated.

89 Question 3.15 (a) Explain the reason for the different levels of activation for the different features of cats when just Cat was activated. Features: Name: only those names of cats are activated. Identity: only cat identities are activated. Color: white and orange (cat colors) are strongly activated; black is partially activated because it is a color shared by cats and dogs. Favorite_Food: Grass (cat food) is strongly activated; bugs and scraps are partially activated because they are food shared by cats and dogs. Size: only small and medium are activated because there are no large cats. Favorite_Toy: only string and feather are activated because they are cat toys. (b) This information might be useful in search behavior by priming the network to look for a specific category of objects, e.g. Cat.

90 Case 2: If we now activate orange and cat categories, we get just the two individuals activated that jointly satisfy these two constraints. In comparing the network harmony for Case 2 with Case 1, we see that the harmony rises to a lower final level for Case 2. This reflects the general property that a greater number of constraints leads to lower harmony. In a sense, harmony is lower because it is more difficult to satisfy multiple constraints than just one.

91 3.6.5 Explorations of Constraint Satisfaction: Necker Cube The Necker cube is an ambiguous visual image that typically evokes bistable perceptual states. The visual system tends to interpret the perspective in either one of two ways, but not both at the same time. In the simulation, the layer is divided into two parts, each corresponding to one view (downward-looking or upward-looking), and each comprising 8 units. The connections are weighted so that each part is only connected to itself. The layer has basic kwta with k=8. 1. In the default condition, inputs are given equally to all units. 2. Active units will co-activate the connected units of their view, leading to competition between the growth of the two views. 3. The kwta function will tend to enforce a separation between the levels of activation in the two views. 4. Eventually, one view always wins out. Which view wins is random from run to run.
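The competition can be caricatured in a few lines. This is a simplifying sketch, not the simulator's network: uniform input, a scalar within-pool support term, Gaussian noise, and a hard top-8 kwta are all assumptions.

```python
import numpy as np

def necker(cycles=50, seed=None):
    """Two pools of 8 units, one per cube interpretation. Each unit's
    excitation = equal input + support from its own pool + noise;
    kwta keeps only the 8 most excited units active each cycle."""
    rng = np.random.default_rng(seed)
    act = np.zeros(16)
    for _ in range(cycles):
        support = np.repeat([act[:8].sum(), act[8:].sum()], 8)
        g_e = 1.0 + 0.2 * support + rng.normal(0.0, 0.05, 16)
        act = np.zeros(16)
        act[np.argsort(g_e)[-8:]] = 1.0   # kwta with k = 8
    return act
```

Noise breaks the initial symmetry; once one pool gets a majority of the k winners, its extra within-pool support locks it in, so from run to run either the first or the second pool wins completely.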


More information

Information Processing During Transient Responses in the Crayfish Visual System

Information Processing During Transient Responses in the Crayfish Visual System Information Processing During Transient Responses in the Crayfish Visual System Christopher J. Rozell, Don. H. Johnson and Raymon M. Glantz Department of Electrical & Computer Engineering Department of

More information

Computational Explorations in Cognitive Neuroscience Chapter 7: Large-Scale Brain Area Functional Organization

Computational Explorations in Cognitive Neuroscience Chapter 7: Large-Scale Brain Area Functional Organization Computational Explorations in Cognitive Neuroscience Chapter 7: Large-Scale Brain Area Functional Organization 1 7.1 Overview This chapter aims to provide a framework for modeling cognitive phenomena based

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction Artificial neural networks are mathematical inventions inspired by observations made in the study of biological systems, though loosely based on the actual biology. An artificial

More information

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press.

Rolls,E.T. (2016) Cerebral Cortex: Principles of Operation. Oxford University Press. Digital Signal Processing and the Brain Is the brain a digital signal processor? Digital vs continuous signals Digital signals involve streams of binary encoded numbers The brain uses digital, all or none,

More information

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014

Analysis of in-vivo extracellular recordings. Ryan Morrill Bootcamp 9/10/2014 Analysis of in-vivo extracellular recordings Ryan Morrill Bootcamp 9/10/2014 Goals for the lecture Be able to: Conceptually understand some of the analysis and jargon encountered in a typical (sensory)

More information

LESSON 3.3 WORKBOOK. Why does applying pressure relieve pain?

LESSON 3.3 WORKBOOK. Why does applying pressure relieve pain? Postsynaptic potentials small changes in voltage (membrane potential) due to the binding of neurotransmitter. Receptor-gated ion channels ion channels that open or close in response to the binding of a

More information

What is Anatomy and Physiology?

What is Anatomy and Physiology? Introduction BI 212 BI 213 BI 211 Ecosystems Organs / organ systems Cells Organelles Communities Tissues Molecules Populations Organisms Campbell et al. Figure 1.4 Introduction What is Anatomy and Physiology?

More information

LESSON 3.3 WORKBOOK. Why does applying pressure relieve pain? Workbook. Postsynaptic potentials

LESSON 3.3 WORKBOOK. Why does applying pressure relieve pain? Workbook. Postsynaptic potentials Depolarize to decrease the resting membrane potential. Decreasing membrane potential means that the membrane potential is becoming more positive. Excitatory postsynaptic potentials (EPSP) graded postsynaptic

More information

Exploring the Functional Significance of Dendritic Inhibition In Cortical Pyramidal Cells

Exploring the Functional Significance of Dendritic Inhibition In Cortical Pyramidal Cells Neurocomputing, 5-5:389 95, 003. Exploring the Functional Significance of Dendritic Inhibition In Cortical Pyramidal Cells M. W. Spratling and M. H. Johnson Centre for Brain and Cognitive Development,

More information

Dynamics of Color Category Formation and Boundaries

Dynamics of Color Category Formation and Boundaries Dynamics of Color Category Formation and Boundaries Stephanie Huette* Department of Psychology, University of Memphis, Memphis, TN Definition Dynamics of color boundaries is broadly the area that characterizes

More information

Serial visual search from a parallel model q

Serial visual search from a parallel model q Vision Research 45 (2005) 2987 2992 www.elsevier.com/locate/visres Serial visual search from a parallel model q Seth A. Herd *, Randall C. OÕReilly Department of Psychology, University of Colorado Boulder,

More information

CS 453X: Class 18. Jacob Whitehill

CS 453X: Class 18. Jacob Whitehill CS 453X: Class 18 Jacob Whitehill More on k-means Exercise: Empty clusters (1) Assume that a set of distinct data points { x (i) } are initially assigned so that none of the k clusters is empty. How can

More information

Memory: Computation, Genetics, Physiology, and Behavior. James L. McClelland Stanford University

Memory: Computation, Genetics, Physiology, and Behavior. James L. McClelland Stanford University Memory: Computation, Genetics, Physiology, and Behavior James L. McClelland Stanford University A Playwright s Take on Memory What interests me a great deal is the mistiness of the past Harold Pinter,

More information

Target-to-distractor similarity can help visual search performance

Target-to-distractor similarity can help visual search performance Target-to-distractor similarity can help visual search performance Vencislav Popov (vencislav.popov@gmail.com) Lynne Reder (reder@cmu.edu) Department of Psychology, Carnegie Mellon University, Pittsburgh,

More information

Learning Utility for Behavior Acquisition and Intention Inference of Other Agent

Learning Utility for Behavior Acquisition and Intention Inference of Other Agent Learning Utility for Behavior Acquisition and Intention Inference of Other Agent Yasutake Takahashi, Teruyasu Kawamata, and Minoru Asada* Dept. of Adaptive Machine Systems, Graduate School of Engineering,

More information

Physiology of Tactile Sensation

Physiology of Tactile Sensation Physiology of Tactile Sensation Objectives: 1. Describe the general structural features of tactile sensory receptors how are first order nerve fibers specialized to receive tactile stimuli? 2. Understand

More information

Thalamocortical Feedback and Coupled Oscillators

Thalamocortical Feedback and Coupled Oscillators Thalamocortical Feedback and Coupled Oscillators Balaji Sriram March 23, 2009 Abstract Feedback systems are ubiquitous in neural systems and are a subject of intense theoretical and experimental analysis.

More information

-Ensherah Mokheemer. -Amani Nofal. -Loai Alzghoul

-Ensherah Mokheemer. -Amani Nofal. -Loai Alzghoul -1 -Ensherah Mokheemer -Amani Nofal -Loai Alzghoul 1 P a g e Today we will start talking about the physiology of the nervous system and we will mainly focus on the Central Nervous System. Introduction:

More information

Modeling Category Learning with Exemplars and Prior Knowledge

Modeling Category Learning with Exemplars and Prior Knowledge Modeling Category Learning with Exemplars and Prior Knowledge Harlan D. Harris (harlan.harris@nyu.edu) Bob Rehder (bob.rehder@nyu.edu) New York University, Department of Psychology New York, NY 3 USA Abstract

More information

The Role of Mitral Cells in State Dependent Olfactory Responses. Trygve Bakken & Gunnar Poplawski

The Role of Mitral Cells in State Dependent Olfactory Responses. Trygve Bakken & Gunnar Poplawski The Role of Mitral Cells in State Dependent Olfactory Responses Trygve akken & Gunnar Poplawski GGN 260 Neurodynamics Winter 2008 bstract Many behavioral studies have shown a reduced responsiveness to

More information

Hierarchical dynamical models of motor function

Hierarchical dynamical models of motor function ARTICLE IN PRESS Neurocomputing 70 (7) 975 990 www.elsevier.com/locate/neucom Hierarchical dynamical models of motor function S.M. Stringer, E.T. Rolls Department of Experimental Psychology, Centre for

More information

Chapter 7 Nerve Cells and Electrical Signaling

Chapter 7 Nerve Cells and Electrical Signaling Chapter 7 Nerve Cells and Electrical Signaling 7.1. Overview of the Nervous System (Figure 7.1) 7.2. Cells of the Nervous System o Neurons are excitable cells which can generate action potentials o 90%

More information

Artificial Neural Network : Introduction

Artificial Neural Network : Introduction Artificial Neural Network : Introduction Debasis Samanta IIT Kharagpur dsamanta@iitkgp.ac.in 23.03.2018 Debasis Samanta (IIT Kharagpur) Soft Computing Applications 23.03.2018 1 / 20 Biological nervous

More information

Prof. Greg Francis 7/31/15

Prof. Greg Francis 7/31/15 s PSY 200 Greg Francis Lecture 06 How do you recognize your grandmother? Action potential With enough excitatory input, a cell produces an action potential that sends a signal down its axon to other cells

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 11: Attention & Decision making Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis

More information

Mathematical Structure & Dynamics of Aggregate System Dynamics Infectious Disease Models 2. Nathaniel Osgood CMPT 394 February 5, 2013

Mathematical Structure & Dynamics of Aggregate System Dynamics Infectious Disease Models 2. Nathaniel Osgood CMPT 394 February 5, 2013 Mathematical Structure & Dynamics of Aggregate System Dynamics Infectious Disease Models 2 Nathaniel Osgood CMPT 394 February 5, 2013 Recall: Kendrick-McKermack Model Partitioning the population into 3

More information

COGS 107B Week 1. Hyun Ji Friday 4:00-4:50pm

COGS 107B Week 1. Hyun Ji Friday 4:00-4:50pm COGS 107B Week 1 Hyun Ji Friday 4:00-4:50pm Before We Begin... Hyun Ji 4th year Cognitive Behavioral Neuroscience Email: hji@ucsd.edu In subject, always add [COGS107B] Office hours: Wednesdays, 3-4pm in

More information

Representational Switching by Dynamical Reorganization of Attractor Structure in a Network Model of the Prefrontal Cortex

Representational Switching by Dynamical Reorganization of Attractor Structure in a Network Model of the Prefrontal Cortex Representational Switching by Dynamical Reorganization of Attractor Structure in a Network Model of the Prefrontal Cortex Yuichi Katori 1,2 *, Kazuhiro Sakamoto 3, Naohiro Saito 4, Jun Tanji 4, Hajime

More information

A Novel Account in Neural Terms. Gal Chechik Isaac Meilijson and Eytan Ruppin. Schools of Medicine and Mathematical Sciences

A Novel Account in Neural Terms. Gal Chechik Isaac Meilijson and Eytan Ruppin. Schools of Medicine and Mathematical Sciences Synaptic Pruning in Development: A Novel Account in Neural Terms Gal Chechik Isaac Meilijson and Eytan Ruppin Schools of Medicine and Mathematical Sciences Tel-Aviv University Tel Aviv 69978, Israel gal@devil.tau.ac.il

More information

Unit 1 Exploring and Understanding Data

Unit 1 Exploring and Understanding Data Unit 1 Exploring and Understanding Data Area Principle Bar Chart Boxplot Conditional Distribution Dotplot Empirical Rule Five Number Summary Frequency Distribution Frequency Polygon Histogram Interquartile

More information

Sum of Neurally Distinct Stimulus- and Task-Related Components.

Sum of Neurally Distinct Stimulus- and Task-Related Components. SUPPLEMENTARY MATERIAL for Cardoso et al. 22 The Neuroimaging Signal is a Linear Sum of Neurally Distinct Stimulus- and Task-Related Components. : Appendix: Homogeneous Linear ( Null ) and Modified Linear

More information

Self-Organization and Segmentation with Laterally Connected Spiking Neurons

Self-Organization and Segmentation with Laterally Connected Spiking Neurons Self-Organization and Segmentation with Laterally Connected Spiking Neurons Yoonsuck Choe Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 USA Risto Miikkulainen Department

More information

Why do we have a hippocampus? Short-term memory and consolidation

Why do we have a hippocampus? Short-term memory and consolidation Why do we have a hippocampus? Short-term memory and consolidation So far we have talked about the hippocampus and: -coding of spatial locations in rats -declarative (explicit) memory -experimental evidence

More information

Katsunari Shibata and Tomohiko Kawano

Katsunari Shibata and Tomohiko Kawano Learning of Action Generation from Raw Camera Images in a Real-World-Like Environment by Simple Coupling of Reinforcement Learning and a Neural Network Katsunari Shibata and Tomohiko Kawano Oita University,

More information

A Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences

A Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences A Neurally-Inspired Model for Detecting and Localizing Simple Motion Patterns in Image Sequences Marc Pomplun 1, Yueju Liu 2, Julio Martinez-Trujillo 2, Evgueni Simine 2, and John K. Tsotsos 2 1 Department

More information

Reinforcement Learning : Theory and Practice - Programming Assignment 1

Reinforcement Learning : Theory and Practice - Programming Assignment 1 Reinforcement Learning : Theory and Practice - Programming Assignment 1 August 2016 Background It is well known in Game Theory that the game of Rock, Paper, Scissors has one and only one Nash Equilibrium.

More information

Chapter 6 subtitles postsynaptic integration

Chapter 6 subtitles postsynaptic integration CELLULAR NEUROPHYSIOLOGY CONSTANCE HAMMOND Chapter 6 subtitles postsynaptic integration INTRODUCTION (1:56) This sixth and final chapter deals with the summation of presynaptic currents. Glutamate and

More information

Artificial Neural Networks

Artificial Neural Networks Artificial Neural Networks Torsten Reil torsten.reil@zoo.ox.ac.uk Outline What are Neural Networks? Biological Neural Networks ANN The basics Feed forward net Training Example Voice recognition Applications

More information

The case for quantum entanglement in the brain Charles R. Legéndy September 26, 2017

The case for quantum entanglement in the brain Charles R. Legéndy September 26, 2017 The case for quantum entanglement in the brain Charles R. Legéndy September 26, 2017 Introduction Many-neuron cooperative events The challenge of reviving a cell assembly after it has been overwritten

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION doi:10.1038/nature10776 Supplementary Information 1: Influence of inhibition among blns on STDP of KC-bLN synapses (simulations and schematics). Unconstrained STDP drives network activity to saturation

More information

Technical Specifications

Technical Specifications Technical Specifications In order to provide summary information across a set of exercises, all tests must employ some form of scoring models. The most familiar of these scoring models is the one typically

More information

Evaluating the Effect of Spiking Network Parameters on Polychronization

Evaluating the Effect of Spiking Network Parameters on Polychronization Evaluating the Effect of Spiking Network Parameters on Polychronization Panagiotis Ioannou, Matthew Casey and André Grüning Department of Computing, University of Surrey, Guildford, Surrey, GU2 7XH, UK

More information

35-2 The Nervous System Slide 1 of 38

35-2 The Nervous System Slide 1 of 38 1 of 38 35-2 The Nervous System The nervous system controls and coordinates functions throughout the body and responds to internal and external stimuli. 2 of 38 Neurons Neurons The messages carried by

More information

Classification of Synapses Using Spatial Protein Data

Classification of Synapses Using Spatial Protein Data Classification of Synapses Using Spatial Protein Data Jenny Chen and Micol Marchetti-Bowick CS229 Final Project December 11, 2009 1 MOTIVATION Many human neurological and cognitive disorders are caused

More information

Omar Sami. Muhammad Abid. Muhammad khatatbeh

Omar Sami. Muhammad Abid. Muhammad khatatbeh 10 Omar Sami Muhammad Abid Muhammad khatatbeh Let s shock the world In this lecture we are going to cover topics said in previous lectures and then start with the nerve cells (neurons) and the synapses

More information

PHYSIOLOGICAL ADAPTATIONS FOR SURVIVAL

PHYSIOLOGICAL ADAPTATIONS FOR SURVIVAL PHYSIOLOGICAL ADAPTATIONS FOR SURVIVAL HOMEOSTASIS Homeostasis means staying similar or unchanging and refers to the constant internal environment or steady state of an organism. It also includes the processes

More information

Chapter 2 The Brain or Bio Psychology

Chapter 2 The Brain or Bio Psychology Chapter 2 The Brain or Bio Psychology 1 2 3 1 Glial Cells Surround neurons and hold them in place Make Myelin (covering for neurons) Manufacture nutrient chemicals neurons need Absorb toxins and waste

More information

Discrimination and Generalization in Pattern Categorization: A Case for Elemental Associative Learning

Discrimination and Generalization in Pattern Categorization: A Case for Elemental Associative Learning Discrimination and Generalization in Pattern Categorization: A Case for Elemental Associative Learning E. J. Livesey (el253@cam.ac.uk) P. J. C. Broadhurst (pjcb3@cam.ac.uk) I. P. L. McLaren (iplm2@cam.ac.uk)

More information

The Re(de)fined Neuron

The Re(de)fined Neuron The Re(de)fined Neuron Kieran Greer, Distributed Computing Systems, Belfast, UK. http://distributedcomputingsystems.co.uk Version 1.0 Abstract This paper describes a more biologically-oriented process

More information

PHY3111 Mid-Semester Test Study. Lecture 2: The hierarchical organisation of vision

PHY3111 Mid-Semester Test Study. Lecture 2: The hierarchical organisation of vision PHY3111 Mid-Semester Test Study Lecture 2: The hierarchical organisation of vision 1. Explain what a hierarchically organised neural system is, in terms of physiological response properties of its neurones.

More information

Intelligent Control Systems

Intelligent Control Systems Lecture Notes in 4 th Class in the Control and Systems Engineering Department University of Technology CCE-CN432 Edited By: Dr. Mohammed Y. Hassan, Ph. D. Fourth Year. CCE-CN432 Syllabus Theoretical: 2

More information

Sleep-Wake Cycle I Brain Rhythms. Reading: BCP Chapter 19

Sleep-Wake Cycle I Brain Rhythms. Reading: BCP Chapter 19 Sleep-Wake Cycle I Brain Rhythms Reading: BCP Chapter 19 Brain Rhythms and Sleep Earth has a rhythmic environment. For example, day and night cycle back and forth, tides ebb and flow and temperature varies

More information

Continuous transformation learning of translation invariant representations

Continuous transformation learning of translation invariant representations Exp Brain Res (21) 24:255 27 DOI 1.17/s221-1-239- RESEARCH ARTICLE Continuous transformation learning of translation invariant representations G. Perry E. T. Rolls S. M. Stringer Received: 4 February 29

More information

Brad May, PhD Johns Hopkins University

Brad May, PhD Johns Hopkins University Brad May, PhD Johns Hopkins University When the ear cannot function normally, the brain changes. Brain deafness contributes to poor speech comprehension, problems listening in noise, abnormal loudness

More information

Part 1 Making the initial neuron connection

Part 1 Making the initial neuron connection To begin, follow your teacher's directions to open the Virtual Neurons software. On the left side of the screen is a group of skin cells. On the right side of the screen is a group of muscle fibers. In

More information

Bundles of Synergy A Dynamical View of Mental Function

Bundles of Synergy A Dynamical View of Mental Function Bundles of Synergy A Dynamical View of Mental Function Ali A. Minai University of Cincinnati University of Cincinnati Laxmi Iyer Mithun Perdoor Vaidehi Venkatesan Collaborators Hofstra University Simona

More information

A Race Model of Perceptual Forced Choice Reaction Time

A Race Model of Perceptual Forced Choice Reaction Time A Race Model of Perceptual Forced Choice Reaction Time David E. Huber (dhuber@psyc.umd.edu) Department of Psychology, 1147 Biology/Psychology Building College Park, MD 2742 USA Denis Cousineau (Denis.Cousineau@UMontreal.CA)

More information

You can follow the path of the neural signal. The sensory neurons detect a stimulus in your finger and send that information to the CNS.

You can follow the path of the neural signal. The sensory neurons detect a stimulus in your finger and send that information to the CNS. 1 Nervous system maintains coordination through the use of electrical and chemical processes. There are three aspects: sensory, motor, and integrative, which we will discuss throughout the system. The

More information

Questions Addressed Through Study of Behavioral Mechanisms (Proximate Causes)

Questions Addressed Through Study of Behavioral Mechanisms (Proximate Causes) Jan 28: Neural Mechanisms--intro Questions Addressed Through Study of Behavioral Mechanisms (Proximate Causes) Control of behavior in response to stimuli in environment Diversity of behavior: explain the

More information

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence

Reading Assignments: Lecture 5: Introduction to Vision. None. Brain Theory and Artificial Intelligence Brain Theory and Artificial Intelligence Lecture 5:. Reading Assignments: None 1 Projection 2 Projection 3 Convention: Visual Angle Rather than reporting two numbers (size of object and distance to observer),

More information

A Drift Diffusion Model of Proactive and Reactive Control in a Context-Dependent Two-Alternative Forced Choice Task

A Drift Diffusion Model of Proactive and Reactive Control in a Context-Dependent Two-Alternative Forced Choice Task A Drift Diffusion Model of Proactive and Reactive Control in a Context-Dependent Two-Alternative Forced Choice Task Olga Lositsky lositsky@princeton.edu Robert C. Wilson Department of Psychology University

More information

Overview. Unit 2: What are the building blocks of our brains?

Overview. Unit 2: What are the building blocks of our brains? Unit 2: What are the building blocks of our brains? Overview In the last unit we discovered that complex brain functions occur as individual structures in the brain work together like an orchestra. We

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 5: Data analysis II Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single

More information

Abstract A neural network model called LISSOM for the cooperative self-organization of

Abstract A neural network model called LISSOM for the cooperative self-organization of Modeling Cortical Plasticity Based on Adapting Lateral Interaction Joseph Sirosh and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin, Austin, TX{78712. email: sirosh,risto@cs.utexas.edu

More information

Mechanosensation. Central Representation of Touch. Wilder Penfield. Somatotopic Organization

Mechanosensation. Central Representation of Touch. Wilder Penfield. Somatotopic Organization Mechanosensation Central Representation of Touch Touch and tactile exploration Vibration and pressure sensations; important for clinical testing Limb position sense John H. Martin, Ph.D. Center for Neurobiology

More information

Biological Process 9/7/10. (a) Anatomy: Neurons have three basic parts. 1. The Nervous System: The communication system of your body and brain

Biological Process 9/7/10. (a) Anatomy: Neurons have three basic parts. 1. The Nervous System: The communication system of your body and brain Biological Process Overview 1. The Nervous System: s (a) Anatomy, (b) Communication, (c) Networks 2. CNS/PNS 3. The Brain (a) Anatomy, (b) Localization of function 4. Methods to study the brain (Dr. Heidenreich)

More information

Object recognition and hierarchical computation

Object recognition and hierarchical computation Object recognition and hierarchical computation Challenges in object recognition. Fukushima s Neocognitron View-based representations of objects Poggio s HMAX Forward and Feedback in visual hierarchy Hierarchical

More information

Neural circuits PSY 310 Greg Francis. Lecture 05. Rods and cones

Neural circuits PSY 310 Greg Francis. Lecture 05. Rods and cones Neural circuits PSY 310 Greg Francis Lecture 05 Why do you need bright light to read? Rods and cones Photoreceptors are not evenly distributed across the retina 1 Rods and cones Cones are most dense in

More information

Cognitive Neuroscience Section 4

Cognitive Neuroscience Section 4 Perceptual categorization Cognitive Neuroscience Section 4 Perception, attention, and memory are all interrelated. From the perspective of memory, perception is seen as memory updating by new sensory experience.

More information

Synfire chains with conductance-based neurons: internal timing and coordination with timed input

Synfire chains with conductance-based neurons: internal timing and coordination with timed input Neurocomputing 5 (5) 9 5 www.elsevier.com/locate/neucom Synfire chains with conductance-based neurons: internal timing and coordination with timed input Friedrich T. Sommer a,, Thomas Wennekers b a Redwood

More information

Neurophysiology scripts. Slide 2

Neurophysiology scripts. Slide 2 Neurophysiology scripts Slide 2 Nervous system and Endocrine system both maintain homeostasis in the body. Nervous system by nerve impulse and Endocrine system by hormones. Since the nerve impulse is an

More information

A Race Model of Perceptual Forced Choice Reaction Time

A Race Model of Perceptual Forced Choice Reaction Time A Race Model of Perceptual Forced Choice Reaction Time David E. Huber (dhuber@psych.colorado.edu) Department of Psychology, 1147 Biology/Psychology Building College Park, MD 2742 USA Denis Cousineau (Denis.Cousineau@UMontreal.CA)

More information

Supplementary Figure S1: Histological analysis of kainate-treated animals

Supplementary Figure S1: Histological analysis of kainate-treated animals Supplementary Figure S1: Histological analysis of kainate-treated animals Nissl stained coronal or horizontal sections were made from kainate injected (right) and saline injected (left) animals at different

More information

Respiration Cellular Respiration Understand the relationship between glucose breakdown and ATP when you burn glucose with the help of oxygen, it

Respiration Cellular Respiration Understand the relationship between glucose breakdown and ATP when you burn glucose with the help of oxygen, it Respiration Cellular Respiration Understand the relationship between glucose breakdown and ATP when you burn glucose with the help of oxygen, it traps chemical energy into ATP Energy found in glucose stores

More information

http://www.diva-portal.org This is the published version of a paper presented at Future Active Safety Technology - Towards zero traffic accidents, FastZero2017, September 18-22, 2017, Nara, Japan. Citation

More information

NEURAL SYSTEMS FOR INTEGRATING ROBOT BEHAVIOURS

NEURAL SYSTEMS FOR INTEGRATING ROBOT BEHAVIOURS NEURAL SYSTEMS FOR INTEGRATING ROBOT BEHAVIOURS Brett Browning & Gordon Wyeth University of Queensland Computer Science and Electrical Engineering Department Email: browning@elec.uq.edu.au & wyeth@elec.uq.edu.au

More information

Decision-making mechanisms in the brain

Decision-making mechanisms in the brain Decision-making mechanisms in the brain Gustavo Deco* and Edmund T. Rolls^ *Institucio Catalana de Recerca i Estudis Avangats (ICREA) Universitat Pompeu Fabra Passeigde Circumval.lacio, 8 08003 Barcelona,

More information

Predicting Breast Cancer Survival Using Treatment and Patient Factors

Predicting Breast Cancer Survival Using Treatment and Patient Factors Predicting Breast Cancer Survival Using Treatment and Patient Factors William Chen wchen808@stanford.edu Henry Wang hwang9@stanford.edu 1. Introduction Breast cancer is the leading type of cancer in women

More information

Spectrograms (revisited)

Spectrograms (revisited) Spectrograms (revisited) We begin the lecture by reviewing the units of spectrograms, which I had only glossed over when I covered spectrograms at the end of lecture 19. We then relate the blocks of a

More information

MS&E 226: Small Data

MS&E 226: Small Data MS&E 226: Small Data Lecture 10: Introduction to inference (v2) Ramesh Johari ramesh.johari@stanford.edu 1 / 17 What is inference? 2 / 17 Where did our data come from? Recall our sample is: Y, the vector

More information

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training.

Nature Neuroscience: doi: /nn Supplementary Figure 1. Behavioral training. Supplementary Figure 1 Behavioral training. a, Mazes used for behavioral training. Asterisks indicate reward location. Only some example mazes are shown (for example, right choice and not left choice maze

More information