Emotions as a Vehicle for Rationality: Rational Decision Making Models Based on Emotion-Related Valuing and Hebbian Learning


Jan Treur 1, Muhammad Umair 1,2
1 VU University Amsterdam, Agent Systems Research Group, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands
2 COMSATS Institute of Information Technology, Lahore, Pakistan
1 {j.treur, m.umair}@vu.nl, 2 mumair@ciitlahore.edu.pk

Abstract. In this paper an adaptive decision model based on predictive loops through feeling states is analysed from the perspective of rationality. Hebbian learning is considered for different types of connections in the decision model. To assess the extent of rationality, a measure is introduced reflecting the environment's behaviour. Simulation results and the extents of rationality of the different models over time are presented and analysed.

Keywords: decision making, cognitive agent model, emotion, Hebbian learning

1 Introduction

Decision making usually involves considering different options and comparing them in order to make a reasonable choice among them. Each option has an associated emotional response, related to a prediction of a rewarding or aversive consequence of choosing that option. The extent to which such an emotional response associated to an option is felt as positive is often seen as a form of valuing of the option. In decisions such valuing plays an important role, and can be seen as a grounding for the decision. Decisions that are not solidly grounded by a positive feeling about them often do not last long, as any opportunity to get rid of them may be taken to cancel the decision.
In recent neurological literature this idea of emotional valuing (and grounding) of decisions has been related to a notion of value as represented in the amygdala; e.g., (Bechara, Damasio, and Damasio, 2003; Bechara, Damasio, Damasio, and Lee, 1999; Montague and Berns, 2002; Morrison and Salzman, 2010; Ousdal, Specht, Server, Andreassen, Dolan, and Jensen, 2014; Pessoa, 2010; Rangel, Camerer, and Montague, 2008; Rudebeck and Murray, 2014). A question that may be put forward is whether such decisions based on emotional valuing are not in conflict with rational behaviour. Indeed, there is a long tradition of considering emotions and rationality enemies of each other. It is this theme that is the focus of this paper. It will be discussed and computationally evaluated how emotions are not an enemy of but a vehicle for

rationality. It will be investigated in what sense emotional valuing as a basis for decision making satisfies some rationality criterion. Making a decision is not just an instantaneous process in the present. Instead, decision making has an embedding in the temporal dimension: earlier situations are relevant as well, and it also has implications for future situations. More specifically, past experiences with (outcomes of) decisions made within the given environment play an important role. Learning processes adapt the decision making mechanism to such experiences. In this way the decisions become more reasonable, or in some way rational, given the agent's increasing knowledge about the environment built up by these past experiences. The question to which extent such learning based on specific biologically plausible learning models leads to decision making satisfying some rationality criterion will be addressed in this paper. The computational model for decision making considered here first generates, for a given (observed) situation, preparations for a number of options relevant for that situation. Next, based on predictive as-if body loops, associated feeling states are generated, in order to obtain emotional valuations of the options; e.g., (Damasio, 1994; Damasio, 2004; Damasio, 2010; Janak and Tye, 2015; Ousdal, Specht, Server, Andreassen, Dolan, and Jensen, 2014; Pearson, Watson, and Platt, 2014; Pessoa, 2010; Rangel, Camerer, and Montague, 2008). The extent to which such a feeling state is positive strengthens the preparation for the related option. Through this process a strongest option emerges, which can become the outcome of the decision. For this selection process mutual inhibition relations may be added. The type of biologically inspired learning considered is Hebbian learning (cf. Hebb, 1949; Gerstner and Kistler, 2002), in four different variations obtained by applying it to different types of connections in the decision model.
In order to assess whether a specific decision making model can be considered rational, first a notion of rationality is needed. Such a notion strongly depends on characteristics of the environment. What is rational in one environment may be totally irrational in another environment, and vice versa. For example, throwing a ping pong ball (for table tennis) to hit something can be quite rational in an indoor environment without wind, but totally irrational in a stormy environment outdoors, as the effects of such an action will be totally different. Therefore a rationality measure needs to reflect the environment's characteristics in the sense of its behaviour when actions are performed. Two examples of such a rationality measure will be defined and applied to assess the computational decision model. The point of departure for the underlying notion of rationality here is that the more the agent makes the most beneficial choices for the given environment, the more rational it is. In this paper, Section 2 introduces the decision model and the different variants of adaptivity considered. Sections 3 to 5 present a number of simulation results for a deterministic world, a stochastic world and a changing world, respectively. In Section 6 measures for rationality are discussed, and the different models are evaluated. Finally, Section 7 is a discussion.

2 The Adaptive Decision Model Addressed

Traditionally an important function attributed to the amygdala concerns the context of fear. However, in recent years much evidence on the amygdala in humans has been collected showing a function beyond this fear context. In humans many parts of the prefrontal cortex (PFC) and other brain areas such as hippocampus, basal ganglia, and hypothalamus have extensive, often bidirectional connections with the amygdala; e.g., (Ghashghaei, Hilgetag, and Barbas, 2007; Janak and Tye, 2015; Likhtik and Paz, 2015; Morrison and Salzman, 2010; Salzman and Fusi, 2010). A role of amygdala activation has been found in various tasks involving emotional aspects; e.g., (Lindquist and Barrett, 2012; Murray, 2007; Pessoa, 2010). Usually emotional responses are triggered by stimuli for which a prediction is possible of a rewarding or aversive consequence. Feeling these emotions represents a way of experiencing the value of such a prediction: to which extent it is positive or negative for the person. This idea of value also plays a central role in work on the neural basis of economic choice in neuroeconomics. In particular, in decision-making tasks where different options are compared, choices have been related to a notion of value as represented in the amygdala; e.g., (Bechara, Damasio, and Damasio, 2003; Bechara, Damasio, Damasio, and Lee, 1999; Montague and Berns, 2002; Morrison and Salzman, 2010; Ousdal, Specht, Server, Andreassen, Dolan, and Jensen, 2014; Pessoa, 2010; Rangel, Camerer, and Montague, 2008; Sugrue, Corrado, and Newsome, 2005). Damasio (1999, 2010) distinguishes an emotion (or emotional response) from a feeling (or felt emotion); see for example: Seen from a neural perspective, the emotion-feeling cycle begins in the brain, with the perception and appraisal of a stimulus potentially capable of causing an emotion and the subsequent triggering of an emotion. The process then spreads elsewhere in the brain and in the body proper, building up the emotional state.
In closing, the process returns to the brain for the feeling part of the cycle, although the return involves brain regions different from those in which it all started. (Damasio, 2010) Any mental state in a person induces emotions felt by this person, as described by Damasio (1999; 2004; 2010); e.g.: few if any exceptions of any object or event, actually present or recalled from memory, are ever neutral in emotional terms. Through either innate design or by learning, we react to most, perhaps all, objects with emotions, however weak, and subsequent feelings, however feeble. (Damasio, 2004, p. 93) More specifically, in this paper it is assumed that responses in relation to a sensory representation state roughly proceed according to the following causal chain for a body loop, based on elements from Bosse, Jonker, and Treur (2008) and Damasio (1999; 2004):

sensory representation → preparation for bodily response → body state modification → sensing body state → sensory representation of body state → induced feeling

In addition, an as-if body loop uses a direct causal relation

preparation for bodily response → sensory representation of body state

as a shortcut in the causal chain; cf. Damasio (1994; 1999). This can be considered a prediction of the action effect by internal simulation (e.g., Hesslow, 2002). The resulting induced feeling is a valuation of this prediction. If the level of the feeling (which is assumed positive) is high, a positive valuation is obtained. In Damasio (1999) this is described as follows: The changes related to body state are achieved by one of two mechanisms. One involves what I call the body loop. It uses both humoral signals (chemical messages conveyed via the bloodstream) and neural signals (electrochemical messages conveyed via nerve pathways). As a result of both types of signal the body landscape is changed and is subsequently represented in somatosensory structures of the central nervous system, from the brain stem on up. The change in the representation of the body landscape can partly be achieved by another mechanism, which I call the as if body loop. In this alternate mechanism, the representation of body-related changes is created directly in sensory body maps, under the control of other neural sites, for instance, the prefrontal cortices. It is as if the body had really been changed but it was not. (Damasio, 1999, pp. 79-80) The body loop (or as-if body loop) is extended to a recursive (as-if) body loop by assuming that the preparation of the bodily response is also affected by the level of the induced feeling:

induced feeling → preparation for the bodily response

Such recursion is suggested by Damasio (2004), noticing that what is felt is a body state which is under control of the person: The brain has a direct means to respond to the object as feelings unfold because the object at the origin is inside the body, rather than external to it. The brain can act directly on the very object it perceives.
(…) The object at the origin on the one hand, and the brain map of that object on the other, can influence each other in a sort of reverberative process that is not to be found, for example, in the perception of an external object. (Damasio, 2004, pp. 91-92) In this way the valuation of the prediction affects the preparation. A high valuation will strengthen activation of the preparation. Informally described theories in scientific disciplines, for example in biological or neurological contexts, are often formulated in terms of causal relationships or in terms of dynamical systems. To adequately formalise such a theory the hybrid dynamic modelling language LEADSTO has been developed, which subsumes qualitative and quantitative causal relationships and dynamical systems; cf. (Bosse, Jonker, and Treur, 2008). This language has proven successful in a number of contexts, varying from biochemical processes that make up the dynamics of cell behaviour to neurological and cognitive processes; cf. (Bosse, Jonker, and Treur, 2008; Bosse, Jonker, Meij, and Treur, 2007). Within LEADSTO a local dynamic property (LP) or temporal relation a →D b denotes that when a state property a occurs, then after a certain time delay D (which for each relation instance can be specified as any positive real number), state property b will occur. Below, this D is the time step Δt. A dedicated software environment is available to support specification and

simulation. A specification of the model in LEADSTO format can be found in Appendix A. An overview of the basic decision model involving the generation and valuation of emotional responses and feelings is depicted in Fig. 1. This picture also shows formal representations explained in Table 1. However, note that the precise numerical relations are not expressed in this picture, but in the detailed specifications below, through local (dynamic) properties LP0 to LP6. To get an idea of a real life context for the model, consider a simple scenario in which a person can choose among three options of shops to get a certain product, for example, fruit. In the world each of the three options provides this product with a certain satisfaction factor, indicated by characteristics or effectiveness rates λ1, λ2, λ3 (measured between 0 and 1) for the three options respectively. The model describes a situation in which a number of times the person goes to all three shops to find out (and learn) what these satisfaction factors are. In each of the shops the person buys a fraction of the fruit corresponding to the person's tendency or preference to decide for this shop.

Fig. 1. Overview of the model for decision making evaluated from a rationality perspective

The effector state for bi combined with the (possibly stochastic) effectiveness of executing bi in the world (indicated by λi) activates the sensor state for bi via the body loop as described above. By a recursive as-if body loop each of the preparations for bi generates a level of feeling for bi, which is considered a valuation of the prediction of the action effect by internal simulation. This in turn affects the level of the related preparation for bi. Dynamic interaction within these loops results in an equilibrium for the strength of the preparation and of the feeling.
Depending on these values, the option bi is actually activated with a certain intensity. The specific strengths of the connections from the sensory representation to the preparations, and within the recursive as-if body loops, can be innate, or are acquired during lifetime.
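The scenario states that the person buys at each shop a fraction corresponding to the current tendency for that shop. One simple way to realise this is plain normalisation of the effector-state levels; the helper below (`purchase_fractions` is a hypothetical name, the paper does not fix a formula) sketches that reading:

```python
# Hypothetical helper (not from the paper): turn effector-state activation
# levels into the fractions of fruit bought at each of the three shops,
# by plain normalisation so the fractions sum to 1.

def purchase_fractions(effector_levels):
    """Normalise activation levels in [0, 1] into fractions summing to 1."""
    total = sum(effector_levels)
    if total == 0:
        # No tendency at all: split equally as a neutral default.
        return [1.0 / len(effector_levels)] * len(effector_levels)
    return [v / total for v in effector_levels]

print(purchase_fractions([0.8, 0.3, 0.1]))  # shop 1 gets 2/3 of the purchase
```

Any monotone mapping from tendencies to fractions would serve; normalisation is just the simplest choice consistent with the text.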

Table 1. Overview of the (dynamic) states and connections used in the model

world_state(w) — World state w: characterises the current world situation which the person is facing.
sensor_state(w) — Sensor state for w: the person observes the world state through the sensor state, which provides sensory input.
sensor_state(bi) — Sensor state for bi: the person observes the body state through the sensor state, which provides sensory input.
srs(w) — Sensory representation of world state w: internal representation of the sensory input for w.
srs(bi) — Sensory representation of body state bi: internal representation of the (predicted) body state bi.
feeling(bi) — Feeling associated to body state bi: a feeling state feeling(bi) is generated by a predictive as-if body loop, via the sensory representation srs(bi) of body state bi. This provides a valuing of a prediction for the option bi before executing it.
prep_state(bi) — Preparation for an action involving bi: preparation for a response involving body state bi.
effector_state(bi) — Effector state for body state bi: body expression of bi.
ω1i — Weight of the connection from srs(w) to prep_state(bi): one of the connections which can be learnt by Hebbian learning. It models the relation between stimulus w and the directly associated response bi.
ω2i — Weight of the connection from feeling(bi) to prep_state(bi): another connection which can be learnt by Hebbian learning. It models how the generated feeling affects the preparation for response bi.
ω3i — Weight of the connection from prep_state(bi) to srs(bi): the third connection which can be learnt by Hebbian learning. It models how the preparation for response bi affects the (predicted) body state representation for bi.

The computational model is based on such neurological notions as valuing in relation to feeling, body loop and as-if body loop. The adaptivity in the model is based on Hebbian learning.
The detailed specification of the model is presented below, starting with how the world state is sensed. Note that in this detailed specification each local property (LP) is given in two (equivalent) formats. The first format is semiformally expressed in the LEADSTO language and the second format is in the form of a differential equation, for those readers more familiar with that format. For a summarized formal specification in differential equation format see Appendix A; for a summarized formal specification in LEADSTO format, see Appendix B.

More detailed description of the model

As a first step, sensing of the world state w takes place (see also Fig. 1, the arrow in the left upper corner). This means that the activation value of the sensor state for w is updated (gradually) to move its value in the direction of the value of the world state for w. After a few steps these activation values will become (practically) equal. This is expressed in property LP0.
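The LEADSTO difference-equation format and the differential-equation format of each LP are related by a plain Euler discretisation. A minimal sketch of the generic pattern "state y follows source x with speed factor γ" (the names here are illustrative, not from the paper):

```python
# Generic LEADSTO-style update: state y follows source x with speed factor gamma.
# Differential form: dy/dt = gamma * (x - y); the difference form is used below.

def follow(y, x, gamma, dt):
    return y + gamma * (x - y) * dt

y, x = 0.0, 1.0
for _ in range(200):          # 200 Euler steps of size dt = 0.1
    y = follow(y, x, gamma=1.0, dt=0.1)
print(round(y, 3))            # y has (practically) reached x
```

This is exactly the shape of LP0, LP1, LP4 and LP5 below; the remaining properties replace the source x by a combination function of several states.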

LP0 Sensing a world state
If world state property w occurs with level V1 and the sensor state for w has level V2
then the sensor state for w will have level V2 + γ [V1 − V2] Δt.

d sensor_state(w)/dt = γ [world_state(w) − sensor_state(w)]

From the sensor state for w, the sensory representation of w is updated by dynamic property LP1. Again this means that the activation value of the sensory representation for w is updated (gradually) to move its value in the direction of the value of the sensor state of w. After a few steps these values will become (practically) equal. This is expressed in property LP1.

LP1 Generating a sensory representation for a sensed world state
If the sensor state for world state property w has level V1, and the sensory representation for w has level V2
then the sensory representation for w will have level V2 + γ [V1 − V2] Δt.

d srs(w)/dt = γ [sensor_state(w) − srs(w)]

To get an updated value of the preparation state for bi, there are impacts of two other states (see the two arrows pointing at the preparation state in Fig. 1, right upper corner): the sensory representation of w and the feeling state for bi. This means that a combination of the activation values of these states has impact on the activation value of the preparation state. The combination function h to combine two inputs which activate a subsequent state uses the threshold function th, thus keeping the resultant value in the range [0, 1]:

th(σ, τ, V) = 1 / (1 + e^(−σ(V − τ)))

where σ is the steepness and τ is the threshold value. The combination function is:

h(σ, τ, V1, V2, ω1, ω2) = th(σ, τ, ω1 V1 + ω2 V2)

where V1 and V2 are the current activation levels of the two states and ω1 and ω2 are the connection strengths of the links from these states to the destination state. In this way dynamic property LP2 describes the update of the preparation state for bi from the sensory representation of w and the feeling of bi.
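In code, the threshold and combination functions can be sketched as follows. The plain logistic form of th is assumed here; the paper's exact variant may include an additional normalisation term, so treat this as a sketch:

```python
import math

# Threshold function th(sigma, tau, V): a logistic with steepness sigma and
# threshold tau. (Assumed form; it keeps outputs strictly between 0 and 1.)
def th(sigma, tau, v):
    return 1.0 / (1.0 + math.exp(-sigma * (v - tau)))

# Combination function h: weighted sum of two inputs pushed through th.
def h(sigma, tau, v1, v2, w1, w2):
    return th(sigma, tau, w1 * v1 + w2 * v2)

p = h(2.0, 0.2, 0.9, 0.7, 0.5, 0.8)   # e.g. srs(w) = 0.9, feeling = 0.7
print(0.0 < p < 1.0)                   # True: output stays in (0, 1)
```

Note that the logistic th is monotone in V, so a stronger input or a stronger connection always yields a (weakly) stronger preparation impact.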
LP2 From sensory representation and feeling to preparation of a body state
If a sensory representation for w with level V occurs
and the feeling associated with body state bi has level Vi
and the preparation state for bi has level Ui
and ω1i is the strength of the connection from sensory representation for w to preparation for bi
and ω2i is the strength of the connection from feeling of bi to preparation for bi
and σi is the steepness value for preparation of bi
and τi is the threshold value for preparation of bi
and γ1 is the person's flexibility for bodily responses
then after Δt the preparation state for body state bi will have level Ui + γ1 [h(σi, τi, V, Vi, ω1i, ω2i) − Ui] Δt.

d preparation(bi)/dt = γ1 [h(σi, τi, srs(w), feeling(bi), ω1i, ω2i) − preparation(bi)]

In a similar manner dynamic property LP3 describes the update of the activation level of the sensory representation of a body state from the respective values of the preparation state and the sensor state (see Fig. 1, left lower corner).

LP3 From preparation and sensor state to sensory representation of a body state
If the preparation state for bi has level Xi
and the sensor state for bi has level Vi
and the sensory representation for body state bi has level Ui
and ω3i is the strength of the connection from preparation state for bi to sensory representation for bi
and σi is the steepness value for sensory representation of bi
and τi is the threshold value for sensory representation of bi
and γ2 is the person's flexibility for bodily responses
then after Δt the sensory representation for bi will have level Ui + γ2 [h(σi, τi, Xi, Vi, ω3i, 1) − Ui] Δt.

d srs(bi)/dt = γ2 [h(σi, τi, preparation(bi), sensor_state(bi), ω3i, 1) − srs(bi)]

Dynamic property LP4 describes the update of the activation level of feeling bi from the activation level of the sensory representation (see Fig. 1, in the middle at the bottom).

LP4 From sensory representation of a body state to feeling
If the sensory representation for body state bi has level V1, and bi is felt with level V2
then bi will be felt with level V2 + γ [V1 − V2] Δt.

d feeling(bi)/dt = γ [srs(bi) − feeling(bi)]

LP5 describes how the activation level of the effector state for bi is updated from the activation level of the preparation state (see Fig. 1, right upper corner).

LP5 From preparation to effector state
If the preparation state for bi has level V1, and the effector state for body state bi has level V2
then the effector state for body state bi will have level V2 + γ [V1 − V2] Δt.

d effector_state(bi)/dt = γ [preparation_state(bi) − effector_state(bi)]

LP6 describes the update of the activation level of the sensor state for bi from the activation level of the effector state for bi (see Fig. 1, the arrow from the right upper corner to the left lower corner).
LP6 From effector state to sensor state of a body state
If the effector state for bi has level V1
and λi is the world preference/recommendation for the option bi
and the sensor state for body state bi has level V2
then the sensor state for bi will have level V2 + γ [λi V1 − V2] Δt.

d sensor_state(bi)/dt = γ [λi effector_state(bi) − sensor_state(bi)]
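The basic loop LP0–LP6 for a single option can be sketched as one synchronous Euler step. State names follow Table 1; the step size, speed factors and the steepness/threshold pairs are illustrative values (the source leaves some of them ambiguous), and the logistic form of th is assumed:

```python
import math

def th(sigma, tau, v):
    # Logistic threshold function (assumed form).
    return 1.0 / (1.0 + math.exp(-sigma * (v - tau)))

# One Euler step (step size dt) of the basic loop LP0-LP6 for one option b.
# w is the world state level, lam the world characteristic for the option,
# w1/w2/w3 the connection strengths omega_1i, omega_2i, omega_3i.
def step(s, w, lam, w1, w2, w3, dt=0.1,
         gamma=1.0, gamma1=0.5, gamma2=1.0,
         sig_p=2.0, tau_p=0.2, sig_s=2.0, tau_s=0.3):
    n = dict(s)
    n["sensor_w"] = s["sensor_w"] + gamma * (w - s["sensor_w"]) * dt                    # LP0
    n["srs_w"] = s["srs_w"] + gamma * (s["sensor_w"] - s["srs_w"]) * dt                 # LP1
    n["prep"] = s["prep"] + gamma1 * (th(sig_p, tau_p,
                    w1 * s["srs_w"] + w2 * s["feeling"]) - s["prep"]) * dt              # LP2
    n["srs_b"] = s["srs_b"] + gamma2 * (th(sig_s, tau_s,
                    w3 * s["prep"] + s["sensor_b"]) - s["srs_b"]) * dt                  # LP3
    n["feeling"] = s["feeling"] + gamma * (s["srs_b"] - s["feeling"]) * dt              # LP4
    n["effector"] = s["effector"] + gamma * (s["prep"] - s["effector"]) * dt            # LP5
    n["sensor_b"] = s["sensor_b"] + gamma * (lam * s["effector"] - s["sensor_b"]) * dt  # LP6
    return n

state = {k: 0.0 for k in ("sensor_w", "srs_w", "prep", "srs_b",
                          "feeling", "effector", "sensor_b")}
for _ in range(500):
    state = step(state, w=1.0, lam=0.9, w1=0.5, w2=0.8, w3=0.8)
print(all(0.0 <= v <= 1.0 for v in state.values()))  # True: levels stay in [0, 1]
```

After a few hundred steps the loop settles into the equilibrium described in the text, with preparation and feeling mutually sustaining each other.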

Fig. 2. Output of the basic model without learning

Just to get the idea, Fig. 2 shows some of the variables for a simple instance of the scenario and the output of the basic model in a deterministic environment without learning (further parameters are as for Fig. 3). This figure only shows the output of the effector states for the options bi over time, without learning of any of the connections, for given deterministic world characteristics or preferences for the available options. The horizontal axis represents time (corresponding to simulation steps). The vertical axis represents the activation values of the depicted states, which can be any value between 0 and 1. The upper left hand graph indicates the recurring world state (WS) in which the decision problem and the options occur. This serves as input. The symbol V (or V1, V2, V3) indicates the activation value, which for this world state only switches from 0 to 1 and back. The upper right hand graph shows the (different) world characteristics λi for the three options. They are preset as characteristics for any given scenario. The lower left hand graph shows that the strengths ω1i of all three connections from srs(w) to prep_state(bi) are the same for this case and constant over the three options (no learning takes place). The lower right hand graph shows that the tendencies to choose the three responses or decision options b1, b2 or b3 for w remain the same (due to the fact that no learning takes place). For the considered case study it was assumed that three options are available to the agent, and the objective is to see how rationally an agent makes its decisions using a given human-like adaptive model: under deterministic as well as stochastic world characteristics, and for static as well as changing worlds.
The dynamic properties LP7 to LP9 describe a learning mechanism for the connection strengths
(A) from sensory representation for w to preparation for option bi
(B) from feeling bi to preparation for bi
(C) from preparation for bi to sensory representation of bi

These have been explored separately (A), (B), or (C), and in combination (ABC). The first type of learning (A) corresponds to learning as considered traditionally for a connection between stimulus and response. The other two types of learning (B and C) concern strengthening of the as-if body loop. This models the association of the feeling to an option and how this feeling has impact on preparing for this option. The learning model used is Hebbian learning (Hebb, 1949). This is based on the principle that strengthening of a connection between neurons may take place over time when both nodes are often active simultaneously ('neurons that fire together wire together'). The principle goes back to Hebb (1949), but has recently gained enhanced interest through more extensive empirical support and more advanced mathematical formulations (e.g., Gerstner and Kistler, 2002). More specifically, here it is assumed that the strength ω of a connection between states S1 and S2 is adapted using the following Hebbian learning rule, taking into account a maximal connection strength 1, a learning rate η > 0, an extinction rate ζ (usually small), and activation levels V1 and V2 (between 0 and 1) of the two nodes involved. The first expression is in differential equation format, the second one in difference equation format:

dω/dt = η V1 V2 (1 − ω) − ζ ω

ω(t + Δt) = ω(t) + [η V1(t) V2(t) (1 − ω(t)) − ζ ω(t)] Δt

Such Hebbian learning rules can be found, for example, in (Gerstner and Kistler, 2002, p. 406). By the factor 1 − ω the learning rule keeps the level of ω bounded by 1 (which could be replaced by any other positive number). When the extinction rate is relatively low, the upward changes during learning are proportional to both V1 and V2, and maximal learning takes place when both are 1. Whenever one of these activation levels is 0 (or close to 0), extinction takes over and ω slowly decreases (unlearning).
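The difference-equation format transcribes directly into code. The rates below are illustrative (η = 0.4 matches the simulations reported later; the extinction rate shown is only an example, as the source is ambiguous there):

```python
# Hebbian update with maximal strength 1, learning rate eta and
# extinction rate zeta, in the difference-equation format of the text.
def hebbian(omega, v1, v2, eta=0.4, zeta=0.05, dt=0.1):
    return omega + (eta * v1 * v2 * (1.0 - omega) - zeta * omega) * dt

# Both nodes highly active: the connection strengthens but stays below 1.
w = 0.5
for _ in range(1000):
    w = hebbian(w, 1.0, 1.0)
print(round(w, 3))   # close to eta / (eta + zeta) = 0.889

# One node silent: extinction slowly drives the strength down (unlearning).
w2 = 0.5
for _ in range(1000):
    w2 = hebbian(w2, 1.0, 0.0)
print(w2 < 0.5)      # True
```

With both activations fixed at 1, the rule converges to the equilibrium η/(η + ζ) where learning and extinction balance, which illustrates why the strength stays bounded by 1.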
First, based on the Hebbian learning mechanism described above, LP7 models the update of the strength of the connection from sensory representation of w to preparation of bi (type A; see Fig. 1, the arrow with label ω1i in the upper part, in the middle).

LP7 Hebbian learning (A): connection from sensory representation of w to preparation of bi
If the connection from sensory representation of w to preparation of bi has strength ω1i
and the sensory representation for w has level V
and the preparation of bi has level Vi
and the learning rate from sensory representation of w to preparation of bi is η
and the extinction rate from sensory representation of w to preparation of bi is ζ
then after Δt the connection from sensory representation of w to preparation of bi will have strength ω1i + (η V Vi (1 − ω1i) − ζ ω1i) Δt.

dω1i/dt = η srs(w) preparation(bi) (1 − ω1i) − ζ ω1i

Similarly the learning of the two connections involved in the as-if body loop (B and C) is specified in LP8 and LP9 (see Fig. 1, the arrows with labels ω2i (B) and ω3i (C)).

LP8 Hebbian learning (B): connection from feeling bi to preparation of bi
If the connection from feeling associated with body state bi to preparation of bi has strength ω2i
and the feeling for bi has level Vi
and the preparation of bi has level Ui
and the learning rate from feeling of bi to preparation of bi is η
and the extinction rate from feeling of bi to preparation of bi is ζ
then after Δt the connection from feeling of bi to preparation of bi will have strength ω2i + (η Vi Ui (1 − ω2i) − ζ ω2i) Δt.

dω2i/dt = η feeling(bi) preparation(bi) (1 − ω2i) − ζ ω2i

LP9 Hebbian learning (C): connection from preparation of bi to sensory representation of bi
If the connection from preparation of bi to sensory representation of bi has strength ω3i
and the preparation of bi has level Vi
and the sensory representation of bi has level Ui
and the learning rate from preparation of bi to sensory representation of bi is η
and the extinction rate from preparation of bi to sensory representation of bi is ζ
then after Δt the connection from preparation of bi to sensory representation of bi will have strength ω3i + (η Vi Ui (1 − ω3i) − ζ ω3i) Δt.

dω3i/dt = η preparation(bi) srs(bi) (1 − ω3i) − ζ ω3i

3 Simulation Results for a Deterministic World

In this section some of the simulation results, performed using numerical software, are described in detail. A real life scenario context that can be kept in mind to understand what is happening is the following.

Scenario context. At regular times (for example, every week) a person faces the decision problem where to buy a certain type of fruit. There are three options for shops to choose from. Each option i has its own characteristics (price and quality, for example); based on these characteristics, buying in this shop provides a satisfaction factor λi between 0 and 1. For the deterministic case it is just this value that is obtained for this option. For the stochastic case a probability distribution around this λi will determine the outcome.
For a static world the λ i remain constant, whereas for a dynamic world they may change over time. By going to a shop the person experiences the related outcome λ i and learns from this. To find out which shop is most satisfactory, each time the person decides to go to all three of them to buy some fraction of the amount of fruit needed. This fraction corresponds to the person s tendency or preference to decide for this shop. Due to the learning this tendency changes over time, in favour of the shop(s) which provide(s) the highest satisfaction. The simulation results address different scenarios reflecting different types of world characteristics, from worlds that are deterministic to stochastic, and from a static to a changing world. Moreover, learning the connections was done one at a time (A), (B), (C), and learning multiple connections simultaneously (ABC). An overall summary of the results is given in Table 2. Note that this table only contains the values of the connection strengths and activation levels of the effector states after completion of

simulation experiments: at the end time. In contrast, the current and next section show the processes performed to achieve these final values. Results for the rationality measures are presented in the next section. For all simulation results shown, time is on the horizontal axis whereas the vertical axis shows the activation level of the different states. The step size for all simulations is Δt = 0.1. Fig. 3 shows simulation results for the model under constant, deterministic world characteristics: λ1 = 0.9, λ2 = 0.2, and λ3 = 0.1. Other parameters are set as: learning rate η = 0.4, extinction rate ζ = 0.05, initial connection strengths ω2i = ω3i = 0.8, speed factors γ = 1, γ1 = 0.5, γ2 = 1, steepness σ = 2 and threshold τ = 0.2 for the preparation state, and σ = 10 and τ = 0.3 for the sensory representation of bi. For the initial 8 time units the stimulus w is kept 1 and for the next 7 time units it is kept 0, and the same sequence of activation and deactivation of the stimulus is repeated for the rest of the simulation.

Fig. 3. Deterministic World: (a) connection strengths (A); (b) effector states for bi. Initial values ω11 = ω12 = ω13 = 0.5; η = 0.4, ζ = 0.05

Moreover, Fig. 3 depicts the situation in which only one type of links (ω1i) is learned, as specified in LP7, using the Hebbian approach (A) for the connection from sensory representation of w to preparation state for bi. It is shown that the model adapts the connection strengths of the links ω1i according to the world characteristics given by the λi. So ω11 strengthens more and more over time, resulting in a higher activation level of the effector state for b1 compared to the activation levels of the effector states for the other two options b2 and b3. Fig. 4 and Fig. 5 show the simulation results while learning is performed for the links (B) from feeling to preparation state for bi and (C) from preparation state to sensory representation of bi, respectively.
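Putting the model of Section 2 and learning rule LP7 together, the type-(A) deterministic experiment can be sketched compactly. For simplicity the stimulus is kept constantly on rather than alternating, and the extinction rate and the steepness for srs(bi) are illustrative where the source is ambiguous; the qualitative outcome (ω11 ends highest because λ1 is highest) is what the sketch demonstrates:

```python
import math

def th(sigma, tau, v):
    # Logistic threshold function (assumed form).
    return 1.0 / (1.0 + math.exp(-sigma * (v - tau)))

# Sketch of the type-(A) experiment: three options with lambda = 0.9, 0.2, 0.1
# and only the connections omega_1i (srs(w) -> preparation) being learned.
lam = [0.9, 0.2, 0.1]
w1 = [0.5, 0.5, 0.5]                 # learned strengths omega_1i
w2c, w3c = 0.8, 0.8                  # fixed omega_2i, omega_3i
eta, zeta, dt = 0.4, 0.05, 0.1
srs_w = 0.0
prep, srs_b, feel, eff, sens_b = ([0.0] * 3 for _ in range(5))
for _ in range(3000):                # 300 time units with Delta t = 0.1
    srs_w += (1.0 - srs_w) * dt      # stimulus constantly on (simplification)
    for i in range(3):
        prep[i] += 0.5 * (th(2.0, 0.2, w1[i] * srs_w + w2c * feel[i]) - prep[i]) * dt
        srs_b[i] += (th(2.0, 0.3, w3c * prep[i] + sens_b[i]) - srs_b[i]) * dt
        feel[i] += (srs_b[i] - feel[i]) * dt
        eff[i] += (prep[i] - eff[i]) * dt
        sens_b[i] += (lam[i] * eff[i] - sens_b[i]) * dt
        w1[i] += (eta * srs_w * prep[i] * (1.0 - w1[i]) - zeta * w1[i]) * dt  # LP7
print(w1[0] > w1[1] > w1[2])         # True: learned strengths follow the lambdas
```

Since every coupling in the loop is monotone, a higher λi yields a higher feeling, hence a higher preparation, hence a stronger Hebbian update, which is the mechanism behind the ordering visible in Fig. 3.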

Fig. 4 Deterministic World: (a) Connection strengths (B); (b) Effector states for b_i. Initial values ω_21 = ω_22 = ω_23 = 0.5; η = 0.4, ζ = 0.5.

Fig. 5 Deterministic World: (a) Connection strengths (C); (b) Effector states for b_i. Initial values ω_31 = ω_32 = ω_33 = 0.5; η = 0.4, ζ = 0.5.

Fig. 6 shows the results when Hebbian learning is applied to all links simultaneously (ABC). These results show that for deterministic world characteristics the agent successfully and rationally adapts the connection strengths in all four cases, as these connection strengths become more in line with the world characteristics.

Fig. 6. Deterministic World: Connection strengths (ABC) and effector states for b_i. Initial values ω_11 = ω_12 = ω_13 = 0.5, ω_21 = ω_22 = ω_23 = 0.8, ω_31 = ω_32 = ω_33 = 0.8; η_i = 0.4, ζ_i = 0.3.

4 Simulation Results for a Stochastic World

Other experiments were carried out for a stochastic world, with the same four cases as mentioned earlier. To simulate the stochastic world, probability density functions were defined for the λ_i according to a normal distribution. Using these, random numbers were generated for the λ_i, limited to the interval [0, 1], with means μ_1 = 0.9, μ_2 = 0.2 and μ_3 = 0.1, respectively. Furthermore, the standard deviation for all λ_i was taken 0.1. Fig. 7 shows the world state w and the stochastic world characteristics λ_i. Figures 8 to 11 show the simulation results when learning is performed for the links (A) from sensory representation of w to preparation state for b_i, (B) from feeling to preparation state for b_i, and (C) from preparation state to sensory representation of b_i, one at a time, and (ABC) all three simultaneously.

Fig. 7. Stochastic World: world state w and world characteristics λ_1, λ_2, λ_3.
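The stochastic world generation described above can be sketched as follows; clipping the samples to [0, 1] is one plausible way to realise the limiting of the values mentioned in the text.

```python
import random

MUS, SIGMA = (0.9, 0.2, 0.1), 0.1  # means mu_i and standard deviation from the text

def sample_lambdas(rng=random):
    """One draw of the stochastic world characteristics lambda_i:
    normally distributed around mu_i, limited to the interval [0, 1]."""
    return [min(1.0, max(0.0, rng.gauss(mu, SIGMA))) for mu in MUS]

random.seed(7)
draws = [sample_lambdas() for _ in range(2000)]
means = [sum(d[i] for d in draws) / len(draws) for i in range(3)]
# the empirical means stay close to (0.9, 0.2, 0.1)
```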

Fig. 8 Stochastic World: (a) Connection strengths (A); (b) Effector states. Initial values ω_11 = ω_12 = ω_13 = 0.5; η = 0.4, ζ = 0.5.

Fig. 9 Stochastic World: (a) Connection strengths (B); (b) Effector states. Initial values ω_21 = ω_22 = ω_23 = 0.5; η = 0.4, ζ = 0.5.

Fig. 10 Stochastic World: (a) Connection strengths (C); (b) Effector states. Initial values ω_31 = ω_32 = ω_33 = 0.5; η = 0.4, ζ = 0.5.

Fig. 11. Stochastic World: Connection strengths (ABC) and effector states. Initial values ω_11 = ω_12 = ω_13 = 0.5, ω_21 = ω_22 = ω_23 = 0.8, ω_31 = ω_32 = ω_33 = 0.8; η_i = 0.4, ζ_i = 0.3.

It can be seen from these results that also in a stochastic scenario the agent model successfully learnt the connections and adapted both the connections and the effector states to the world characteristics, with results quite similar to those for a deterministic world.

5 Simulation Results for a Changing Stochastic World

Another scenario was explored in which the (stochastic) world characteristics change drastically: from μ_1 = 0.9, μ_2 = 0.2 and μ_3 = 0.1 to μ_1 = 0.1, μ_2 = 0.2 and μ_3 = 0.9 for the λ_i, respectively, with a standard deviation of 0.1 for all of them. Fig. 12 to Fig. 15 show the results for this scenario. The results show that the agent successfully adapted to the changing world characteristics over time, as the final effector states reflect the changed world characteristics. The initial settings in this experiment were taken from the previous simulation results (Section 4) to keep the continuity of the experiment. It can be observed that the connection strengths for option 3 become higher than those for the other options, and consequently the value of the effector state for b_3 becomes higher than for the other two by the end of the experiment.
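The drastic change in world characteristics can be sketched as a schedule on top of the sampler; placing the change at time point 250 is taken from the rationality plots in Section 6 and is otherwise an assumption.

```python
import random

SIGMA, CHANGE_POINT = 0.1, 250

def lambdas_at(t, rng=random):
    """Stochastic lambda_i whose means switch at the change point:
    (0.9, 0.2, 0.1) before it and (0.1, 0.2, 0.9) after it."""
    mus = (0.9, 0.2, 0.1) if t < CHANGE_POINT else (0.1, 0.2, 0.9)
    return [min(1.0, max(0.0, rng.gauss(mu, SIGMA))) for mu in mus]

random.seed(1)
before = [lambdas_at(0) for _ in range(1000)]
after = [lambdas_at(300) for _ in range(1000)]
mean = lambda rows, i: sum(r[i] for r in rows) / len(rows)
# option 1 is most satisfactory before the change, option 3 after it
```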

Fig. 12 Changing Stochastic World: (a) Connection strengths (A); (b) Effector states. Initial values ω_11 = 0.78, ω_12 = 0.53, ω_13 = 0.52; η = 0.4, ζ = 0.5.

Fig. 13 Changing Stochastic World: (a) Connection strengths (B); (b) Effector states. Initial values ω_21 = 0.88, ω_22 = 0.58, ω_23 = 0.47; η = 0.4, ζ = 0.5.

Fig. 14 Changing Stochastic World: (a) Connection strengths (C); (b) Effector states. Initial values ω_31 = 0.87, ω_32 = 0.3, ω_33 = 0.23; η = 0.4, ζ = 0.5.

Fig. 15. Changing Stochastic World: Connection strengths (ABC) and effector states. Initial values ω_11 = 0.8, ω_12 = 0.55, ω_13 = 0.54, ω_21 = ω_31 = 0.84, ω_22 = ω_32 = 0.3, ω_23 = ω_33 = 0.29; η_i = 0.4, ζ_i = 0.1.

Note that Table 2 contains the values of the different connection strengths and the activation levels of the effector states after completion of the simulation experiments (at the end time).

Table 2. Overview of the simulation results for all links (A), (B), (C) and (ABC): the final connection strengths ω_x1, ω_x2, ω_x3 and effector state activation levels ES_1, ES_2, ES_3 for the static, stochastic, and changed-world scenarios.

6 Evaluating the Models on Rationality

In the previous sections it was shown that the agent model behaves rationally in different scenarios. These scenarios and their different cases were elaborated in detail there, but the results were assessed with respect to their rationality in a qualitative and rather informal manner. For example, no attempt was made to assign an extent or level to the rationality observed in these experiments. The current section addresses this; to this end two formally defined measures to assess the extent of rationality are introduced. The notion of rationality aimed at in this paper concerns whether the agent makes the choices that are most beneficial to it in the given environment. For example, it does not take into account the extent to which the agent has information about the environment. So, if the agent has had no experiences yet with the given environment (an extreme form of incomplete information), then according to the notion considered here it behaves totally irrationally. Only when the agent gathers more information about the environment, by having experiences with choices previously made, does its behaviour become more and more rational, as an adaptation to its environment. The focus then is on how rational the agent becomes over time in such an adaptation process. Of the two rationality measures considered here, one is based on a discrete scale and the other on a continuous scale.

Method 1 (Discrete Rationality Measure). The first method is based on the following point of departure: an agent whose effector state activation levels for the different options have the same respective order as the world characteristics λ_i is considered highly rational. So in this method the rank of the average value of λ_i at any given time point is determined and compared with the rank of the respective effector state level. More specifically, the following formula is used to determine the irrationality factor IF:

IF = Σ_{i=1}^{n} | rank(λ_i) − rank(ES_i) |

where n is the number of options available. This irrationality factor indicates to which extent the agent is behaving rationally, in the sense that the higher the irrationality factor IF, the lower the rationality of the agent. It is assumed that the ranking is unique: no two values are assigned the same rank. To calculate the discrete rationality factor DRF, the maximum possible irrationality factor Max.IF can be determined as follows:

Max.IF = 2 · floor(n/2) · ceiling(n/2)

Here ceiling(x) is the smallest integer ≥ x and floor(x) the largest integer ≤ x. Note that Max.IF is approximately ½n². As a higher IF means lower rationality, the discrete rationality factor DRF is calculated as:

DRF = 1 − IF / Max.IF

On this scale, for each n only a limited number of values is possible; for example, for n = 3 three values are possible: 0, 0.5, and 1. In general ½·Max.IF + 1 values are possible, which is approximately ¼n² + 1. As an example, suppose during a simulation average values λ_1 = 0.7636, λ_2 = 0.2344, and a lower λ_3 are given at a given time point. Suppose the world's ranks are 3, 2, 1 for λ_1, λ_2, λ_3 and the agent's ranks are 2, 3, 1 for ES_1, ES_2, ES_3, respectively. Then according to the given formulas IF = 2, Max.IF = 4 and DRF = 0.5. So in this particular case, at this time point, the agent is behaving 50% rationally.
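Under the reconstruction of the formulas above, the discrete measure can be computed as follows; the rank helper (rank 1 for the lowest value) and the example numbers are chosen only to reproduce the ranks in the text's example, not taken from the paper's data.

```python
def ranks(values):
    """Rank 1 for the lowest value up to n for the highest (ties are not
    handled, as the text assumes uniqueness of ranking)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def drf(lambdas, effector_states):
    """Discrete rationality factor DRF = 1 - IF / Max.IF, with
    IF = sum_i |rank(lambda_i) - rank(ES_i)| and
    Max.IF = 2 * floor(n/2) * ceiling(n/2)."""
    n = len(lambdas)
    if_value = sum(abs(a - b) for a, b in zip(ranks(lambdas), ranks(effector_states)))
    max_if = 2 * (n // 2) * ((n + 1) // 2)
    return 1.0 - if_value / max_if

# World ranks 3,2,1 versus agent ranks 2,3,1: IF = 2, Max.IF = 4, DRF = 0.5
print(drf([0.9, 0.2, 0.1], [0.4, 0.5, 0.1]))
```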
Method 2 (Continuous Rationality Measure). The second method is based on the following point of departure: an agent which receives the maximal benefit is the highly rational agent. This is only possible if ES_i is 1 for the option whose λ_i is the highest. In this method, to calculate the continuous rationality factor CRF, first, to account for the effort spent in performing actions, the effector state values ES_i are normalised as follows:

nES_i = ES_i / Σ_{j=1}^{n} ES_j

Here n is the number of options available. Based on this the continuous rationality factor CRF is determined as follows, with Max(λ_i) the maximal value of the different λ_i:

CRF = Σ_{i=1}^{n} nES_i · λ_i / Max(λ_i)
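Under the reconstructed normalisation nES_i = ES_i / Σ_j ES_j (one plausible reading of the garbled original), the continuous measure can be sketched as:

```python
def crf(lambdas, effector_states):
    """Continuous rationality factor CRF = sum_i nES_i * lambda_i / Max(lambda_i),
    with the effector states normalised to fractions nES_i (assumed reading)."""
    total = sum(effector_states)
    n_es = [es / total for es in effector_states]
    return sum(n * lam for n, lam in zip(n_es, lambdas)) / max(lambdas)

# Putting all effort on the best option yields CRF = 1 (fully rational)
print(crf([0.9, 0.2, 0.1], [1.0, 0.0, 0.0]))
```

Unlike the discrete measure, this factor varies continuously: spreading effort uniformly over the options, for instance, yields an intermediate value rather than one of a few discrete levels.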

This method enables measuring in a continuous manner to which extent the agent behaves rationally. For the example used to illustrate the previous method, CRF = 0.6633, so according to this method the agent is considered to behave 66.33% rationally in the given world. Fig. 16 to Fig. 19 show the two types of rationality (depicted as percentages) of the agent for the different scenarios with the changing stochastic world. In these figures the first 250 time points show the rationality achieved by the agent just before the world characteristics change drastically, for the simulations shown in Section 5.

Fig. 16. Rationality during learning ω_1i (A). Fig. 17. Rationality during learning ω_2i (B). Fig. 18. Rationality during learning ω_3i (C). Fig. 19. Rationality for learning ω_1i, ω_2i, ω_3i (ABC).

From time point 250 onwards, the figures show the rationality of the agent after the change has been made (see Section 5). It is clear from the results shown in Fig. 16 to Fig. 19 that the rationality factor of the agent improves over time in all four cases for the given world.

7 Discussion

This paper focused on how the extent of rationality of a biologically inspired, human-like adaptive decision model can be analysed. In particular, this was explored for variants of a decision model based on valuing of predictions involving feeling states generated in the amygdala; e.g., (Bechara, Damasio, and Damasio, 2003; Bechara, Damasio, Damasio, and Lee, 1999; Damasio, 1994; Damasio, 2004; Montague and Berns, 2002; Janak and Tye, 2015; Morrison and Salzman, 2010; Ousdal, Specht, Server, Andreassen, Dolan, and Jensen, 2014; Pessoa, 2010; Rangel, Camerer, and Montague, 2008). The model was made adaptive using Hebbian learning; cf. (Gerstner and Kistler, 2002; Hebb, 1949). Note that this study limits itself to a biologically inspired model for natural or human-like decision making. A preliminary and shorter version of the work discussed here was presented in (Treur and Umair, 2011).

To assess the extent of rationality with respect to given world characteristics, two measures were introduced, and using these measures the extents of rationality of the different models over time were analysed. The point of departure for the underlying notion of rationality chosen is that the more the agent makes the most beneficial choices for the given environment, the more rational it is. It was shown how a high level of rationality, according to these measures, was obtained by the learning processes, and how after a major world change this rationality level is re-obtained after some delay. It turned out that emotion-related valuing of predictions in the amygdala, as a basis for adaptive decision making according to Hebbian learning, satisfies reasonable rationality measures. In this sense the model shows how in human-like decision making emotions are used as a vehicle to obtain rational decisions. This contrasts with the traditional view that emotions and rationality disturb or compete with each other, especially in decision processes. More and more findings from neuroscience show that this traditional view is an inadequate way of conceptualising processes in the brain. In (Bosse, Treur, and Umair, 2012) it is shown how the presented model can be extended with social interaction and how this also contributes to rationality for the case of collective decision making. Moreover, in (Treur and Umair, 2012) it is discussed how some other types of models can be evaluated according to the rationality measures used here.

References

Bechara, A., Damasio, H., and Damasio, A.R. (2003). Role of the Amygdala in Decision-Making. Annals of the New York Academy of Sciences 985.
Bechara, A., Damasio, H., Damasio, A.R., and Lee, G.P. (1999). Different Contributions of the Human Amygdala and Ventromedial Prefrontal Cortex to Decision-Making. Journal of Neuroscience 19.
Bosse, T., Hoogendoorn, M., Memon, Z.A., Treur, J., and Umair, M. (2010). An Adaptive Model for Dynamics of Desiring and Feeling Based on Hebbian Learning. In: Yao, Y., Sun, R., Poggio, T., Liu, J., Zhong, N., and Huang, J. (eds.), Proceedings of the Second International Conference on Brain Informatics, BI'10. Lecture Notes in Artificial Intelligence, vol. 6334. Springer Verlag.
Bosse, T., Jonker, C.M., and Treur, J. (2008). Formalisation of Damasio's Theory of Emotion, Feeling and Core Consciousness. Consciousness and Cognition 17, 94-113.
Bosse, T., Jonker, C.M., Meij, L. van der, and Treur, J. (2007). A Language and Environment for Analysis of Dynamics by Simulation. International Journal of Artificial Intelligence Tools 16.
Bosse, T., Treur, J., and Umair, M. (2012). Rationality for Adaptive Collective Decision Making Based on Emotion-Related Valuing and Contagion. In: W. Ding et al. (eds.), Modern Advances in Intelligent Systems and Tools, Proceedings of the 25th International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2012, Part II. Studies in Computational Intelligence, vol. 431. Springer Verlag.
Damasio, A. (1994). Descartes' Error: Emotion, Reason and the Human Brain. Papermac, London.
Damasio, A. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace, New York.
Damasio, A. (2004). Looking for Spinoza. Vintage Books, London.
Damasio, A. (2010). Self Comes to Mind: Constructing the Conscious Brain. Pantheon Books, New York.
Forgas, J.P., Laham, S.M., and Vargas, P.T. (2005). Mood effects on eyewitness memory: Affective influences on susceptibility to misinformation. Journal of Experimental Social Psychology 41.
Gerstner, W., and Kistler, W.M. (2002). Mathematical formulations of Hebbian learning. Biological Cybernetics 87.
Ghashghaei, H.T., Hilgetag, C.C., and Barbas, H. (2007). Sequence of information processing for emotions based on the anatomic dialogue between prefrontal cortex and amygdala. Neuroimage 34.
Hebb, D.O. (1949). The Organization of Behaviour. John Wiley & Sons, New York.
Hesslow, G. (2002). Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences 6.
Janak, P.H., and Tye, K.M. (2015). From circuits to behaviour in the amygdala. Nature 517.
Likhtik, E., and Paz, R. (2015). Amygdala-prefrontal interactions in (mal)adaptive learning. Trends in Neurosciences 38.
Lindquist, K.A., and Barrett, L.F. (2012). A functional architecture of the human brain: emerging insights from the science of emotion. Trends in Cognitive Sciences 16.
Montague, P.R., and Berns, G.S. (2002). Neural economics and the biological substrates of valuation. Neuron 36.
Morrison, S.E., and Salzman, C.D. (2010). Re-valuing the amygdala. Current Opinion in Neurobiology 20.
Murray, E.A. (2007). The amygdala, reward and emotion. Trends in Cognitive Sciences 11.
Ousdal, O.T., Specht, K., Server, A., Andreassen, O.A., Dolan, R.J., and Jensen, J. (2014). The human amygdala encodes value and space during decision making. Neuroimage.
Pearson, J.M., Watson, K.K., and Platt, M.L. (2014). Decision Making: The Neuroethological Turn. Neuron 82.
Pessoa, L. (2010). Emotion and cognition and the amygdala: From "what is it?" to "what's to be done?". Neuropsychologia 48.
Rangel, A., Camerer, C., and Montague, P.R. (2008). A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience 9.
Rudebeck, P.H., and Murray, E.A. (2014). The Orbitofrontal Oracle: Cortical Mechanisms for the Prediction and Evaluation of Specific Behavioral Outcomes. Neuron 84.
Salzman, C.D., and Fusi, S. (2010). Emotion, Cognition, and Mental State Representation in Amygdala and Prefrontal Cortex. Annual Review of Neuroscience 33.
Sugrue, L.P., Corrado, G.S., and Newsome, W.T. (2005). Choosing the greater of two goods: neural currencies for valuation and decision making. Nature Reviews Neuroscience 6.


Choosing the Greater of Two Goods: Neural Currencies for Valuation and Decision Making

Choosing the Greater of Two Goods: Neural Currencies for Valuation and Decision Making Choosing the Greater of Two Goods: Neural Currencies for Valuation and Decision Making Leo P. Surgre, Gres S. Corrado and William T. Newsome Presenter: He Crane Huang 04/20/2010 Outline Studies on neural

More information

2 Summary of Part I: Basic Mechanisms. 4 Dedicated & Content-Specific

2 Summary of Part I: Basic Mechanisms. 4 Dedicated & Content-Specific 1 Large Scale Brain Area Functional Organization 2 Summary of Part I: Basic Mechanisms 2. Distributed Representations 5. Error driven Learning 1. Biological realism w = δ j ai+ aia j w = δ j ai+ aia j

More information

Intelligent Control Systems

Intelligent Control Systems Lecture Notes in 4 th Class in the Control and Systems Engineering Department University of Technology CCE-CN432 Edited By: Dr. Mohammed Y. Hassan, Ph. D. Fourth Year. CCE-CN432 Syllabus Theoretical: 2

More information

Learning in neural networks

Learning in neural networks http://ccnl.psy.unipd.it Learning in neural networks Marco Zorzi University of Padova M. Zorzi - European Diploma in Cognitive and Brain Sciences, Cognitive modeling", HWK 19-24/3/2006 1 Connectionist

More information

A Computational Model of the Relation between Regulation of Negative Emotions and Mood

A Computational Model of the Relation between Regulation of Negative Emotions and Mood A Computational Model of the Relation between Regulation of Negative Emotions and Mood Altaf Hussain Abro, Michel C.A. Klein, Adnan R. Manzoor, Seyed Amin Tabatabaei, and Jan Treur Vrije Universiteit Amsterdam,

More information

Theoretical Neuroscience: The Binding Problem Jan Scholz, , University of Osnabrück

Theoretical Neuroscience: The Binding Problem Jan Scholz, , University of Osnabrück The Binding Problem This lecture is based on following articles: Adina L. Roskies: The Binding Problem; Neuron 1999 24: 7 Charles M. Gray: The Temporal Correlation Hypothesis of Visual Feature Integration:

More information

Operant matching. Sebastian Seung 9.29 Lecture 6: February 24, 2004

Operant matching. Sebastian Seung 9.29 Lecture 6: February 24, 2004 MIT Department of Brain and Cognitive Sciences 9.29J, Spring 2004 - Introduction to Computational Neuroscience Instructor: Professor Sebastian Seung Operant matching Sebastian Seung 9.29 Lecture 6: February

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Intelligent Agents Chapter 2 & 27 What is an Agent? An intelligent agent perceives its environment with sensors and acts upon that environment through actuators 2 Examples of Agents

More information

A neural circuit model of decision making!

A neural circuit model of decision making! A neural circuit model of decision making! Xiao-Jing Wang! Department of Neurobiology & Kavli Institute for Neuroscience! Yale University School of Medicine! Three basic questions on decision computations!!

More information

Intelligent Agents. CmpE 540 Principles of Artificial Intelligence

Intelligent Agents. CmpE 540 Principles of Artificial Intelligence CmpE 540 Principles of Artificial Intelligence Intelligent Agents Pınar Yolum pinar.yolum@boun.edu.tr Department of Computer Engineering Boğaziçi University 1 Chapter 2 (Based mostly on the course slides

More information

CS324-Artificial Intelligence

CS324-Artificial Intelligence CS324-Artificial Intelligence Lecture 3: Intelligent Agents Waheed Noor Computer Science and Information Technology, University of Balochistan, Quetta, Pakistan Waheed Noor (CS&IT, UoB, Quetta) CS324-Artificial

More information

Introduction to Computational Neuroscience

Introduction to Computational Neuroscience Introduction to Computational Neuroscience Lecture 5: Data analysis II Lesson Title 1 Introduction 2 Structure and Function of the NS 3 Windows to the Brain 4 Data analysis 5 Data analysis II 6 Single

More information

Brain Mechanisms Explain Emotion and Consciousness. Paul Thagard University of Waterloo

Brain Mechanisms Explain Emotion and Consciousness. Paul Thagard University of Waterloo Brain Mechanisms Explain Emotion and Consciousness Paul Thagard University of Waterloo 1 1. Why emotions matter 2. Theories 3. Semantic pointers 4. Emotions 5. Consciousness Outline 2 What is Emotion?

More information

PSY380: VISION SCIENCE

PSY380: VISION SCIENCE PSY380: VISION SCIENCE 1) Questions: - Who are you and why are you here? (Why vision?) - What is visual perception? - What is the function of visual perception? 2) The syllabus & instructor 3) Lecture

More information

Integrating Cognition, Emotion and Autonomy

Integrating Cognition, Emotion and Autonomy Integrating Cognition, Emotion and Autonomy Tom Ziemke School of Humanities & Informatics University of Skövde, Sweden tom.ziemke@his.se 2005-07-14 ICEA = Integrating Cognition, Emotion and Autonomy a

More information

Nature of emotion: Six perennial questions

Nature of emotion: Six perennial questions Motivation & Emotion Nature of emotion James Neill Centre for Applied Psychology University of Canberra 2017 Image source 1 Nature of emotion: Six perennial questions Reading: Reeve (2015) Ch 12 (pp. 337-368)

More information

Evaluating the Effect of Spiking Network Parameters on Polychronization

Evaluating the Effect of Spiking Network Parameters on Polychronization Evaluating the Effect of Spiking Network Parameters on Polychronization Panagiotis Ioannou, Matthew Casey and André Grüning Department of Computing, University of Surrey, Guildford, Surrey, GU2 7XH, UK

More information

Reactive agents and perceptual ambiguity

Reactive agents and perceptual ambiguity Major theme: Robotic and computational models of interaction and cognition Reactive agents and perceptual ambiguity Michel van Dartel and Eric Postma IKAT, Universiteit Maastricht Abstract Situated and

More information

Next Level Practitioner

Next Level Practitioner Next Level Practitioner - Fear Week 115, Day 3 - Dan Siegel, MD - Transcript - pg. 1 Next Level Practitioner Week 115: Fear in the Brain and Body Day 3: How to Work with the Brain and the Body to Relieve

More information

Robotics Summary. Made by: Iskaj Janssen

Robotics Summary. Made by: Iskaj Janssen Robotics Summary Made by: Iskaj Janssen Multiagent system: System composed of multiple agents. Five global computing trends: 1. Ubiquity (computers and intelligence are everywhere) 2. Interconnection (networked

More information

Synfire chains with conductance-based neurons: internal timing and coordination with timed input

Synfire chains with conductance-based neurons: internal timing and coordination with timed input Neurocomputing 5 (5) 9 5 www.elsevier.com/locate/neucom Synfire chains with conductance-based neurons: internal timing and coordination with timed input Friedrich T. Sommer a,, Thomas Wennekers b a Redwood

More information

Nature of emotion: Six perennial questions

Nature of emotion: Six perennial questions Motivation & Emotion Nature of emotion Nature of emotion: Six perennial questions Dr James Neill Centre for Applied Psychology University of Canberra 2016 Image source 1 Reading: Reeve (2015) Ch 12 (pp.

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence COMP-241, Level-6 Mohammad Fahim Akhtar, Dr. Mohammad Hasan Department of Computer Science Jazan University, KSA Chapter 2: Intelligent Agents In which we discuss the nature of

More information

Intelligent Machines That Act Rationally. Hang Li Bytedance AI Lab

Intelligent Machines That Act Rationally. Hang Li Bytedance AI Lab Intelligent Machines That Act Rationally Hang Li Bytedance AI Lab Four Definitions of Artificial Intelligence Building intelligent machines (i.e., intelligent computers) Thinking humanly Acting humanly

More information

Motivational Behavior of Neurons and Fuzzy Logic of Brain (Can robot have drives and love work?)

Motivational Behavior of Neurons and Fuzzy Logic of Brain (Can robot have drives and love work?) Motivational Behavior of Neurons and Fuzzy Logic of Brain (Can robot have drives and love work?) UZIEL SANDLER Jerusalem College of Technology Department of Applied Mathematics Jerusalem 91160 ISRAEL LEV

More information

The Scientific Method

The Scientific Method Course "Empirical Evaluation in Informatics" The Scientific Method Prof. Dr. Lutz Prechelt Freie Universität Berlin, Institut für Informatik http://www.inf.fu-berlin.de/inst/ag-se/ Science and insight

More information

Semiotics and Intelligent Control

Semiotics and Intelligent Control Semiotics and Intelligent Control Morten Lind 0rsted-DTU: Section of Automation, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark. m/i@oersted.dtu.dk Abstract: Key words: The overall purpose

More information

Modeling Individual and Group Behavior in Complex Environments. Modeling Individual and Group Behavior in Complex Environments

Modeling Individual and Group Behavior in Complex Environments. Modeling Individual and Group Behavior in Complex Environments Modeling Individual and Group Behavior in Complex Environments Dr. R. Andrew Goodwin Environmental Laboratory Professor James J. Anderson Abran Steele-Feldman University of Washington Status: AT-14 Continuing

More information

Learning. Learning: Problems. Chapter 6: Learning

Learning. Learning: Problems. Chapter 6: Learning Chapter 6: Learning 1 Learning 1. In perception we studied that we are responsive to stimuli in the external world. Although some of these stimulus-response associations are innate many are learnt. 2.

More information

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives.

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives. ROBOTICS AND AUTONOMOUS SYSTEMS Simon Parsons Department of Computer Science University of Liverpool LECTURE 16 comp329-2013-parsons-lect16 2/44 Today We will start on the second part of the course Autonomous

More information

Dynamic Rule-based Agent

Dynamic Rule-based Agent International Journal of Engineering Research and Technology. ISSN 0974-3154 Volume 11, Number 4 (2018), pp. 605-613 International Research Publication House http://www.irphouse.com Dynamic Rule-based

More information

Sparse Coding in Sparse Winner Networks

Sparse Coding in Sparse Winner Networks Sparse Coding in Sparse Winner Networks Janusz A. Starzyk 1, Yinyin Liu 1, David Vogel 2 1 School of Electrical Engineering & Computer Science Ohio University, Athens, OH 45701 {starzyk, yliu}@bobcat.ent.ohiou.edu

More information

Artificial Neural Networks (Ref: Negnevitsky, M. Artificial Intelligence, Chapter 6)

Artificial Neural Networks (Ref: Negnevitsky, M. Artificial Intelligence, Chapter 6) Artificial Neural Networks (Ref: Negnevitsky, M. Artificial Intelligence, Chapter 6) BPNN in Practice Week 3 Lecture Notes page 1 of 1 The Hopfield Network In this network, it was designed on analogy of

More information

CAS Seminar - Spiking Neurons Network (SNN) Jakob Kemi ( )

CAS Seminar - Spiking Neurons Network (SNN) Jakob Kemi ( ) CAS Seminar - Spiking Neurons Network (SNN) Jakob Kemi (820622-0033) kemiolof@student.chalmers.se November 20, 2006 Introduction Biological background To be written, lots of good sources. Background First

More information

Incorporating Emotion Regulation into Virtual Stories

Incorporating Emotion Regulation into Virtual Stories Incorporating Emotion Regulation into Virtual Stories Tibor Bosse, Matthijs Pontier, Ghazanfar F. Siddiqui, and Jan Treur Vrije Universiteit Amsterdam, Department of Artificial Intelligence, De Boelelaan

More information

2012 Course: The Statistician Brain: the Bayesian Revolution in Cognitive Sciences

2012 Course: The Statistician Brain: the Bayesian Revolution in Cognitive Sciences 2012 Course: The Statistician Brain: the Bayesian Revolution in Cognitive Sciences Stanislas Dehaene Chair of Experimental Cognitive Psychology Lecture n 5 Bayesian Decision-Making Lecture material translated

More information

University of Cambridge Engineering Part IB Information Engineering Elective

University of Cambridge Engineering Part IB Information Engineering Elective University of Cambridge Engineering Part IB Information Engineering Elective Paper 8: Image Searching and Modelling Using Machine Learning Handout 1: Introduction to Artificial Neural Networks Roberto

More information

MTAT Bayesian Networks. Introductory Lecture. Sven Laur University of Tartu

MTAT Bayesian Networks. Introductory Lecture. Sven Laur University of Tartu MTAT.05.113 Bayesian Networks Introductory Lecture Sven Laur University of Tartu Motivation Probability calculus can be viewed as an extension of classical logic. We use many imprecise and heuristic rules

More information

What can we do to improve the outcomes for all adolescents? Changes to the brain and adolescence-- Structural and functional changes in the brain

What can we do to improve the outcomes for all adolescents? Changes to the brain and adolescence-- Structural and functional changes in the brain The Adolescent Brain-- Implications for the SLP Melissa McGrath, M.A., CCC-SLP Ball State University Indiana Speech Language and Hearing Association- Spring Convention April 15, 2016 State of adolescents

More information

Neural circuits PSY 310 Greg Francis. Lecture 05. Rods and cones

Neural circuits PSY 310 Greg Francis. Lecture 05. Rods and cones Neural circuits PSY 310 Greg Francis Lecture 05 Why do you need bright light to read? Rods and cones Photoreceptors are not evenly distributed across the retina 1 Rods and cones Cones are most dense in

More information

Agents and Environments

Agents and Environments Agents and Environments Berlin Chen 2004 Reference: 1. S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Chapter 2 AI 2004 Berlin Chen 1 What is an Agent An agent interacts with its

More information

Allen Newell December 4, 1991 Great Questions of Science

Allen Newell December 4, 1991 Great Questions of Science Allen Newell December 4, 1991 Great Questions of Science You need to realize if you haven t before that there is this collection of ultimate scientific questions and if you are lucky to get grabbed by

More information

AI: Intelligent Agents. Chapter 2

AI: Intelligent Agents. Chapter 2 AI: Intelligent Agents Chapter 2 Outline Agents and environments Rationality PEAS (Performance measure, Environment, Actuators, Sensors) Environment types Agent types Agents An agent is anything

More information

An Efficient Hybrid Rule Based Inference Engine with Explanation Capability

An Efficient Hybrid Rule Based Inference Engine with Explanation Capability To be published in the Proceedings of the 14th International FLAIRS Conference, Key West, Florida, May 2001. An Efficient Hybrid Rule Based Inference Engine with Explanation Capability Ioannis Hatzilygeroudis,

More information

Understanding the Role of Emotions in Group Dynamics in Emergency Situations

Understanding the Role of Emotions in Group Dynamics in Emergency Situations Understanding the Role of Emotions in Group Dynamics in Emergency Situations Alexei Sharpanskykh 1(&) and Kashif Zia 2 1 Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1,

More information

Intelligent Machines That Act Rationally. Hang Li Toutiao AI Lab

Intelligent Machines That Act Rationally. Hang Li Toutiao AI Lab Intelligent Machines That Act Rationally Hang Li Toutiao AI Lab Four Definitions of Artificial Intelligence Building intelligent machines (i.e., intelligent computers) Thinking humanly Acting humanly Thinking

More information

Emotion I: General concepts, fear and anxiety

Emotion I: General concepts, fear and anxiety C82NAB Neuroscience and Behaviour Emotion I: General concepts, fear and anxiety Tobias Bast, School of Psychology, University of Nottingham 1 Outline Emotion I (first part) Studying brain substrates of

More information

BEHAVIOR CHANGE THEORY

BEHAVIOR CHANGE THEORY BEHAVIOR CHANGE THEORY An introduction to a behavior change theory framework by the LIVE INCITE team This document is not a formal part of the LIVE INCITE Request for Tender and PCP. It may thus be used

More information

Modeling of Hippocampal Behavior

Modeling of Hippocampal Behavior Modeling of Hippocampal Behavior Diana Ponce-Morado, Venmathi Gunasekaran and Varsha Vijayan Abstract The hippocampus is identified as an important structure in the cerebral cortex of mammals for forming

More information

A Scoring Policy for Simulated Soccer Agents Using Reinforcement Learning

A Scoring Policy for Simulated Soccer Agents Using Reinforcement Learning A Scoring Policy for Simulated Soccer Agents Using Reinforcement Learning Azam Rabiee Computer Science and Engineering Isfahan University, Isfahan, Iran azamrabiei@yahoo.com Nasser Ghasem-Aghaee Computer

More information

Basic Nervous System anatomy. Neurobiology of Happiness

Basic Nervous System anatomy. Neurobiology of Happiness Basic Nervous System anatomy Neurobiology of Happiness The components Central Nervous System (CNS) Brain Spinal Cord Peripheral" Nervous System (PNS) Somatic Nervous System Autonomic Nervous System (ANS)

More information

A Matrix of Material Representation

A Matrix of Material Representation A Matrix of Material Representation Hengfeng Zuo a, Mark Jones b, Tony Hope a, a Design and Advanced Technology Research Centre, Southampton Institute, UK b Product Design Group, Faculty of Technology,

More information

DYNAMICISM & ROBOTICS

DYNAMICISM & ROBOTICS DYNAMICISM & ROBOTICS Phil/Psych 256 Chris Eliasmith Dynamicism and Robotics A different way of being inspired by biology by behavior Recapitulate evolution (sort of) A challenge to both connectionism

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information