Understanding behaviours of a situated agent: A Markov chain analysis


John S Gero, Wei Peng
Key Centre of Design Computing and Cognition, University of Sydney, NSW 2006, Australia

Abstract: This paper briefly describes situated agents and constructive memory before modelling the behaviour of such an agent applied in a design optimization domain. A Markov analysis is used to represent the dynamic behaviour of the memory system of the agent. It shows that the constructive memory behaves as expected and that reasoning moves from reactive and reflective to reflexive as the agent acquires more similar experiences that are increasingly grounded.

Keywords: Situated agents; Constructive memory; Markov chain; Design optimization

1. Introduction

Situated design computing is a new paradigm for design computing that draws concepts from situated cognition (Clancey, 1997). Situated agents are computational models that have been developed on the notion of situatedness (Clancey, 1997). These agents can be used to build a new generation of computer-aided design tools that learn from their use (Gero, 2003; Peng and Gero, 2006; Peng, 2006). Central to the concept of situatedness is what is called constructive memory (Gero, 1999), which entails the means by which an agent develops its experience through its interaction with the environment.

Situatedness has its roots in work on empirical naturalism (Dewey, 1896 reprinted in 1981) and cognitive psychology (Bartlett, 1932 reprinted in 1977). The notion of situatedness is considered a conditio sine qua non for any form of true intelligence, natural or artificial (Lindblom and Ziemke, 2002). A situation can be viewed as a worldview that biases a person's interpretations and expectations. A simple example of how this worldview affects a person's behaviour is that two designers, given the same set of requirements, produce quite different designs. The theory of situatedness claims that every human thought and action is adapted to the environment, that is, situated, because what people perceive, how they conceive of their activity, and what they physically do develop together (Clancey, 1997). Vygotsky (1978) contributed to the concept of situatedness by introducing activity theory, which holds that activities of the mind cannot be separated from overt behaviour, or from the social context in which they occur. Social and mental structures interpenetrate each other (Clancey, 1995). In this vein, situatedness is inseparable from the interactions in which knowledge is dynamically constructed. A situated agent constructs a situation, which can be represented as first-person memories obtained by taking account of both its contexts and its experience, and of the interactions between them.

Memory in computational systems often refers to a place that holds data and information called memories. It is indexed so that it can be queried more efficiently afterwards. The structure, contents and indexes are fixed and independent of their use (Gero, 2006). From a cognitive point of view, however, memory is not a place where descriptions of what has been done or said before are stored; it is indistinguishable from a person's capability to make sense, to learn a new skill, to compose something new (Clancey, 1991). This is the essence of Bartlett's model of constructive memory (Bartlett, 1932 reprinted in 1977). It is argued that the contents and structures of a constructive memory are changed by their use (Gero, 2006).
Memories are constructed initially from that experience (previous memories) in response to a need for a memory of that experience, but the construction of the memory is connected to the current view of the world at the time of the demand for the memory (Gero, 1999). The notion of a constructive memory reflects how the system adapts to its environment (Gero and Smith, 2006). A memory construction process can be viewed as the way a system uses its previous memory structures and contents to conceptualize and to give meaning to its environmental stimuli. A constructive memory model (Gero, 1999) provides a conceptual framework within which the concept of situatedness can be implemented in a software agent.

In this paper, the behaviours of a situated agent are investigated in a number of time-series experiments in design optimization, in which the agent is exposed to heterogeneous design scenarios and develops unsupervised behaviours. These behaviours are explored at two different levels: microscopic and macroscopic. The microscopic behaviours (hereafter called micro behaviours) are the detailed processes and constraints defined by the situated agent architecture, for example sensation, perception, and experience activation and reactivation. Investigating the agent's micro behaviours and their dependencies in relation to contexts may lead to a better understanding of the causes of situated behaviours. Analyzing functional aggregations of these micro behaviours, called the agent's macro behaviours, enables us to describe the characteristics of situated behaviours in time-series events. These macro behaviours are defined as reflexive, reactive and reflective behaviour (Maher and Gero, 2002). What is interesting is whether there are behaviour patterns for a situated agent in time-series events and how these patterns can be explained.

A Markov chain approach is used to analyze the behaviours obtained from test data. Markov chains have been used as stochastic models to study the time-dependent behaviours of dynamic systems (Siu, 1994) and complex adaptive systems (Spears, 1998; 1999). The behaviours of these systems are specified as transition probabilities between the system's states over time. The fundamental assumption of a first-order Markov process is that the conditional probability distribution of the future state, given the current state and the past states of the process, depends only on the current state and not on the past states. A second-order Markov chain takes account of the current state and also the previous state. The Markov chain is thus an ideal tool for constructing a descriptive model of the time-dependent relationships among the behaviours of a situated agent.
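To make the first-order model concrete, the following minimal sketch (in Python, used here purely for illustration rather than as the authors' implementation) shows how a first-order transition matrix can be estimated from an observed sequence of behaviour states; the state labels in the example are placeholders, not data from the experiments reported below.

```python
from collections import defaultdict

def first_order_transition_matrix(states):
    """Estimate first-order Markov transition probabilities from a state sequence.

    P(next | current) is approximated by the relative frequency with which
    `next` follows `current` in the observed sequence.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(states, states[1:]):
        counts[current][nxt] += 1
    matrix = {}
    for current, successors in counts.items():
        total = sum(successors.values())
        matrix[current] = {nxt: n / total for nxt, n in successors.items()}
    return matrix

# Placeholder trace of behaviour states (labels are illustrative only).
trace = ["S1", "S2", "S3", "S4", "S9", "S1", "S2", "S3", "S7", "S5", "S3", "S4", "S9"]
for state, successors in first_order_transition_matrix(trace).items():
    print(state, successors)
```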
2. Micro Behaviours in Design Optimization Experiments

Design optimization is selected as the test bed for the Markov chain analysis. Design optimization is concerned with identifying optimal design solutions that meet design objectives while conforming to design constraints. A large number of optimization algorithms have been developed and are commercially available. Many design optimization tools focus on gathering a variety of mathematical programming algorithms and providing the means for the user to access them to solve design problems. For example, the Matlab Optimization Toolbox 3.0 includes a variety of functions for linear programming, quadratic programming, nonlinear optimization and nonlinear least squares. Choosing a suitable optimizer becomes the bottleneck in a design optimization process, and the recognition of appropriate optimization models is fundamental to design decision problems (Radford and Gero, 1988). In this paper, a situated agent wraps around a design optimization tool (the Matlab Optimization Toolbox), learns concepts of how the tool is used in optimizing a design and adapts its behaviours based on these concepts.

The focus of these design optimization experiments is to observe and analyze the agent's behaviours in heterogeneous design optimization scenarios. A sequence of 15 design scenarios is created and executed. Each scenario represents a design task, which is further composed of a number of design actions. For example, a typical design optimization task consists of actions such as:
- defining the objective function and identifying its type;
- defining the design variables and variable types;
- describing the design constraints and constraint types;
- defining the gradients of the objective function and constraints;
- defining matrices, such as the Hessian matrix and its type, and the A and b matrices (only available to Matlab users);
- selecting optimizers;
- submitting or editing the design problem;
- submitting feedback on the agent's outputs.

A typical sequence of tasks is:

  {L, Q, Q, L, NL, Q, NL, L, L, NL, Q, Q, L, L, L}

where Q, L and NL represent quadratic, linear and nonlinear design optimization problems respectively. The initial experience of the agent holds one instance of a design optimization scenario solved by a quadratic programming optimizer.

2.1. Behaviours of a situated agent

Situated agents can sense, and put forward changes to, the environment via sensors and effectors. Sensors gather environmental changes into data structures called sense-data. Sensation (S) is the process that transfers sense-data into multimodal sensory experiences, through push and pull processes. A push process is a data-driven process in which changes from the external world trigger changes in the agent's internal world, for example the agent's experience. A pull process is an expectation-driven process in which the agent updates its internal world according to expectation-biased external changes (Gero and Fujii, 2000; Gero and Kannengiesser, 2006). Push and pull processes can occur at different levels of processing, for example sensation, perception and conception. The pushed sense-data are also called exogenous sense-data (Se); they are triggered by external environmental changes, that is, actions performed by designers in using the design tool. The pulled sense-data are intentionally collected during the agent's expectation-driven process: sensors are triggered from the agent's higher-level processes (perception, conception) and draw in environmental changes to update their sense-data.

Sensory experiences (Se+a) consist of two types of variables: the exogenous sense-data (Se) and the autogenous sensory experience (Sa). Sa is created by matching the agent's exogenous sense-data (Se) with the agent's sensory-level experience, so Se+a is a combination of the exogenous sense-data and the related autogenous information. For instance, sense-data Se are captured by sensors as a sequence of unlabelled events:

  Se(t) = {a mouse click on a certain text field, key strokes of "x, y"}

Based on the lowest level of sensory experience, which holds modality information, the agent creates an autogenous variable (Sa) with its initial label for the Se:

  Sa(t) = {label for the clicked text field}

Thus the sensory experience Se+a can be created as:

  Se+a(t) = {[label for the clicked text field, key strokes "x, y"]}

Perception (P) generates percepts based on the agent's sensory experiences. Percepts are intermediate data structures generated by mapping sensory data into categories. The sensory experience Se+a is further processed and categorized to create an initial percept Pi, which can be used to generate a memory cue. The initial percept can be structured as a triplet Percept(Object, Property, Values of properties), for example:

  Pi(t) = Object {property for the clicked text field, value "xy" of that property}

The perceptual object can be used to cue a memory of the agent's experience. A cue is a stimulus that can be used to activate the agent's experience to obtain a memory of that experience. It is generated by matching percepts with the agent's perceptual experience and is subsequently assigned an activation value to trigger responses from the agent's experience. The cueing function is implemented using experience activation and reactivation (Ia and Ir), in which a memory cue is applied to the experience structure to obtain a response.

Conception (C) is the process of categorizing perceptual sequences and chunks in order to form proto-concepts. A concept is regarded as the result of an interaction process in which meanings are attached to environmental stimuli. To illustrate the concept learning process, the term proto-concept is used to describe the intermediate state of a concept: a knowledge structure that depicts the agent's interpretations of, and anticipations about, its external and internal environment at a particular time. Conception consists of three basic functions: conceptual labelling (C1), constructive learning (C2) and induction (C3). Conceptual labelling creates proto-concepts based on experiential responses to an environment cue; this includes deriving anticipations from these responses and identifying the target. Constructive learning allows the agent to accumulate lower-level experiences. Induction generalizes abstractions from the lower-level experience and is responsible for generating conceptual knowledge structures.

The hypothesizing process (H) generates a hypothesis from the currently learned proto-concepts. It is where reinterpretation takes place, allowing the agent to learn in a trial-and-error manner. A situated agent reinterprets its environment using hypotheses, which are explanations deduced from its domain knowledge (usually conceptual). The agent then needs to refocus on, or construct, a new proto-concept based on these hypotheses.

Validation (Vd) is the process in which the agent verifies its proto-concepts and hypotheses. It pulls information from the environment to observe whether the environment is changing as expected. A valid concept or experience will be grounded into the agent's experience by incorporation, reconfiguration or reinforcement. This grounding process is experiential grounding: it reinforces the valid concepts or activated experience by changing the structures of the experience so that the likelihood of the grounded experience being activated in similar circumstances is increased. It is implemented by a grounding-via-weight-adaptation process (Wa), which adjusts the weights of each excitatory connection of the valid concept in an IAC neural network (McClelland, 1981; 1995), so that nodes that fire together become more strongly connected.
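The weight-adaptation step can be pictured with the minimal sketch below, assuming a simple Hebbian-style strengthening of the excitatory links among the units of a validated concept; the learning rate, the cap on weights and the unit names are illustrative assumptions, and the sketch does not reproduce the agent's actual IAC network.

```python
def ground_concept(weights, active_units, rate=0.1, w_max=1.0):
    """Strengthen excitatory links among the units of a validated concept.

    `weights` maps (unit_a, unit_b) pairs to connection strengths. Units that
    were co-active when the concept was validated become more strongly
    connected, so the grounded experience is easier to reactivate later.
    The learning rate and weight cap are assumptions for this sketch.
    """
    for a in active_units:
        for b in active_units:
            if a == b:
                continue
            w = weights.get((a, b), 0.0)
            weights[(a, b)] = min(w_max, w + rate * (w_max - w))
    return weights

# Hypothetical units co-active when a quadratic-programming concept is validated.
weights = {}
ground_concept(weights, ["objective:quadratic", "optimizer:quadprog", "constraints:linear"])
print(weights)
```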

Reflexive experience response (Rex) occurs when the experiential response to the currently sensed data is sufficiently strong to reach a reflexive threshold. A sensory experience can then affect action directly: the agent responds reflexively to environmental stimuli based solely on its experience, without activation.

As aggregations of the above micro behaviours, macro behaviours represent the functions by which the agent copes with its environment. The agent's reflexive behaviour is triggered by environmental stimuli that are able to cause a reflexive experience response (Rex); note that reflexive behaviour, a macro behaviour, is distinct from Rex, which is a micro-level behaviour. A snapshot of reflexive behaviour can be expressed as: environment stimuli → sensor → S → P → Cue → Rex → Vd → Wa and/or C3. In its reactive behaviour, the agent reasons by applying its experience to respond to an environmental stimulus in a self-organized way; in this mode the agent activates its experience structures (an IAC neural network) to obtain a response. This can be expressed as: environment stimuli → sensor → S → P → Cue → Ia → C1 → Vd → Wa and/or C3. In its reflective behaviour, the agent reasons about its actions by drawing new sense-data from a lower level, reactivating its experience and/or hypothesizing a new proto-concept; this involves the higher-level conceptual experience and the hypothesizer. In this mode the agent's behaviours can be aggregated as: environment stimuli → sensor → S → P → Cue → Ir → C1 → Vd → Wa and/or C3, or S → P → H → Ir → C1 → Vd → Wa and/or C3. Knowledge construction behaviour is a special form of macro behaviour in which the agent learns new experience via the constructive learning function (C2) of the conception process. It can be represented as: environment stimuli → sensor → S+P → C2.

Table 1 shows the symbols for the various micro behaviours used in this analysis.

Table 1. Symbols that represent the various micro behaviours
  Symbol  Micro behaviour (Be)                                         Related macro behaviours
  S       Sensation                                                    reflexive, reactive, reflective, knowledge construction
  P       Perception                                                   reflexive, reactive, reflective, knowledge construction
  C1      Conception process 1: conceptual labelling                   reactive, reflective
  C2      Conception process 2: conception via constructive learning   knowledge construction
  C3      Conception process 3: conception via inductive learning      reflexive, reactive, reflective, knowledge construction
  Ia      IAC neural network activation                                reactive
  Ir      IAC neural network re-activation                             reflective
  H       Hypothesising                                                reflective
  Rex     Reflexive experience response                                reflexive
  Vd      Validation                                                   reflexive, reactive, reflective
  Wa      Weight adaptation                                            reflexive, reactive, reflective

Table 2 presents the Markov states of the system used in the following tests. Sensation (S) and Perception (P) are grouped into one state and serve as the base for the other behaviours in the other states, such as Ia at S2, because they run in parallel at a low level and support the other behaviours.

Table 2. Markov states and the behaviours in these states
  State  Behaviours   Dominant behaviour
  S1     S+P          S and P: low-level behaviours
  S2     S+P+Ia       Ia: activating experience
  S3     S+P+C1       C1: conceptual labelling
  S4     S+P+Vd       Vd: validation
  S5     S+P+Ir       Ir: reactivating experience
  S6     S+P+Rex      Rex: reflexive experiential response
  S7     S+P+H        H: hypothesizing
  S8     S+P+C2+C3    C2 and C3: constructive learning and then inductive learning
  S9     S+P+Wa+C3    Wa and C3: weight adaptation and then inductive learning
  S10    S+P+Wa       Wa: weight adaptation
  S11    S+P+C2       C2: constructive learning
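A schematic reading of how the three response modes described above could be separated by the strength of the experiential response is sketched below; the thresholds and the function are illustrative assumptions and do not describe the agent's actual implementation.

```python
def response_mode(activation, reflexive_threshold=0.9, activation_threshold=0.5):
    """Pick a macro behaviour from the strength of the experiential response.

    A schematic reading of the modes described above: a very strong response
    acts directly (reflexive), a moderate one activates experience (reactive),
    and a weak one forces reactivation or hypothesizing (reflective).
    The threshold values are illustrative assumptions.
    """
    if activation >= reflexive_threshold:
        return "reflexive"
    if activation >= activation_threshold:
        return "reactive"
    return "reflective"

for a in (0.95, 0.7, 0.2):
    print(a, "->", response_mode(a))
```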

2.2. The Markov analysis for Test 1

The purpose of Test 1 is to investigate a situated agent's behaviour in a dynamic environment consisting of a sequence of design optimization scenarios. What needs to be studied is how an agent that initially holds a quadratic programming design optimization experience changes its reasoning and memory construction processes in relation to environmental changes. The sequence of tasks {L, Q, Q, L, NL, Q, NL, L, L, NL, Q, Q, L, L, L} is created and adopted. Based on the data acquired from this test, a transition matrix is produced; there are altogether 11 states and 73 state transitions in this test. A Markov state transition diagram represents the possible transitions between these states in graphical form, which allows us to understand the Markov system. The dark shaded nodes (S8–S11) in Fig. 1 are terminal states, in which the system exits an experiment.

Fig. 1. The Markov state transition diagram for the agent in Test 1.

The first-order time-series associations between the states in Test 1 (Fig. 1) are sorted in Table 3 by state. The causes that drive these associated behaviours are discussed both in terms of the agent's functional processes at the microscopic level and in terms of the environment contexts the agent encounters at the macroscopic level.

Table 3. A detailed discussion of the associated behaviours in Test 1
  S1 → S1: The agent continues sensing (S) and perceiving (P), because the agent faces a new environment context.
  S1 → S2: The agent senses (S), perceives (P) the environment stimuli and activates its experience (Ia). This is an internal reaction-related process in relation to a familiar environment context.
  S1 → S6: The agent responds reflexively (Rex) to an environment context, indicating that the agent has a very strong experience for that context. This association is rare, with a probability of only 0.04.
  S1 → S8: The agent performs constructive learning (C2). Since it is accompanied by inductive learning (C3), we can conclude that the agent is not in the initial stage of the test.
  S1 → S11: The agent performs constructive learning (C2). This occurs at the initial stage of the test (note that no inductive learning is performed).
  S2 → S3: The agent activates its experience and selects a concept. Ia and C1 have a strong dependence due to the internal functional constraints of the agent, in which the agent always selects a concept for an activated experience. The agent meets a familiar environment stimulus.
  S3 → S4: The agent selects a concept to react (C1) and then observes the environmental changes in order to validate (Vd) that concept. This is an internal reaction- and reflection-related process.
  S3 → S7: The agent hypothesizes (H) after the focused or refocused concept (C1) fails to validate. This is a reflection-related mechanism.
  S3 → S10: The agent grounds (Wa) its focused concept (C1) when it receives direct positive feedback from the user. The absence of inductive learning shows that the agent is in the early stage of the test.
  S4 → S1: The agent senses (S) and perceives (P) its environment to validate a concept (Vd). This is a function related to validation.
  S4 → S5: The agent reactivates (Ir) its experience when an existing experience is not able to validate (Vd). This is a reflection-related process.
  S4 → S9: The agent reinforces the validated experience (Wa) and induces new conceptual knowledge (C3) to refresh its experience. This is the grounding-related mechanism that operates when an experience (reactive or reflective) proves useful in interactions.
  S4 → S10: The agent's validated experience is grounded (Wa) but no inductive learning (C3) occurs, because insufficient data have been obtained. This happens at the early stage of the test and is no longer observed after the agent has accumulated enough perceptual experience to induce conceptual knowledge.
  S5 → S3: Once the agent reactivates its experience (Ir), it refocuses on a new concept (C1). This is the agent's reflection-related process when it is not able to validate its activated experience in reaction.
  S5 → S4: The agent validates (Vd) the reactivated experience (Ir). This is a reflection-related process during which the agent creates hypotheses, refocuses on a new concept and then observes the impact of the refocused concept in interactions.
  S5 → S9: The agent reinforces (Wa) the reactivated experience (Ir) and induces new knowledge (C3). This is also a grounding-related process, in which the agent's reflective experience proves useful.
  S6 → S9: The agent responds reflexively (Rex) and reinforces (Wa) the reflexive experience, showing that the agent's reflexive experiences always get grounded.
  S7 → S5: The agent makes a hypothesis (H) and reactivates (Ir) its experience based on that hypothesis. This strong dependence reflects the agent's internal reflective processes in relation to a new or confusing environment context.

Based on the state diagram in Fig. 1 and the detailed discussion in Table 3, some findings are:
- There is a primary path with the highest transition probabilities (S1 → S2 → S3 → S4 → S9). This shows that the system is most likely to perform reaction-related behaviours. The explanation is that the agent initially holds related experience (in quadratic programming) corresponding to certain environmental stimuli (there are seven Ls and five Qs) and that this experience is subsequently reinforced during the test.
- The system performs other behaviours: knowledge construction (S1 → S8, S1 → S11), reflection-related behaviours (S3 → S7 → S5 → S3) and reflexive behaviour (S1 → S6 → S9), but with lower probabilities than the reaction-related behaviours. As with the reaction-related behaviours, the decisive factors are the agent's experience and its contextual environment: the availability and strength of a certain experience are the result of the agent's interaction with its environment, and that experience in turn decides how the agent behaves in a particular environment.
- Strong dependences between behaviour states, such as S2 → S3, S6 → S9 and S7 → S5, represent the characteristics of reactive, reflexive and reflective behaviours respectively. The causes of these behaviour patterns are related to the environment context, the agent's experience and the way in which the agent processes the contextual stimuli.

This test shows that a situated agent learns and modifies behaviours based on the experience it has, the contextual environment it encounters and the interactions between the agent and the environment.
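The "primary path" identified above can be read off an estimated transition matrix by greedily following the highest-probability outgoing transition from each state, as in the sketch below; the probabilities in the example are invented stand-ins with roughly the shape of the Test 1 chain, not the measured values.

```python
def primary_path(matrix, start, max_len=10):
    """Follow the highest-probability outgoing transition from each state.

    `matrix` maps a state to a dict of {successor: probability}. The walk
    stops at a terminal state (no successors) or when a state repeats.
    """
    path, state = [start], start
    while state in matrix and len(path) < max_len:
        nxt = max(matrix[state], key=matrix[state].get)
        if nxt in path:
            break
        path.append(nxt)
        state = nxt
    return path

# Illustrative (not measured) probabilities shaped like the Test 1 chain.
matrix = {
    "S1": {"S2": 0.6, "S6": 0.04, "S8": 0.2, "S11": 0.16},
    "S2": {"S3": 1.0},
    "S3": {"S4": 0.7, "S7": 0.2, "S10": 0.1},
    "S4": {"S9": 0.5, "S5": 0.3, "S1": 0.2},
}
print(" -> ".join(primary_path(matrix, "S1")))  # S1 -> S2 -> S3 -> S4 -> S9
```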

2.3. The Markov analysis for Test 2

In this section, the agent trained at the end of Test 1 is experimented with further. The purpose of this test is to investigate the agent's behaviours when it is exposed to the same sequence that it has already experienced in Test 1. It is interesting to know whether the agent simply repeats its behaviours or develops new behaviours from the interactions in this test, given that it now holds different experiences from those of the agent in Test 1. The agent contains four experience nodes that were learned in Test 1. Through this extended test, 64 state transitions are gathered, and the transition matrix contains 8 states and their transition probabilities.

The agent exemplifies different characteristics in its behaviours compared with those it showed in Test 1. Whilst in Test 1 the agent had diversified behaviours in reaction, reflection, reflexion and constructive learning, in this test there are higher probabilities that the agent performs reflexion. As shown in Fig. 2, state S8, which is associated with constructive learning, is not used in this experiment, and there exists only one terminal state, S9, which is related to grounding via weight adaptation. The probability for reflexion (0.67) is high compared with the other transition probabilities.

Fig. 2. The Markov state transition diagram for the agent in Test 2.

In Table 4, the state diagram in Fig. 2 is discussed further.

Table 4. A detailed discussion of the associated behaviours in Test 2
  S1 → S2: The agent senses (S), perceives (P) the environment stimuli and activates its experience (Ia). This is an internal reaction-related process in relation to a familiar environment context.
  S1 → S6: The agent responds reflexively (S+P → S+P+Rex) to an environment context, indicating that the agent has a very strong experience for that context.
  S2 → S3: The agent activates its experience and selects a concept. Ia and C1 have a strong dependence caused by the agent's internal reaction process in relation to a familiar environment stimulus.
  S3 → S4: The agent selects a concept (C1) to react and then observes the environmental changes in order to validate (Vd) that concept. This is an internal reaction- and reflection-related process.
  S3 → S7: The agent hypothesizes (H) after the focused or refocused concept (C1) fails to validate. This is a reflection-related mechanism.
  S3 → S9: This is a new behaviour dependence compared with Test 1. The agent grounds the refocused concept (Wa) and learns new conceptual knowledge through induction (C3).
  S4 → S5: The agent reactivates (Ir) its experience when an existing experience is not able to validate (Vd). This is a reflection-related process.
  S4 → S9: The agent reinforces the validated experience (Wa) and induces new conceptual knowledge (C3) to refresh its experience. This is the grounding-related mechanism that operates when an experience (reactive or reflective) proves useful in interactions (Vd).
  S5 → S3: Once the agent reactivates its experience (Ir), it refocuses on a new concept (C1). This is the agent's reflection-related process when it is not able to validate its activated experience in reaction.
  S5 → S9: The agent reinforces the reactivated experience (Wa) and induces new knowledge (C3). This is also a grounding-related process, in which the agent's reflective experience (Ir) proves useful.
  S6 → S4: This is a new behaviour dependence compared with Test 1. The agent validates its reflexive experience.
  S6 → S9: The agent responds reflexively (Rex) and reinforces the reflexive experience (Wa), showing that the agent's reflexive experiences always get grounded.
  S7 → S5: The agent makes a hypothesis (H) and reactivates its experience based on that hypothesis (Ir). This strong dependence results from the agent's internal reflective processes in relation to a new or confusing environment context.

Based on the state diagram in Fig. 2 and the detailed discussion in Table 4, the following conclusions are reached:
- There is a primary path with the highest transition probabilities for reflexive behaviours (with the main stream S1 → S6 and S1 → S6 → S4). The experiences for linear programming and quadratic programming are highly grounded, such that the agent responds reflexively rather than using its experience structures to react.
- There is a considerably high probability that the system performs reflective behaviours in this test (the paths S3 → S7 → S5 and S3 → S4 → S5). The agent uses its experience to reflect on environmental changes. Considering that the agent encounters the same sequence it experienced in Test 1, the reason that the agent still reflects lies in its experiential changes.
- New behaviour dependences (S6 → S4 and S3 → S9) emerge, due to the agent's different experience and the environment context it faces in this test. Some states (S8, S10 and S11) and behaviour dependences (S1 → S8, S3 → S10, S4 → S10, S1 → S11, S1 → S1) present in Test 1 are missing in this test. The lack of S8 and S11, which are related to constructive learning, demonstrates that the system does not face a new context in this test. S10 is the grounding-by-weight-adaptation process which occurs at the initial stage of Test 1, when there is not enough perceptual data to induce conceptual knowledge.

In summary, the agent is more likely to respond reflexively, since some parts of its experience are highly grounded, and new behaviour dependences emerge because an agent with new experience behaves differently in the same environment. Even though it is exposed to the same environment, a situated agent produces different behaviours.
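The comparison between Test 1 and Test 2, that is, which dependences are new and which have disappeared, can be sketched as a simple set difference over the edges of the two estimated chains; the matrices in the example contain placeholder probabilities and only a few of the states, for illustration.

```python
def compare_transitions(matrix_a, matrix_b):
    """Report transitions present in one estimated chain but not the other.

    Each matrix maps a state to a dict of {successor: probability}; only the
    presence or absence of an edge is compared here, not its probability.
    """
    edges_a = {(s, t) for s, succ in matrix_a.items() for t in succ}
    edges_b = {(s, t) for s, succ in matrix_b.items() for t in succ}
    return {"new_in_b": sorted(edges_b - edges_a),
            "missing_in_b": sorted(edges_a - edges_b)}

# Placeholder edges and probabilities only, to show the shape of the comparison.
test1 = {"S1": {"S1": 0.20, "S2": 0.50, "S6": 0.04, "S8": 0.26},
         "S3": {"S4": 0.70, "S7": 0.10, "S10": 0.20},
         "S6": {"S9": 1.00}}
test2 = {"S1": {"S2": 0.30, "S6": 0.70},
         "S3": {"S4": 0.40, "S7": 0.20, "S9": 0.40},
         "S6": {"S4": 0.33, "S9": 0.67}}
print(compare_transitions(test1, test2))
```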

2.4. The Markov analysis for Test 3

Test 3 enables us to study the changes imposed by the environment from another perspective. This test treats Test 1 followed by Test 2 as a single test. The combined data set contains 11 states with 137 state transitions. The state transition diagram is shown in Fig. 3, in which the probability changes compared with Test 1, in terms of increase and decrease, are indicated by arrows following the probabilities. The trend of state changes is caused by the agent's experiential grounding in Test 2, which increased the probabilities for reflexive and reflective behaviours at the expense of the other behaviours, such as reactive and grounding behaviours. This shows that the agent develops experience in Test 2 and therefore adapts its behaviours based on that experience.

Fig. 3. The state diagram generated from the data obtained in Test 3.

3. The Markov Analysis of the Macro Behaviours of the Situated Agent

This section describes experiments that enable us to understand the behaviours of this situated agent at a macroscopic level. Functional aggregations of the detailed micro behaviours are investigated: the Markov states can be reduced to four states, which represent the macro behaviours of reflexive, reactive, reflective and knowledge construction behaviour (a sketch of this aggregation follows the list of tests below). Unlike the previous detailed behaviour analysis, which focused on the statistical distribution among tasks, this analysis examines the system's behaviours over time, for example how the agent changes its behaviours across tasks. The results are obtained from eight tests:
1. Test 1, as described before, consists of the 15 design tasks {L, Q, Q, L, NL, Q, NL, L, L, NL, Q, Q, L, L, L} with an initial quadratic design experience;
2. Test 2 (also described before) uses the same sequence of tasks as Test 1, but with a different initial agent experience (obtained from Test 1);
3. Test 3 (also described before) has the combined data set from Test 1 and Test 2;
4. Test 4 uses the agent with the initial single experience of a quadratic function to run through another sequence, {Q, L, Q, L, NL, Q, NL, Q, Q, Q, Q, Q, L, Q, NL};
5. Test 5 examines the agent obtained from Test 4, using the same sequence as Test 4;
6. Test 6 has the combined data set from Test 4 and Test 5;
7. Test 7 uses the sequence of Test 4 to examine the agent after Test 1. It differs from Test 5 in that the agent used in this test holds the experience learned from Test 1;
8. Test 8 has the combined data obtained from Test 1 and Test 7.
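A minimal sketch of the micro-to-macro aggregation is given below; the grouping of states follows one possible reading of the dominant behaviours in Table 2 and is an assumption made for illustration, not the authors' mapping.

```python
# Illustrative mapping from micro-level Markov states (Table 2) to macro
# behaviours; the grouping is an assumption made for this sketch.
MICRO_TO_MACRO = {
    "S2": "reactive",   "S3": "reactive",          # experience activation, labelling
    "S5": "reflective", "S7": "reflective",        # reactivation, hypothesizing
    "S6": "reflexive",                             # reflexive experiential response
    "S8": "knowledge construction", "S11": "knowledge construction",
    "S1": "low-level",  "S4": "validation",
    "S9": "grounding",  "S10": "grounding",
}

def to_macro_sequence(micro_states):
    """Relabel a micro-state trace with macro behaviours, merging repeats."""
    if not micro_states:
        return []
    macro = [MICRO_TO_MACRO.get(s, "unknown") for s in micro_states]
    merged = [macro[0]]
    for label in macro[1:]:
        if label != merged[-1]:
            merged.append(label)
    return merged

print(to_macro_sequence(["S1", "S2", "S3", "S4", "S9", "S1", "S6", "S9"]))
```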

Table 5 shows the task sequences used in these tests, together with the macro behaviour recorded for each task.

Table 5. The macro behaviours obtained from the tests (T denotes the tasks; B denotes the macro behaviour recorded for each task)
  Test 1: T = L Q Q L NL Q NL L L NL Q Q L L L
  Test 2: T = L Q Q L NL Q NL L L NL Q Q L L L
  Test 3: T = the Test 1 sequence followed by the Test 2 sequence
  Test 4: T = Q L Q L NL Q NL Q Q Q Q Q L Q NL
  Test 5: T = Q L Q L NL Q NL Q Q Q Q Q L Q NL
  Test 6: T = the Test 4 sequence followed by the Test 5 sequence
  Test 7: T = Q L Q L NL Q NL Q Q Q Q Q L Q NL
  Test 8: T = the Test 1 sequence followed by the Test 7 sequence

3.1. Conclusions from the first-order Markov state diagrams

As shown in Table 6, the Markov state diagrams for Tests 1–8 share some common features and, at the same time, are related to the tasks and to the experience learned from the sequence of tasks. Some interesting findings from Table 6 are:
1. Reactive behaviour is strongly dependent on knowledge construction; that is, knowledge construction is likely to be followed by reactive behaviour. This is not necessarily a causal relation, because it depends on how the agent responds to the environment. For example, in Test 1 the agent constructs new knowledge in the L task and subsequently reacts in the Q task based on its initial experience, not on the knowledge newly learned in the L task.

Table 6. First-order Markov state diagrams for Tests 1–8, with their salient transition probabilities and features
  Test 1 (sequence 1): The state diagram shows how the agent interacts with the environment; the agent mainly performs reaction-related behaviour, to which strong links lead (salient probabilities 0.67 and 0.59).
  Test 2 (sequence 1 after Test 1): Knowledge construction is missing; the agent is focused on reflexive and reflective behaviour, with strong links to them (salient probabilities 0.80, 0.67 and 0.60).
  Test 3 (Test 1 and Test 2 combined): The weights of the links are more evenly distributed (largest probability 0.70).
  Test 4 (sequence 2): The agent focuses on a small set of behaviours with strong links to them (salient probabilities 0.67 and 0.57).
  Test 5 (sequence 2 after Test 4): The agent focuses on two behaviours with strong links to them.
  Test 6 (Test 4 and Test 5 combined): The weights of the links are more evenly distributed (salient probabilities 0.80, 0.67 and 0.53).
  Test 7 (sequence 2 after Test 1): The agent focuses on two behaviours with strong links to them (salient probabilities 0.83, 0.67 and 0.60).
  Test 8 (Test 1 and Test 7 combined): The weights of the links are evenly distributed (salient probabilities 0.54, 0.46 and 0.45).

2. Knowledge construction is missing from all the follow-up tests (Tests 2, 5 and 7), in which the agent has already gained some experience from previous tests. This implies that the agent can use what it has learned to react, reflect and respond reflexively.
3. From Test 1 and the follow-up tests based on it (Tests 2 and 7), we find that the agent has higher transition probabilities into reflexive behaviour in the follow-up tests (0.67 for Test 2 and 0.60 for Test 7) than in Test 1 (0), whereas the agent has higher transition probabilities into reactive behaviour in Test 1 (0.59). These observations imply different characteristics of the agent's behaviours in the lead test and in the follow-up tests.
4. This phenomenon does not hold for Test 4 and its follow-up test (Test 5), in which both tests have higher transition probabilities into reflexive behaviour than into reactive behaviour. The tasks of Test 4 allow the agent to reinforce its initial experience in quadratic programming and to move into reflexive behaviour already within Test 4.

3.2. The Markov analysis of the second-order Markov state diagrams for Test 1

This section presents the second-order Markov analysis of Test 1, to allow a further understanding of the agent's time-series behaviours. The task numbers, the design optimization problem type of each task and the macro behaviour of the system at each task are recorded in Table 7.

Table 7. Tasks and macro behaviours obtained from Test 1 (tasks 1–15, with problem types L Q Q L NL Q NL L L NL Q Q L L L, and the macro behaviour recorded for each task)

The second-order Markov analysis is performed with these time-series data. As indicated in Table 8, the second-order analysis identifies clusters (or chunks) of structures and their relationships over time. However, the causes that drive these behaviour patterns need to be investigated further, together with the problems the agent encountered and the agent's experience.

Table 8. The second-order Markov transition matrix for the agent in Test 1
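A second-order transition matrix of the kind summarized in Table 8 can be estimated by conditioning on the previous two states, as in the sketch below; the macro-behaviour sequence in the example is invented and does not reproduce the recorded Test 1 data.

```python
from collections import defaultdict

def second_order_transitions(sequence):
    """Estimate P(next | previous, current) from an observed sequence.

    The conditioning context is the pair (state at t-1, state at t), so
    recurring three-step chunks of behaviour show up as high-probability rows.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for prev, curr, nxt in zip(sequence, sequence[1:], sequence[2:]):
        counts[(prev, curr)][nxt] += 1
    return {
        context: {nxt: n / sum(successors.values()) for nxt, n in successors.items()}
        for context, successors in counts.items()
    }

# Hypothetical macro-behaviour sequence over tasks (not the recorded Test 1 data).
behaviours = ["construct", "reactive", "reactive", "reflective",
              "reactive", "reactive", "reactive", "reflexive"]
for context, successors in second_order_transitions(behaviours).items():
    print(context, successors)
```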

The patterns of behaviour, sorted according to their probabilities, are presented in Table 9, followed by a discussion of the underlying reasons for these patterns.

Table 9. A detailed discussion of the second-order behaviour patterns in Test 1
  Tasks 4-5-6 (no causal relation inside the pattern): The agent reacts to L in Task 4 and responds according to its experience for NL in Task 5 and Q in Task 6. This pattern is decided by the environment context and by how the agent's experience responds when it is exposed to these contexts.
  Tasks 10-11 (no causal relation): The agent reflects and constructs a new experience in Task 10 in relation to an NL problem, and in Task 11 it uses its previously accumulated experience to react to a Q problem.
  Tasks 1-2-3 (causal relation): The agent learns an L in Task 1 and reacts to a Q in Task 2 with its initial experience. In Task 3, the agent reacts to a Q using its experience grounded in Task 2; the causal relation lies in this pair of reactions.
  Tasks 5-6-7 (causal relation): The agent learns an NL in Task 5 and reacts to a Q in Task 6 using its experience. In Task 7, it uses the NL experience learned in Task 5 to react to an NL problem.
  Tasks 11-12 (causal relation): The agent reacts and reflects on a Q in Task 11 and then uses this grounded experience in reacting to a Q in Task 12.
  Tasks 12-13 (no causal relation): The agent reacts and reflects on a Q in Task 12 and then reacts to an L in Task 13 using previously obtained experience of L; there is no causal relation in this pattern.
  Tasks 6-10 (causal relation): The agent reacts to the problems in Tasks 6 to 10; there is a causal relationship within this run of reactions.

There are patterns that cannot be explained using the second-order Markov analysis, simply because some behaviours are historically dependent and related to the environment contexts in which they are situated.

3.3. The Markov analysis of the agent's behaviour in monotonic tasks

The multidimensionality of the task space, that is, the number of design optimization problem types, introduces complexity into the understanding of a situated agent's behaviours. We cannot deduce causal relationships between macro behaviours when multiple task types are involved in a test. It is therefore necessary to investigate the agent's behaviour in monotonic tasks, meaning a single type of task over time, for example the agent's behaviour on linear programming (L) tasks within a test. Samples are taken from the data of Test 1 (described in Table 5) to create monotonic task data, which are depicted in Table 10.

In a monotonic task, a situated agent's behaviours are much more predictable. As shown in Table 10, the Markov state diagram for the agent in the L task exhibits a regular pattern: the agent learns via knowledge construction and subsequently uses the newly learned experience to react; after a number of exposures to L, the agent responds reflexively due to its highly grounded experience of linear programming. The grounding of an experience in one particular design scenario has the effect of reducing the grounding of the agent's experience in other design scenarios. As shown in Table 10, the agent initially learns new knowledge for the NL design scenario and subsequently reacts to a similar problem in the environment, but it also exhibits high transition probabilities into reflection and knowledge construction. Similar results can be deduced from the agent's behaviour in the Q task: the agent mainly reacts using its original experience, but there are circumstances in which the agent produces a reflection cue due to the un-grounding of its experience.
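Drawing such a monotonic sample amounts to filtering the per-task records by problem type before re-estimating the chain, as in the sketch below; the behaviour labels in the example are invented and are not the recorded Test 1 behaviours.

```python
def monotonic_sample(tasks, behaviours, task_type):
    """Keep the behaviours recorded for one problem type only (e.g. 'L').

    `tasks` and `behaviours` are parallel lists, one entry per design task;
    the returned subsequence can then be fed to a Markov estimator.
    """
    return [b for t, b in zip(tasks, behaviours) if t == task_type]

# Invented per-task macro behaviours paired with the Test 1 task sequence.
tasks = ["L", "Q", "Q", "L", "NL", "Q", "NL", "L", "L", "NL", "Q", "Q", "L", "L", "L"]
behaviours = ["construct", "reactive", "reactive", "reactive", "construct",
              "reactive", "reactive", "reactive", "reactive", "reflective",
              "reactive", "reactive", "reactive", "reflexive", "reflexive"]
print(monotonic_sample(tasks, behaviours, "L"))
```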

Table 10. The Markov state diagrams for the monotonic samples from Test 1
  Task L: The agent learns new knowledge and grounds that knowledge (the strongest transitions have probabilities of 0.80 and 0.20).
  Task NL: The agent learns new knowledge and then reacts based on that knowledge; its reflection leads to new knowledge construction.
  Task Q: The agent reacts based on its initial experience of Q (probability 0.60); there are circumstances in which it performs reflection.

In conclusion, some causal relationships in the first-order and second-order Markov chains are identified. For example, a knowledge construction is certainly followed by a reaction (the probability obtained from the first-order state diagram in Table 6), and a reaction is very likely (with a probability of 0.58) to be followed by another two reactions.

4. Conclusion

This paper uses a Markov chain approach to analyze the behaviours of a situated agent in design optimization experiments. Time-series dependences between the agent's behaviours are disclosed through a number of experiments and the related analyses. As demonstrated by the micro-behaviour analysis, the dependences among the agent's microscopic behaviours reflect how the agent's internal processes respond to what it confronts in its environment. Behaviour patterns and their dependences can be found from the macro-behaviour analysis. Some can be explained by the first-order and second-order Markov analyses; others can be traced back to higher-order relations in time, which result from what, and how, the agent learns from its environments. The Markov analysis of the monotonic samples unveils causal relationships between a situated agent's macro behaviours, which show that a constructive memory behaves as expected and that reasoning moves from reactive and reflective to reflexive as the agent acquires more similar experiences that are increasingly grounded.

These results show that there are structures and mechanisms that produce the agent's state transitions. However, the lack of uniformity in these transitions implies that a situated system is not a stationary system whose behaviours can simply be predicted, and no hidden Markov states can be deduced. The factors behind situated behaviours depend on what has been experienced (the agent's past memories) in response to what is active in the environment at the time the agent constructs a memory. A situated agent is an open, multi-dimensional system that can react, reflect and respond reflexively depending on its internal processes and its interactions with the environment. It can be concluded that situated behaviours are history- and process-dependent, in the sense that the agent's initial experience and the environment context from which the agent processes information shape how a situated agent behaves.

Acknowledgements

This research is supported by a grant from the Australian Research Council, grant number DP.

References

Bartlett, F.C., 1932, reprinted in 1977. Remembering: A Study in Experimental and Social Psychology. Cambridge University Press, Cambridge.
Clancey, W., 1995. A tutorial on situated learning. In: Self, J. (Ed.), Proceedings of the International Conference on Computers and Education (Taiwan). AACE, Charlottesville, VA.
Clancey, W., 1997. Situated Cognition: On Human Knowledge and Computer Representations. Cambridge University Press, Cambridge.
Dewey, J., 1896, reprinted in 1981. The reflex arc concept in psychology. Psychological Review 3.
Gero, J.S., 2006. Understanding situated design computing: Newton, Mach, Einstein and quantum mechanics. In: Intelligent Computing in Engineering and Architecture (to appear).
Gero, J.S., 1999. Constructive memory in design thinking. In: Goldschmidt, G., Porter, W. (Eds.), Design Thinking Research Symposium: Design Representation. MIT, Cambridge, MA.
Gero, J.S., 2003. Design tools as situated agents that adapt to their use. In: Dokonal, W., Hirschberg, U. (Eds.), eCAADe 21. eCAADe, Graz University of Technology.
Gero, J.S., Fujii, H., 2000. A computational framework for concept formation in a situated design agent. Knowledge-Based Systems 13(6).
Gero, J.S., Kannengiesser, U., 2006. A framework for situated design optimization. In: Leeuwen, J.V., Timmermans, H. (Eds.), Innovations in Design Decision Support Systems in Architecture and Urban Planning. Springer, Berlin.
Gero, J.S., Smith, G.J., 2006. A computational framework for concept formation for a situated design agent, Part B: Constructive memory. Working Paper, Key Centre of Design Computing and Cognition, University of Sydney.
Lindblom, J., Ziemke, T., 2002. Social situatedness: Vygotsky and beyond. In: Prince, C.G., Demiris, Y., Marom, Y., Kozima, H., Balkenius, C. (Eds.), Proceedings of the Second International Workshop on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems. Edinburgh, Scotland.
Maher, M.L., Gero, J.S., 2002. Agent models of 3D virtual worlds. In: ACADIA 2002: Thresholds. California State Polytechnic University.
McClelland, J.L., 1981. Retrieving general and specific information from stored knowledge of specifics. In: Proceedings of the Third Annual Meeting of the Cognitive Science Society. Erlbaum, Hillsdale, NJ.

McClelland, J.L., 1995. Constructive memory and memory distortion: A parallel distributed processing approach. In: Schacter, D.L. (Ed.), Memory Distortion: How Minds, Brains, and Societies Reconstruct the Past. Harvard University Press, Cambridge, MA.
Peng, W., 2006. A Design Interaction Tool that Adapts. PhD Thesis, University of Sydney, Sydney.
Peng, W., Gero, J.S., 2006. Concept formation in a design optimization tool. In: Leeuwen, J.V., Timmermans, H. (Eds.), Innovations in Design Decision Support Systems in Architecture and Urban Planning. Springer, Berlin.
Radford, A.D., Gero, J.S., 1988. Design by Optimization in Architecture and Building. Van Nostrand Reinhold, New York.
Siu, N., 1994. Risk assessment for dynamic systems: An overview. Reliability Engineering & System Safety 43(1).
Spears, W.M., 1998. A compression algorithm for probability transition matrices. SIAM Journal on Matrix Analysis and Applications 20(1).
Spears, W.M., 1999. Aggregating models of evolutionary algorithms. In: Proceedings of the Congress on Evolutionary Computation. IEEE Press, Washington, DC.
Vygotsky, L.S., 1978. Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge, MA (original work published 1934).


More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - - Electrophysiological Measurements Psychophysical Measurements Three Approaches to Researching Audition physiology

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction Artificial neural networks are mathematical inventions inspired by observations made in the study of biological systems, though loosely based on the actual biology. An artificial

More information

Dynamic Control Models as State Abstractions

Dynamic Control Models as State Abstractions University of Massachusetts Amherst From the SelectedWorks of Roderic Grupen 998 Dynamic Control Models as State Abstractions Jefferson A. Coelho Roderic Grupen, University of Massachusetts - Amherst Available

More information

Emotion Recognition using a Cauchy Naive Bayes Classifier

Emotion Recognition using a Cauchy Naive Bayes Classifier Emotion Recognition using a Cauchy Naive Bayes Classifier Abstract Recognizing human facial expression and emotion by computer is an interesting and challenging problem. In this paper we propose a method

More information

Lecturer: Rob van der Willigen 11/9/08

Lecturer: Rob van der Willigen 11/9/08 Auditory Perception - Detection versus Discrimination - Localization versus Discrimination - Electrophysiological Measurements - Psychophysical Measurements 1 Three Approaches to Researching Audition physiology

More information

Artificial Intelligence Lecture 7

Artificial Intelligence Lecture 7 Artificial Intelligence Lecture 7 Lecture plan AI in general (ch. 1) Search based AI (ch. 4) search, games, planning, optimization Agents (ch. 8) applied AI techniques in robots, software agents,... Knowledge

More information

Cognitive Maps-Based Student Model

Cognitive Maps-Based Student Model Cognitive Maps-Based Student Model Alejandro Peña 1,2,3, Humberto Sossa 3, Agustín Gutiérrez 3 WOLNM 1, UPIICSA 2 & CIC 3 - National Polytechnic Institute 2,3, Mexico 31 Julio 1859, # 1099-B, Leyes Reforma,

More information

High-level Vision. Bernd Neumann Slides for the course in WS 2004/05. Faculty of Informatics Hamburg University Germany

High-level Vision. Bernd Neumann Slides for the course in WS 2004/05. Faculty of Informatics Hamburg University Germany High-level Vision Bernd Neumann Slides for the course in WS 2004/05 Faculty of Informatics Hamburg University Germany neumann@informatik.uni-hamburg.de http://kogs-www.informatik.uni-hamburg.de 1 Contents

More information

Design Methodology. 4th year 1 nd Semester. M.S.C. Madyan Rashan. Room No Academic Year

Design Methodology. 4th year 1 nd Semester. M.S.C. Madyan Rashan. Room No Academic Year College of Engineering Department of Interior Design Design Methodology 4th year 1 nd Semester M.S.C. Madyan Rashan Room No. 313 Academic Year 2018-2019 Course Name Course Code INDS 315 Lecturer in Charge

More information

The Advantages of Evolving Perceptual Cues

The Advantages of Evolving Perceptual Cues The Advantages of Evolving Perceptual Cues Ian Macinnes and Ezequiel Di Paolo Centre for Computational Neuroscience and Robotics, John Maynard Smith Building, University of Sussex, Falmer, Brighton, BN1

More information

Oscillatory Neural Network for Image Segmentation with Biased Competition for Attention

Oscillatory Neural Network for Image Segmentation with Biased Competition for Attention Oscillatory Neural Network for Image Segmentation with Biased Competition for Attention Tapani Raiko and Harri Valpola School of Science and Technology Aalto University (formerly Helsinki University of

More information

PART - A 1. Define Artificial Intelligence formulated by Haugeland. The exciting new effort to make computers think machines with minds in the full and literal sense. 2. Define Artificial Intelligence

More information

Dynamics of Color Category Formation and Boundaries

Dynamics of Color Category Formation and Boundaries Dynamics of Color Category Formation and Boundaries Stephanie Huette* Department of Psychology, University of Memphis, Memphis, TN Definition Dynamics of color boundaries is broadly the area that characterizes

More information

Learning and Adaptive Behavior, Part II

Learning and Adaptive Behavior, Part II Learning and Adaptive Behavior, Part II April 12, 2007 The man who sets out to carry a cat by its tail learns something that will always be useful and which will never grow dim or doubtful. -- Mark Twain

More information

Artificial Psychology Revisited: Constructs for Modeling Artificial Emotions

Artificial Psychology Revisited: Constructs for Modeling Artificial Emotions Int'l Conf. Artificial Intelligence ICAI'15 421 Artificial Psychology Revisited: Constructs for Modeling Artificial Emotions James A. Crowder, John N. Carbone, Shelli Friess Raytheon Intelligence, Information,

More information

Sparse Coding in Sparse Winner Networks

Sparse Coding in Sparse Winner Networks Sparse Coding in Sparse Winner Networks Janusz A. Starzyk 1, Yinyin Liu 1, David Vogel 2 1 School of Electrical Engineering & Computer Science Ohio University, Athens, OH 45701 {starzyk, yliu}@bobcat.ent.ohiou.edu

More information

General recognition theory of categorization: A MATLAB toolbox

General recognition theory of categorization: A MATLAB toolbox ehavior Research Methods 26, 38 (4), 579-583 General recognition theory of categorization: MTL toolbox LEOL. LFONSO-REESE San Diego State University, San Diego, California General recognition theory (GRT)

More information

Using Diverse Cognitive Mechanisms for Action Modeling

Using Diverse Cognitive Mechanisms for Action Modeling Using Diverse Cognitive Mechanisms for Action Modeling John E. Laird (laird@umich.edu) Joseph Z. Xu (jzxu@umich.edu) Samuel Wintermute (swinterm@umich.edu) University of Michigan, 2260 Hayward Street Ann

More information

Dr. Braj Bhushan, Dept. of HSS, IIT Guwahati, INDIA

Dr. Braj Bhushan, Dept. of HSS, IIT Guwahati, INDIA 1 Cognition The word Cognitive or Cognition has been derived from Latin word cognoscere meaning to know or have knowledge of. Although Psychology has existed over past 100 years as an independent discipline,

More information

PS3021, PS3022, PS4040

PS3021, PS3022, PS4040 School of Psychology Important Degree Information: B.Sc./M.A. Honours The general requirements are 480 credits over a period of normally 4 years (and not more than 5 years) or part-time equivalent; the

More information

Complementarity and the Relation Between Psychological and Neurophysiological Phenomena

Complementarity and the Relation Between Psychological and Neurophysiological Phenomena the Relation Between Psychological and Neurophysiological Phenomena Douglas M. Snyder Berkeley, California ABSTRACT In their recent article, Kirsch and Hyland questioned the relation between psychological

More information

Interpretation in design: Modelling how the situation changes during design activity

Interpretation in design: Modelling how the situation changes during design activity Interpretation in design: Modelling how the situation changes during design activity Nick Kelly Australian Digital Futures Institute, University of Southern Queensland, Australia nick.kelly@usq.edu.au

More information

Development of Concept of Transitivity in Pre-Operational Stage Children

Development of Concept of Transitivity in Pre-Operational Stage Children Development of Concept of Transitivity in Pre-Operational Stage Children Chaitanya Ahuja Electrical Engineering Pranjal Gupta Civil Engineering Metored by: Dr. Amitabha Mukherjee Department of Computer

More information

Pavlovian, Skinner and other behaviourists contribution to AI

Pavlovian, Skinner and other behaviourists contribution to AI Pavlovian, Skinner and other behaviourists contribution to AI Witold KOSIŃSKI Dominika ZACZEK-CHRZANOWSKA Polish Japanese Institute of Information Technology, Research Center Polsko Japońska Wyższa Szko

More information

A model of parallel time estimation

A model of parallel time estimation A model of parallel time estimation Hedderik van Rijn 1 and Niels Taatgen 1,2 1 Department of Artificial Intelligence, University of Groningen Grote Kruisstraat 2/1, 9712 TS Groningen 2 Department of Psychology,

More information

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives.

AGENT-BASED SYSTEMS. What is an agent? ROBOTICS AND AUTONOMOUS SYSTEMS. Today. that environment in order to meet its delegated objectives. ROBOTICS AND AUTONOMOUS SYSTEMS Simon Parsons Department of Computer Science University of Liverpool LECTURE 16 comp329-2013-parsons-lect16 2/44 Today We will start on the second part of the course Autonomous

More information

Framework for Comparative Research on Relational Information Displays

Framework for Comparative Research on Relational Information Displays Framework for Comparative Research on Relational Information Displays Sung Park and Richard Catrambone 2 School of Psychology & Graphics, Visualization, and Usability Center (GVU) Georgia Institute of

More information

WP 7: Emotion in Cognition and Action

WP 7: Emotion in Cognition and Action WP 7: Emotion in Cognition and Action Lola Cañamero, UH 2 nd Plenary, May 24-27 2005, Newcastle WP7: The context Emotion in cognition & action in multi-modal interfaces? Emotion-oriented systems adapted

More information

M.Sc. in Cognitive Systems. Model Curriculum

M.Sc. in Cognitive Systems. Model Curriculum M.Sc. in Cognitive Systems Model Curriculum April 2014 Version 1.0 School of Informatics University of Skövde Sweden Contents 1 CORE COURSES...1 2 ELECTIVE COURSES...1 3 OUTLINE COURSE SYLLABI...2 Page

More information

PSYC 441 Cognitive Psychology II

PSYC 441 Cognitive Psychology II PSYC 441 Cognitive Psychology II Session 3 Paradigms and Research Methods in Cognitive Psychology Lecturer: Dr. Benjamin Amponsah, Dept., of Psychology, UG, Legon Contact Information: bamponsah@ug.edu.gh

More information

Implementation of Perception Classification based on BDI Model using Bayesian Classifier

Implementation of Perception Classification based on BDI Model using Bayesian Classifier Implementation of Perception Classification based on BDI Model using Bayesian Classifier Vishwanath Y 1 Murali T S 2 Dr M.V Vijayakumar 3 1 Research Scholar, Dept. of Computer Science & Engineering, Jain

More information

Visual Selection and Attention

Visual Selection and Attention Visual Selection and Attention Retrieve Information Select what to observe No time to focus on every object Overt Selections Performed by eye movements Covert Selections Performed by visual attention 2

More information

Abstract. 2. Metacognition Architecture. 1. Introduction

Abstract. 2. Metacognition Architecture. 1. Introduction Design of Metacontrol and Metacognition mechanisms in SMCA Using Norms and Affect rules Venkatamuni Vijaya Kumar, Darryl. N. Davis, and K.R. Shylaja New Horizon college of Engineering, DR.AIT, Banagalore,

More information

From Pixels to People: A Model of Familiar Face Recognition by Burton, Bruce and Hancock. Presented by Tuneesh K Lella

From Pixels to People: A Model of Familiar Face Recognition by Burton, Bruce and Hancock. Presented by Tuneesh K Lella From Pixels to People: A Model of Familiar Face Recognition by Burton, Bruce and Hancock Presented by Tuneesh K Lella Agenda Motivation IAC model Front-End to the IAC model Combination model Testing the

More information

Analogy-Making in Children: The Importance of Processing Constraints

Analogy-Making in Children: The Importance of Processing Constraints Analogy-Making in Children: The Importance of Processing Constraints Jean-Pierre Thibaut (jean-pierre.thibaut@univ-poitiers.fr) University of Poitiers, CeRCA, CNRS UMR 634, 99 avenue du recteur Pineau

More information

A Neural Network Architecture for.

A Neural Network Architecture for. A Neural Network Architecture for Self-Organization of Object Understanding D. Heinke, H.-M. Gross Technical University of Ilmenau, Division of Neuroinformatics 98684 Ilmenau, Germany e-mail: dietmar@informatik.tu-ilmenau.de

More information

A HMM-based Pre-training Approach for Sequential Data

A HMM-based Pre-training Approach for Sequential Data A HMM-based Pre-training Approach for Sequential Data Luca Pasa 1, Alberto Testolin 2, Alessandro Sperduti 1 1- Department of Mathematics 2- Department of Developmental Psychology and Socialisation University

More information

Knowledge Based Systems

Knowledge Based Systems Knowledge Based Systems Human Expert A human expert is a specialist for a specific differentiated application field who creates solutions to customer problems in this respective field and supports them

More information

Empirical Validation in Agent-Based Models

Empirical Validation in Agent-Based Models Empirical Validation in Agent-Based Models Giorgio Fagiolo Sant Anna School of Advanced Studies, Pisa (Italy) giorgio.fagiolo@sssup.it https://mail.sssup.it/~fagiolo Max-Planck-Institute of Economics Jena,

More information

Study on perceptually-based fitting line-segments

Study on perceptually-based fitting line-segments Regeo. Geometric Reconstruction Group www.regeo.uji.es Technical Reports. Ref. 08/2014 Study on perceptually-based fitting line-segments Raquel Plumed, Pedro Company, Peter A.C. Varley Department of Mechanical

More information

SOCIOLOGICAL RESEARCH

SOCIOLOGICAL RESEARCH SOCIOLOGICAL RESEARCH SOCIOLOGY Is a scientific discipline rooted in Positivism As such it makes use of a number of scientific techniques Including: The experimental method, the survey and questionnaire

More information

The interplay of domain-specific and domain general processes, skills and abilities in the development of science knowledge

The interplay of domain-specific and domain general processes, skills and abilities in the development of science knowledge The interplay of domain-specific and domain general processes, skills and abilities in the development of science knowledge Stella Vosniadou Strategic Professor in Education The Flinders University of

More information

Evolutionary Approach to Investigations of Cognitive Systems

Evolutionary Approach to Investigations of Cognitive Systems Evolutionary Approach to Investigations of Cognitive Systems Vladimir RED KO a,1 b and Anton KOVAL a Scientific Research Institute for System Analysis, Russian Academy of Science, Russia b National Nuclear

More information

Using Inverse Planning and Theory of Mind for Social Goal Inference

Using Inverse Planning and Theory of Mind for Social Goal Inference Using Inverse Planning and Theory of Mind for Social Goal Inference Sean Tauber (sean.tauber@uci.edu) Mark Steyvers (mark.steyvers@uci.edu) Department of Cognitive Sciences, University of California, Irvine

More information

EMOTIONAL LEARNING. Synonyms. Definition

EMOTIONAL LEARNING. Synonyms. Definition EMOTIONAL LEARNING Claude Frasson and Alicia Heraz Department of Computer Science, University of Montreal Montreal (Québec) Canada {frasson,heraz}@umontreal.ca Synonyms Affective Learning, Emotional Intelligence,

More information

Agents. Environments Multi-agent systems. January 18th, Agents

Agents. Environments Multi-agent systems. January 18th, Agents Plan for the 2nd hour What is an agent? EDA132: Applied Artificial Intelligence (Chapter 2 of AIMA) PEAS (Performance measure, Environment, Actuators, Sensors) Agent architectures. Jacek Malec Dept. of

More information

CREATIVE INFERENCE IN IMAGERY AND INVENTION. Dept. of Psychology, Texas A & M University. College Station, TX , USA ABSTRACT

CREATIVE INFERENCE IN IMAGERY AND INVENTION. Dept. of Psychology, Texas A & M University. College Station, TX , USA ABSTRACT From: AAAI Technical Report SS-92-02. Compilation copyright 1992, AAAI (www.aaai.org). All rights reserved. CREATIVE INFERENCE IN IMAGERY AND INVENTION Ronald A. Finke Dept. of Psychology, Texas A & M

More information

Measuring Focused Attention Using Fixation Inner-Density

Measuring Focused Attention Using Fixation Inner-Density Measuring Focused Attention Using Fixation Inner-Density Wen Liu, Mina Shojaeizadeh, Soussan Djamasbi, Andrew C. Trapp User Experience & Decision Making Research Laboratory, Worcester Polytechnic Institute

More information

A Matrix of Material Representation

A Matrix of Material Representation A Matrix of Material Representation Hengfeng Zuo a, Mark Jones b, Tony Hope a, a Design and Advanced Technology Research Centre, Southampton Institute, UK b Product Design Group, Faculty of Technology,

More information

Bill Wilson. Categorizing Cognition: Toward Conceptual Coherence in the Foundations of Psychology

Bill Wilson. Categorizing Cognition: Toward Conceptual Coherence in the Foundations of Psychology Categorizing Cognition: Toward Conceptual Coherence in the Foundations of Psychology Halford, G.S., Wilson, W.H., Andrews, G., & Phillips, S. (2014). Cambridge, MA: MIT Press http://mitpress.mit.edu/books/categorizing-cognition

More information

Recognizing Scenes by Simulating Implied Social Interaction Networks

Recognizing Scenes by Simulating Implied Social Interaction Networks Recognizing Scenes by Simulating Implied Social Interaction Networks MaryAnne Fields and Craig Lennon Army Research Laboratory, Aberdeen, MD, USA Christian Lebiere and Michael Martin Carnegie Mellon University,

More information

Rethinking Cognitive Architecture!

Rethinking Cognitive Architecture! Rethinking Cognitive Architecture! Reconciling Uniformity and Diversity via Graphical Models! Paul Rosenbloom!!! 1/25/2010! Department of Computer Science &! Institute for Creative Technologies! The projects

More information

Predictive Disturbance Management in Manufacturing Control Systems

Predictive Disturbance Management in Manufacturing Control Systems Predictive Disturbance Management in Manufacturing Control Systems Paulo Leitão 1, Francisco Restivo 2 1 Polytechnic Institute of Bragança, Quinta Sta Apolónia, Apartado 134, 5301-857 Bragança, Portugal,

More information

Towards a Computational Model of Perception and Action in Human Computer Interaction

Towards a Computational Model of Perception and Action in Human Computer Interaction Towards a Computational Model of Perception and Action in Human Computer Interaction Pascal Haazebroek and Bernhard Hommel Cognitive Psychology Unit & Leiden Institute for Brain and Cognition Wassenaarseweg

More information

to Cues Present at Test

to Cues Present at Test 1st: Matching Cues Present at Study to Cues Present at Test 2nd: Introduction to Consolidation Psychology 355: Cognitive Psychology Instructor: John Miyamoto 05/03/2018: Lecture 06-4 Note: This Powerpoint

More information

An Efficient Hybrid Rule Based Inference Engine with Explanation Capability

An Efficient Hybrid Rule Based Inference Engine with Explanation Capability To be published in the Proceedings of the 14th International FLAIRS Conference, Key West, Florida, May 2001. An Efficient Hybrid Rule Based Inference Engine with Explanation Capability Ioannis Hatzilygeroudis,

More information

Artificial Cognitive Systems

Artificial Cognitive Systems Artificial Cognitive Systems David Vernon Carnegie Mellon University Africa vernon@cmu.edu www.vernon.eu Artificial Cognitive Systems 1 Carnegie Mellon University Africa Lecture 2 Paradigms of Cognitive

More information

Modeling and Environmental Science: In Conclusion

Modeling and Environmental Science: In Conclusion Modeling and Environmental Science: In Conclusion Environmental Science It sounds like a modern idea, but if you view it broadly, it s a very old idea: Our ancestors survival depended on their knowledge

More information

Study on perceptually-based fitting elliptic arcs

Study on perceptually-based fitting elliptic arcs Regeo. Geometric Reconstruction Group www.regeo.uji.es Technical Reports. Ref. 09/2015 Study on perceptually-based fitting elliptic arcs Pedro Company, Raquel Plumed, Peter A.C. Varley Department of Mechanical

More information

Dynamic Social Simulation with Multi-Agents having Internal Dynamics

Dynamic Social Simulation with Multi-Agents having Internal Dynamics New Frontiers in Artificial Intelligence: Joint Proceeding of the 7th and 8th A.Sakurai (Ed.) Springer (27) Dynamic Social Simulation with Multi-Agents having Internal Dynamics Takashi Sato and Takashi

More information

Design the Flexibility, Maintain the Stability of Conceptual Schemas

Design the Flexibility, Maintain the Stability of Conceptual Schemas Design the Flexibility, Maintain the Stability of Conceptual Schemas Lex Wedemeijer 1 ABP Netherlands, Department of Information Management, P.O.Box 4476, NL-6401 CZ, Heerlen, The Netherlands L.Wedemeijer@ABP.NL

More information

Do you have to look where you go? Gaze behaviour during spatial decision making

Do you have to look where you go? Gaze behaviour during spatial decision making Do you have to look where you go? Gaze behaviour during spatial decision making Jan M. Wiener (jwiener@bournemouth.ac.uk) Department of Psychology, Bournemouth University Poole, BH12 5BB, UK Olivier De

More information

Critical Thinking Assessment at MCC. How are we doing?

Critical Thinking Assessment at MCC. How are we doing? Critical Thinking Assessment at MCC How are we doing? Prepared by Maura McCool, M.S. Office of Research, Evaluation and Assessment Metropolitan Community Colleges Fall 2003 1 General Education Assessment

More information