Perceptual Anchoring with Indefinite Descriptions

Silvia Coradeschi and Alessandro Saffiotti
Center for Applied Autonomous Sensor Systems, Örebro University, Örebro, Sweden

Abstract

Anchoring is the problem of how to connect, inside an artificial system, the symbol-level and signal-level representations of the same physical object. In most previous work on anchoring, symbol-level representations were meant to denote one specific object, like "the red pen p22". These are also called definite descriptions. In this paper, we study anchoring in the case of indefinite descriptions, like "a red pen". A key point of our study is that anchoring with an indefinite description involves, in general, the selection of one object among several perceived objects that satisfy that description. We analyze several strategies to perform object selection, and compare them with the problem of action selection in autonomous embedded agents.

1 Introduction

Consider an embodied autonomous system that incorporates a symbolic reasoning process. By embodied we mean here that the system is embedded in the physical world, and that it incorporates sensors and actuators to interact with the world. An autonomous robot is an example of such a system. The symbolic process in the system uses symbols to denote objects in the world that are relevant to the performance of the system's tasks: for instance, it may use the symbol cup22 to denote a specific cup to be grasped. By virtue of its embodiment, this system must also incorporate a sensori-motoric process that observes and manipulates physical objects in the world. In order to consistently perform the intended tasks, the system must make sure that the symbolic process and the sensori-motoric process talk about the same physical objects. In our example, it must make sure that the symbol cup22 is correctly linked to the sensor data in the sensori-motoric process that originate from observing the intended cup (see Fig. 1). We call anchoring the problem of establishing and maintaining this link [Saffiotti, 1994; Coradeschi and Saffiotti, 2000].

[Figure 1: The anchoring problem: connecting symbols and sensor data that refer to the same physical objects.]

The anchoring problem must be addressed in any physically embedded system that comprises a symbolic reasoning component. An interest has recently appeared in the fields of robotics and AI toward a general theory of anchoring, which would advance our ability to build intelligent embedded systems and to transfer techniques across different systems. This interest gave rise to a symposium and a special issue [Coradeschi and Saffiotti, 2001; 2003]. In much of the previous work, the focus is on anchoring individual symbols meant to denote one specific object, like cup22. In many robot tasks, however, we are not interested in a specific object but just in an arbitrary one that has some properties that make it the right one for the task. For instance, in order to bring a coffee to a person, a robot needs to find a cup, but it does not matter which one. Object references of the kind "a cup" are known as indefinite descriptions in the philosophical and linguistic tradition [Russell, 1905].
Anchoring an individual symbol based on an indefinite description, like "a cup", is a challenging problem which, to our knowledge, has received little attention in the AI and robotics literature. One of the main challenges is that the anchoring process must select which object, among several potential candidates, should be used. This selection might later be revised if another suitable object appears which is preferred according to some criterion. For instance, the robot can anchor to a cup seen on a faraway table, but it may see another cup nearby while going to that table, and hence re-anchor to this more convenient cup.

In this paper, we investigate the problem of anchoring in the presence of indefinite descriptions. We examine different anchoring strategies, each based on a different trade-off between commitment to a previous choice and reactivity to new information. This examination will bring about a few intriguing similarities between the problem of object selection and the one of action selection. To make our discussion concrete, we shall ground the examination on an example using a Sony AIBO robot.

2 A computational model of anchoring

In [Coradeschi and Saffiotti, 2000] the authors have proposed a computational model of anchoring in which the intelligent system is assumed to comprise a symbol system and a perceptual system. The symbol system manipulates individual symbols, like x and cup22, which are meant to denote physical objects. It also associates each individual symbol with a set of symbolic predicates, like red, that assert properties of the corresponding object. The perceptual system generates percepts [1], like a region in an image, from the observation of physical objects. It also associates each percept with the observed values of a set of measurable attributes, like the average RGB values of a region. The model further assumes that a predicate grounding relation is given, which encodes the correspondence between predicate symbols and admissible values of observable attributes. The task of anchoring is to use this relation to connect individual symbols in the symbol system and percepts in the perceptual system. For instance, suppose that red is predicated of the symbol cup22, and that the RGB values of a given region in an image are compatible with the predicate red according to the grounding relation. Then that region could be anchored to the symbol cup22. No assumption is made about the origin of the grounding relation: it can be hand-coded by the designer, learnt from samples, or other.

The correspondence between symbols and percepts is reified in a data structure called anchor. The anchor contains a signature that gives the current best estimate of the observable properties of the object, which may be used both for acting on the object (e.g., its position) and for re-identifying it later on. The anchoring process is responsible for creating and maintaining anchors. This process takes as input the individual symbols provided by the symbol system, together with a set of predicates that qualify them, and the percepts provided by the perceptual system, together with the corresponding values of attributes. The set of predicates that are associated to an individual symbol is called the symbolic descriptor for that symbol.

The anchoring process is defined by three abstract functionalities: Find, Track, and Reacquire. The Find functionality corresponds to the initial creation of an anchor for an individual symbol, given a symbolic descriptor provided by the symbol system. This functionality selects a percept from the perceptual stream, using the predicate grounding relation to match predicates in the descriptor to observed attribute values.

[1] We take here a percept to be a structured collection of measurements that are assumed to originate from the same physical object. The problem of how to generate percepts that are in one-to-one correspondence with objects is a hard problem, which lies outside the scope of this paper.
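To make the model concrete, the following minimal sketch (ours, not the authors' implementation) shows a hand-coded predicate grounding relation and the matching step at the core of Find. The percept format, predicate names, and RGB thresholds are illustrative assumptions.

```python
# A percept is a dict of measured attribute values; a symbolic descriptor
# is a set of predicate names; the grounding relation maps each predicate
# to a test on attribute values. All names and thresholds are illustrative.

def is_red(attrs):
    """Hand-coded grounding of the predicate 'red' on RGB attributes."""
    r, g, b = attrs["rgb"]
    return r > 150 and g < 100 and b < 100

def is_cup(attrs):
    """Grounding of 'cup' on a hypothetical shape-classifier output."""
    return attrs["shape"] == "cup"

GROUNDING = {"red": is_red, "cup": is_cup}  # the predicate grounding relation

def matches(descriptor, percept):
    """True if the percept's attributes satisfy every predicate in the
    symbolic descriptor, according to the grounding relation."""
    return all(GROUNDING[pred](percept) for pred in descriptor)

def find(descriptor, percepts):
    """Find: create an anchor from a matching percept, if any. The anchor
    stores the descriptor and a signature copied from the percept."""
    for p in percepts:
        if matches(descriptor, p):
            return {"descriptor": descriptor, "signature": dict(p)}
    return None

percepts = [{"rgb": (200, 60, 40), "shape": "cup", "position": (1.0, 2.0)}]
anchor = find({"red", "cup"}, percepts)  # could anchor cup22 to this percept
```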
Once an anchor has been created, it must be continuously updated to account for changes in the object's attributes, e.g., its position. This is done by the Track functionality, using a combination of prediction and new observations. Prediction is used to make sure that the new percepts used to update the anchor are compatible with the previous observations, that is, that we are still tracking the same object. Moreover, comparison with the symbolic descriptor is used to make sure that the updated anchor still satisfies the predicates, that is, that the object still has the properties that make it the right one from the point of view of the symbol system. The Track functionality assumes that the object is kept under constant observation.

The Reacquire functionality reestablishes an anchor for an object that has not been observed for some time. For instance, every morning I tell my robot to go and pick up my cup. The robot knows what my cup looks like and where it has seen it last time, and it can use this information to find it again. The Reacquire functionality can be considered a combination of Find and Track: it is similar to a Find, with the addition that information from previous observations can also be used, as in the Track functionality. A more detailed description of these functionalities, together with a few examples of their use, can be found in [Coradeschi and Saffiotti, 2000].

3 The challenge of indefinite descriptions

In practice, the problem of anchoring is the problem of connecting a given symbol with the right percepts in the perceptual stream, by comparing the observed attributes of the percepts with the symbolic properties predicated of that symbol. There are, however, several ways to use predicates to denote objects [Russell, 1905]. Given a predicate φ, we can have descriptions as different as "the x such that φ(x)", "an x such that φ(x)", "all x such that φ(x)", and so on. The first one is commonly referred to as a definite description, and it is intended to denote one specific individual. The second one is commonly referred to as an indefinite description, and it is intended to denote an arbitrary individual in the class of φ-objects. The third one is intended to denote all the individuals in the class of φ-objects. As we shall see, the anchoring problem may present different aspects in these different cases. [2]

Previous applications of the model of anchoring described above have been mainly concerned with definite descriptions. In these applications the robot needed to perform an action with respect to a specific object denoted by an individual constant, like Cross(door-31). However, indefinite descriptions also play an important role in the execution of tasks by robotic systems. Many tasks require an object that has a number of properties, for instance a red pen, so it can be used for a specific action, for instance to correct the exams, but it does not matter which particular red pen is used. A distinctive feature of indefinite descriptions is that they intrinsically incorporate a form of non-determinism: "a red pen" can denote any individual in the class of red pens. (Note, however, that it denotes an individual, and not the entire class.)

[2] We assume here that the symbol system can let the anchoring process know which type of description is meant.
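Since footnote [2] assumes the symbol system tells the anchoring process which kind of description is meant, one minimal way to represent that distinction is sketched below; the type and field names are our own invention.

```python
from dataclasses import dataclass
from enum import Enum

class Reference(Enum):
    DEFINITE = "the x such that phi(x)"    # meant to denote one specific object
    INDEFINITE = "an x such that phi(x)"   # an arbitrary individual in the class
    UNIVERSAL = "all x such that phi(x)"   # every individual in the class

@dataclass(frozen=True)
class Description:
    predicates: frozenset  # the predicates phi, e.g. frozenset({"red", "pen"})
    kind: Reference

a_red_pen = Description(frozenset({"red", "pen"}), Reference.INDEFINITE)
```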

Anchoring the symbol "a red pen", then, implicitly entails a choice: if several red pens are perceived, one of them should be selected and anchored to the symbol. Any of the matching objects can be chosen, although the choice can lead to different developments of the world later in time. We call object selection the problem of selecting which percept (hence, which object) to anchor to a symbol in the presence of an indefinite description.

It is important to note that object selection does not, in general, need to be performed when dealing with a definite description. Since a definite description is meant to denote a unique object, the fact that several objects match the description constitutes an exceptional situation. This ambiguity should be resolved in some way, for instance by performing informative actions to better discriminate the objects, or by asking the symbolic system to provide additional properties. In the case of an indefinite description, by contrast, object selection enters as an essential component of the anchoring problem.

The next two questions, then, are: how should object selection be performed, and when. As for the first question, in principle any object that satisfies the given description could be non-deterministically selected. In practical applications, however, it is often useful to provide the anchoring process with a preference criterion. Consider again the red pen example. If you see a pen nearby and one on a table further away, it would be more convenient to select the closer pen in order to simplify the action. As for the second question, when object selection is performed is determined by the overall structure of the anchoring process, which decides which functionality should be called at each point in time in order to create and maintain the anchor. This question, therefore, translates to considering the role of object selection in the three functionalities of anchoring defined above.

The Find functionality obviously needs to perform object selection in the case of multiple matching percepts, possibly using a given preference criterion. The preference criterion could be built-in for a specific application, or it could be provided by the symbol system together with the description. The Find functionality is thus the first locus of the selection of the object to be used to anchor the symbol. In the case of a definite description, object selection should be replaced by an exception handling mechanism, as discussed above.

The Track functionality is the same in the case of a definite or indefinite description. This functionality is meant to keep under constant observation a specific object, and therefore it has an implicit definite description: "the object that is being tracked". As a consequence, this functionality never performs object selection. In a sense, the Track functionality maintains a commitment to a specific object.

The Reacquire functionality is used to re-establish an anchor. This functionality has access both to the symbolic descriptor and to the values of the attributes of the last anchored object, stored in the anchor's signature. Reacquire can consider both sources of information to perform a selection between the objects that can be used to re-anchor the symbol, including the one that was last used. As for Find, object selection can be informed by a preference criterion.
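Find is thus the first place where a preference criterion can act. A hedged sketch of Find with such a criterion is given below; the closest-object preference follows the red pen example, while the attribute names and the injected matches test (as in the earlier sketch) are assumptions of ours.

```python
import math

def distance(percept, robot_pos=(0.0, 0.0)):
    """Preference criterion: prefer the closest matching object, using the
    percept's estimated position (attribute name is illustrative)."""
    px, py = percept["position"]
    rx, ry = robot_pos
    return math.hypot(px - rx, py - ry)

def find(descriptor, percepts, matches, prefer=distance):
    """Object selection inside Find: among all percepts that satisfy the
    descriptor, select the one ranked best by the preference criterion."""
    candidates = [p for p in percepts if matches(descriptor, p)]
    if not candidates:
        return None
    best = min(candidates, key=prefer)
    return {"descriptor": descriptor, "signature": dict(best)}
```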
Different object selection strategies can assign different relative weights to these two sources of information, leading to different behaviors of the Reacquire functionality. At one extreme, object selection only uses the information stored in the signature and ignores the symbolic descriptor: as a consequence, Reacquire would re-anchor the symbol to the object that was previously used, even if this object does not satisfy the descriptor any more or if a better object is perceived. At the other extreme, object selection only uses the information in the symbolic descriptor: as a consequence, Reacquire would re-anchor the symbol to an object that satisfies the descriptor, no matter whether or not this object is the same one that was used before. Most useful strategies probably lie somewhere between these two extremes.

The two extreme strategies above correspond to two different intuitive interpretations of the Reacquire functionality: reacquire the previously used object, or reacquire an object denoted by the indefinite description. Interestingly, the difference between these two interpretations does not appear in the case of a definite description: in that case, the previously used object is the object denoted by the description, since a definite description is meant to denote exactly one object. These concepts need to be distinguished and separated, however, when dealing with an indefinite description.

4 Strategies for object selection

In order to further explore the problem of object selection in anchoring with an indefinite description, we consider a concrete example in which several objects can be perceived that match the given description. The example is inspired by one of the challenges presented at RoboCup 2002 in the Sony 4-legged robot league. A 4-legged robot is in a soccer field, and 10 identical orange balls are placed in the field (see Fig. 2). The robot uses a color camera and a color segmentation algorithm to perceive the balls [Wasik and Saffiotti, 2002]. The goal is to score all the balls.

[Figure 2: Anchoring an orange ball among many.]

With respect to anchoring, the problem can be described as follows. The robot has an indefinite description of a ball, "an x such that ball(x) and orange(x)", and must execute the action of scoring this ball, Score(x). Any of the 10 balls is suitable for this action, although it would be more convenient for the robot to approach the closest ball instead of moving toward a randomly selected ball. The task of the anchoring process is to anchor the symbol x to one of the balls, and maintain the anchor during the execution of the action.
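Before walking through the strategies of this section, the spectrum of Reacquire behaviors just described can be made concrete with a single weight: at w = 1 the selection uses only the signature (reacquire the previously used object), and at w = 0 only the descriptor (reacquire any object denoted by the description). This scoring scheme is our illustrative formulation, not a mechanism from the original model.

```python
import math

def similarity(signature, percept):
    """Crude signature-to-percept similarity based only on position
    (illustrative; a real system would compare more attributes)."""
    sx, sy = signature["position"]
    px, py = percept["position"]
    return 1.0 / (1.0 + math.hypot(px - sx, py - sy))

def reacquire(anchor, percepts, matches, w=0.5):
    """Blend the two information sources: signature similarity with weight
    w, descriptor match with weight 1 - w. w=1.0 and w=0.0 recover the
    two extreme strategies discussed in the text."""
    if not percepts:
        return anchor
    def score(p):
        desc_ok = 1.0 if matches(anchor["descriptor"], p) else 0.0
        return w * similarity(anchor["signature"], p) + (1.0 - w) * desc_ok
    best = max(percepts, key=score)
    anchor["signature"] = dict(best)  # re-anchor to the selected percept
    return anchor
```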

The robot will act with respect to the selected ball, since the robot's motion and kicking routines use the properties in the anchor's signature as input, e.g., the ball position. Once a ball has been selected, the robot should maintain a certain degree of commitment to it in order not to oscillate between different balls. However, the robot should also in some cases release this commitment and perform a new object selection, for instance if the current ball is scored or if another ball in a better position appears.

There are two elements which determine the balance between commitment to the current object and selection of a possibly new object. The first one is the strategy used by the anchoring process to decide when to call the Track functionality, which tracks the current object, and when to call the Reacquire functionality, which can perform object selection. The second element is the object selection strategy used in Reacquire. Our goal is to analyze how different balances between commitment and re-selection lead to different anchoring behaviors. To do so, we fix the first element (the anchoring process) and vary the second one.

The anchoring process that we use here is as follows. Initially, an anchor is created for the symbol by calling the Find functionality. Then, the Track functionality is used to update the anchor during the performance of the action. The Track is interrupted when one of the following conditions occurs: (1) the ball is scored, (2) the ball is not visible any more, or (3) several ball percepts are present in the image. In the first case, the anchor is deleted and Find is called again to create a new one. This corresponds to initiating a new Score action. In the second and third case, the Reacquire functionality is called to update the existing anchor. In the following we consider three examples in which the Reacquire functionality uses three different strategies for object selection: a static strategy, a dynamic strategy, and a hybrid strategy.
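The control structure shared by the three examples can be sketched as a loop; perception, motion, and the strategy-specific functionalities are passed in as stubs, and the is_ball flag on percepts is an assumption of ours. Only the switching logic among Find, Track, and Reacquire follows the text.

```python
def anchoring_loop(descriptor, perceive, find, track, reacquire, scored):
    """Skeleton of the anchoring process used in all three examples
    (runs until externally stopped; termination is elided)."""
    anchor = None
    while True:
        percepts = perceive()
        balls = [p for p in percepts if p.get("is_ball")]
        if anchor is None:
            anchor = find(descriptor, percepts)   # initial object selection
        elif scored(anchor):                      # (1) ball scored: delete the
            anchor = None                         #     anchor; Find runs next cycle
        elif len(balls) != 1:                     # (2) ball lost, or (3) several
            anchor = reacquire(anchor, percepts)  #     ball percepts: may re-select
        else:
            track(anchor, balls[0])               # one visible ball: keep tracking
```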
4.1 Static object selection

In the first strategy that we consider, the anchoring process selects a ball when the scoring action is started, and it remains committed to that ball until the action is completed. To obtain this behavior, the Reacquire functionality uses an object selection strategy that only considers the values in the anchor's signature, which reflect the observable properties of the previously anchored object. Reacquire will therefore ignore any perceived orange ball that does not match the signature, e.g., because it is at a different position.

From the point of view of anchoring, the execution of the Score action in this case proceeds as follows. Initially, Find is used to select a ball among those which are initially visible and to create a corresponding anchor. The preference criterion used in our example simply selects the closest ball. Track is then used to update the anchor during the robot's motion. If the ball goes out of view, for instance because the robot points the camera toward the net, the Reacquire functionality is called until precisely that ball is re-anchored; the last seen position of the ball is used to recognize it. When the ball is finally scored, a new Score action is initiated, and so on until all the balls are scored. [3]

An obvious disadvantage of this strategy is that the robot pursues a ball until it is in the goal, ignoring other balls even if they turn out to be in better positions. This disadvantage is illustrated in Fig. 3, which shows a simulated run of our scenario using this strategy. The robot (triangle) initially sees only the ball in the upper part of the figure, and hence the Find functionality anchors this ball. While moving toward this ball, the robot sees the second ball. Although this ball is in a more convenient position, since it is closer to the robot, the robot stays committed to the first ball, which leads to a sub-optimal execution of the action.

[Figure 3: The robot is fully committed to the selected ball and ignores a better candidate.]

[3] Reacquire is also called if another ball appears in the image, but this is redundant here, since our object selection strategy will choose the original ball anyway. A better anchoring process could be devised for this specific case, but we prefer to use the same process for all three cases.
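Under the static strategy, Reacquire consults only the anchor's signature. A minimal sketch follows; the position tolerance is an assumed stand-in for "matches the previous observation".

```python
import math

def reacquire_static(anchor, percepts, tol=0.3):
    """Static object selection: re-anchor only to a percept whose position
    is compatible with the stored signature, ignoring every other orange
    ball however convenient it may be."""
    sx, sy = anchor["signature"]["position"]
    for p in percepts:
        px, py = p["position"]
        if math.hypot(px - sx, py - sy) <= tol:
            anchor["signature"] = dict(p)  # same ball, updated estimate
            return anchor
    return anchor  # keep waiting for precisely that ball
```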

4.2 Dynamic object selection

With the previous strategy, the object to use is decided once and for all at the beginning of the action. At the other end of the spectrum, the anchoring process can reconsider this decision again and again during the performance of the action, in case a better ball is available. To obtain this behavior, the Reacquire functionality uses an object selection strategy that only considers the indefinite symbolic descriptor "an x such that ball(x) and orange(x)" but ignores the anchor's signature. Reacquire updates the anchor using one of the visible balls, without giving any special preference to the ball that was previously anchored. In a sense, Reacquire performs a purely reactive object selection, without using any information about the history of the anchor.

From the point of view of anchoring, the execution of the Score action in this case proceeds as follows. Initially, Find selects and anchors a ball as in the previous case. Track is then used to update the anchor during the robot's motion. If the ball goes out of view, or if a second ball becomes visible, the Reacquire functionality is called until the anchor is re-established. Differently from the previous case, however, Reacquire does not look for a percept that matches the previous ball, but for any percept that is an orange ball. It then uses a preference criterion (the closest one, in our example) to select one of them for updating the anchor. Note that, while in the previous case the anchor was always associated to the same ball, here it can be associated to different balls at different points in time.

This second strategy provides more reactivity if a more suitable object is perceived during the performance of the action. It has, however, the disadvantages of reactive approaches: the continuous reconsideration of the options and the absence of memory can lead to oscillations and limit cycles. Fig. 4 illustrates this problem. The robot initially selects the ball in the lower part of the figure, since it is closer. While maneuvering around this ball to reach a kicking position, it gets closer to the second ball and re-anchors the symbol (and hence the action) to this one. While maneuvering around the second ball it gets closer to the first one and switches the anchor again, and so on, entering an oscillatory behavior.

[Figure 4: The robot continuously reconsiders its choice and oscillates between two balls.]
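Under the dynamic strategy, Reacquire ignores the signature entirely and reselects from scratch. A hedged sketch, with the closest-ball preference from the text and attribute names of our own:

```python
import math

def reacquire_dynamic(anchor, percepts, matches, robot_pos=(0.0, 0.0)):
    """Purely reactive object selection: consider every percept satisfying
    the indefinite descriptor, ignore the anchor's history, and re-anchor
    to the closest candidate; it may be a different ball each time."""
    candidates = [p for p in percepts if matches(anchor["descriptor"], p)]
    if not candidates:
        return anchor
    rx, ry = robot_pos
    best = min(candidates, key=lambda p: math.hypot(p["position"][0] - rx,
                                                    p["position"][1] - ry))
    anchor["signature"] = dict(best)
    return anchor
```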
4.3 Hybrid object selection

The two strategies described above represent the two ends of a spectrum of possible strategies that combine commitment to an object with selection of new objects when appropriate. We now give an example of a strategy that lies between these two extremes. In this strategy, the anchoring process selects a ball when the action is started, and it remains committed to that ball until some other ball is perceived that suits the scoring action better than the current one. Differently from the previous case, the historical information about the current ball is used to decide whether we should keep the commitment or switch to another ball. To obtain this behavior, the Reacquire functionality uses an object selection strategy that considers both the indefinite symbolic descriptor and the values in the anchor's signature. There are of course many ways to do so. In this example, we use a simple criterion that proved effective in the program we implemented for the RoboCup 2002 challenge. We consider all the percepts that match the descriptor. If a percept is in a different direction from that of the current ball, it is discarded, since changing the direction of motion is inefficient and may lead to an oscillatory behavior. If a percept is roughly in the same direction as the current ball, it is retained as a possible candidate. The candidate which is closest to the robot is then selected and used for re-anchoring the symbol.

From the point of view of anchoring, the execution of the Score action would be similar to the previous case, the main difference being that re-anchoring to a different ball only occurs if a ball better (in the above sense) than the current one appears. In the scenario shown in Fig. 4, the robot would not consider the second ball as a possible candidate, since it is not in the same direction as the current one, and it would keep pursuing and scoring the first ball.
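The direction test of the hybrid criterion can be sketched as follows; the 30-degree threshold is an assumption of ours, not a value from the paper.

```python
import math

def reacquire_hybrid(anchor, percepts, matches,
                     robot_pos=(0.0, 0.0), max_angle=math.radians(30)):
    """Hybrid object selection: keep only candidates roughly in the same
    direction as the currently anchored ball (changing direction is
    inefficient and invites oscillation), then take the closest one."""
    rx, ry = robot_pos

    def bearing(x, y):
        return math.atan2(y - ry, x - rx)

    sx, sy = anchor["signature"]["position"]
    current = bearing(sx, sy)

    def angle_to_current(p):
        d = bearing(*p["position"]) - current
        return abs(math.atan2(math.sin(d), math.cos(d)))  # wrap to [0, pi]

    candidates = [p for p in percepts
                  if matches(anchor["descriptor"], p)
                  and angle_to_current(p) <= max_angle]
    if not candidates:
        return anchor  # keep the commitment to the current ball
    best = min(candidates, key=lambda p: math.hypot(p["position"][0] - rx,
                                                    p["position"][1] - ry))
    anchor["signature"] = dict(best)
    return anchor
```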

5 Object selection versus action selection

We have seen that the problem of anchoring with an indefinite description includes two important aspects: (i) how to choose a specific object to use; and (ii) how to manage the commitment to the chosen object, that is, how to decide when to keep using that object and when to start using a different one. Choice and commitment have been the subject of much debate in the context of another problem: the problem of action selection in an autonomous agent. In a nutshell, this can be stated as the problem of how to: (i) deliberate about which specific action to execute in order to fulfil the goals, and (ii) decide when to deliberate and when to act, that is, to execute the currently selected action [Cohen and Levesque, 1990]. [4]

With respect to the second aspect, there are two obvious extreme possibilities: to select an action and fully commit to it until it has been completed; or to continuously re-evaluate our options during execution in order to adapt to contingencies. These two extremes have the same problems as the first two object selection strategies discussed in the previous section. Full commitment may lead to sub-optimal execution or failure in dynamic or partially unknown environments. Continuous deliberation may lead to oscillation and cyclic behaviors. For this reason, intermediate solutions are usually adopted, in which action selection is occasionally, but not continuously, re-evaluated according to some criteria [Kinny and Georgeff, 1991; Wooldridge and Parsons, 1998]. Common criteria used to decide to engage in a new deliberation include: (1) satisfaction, that is, the action has completed; (2) impossibility, that is, the action cannot be executed any more; and (3) irrelevance, that is, the situation has changed in some substantial way since the time the current action was selected [Cohen and Levesque, 1990].

Action selection is also an important component in the execution of robotic plans. Suppose that a planner has requested the navigation module of a mobile robot to perform an abstract action like GoTo(Room1). This action is abstract since it does not specify a unique physical motion to perform, like "move forward 1 meter", but an arbitrary motion from the set of all motions that have the property of bringing the robot inside Room1. In a sense, we can see this action as an indefinite description of a motion. In order to execute this action, the navigation module must instantiate it by selecting a specific motion to perform, just like we must select a specific object in order to anchor an indefinite description.

There are at least two ways to execute this abstract action. The navigation module can statically decide a motion, for instance by planning a trajectory based on the current environment configuration, and then blindly execute this motion until completion. Or it can dynamically decide which motion to perform at each control cycle, for instance by performing gradient descent toward a goal location while reactively avoiding obstacles. In robotics, these two approaches are usually referred to as trajectory planning and trajectory generation, respectively. Once again, these two extreme approaches share the same problems as the first two object selection strategies discussed in the previous section. The first approach lacks the ability to adapt to a new situation, e.g., the appearance of a previously unknown obstacle; the second approach may be trapped in local minima or limit cycles. Intermediate solutions are usually preferred: for instance, a path is planned and then re-evaluated from time to time during execution. Interestingly, the conditions used to decide to re-evaluate the path are similar to the three conditions given above: (1) the robot has reached the target; (2) the path cannot be tracked, e.g., because it is blocked by an obstacle; or (3) the robot's knowledge of the environment has changed in a substantial way since the time the path was generated.

In the ball scoring example above, the anchoring process uses similar criteria to decide when to interrupt the execution of the Track functionality, which incorporates a commitment to the currently selected object, and to call the Find or Reacquire functionalities, which incorporate the possibility to perform object selection. The fact that the current ball has been scored obviously corresponds to the satisfaction criterion. The fact that the current ball is not observable any more corresponds to the impossibility of keeping track of it. And the fact that other candidates are perceived corresponds to the irrelevance criterion.

The discussion in this section suggests that physical execution of an abstract action like "pick up a red cup" involves the instantiation of two distinct entities: (i) the abstract movement "pick up" must be instantiated in an actual executable motion, and (ii) the object description "a red cup" must be instantiated in an actual perceivable object. In both problems, the instantiation may have to be revised during execution according to some strategy. Interestingly, the strategies of motion selection and object selection have similar extreme cases. At one extreme, execution is fully committed to one specific choice made at the start of the action. At the other extreme, execution reactively selects the motion/object to use at each control cycle. In both problems, these two extreme cases have similar advantages and disadvantages. Any strategy for action selection can therefore be seen as a point in a 2D space whose two dimensions are motion selection and object selection. A key problem is to find which strategy is most adequate for a specific domain and task. While this question has already received some attention in the case of motion selection, the case of object selection has not, to our knowledge, been studied yet.

[4] The term intention is often used in this context as a precursor of action. For the goals of our discussion, however, we can ignore this distinction.
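The parallel can be made explicit in code: the same three reconsideration criteria decide when Track's commitment is released in the ball-scoring example. A schematic sketch, with inputs stubbed by us:

```python
def should_reconsider(ball_scored, visible_balls):
    """Map the classic reconsideration criteria [Cohen and Levesque, 1990]
    onto the conditions that interrupt Track in the ball-scoring example."""
    if ball_scored:              # satisfaction: the action has completed;
        return "find"            # delete the anchor and start a new Score
    if visible_balls == 0:       # impossibility: the committed ball cannot
        return "reacquire"       # be tracked any more
    if visible_balls > 1:        # irrelevance: new candidates may make the
        return "reacquire"       # original choice sub-optimal
    return "track"               # otherwise, keep the commitment
```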
6 Conclusions

The main outcome of this study is that object selection is an essential part of the execution of actions which include indefinite references. Object selection involves a delicate trade-off between commitment to a given object and the ability to change object during execution. Interestingly, this trade-off has several points of similarity with the trade-off between choice and commitment in action selection.

The object selection strategies used in our examples were hand-coded. We could also think of a more sophisticated anchoring process that automatically selects an adequate strategy depending on the contingencies, similarly to what is done in [Schut and Wooldridge, 2001]. Alternatively, the symbol system could decide which strategy to use, e.g., by calling different versions of the Reacquire functionality.

In this paper, we have focused on indefinite descriptions. However, even in the case of definite descriptions the anchoring process may be confronted with a choice between several objects that all match the symbolic descriptor, since in general the identity of an object cannot be fully established from the perceptual data. The study of the indefinite case can therefore help us to better understand how to deal with cases of ambiguity in anchoring a definite description.

The problem of dealing with indefinite descriptions is just one of the many facets of the anchoring problem. Anchoring is a difficult problem that involves concepts which have interested philosophers for centuries and are still far from being fully understood. Nonetheless, we have to provide practical solutions to this problem if we want to build robotic systems that include a symbolic component. The study presented in this paper is meant as a contribution in this direction.

Acknowledgements

This work was funded by the Swedish KK Foundation and by Vetenskapsrådet. We thank Lars Karlsson for providing helpful comments on this paper.

References

[Cohen and Levesque, 1990] P.R. Cohen and H.J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42:213-261, 1990.

[Coradeschi and Saffiotti, 2000] S. Coradeschi and A. Saffiotti. Anchoring symbols to sensor data: preliminary report. In Proc. of the AAAI Conf., 2000.

[Coradeschi and Saffiotti, 2001] S. Coradeschi and A. Saffiotti, editors. AAAI Fall Symposium on Anchoring Symbols to Sensor Data in Single and Multiple Robot Systems. AAAI Technical Report FS-01-01, AAAI, Menlo Park, CA, 2001.

[Coradeschi and Saffiotti, 2003] S. Coradeschi and A. Saffiotti, editors. Robotics and Autonomous Systems, special issue on Perceptual Anchoring, 2003. In press.

[Kinny and Georgeff, 1991] D. Kinny and M. Georgeff. Commitment and effectiveness of situated agents. In Proc. of the 12th IJCAI Conf., pages 82-88, 1991.

[Russell, 1905] B. Russell. On denoting. Mind, XIV:479-493, 1905.

[Saffiotti, 1994] A. Saffiotti. Pick-up what? In C. Bäckström and E. Sandewall, editors, Current Trends in AI Planning. IOS Press, Amsterdam, NL, 1994.

[Schut and Wooldridge, 2001] M. Schut and M. Wooldridge. Principles of intention reconsideration. In Proc. of the 5th Int. Conf. on Autonomous Agents, 2001.

[Wasik and Saffiotti, 2002] Z. Wasik and A. Saffiotti. Robust color segmentation for the RoboCup domain. In Proc. of the Int. Conf. on Pattern Recognition (ICPR), Quebec City, Canada, 2002.

[Wooldridge and Parsons, 1998] M. Wooldridge and S. Parsons. Intention reconsideration reconsidered. In Proc. of the Int. Workshop on Agent Theories, Architectures and Languages, Paris, France, 1998.


Integrating Cognition, Perception and Action through Mental Simulation in Robots Integrating Cognition, Perception and Action through Mental Simulation in Robots Nicholas L. Cassimatis, J. Gregory Trafton, Alan C. Schultz, Magdalena D. Bugajska Naval Research Laboratory Codes 5513,

More information

(Visual) Attention. October 3, PSY Visual Attention 1

(Visual) Attention. October 3, PSY Visual Attention 1 (Visual) Attention Perception and awareness of a visual object seems to involve attending to the object. Do we have to attend to an object to perceive it? Some tasks seem to proceed with little or no attention

More information

Perception Lie Paradox: Mathematically Proved Uncertainty about Humans Perception Similarity

Perception Lie Paradox: Mathematically Proved Uncertainty about Humans Perception Similarity Perception Lie Paradox: Mathematically Proved Uncertainty about Humans Perception Similarity Ahmed M. Mahran Computer and Systems Engineering Department, Faculty of Engineering, Alexandria University,

More information

Muddy Tasks and the Necessity of Autonomous Mental Development

Muddy Tasks and the Necessity of Autonomous Mental Development Muddy Tasks and the Necessity of Autonomous Mental Development Juyang Weng Embodied Intelligence Laboratory Department of Computer Science and Engineering Michigan State University East Lansing, MI 48824

More information

Agents. Environments Multi-agent systems. January 18th, Agents

Agents. Environments Multi-agent systems. January 18th, Agents Plan for the 2nd hour What is an agent? EDA132: Applied Artificial Intelligence (Chapter 2 of AIMA) PEAS (Performance measure, Environment, Actuators, Sensors) Agent architectures. Jacek Malec Dept. of

More information

ISSN (PRINT): , (ONLINE): , VOLUME-5, ISSUE-5,

ISSN (PRINT): , (ONLINE): , VOLUME-5, ISSUE-5, A SURVEY PAPER ON IMPLEMENTATION OF HUMAN INFORMATION PROCESSING SYSTEM IN ARTIFICIAL INTELLIGENCE BASED MACHINES Sheher Banu 1, Girish HP 2 Department of ECE, MVJCE, Bangalore Abstract Processing of information

More information

Solutions for Chapter 2 Intelligent Agents

Solutions for Chapter 2 Intelligent Agents Solutions for Chapter 2 Intelligent Agents 2.1 This question tests the student s understanding of environments, rational actions, and performance measures. Any sequential environment in which rewards may

More information

A Computational Theory of Belief Introspection

A Computational Theory of Belief Introspection A Computational Theory of Belief Introspection Kurt Konolige Artificial Intelligence Center SRI International Menlo Park, California 94025 Abstract Introspection is a general term covering the ability

More information

Ontologies for World Modeling in Autonomous Vehicles

Ontologies for World Modeling in Autonomous Vehicles Ontologies for World Modeling in Autonomous Vehicles Mike Uschold, Ron Provine, Scott Smith The Boeing Company P.O. Box 3707,m/s 7L-40 Seattle, WA USA 98124-2207 michael.f.uschold@boeing.com Craig Schlenoff,

More information

Answers to end of chapter questions

Answers to end of chapter questions Answers to end of chapter questions Chapter 1 What are the three most important characteristics of QCA as a method of data analysis? QCA is (1) systematic, (2) flexible, and (3) it reduces data. What are

More information

Rational Agents (Ch. 2)

Rational Agents (Ch. 2) Rational Agents (Ch. 2) Extra credit! Occasionally we will have in-class activities for extra credit (+3%) You do not need to have a full or correct answer to get credit, but you do need to attempt the

More information

Study on perceptually-based fitting line-segments

Study on perceptually-based fitting line-segments Regeo. Geometric Reconstruction Group www.regeo.uji.es Technical Reports. Ref. 08/2014 Study on perceptually-based fitting line-segments Raquel Plumed, Pedro Company, Peter A.C. Varley Department of Mechanical

More information

Working Paper 8: Janine Morley, Interesting topics & directions for practice theories August 2014

Working Paper 8: Janine Morley, Interesting topics & directions for practice theories August 2014 Please Note: The following working paper was presented at the workshop Demanding ideas: where theories of practice might go next held 18-20 June 2014 in Windermere, UK. The purpose of the event was to

More information

Multi-agent Engineering. Lecture 4 Concrete Architectures for Intelligent Agents. Belief-Desire-Intention Architecture. Ivan Tanev.

Multi-agent Engineering. Lecture 4 Concrete Architectures for Intelligent Agents. Belief-Desire-Intention Architecture. Ivan Tanev. Multi-agent Engineering Lecture 4 Concrete Architectures for Intelligent Agents. Belief-Desire-Intention Architecture Ivan Tanev 1 Outline 1. Concrete architectures 2. Belief-Desire-Intention (BDI) Architecture.

More information

Modeling Agents as Qualitative Decision Makers

Modeling Agents as Qualitative Decision Makers Modeling Agents as Qualitative Decision Makers Ronen I. Brafman Dept. of Computer Science University of British Columbia Vancouver, B.C. Canada V6T 1Z4 brafman@cs.ubc.ca Moshe Tennenholtz Industrial Engineering

More information

On the Sense of Agency and of Object Permanence in Robots

On the Sense of Agency and of Object Permanence in Robots On the Sense of Agency and of Object Permanence in Robots Sarah Bechtle 1, Guido Schillaci 2 and Verena V. Hafner 2 Abstract This work investigates the development of the sense of object permanence in

More information

Comment on McLeod and Hume, Overlapping Mental Operations in Serial Performance with Preview: Typing

Comment on McLeod and Hume, Overlapping Mental Operations in Serial Performance with Preview: Typing THE QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 1994, 47A (1) 201-205 Comment on McLeod and Hume, Overlapping Mental Operations in Serial Performance with Preview: Typing Harold Pashler University of

More information

BACKGROUND + GENERAL COMMENTS

BACKGROUND + GENERAL COMMENTS Response on behalf of Sobi (Swedish Orphan Biovitrum AB) to the European Commission s Public Consultation on a Commission Notice on the Application of Articles 3, 5 and 7 of Regulation (EC) No. 141/2000

More information

Item Analysis Explanation

Item Analysis Explanation Item Analysis Explanation The item difficulty is the percentage of candidates who answered the question correctly. The recommended range for item difficulty set forth by CASTLE Worldwide, Inc., is between

More information

Applying Appraisal Theories to Goal Directed Autonomy

Applying Appraisal Theories to Goal Directed Autonomy Applying Appraisal Theories to Goal Directed Autonomy Robert P. Marinier III, Michael van Lent, Randolph M. Jones Soar Technology, Inc. 3600 Green Court, Suite 600, Ann Arbor, MI 48105 {bob.marinier,vanlent,rjones}@soartech.com

More information

AI and Philosophy. Gilbert Harman. Thursday, October 9, What is the difference between people and other animals?

AI and Philosophy. Gilbert Harman. Thursday, October 9, What is the difference between people and other animals? AI and Philosophy Gilbert Harman Thursday, October 9, 2008 A Philosophical Question about Personal Identity What is it to be a person? What is the difference between people and other animals? Classical

More information

Presence and Perception: theoretical links & empirical evidence. Edwin Blake

Presence and Perception: theoretical links & empirical evidence. Edwin Blake Presence and Perception: theoretical links & empirical evidence Edwin Blake edwin@cs.uct.ac.za This Talk 2 Perception Bottom-up Top-down Integration Presence Bottom-up Top-down BIPs Presence arises from

More information

A Modular Hierarchical Behavior-Based Architecture

A Modular Hierarchical Behavior-Based Architecture A Modular Hierarchical Behavior-Based Architecture Scott Lenser, James Bruce, Manuela Veloso Computer Science Department Carnegie Mellon University 5000 Forbes Ave Pittsburgh, PA 15213 {slenser,jbruce,mmv}@cs.cmu.edu

More information

2 Psychological Processes : An Introduction

2 Psychological Processes : An Introduction 2 Psychological Processes : An Introduction 2.1 Introduction In our everyday life we try to achieve various goals through different activities, receive information from our environment, learn about many

More information

EXERCISE 7.4 CONTEST BUILD-UP ROUTINE

EXERCISE 7.4 CONTEST BUILD-UP ROUTINE Exercise 7.4 Page 1 EXERCISE 7.4 CONTEST BUILD-UP ROUTINE Aim The purpose of this exercise is to develop a final stage strategy in competition preparations that will produce the best level of readiness

More information

CHAPTER 3 METHOD AND PROCEDURE

CHAPTER 3 METHOD AND PROCEDURE CHAPTER 3 METHOD AND PROCEDURE Previous chapter namely Review of the Literature was concerned with the review of the research studies conducted in the field of teacher education, with special reference

More information

Intelligent Agents. Chapter 2 ICS 171, Fall 2009

Intelligent Agents. Chapter 2 ICS 171, Fall 2009 Intelligent Agents Chapter 2 ICS 171, Fall 2009 Discussion \\Why is the Chinese room argument impractical and how would we have to change the Turing test so that it is not subject to this criticism? Godel

More information