COMP329 Robotics and Autonomous Systems Lecture 15: Agents and Intentions. Dr Terry R. Payne Department of Computer Science


General control architecture [diagram]: Perception of the real-world environment feeds Localisation (environment model, local map, position estimate), which feeds Cognition (global map, path planning), which feeds Motion Control acting back on the environment.

Aims of these slides. We will start to focus on the second part of the module: autonomous agents (also check out COMP310); this is material you will need for the second assignment. We will recap some of the basic ideas about agents from earlier in the module, look at some aspects in more detail, and introduce the idea of the intentional stance.

What is an Agent? As we have said before, the main point about agents is that they are autonomous: capable of independent action. ... An agent is a computer system that is situated in some environment, and that is capable of autonomous action in that environment in order to meet its delegated objectives... It is all about decisions: an agent has to choose what action to perform, and has to decide when to perform it.

Agent and Environment [diagram]: the agent receives percepts from the environment through its sensors, and acts on the environment through its effectors. The fundamental question is what action(s) to take for a given state of the environment.
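The percept-to-action cycle in the diagram can be sketched in code. This is a minimal illustrative example (the vacuum world and all names here are assumptions for illustration, not part of the lecture): the agent is just a function from percepts to actions, run in a loop against an environment.

```python
# Minimal sketch of the agent/environment loop: the agent repeatedly
# maps the current percept to an action, and the environment executes it.

class VacuumEnvironment:
    """Toy two-cell world: the agent senses its cell and whether it is dirty."""
    def __init__(self):
        self.dirty = {"A": True, "B": True}
        self.location = "A"

    def percept(self):
        return (self.location, self.dirty[self.location])

    def execute(self, action):
        if action == "Suck":
            self.dirty[self.location] = False
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

def agent(percept):
    """Choose an action for the current state of the environment."""
    location, is_dirty = percept
    if is_dirty:
        return "Suck"
    return "Right" if location == "A" else "Left"

env = VacuumEnvironment()
for _ in range(4):                      # four sense-decide-act cycles
    env.execute(agent(env.percept()))
```

After four cycles the agent has cleaned both cells: everything it does reduces to choosing an action for the percept it currently has.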

Intelligent Agents. Making good decisions requires the agent to be intelligent: the agent has to do the right thing. An intelligent agent is a computer system capable of flexible autonomous action in some environment. By flexible, we mean: reactive, pro-active, and social. All of these properties make it able to respond to what is around it (more on each in the next few slides).

Reactivity. If a program's environment is guaranteed to be fixed, the program need never worry about its own success or failure: it just executes blindly. An example of a fixed environment is a compiler. The real world is not like that: most environments are dynamic, and information about them is incomplete.

Reactivity. Software is hard to build for dynamic domains: the program must take into account the possibility of failure, and even ask itself whether it is worth executing! A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful).
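A purely reactive system can be sketched as a prioritised table of condition-action rules evaluated against the latest percept only; no history, no deliberation. The rules, percept fields, and actions below are illustrative assumptions, not from the slides:

```python
# Sketch of a purely reactive agent: a fixed set of condition-action
# rules, checked in priority order against the current percept only.

RULES = [
    (lambda p: p["obstacle"],      "turn"),     # highest priority first
    (lambda p: p["battery"] < 20,  "dock"),     # low battery: recharge
    (lambda p: True,               "forward"),  # default behaviour
]

def reactive_agent(percept):
    """Return the action of the first rule whose condition holds."""
    for condition, action in RULES:
        if condition(percept):
            return action
```

For example, `reactive_agent({"obstacle": True, "battery": 80})` fires the obstacle rule and returns `"turn"`. Because the agent re-evaluates the rules on every cycle, it responds to changes in the environment as they happen, which is exactly the ongoing interaction a reactive system needs.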

Proactiveness. Reacting to an environment is easy (e.g., stimulus-response rules), but we generally want agents to do things for us; hence goal-directed behaviour. Pro-activeness means generating and attempting to achieve goals: not being driven solely by events, but taking the initiative, and also recognising opportunities.
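The contrast with the reactive case can be sketched as follows: a goal-directed agent generates actions that move it towards a goal it has adopted, rather than waiting for events. The one-dimensional world and names below are illustrative assumptions:

```python
# Sketch of a goal-directed (pro-active) agent on a 1-D line of cells:
# it keeps producing moves toward its goal, with no external stimulus.

def proactive_agent(position, goal):
    """Return the next move toward the goal, or None once it is achieved."""
    if position == goal:
        return None                    # goal achieved: nothing left to do
    return "right" if goal > position else "left"

# The agent takes the initiative: starting at 0 with goal 3, it keeps
# moving right even though nothing in the environment prompts it.
pos, goal, trace = 0, 3, []
while (move := proactive_agent(pos, goal)) is not None:
    trace.append(move)
    pos += 1 if move == "right" else -1
```

The loop terminates only when the goal is reached; the agent's behaviour is driven by the gap between the state it believes it is in and the state it wants to bring about.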

Social Ability. The real world is a multi-agent environment: we cannot go around attempting to achieve goals without taking others into account, and some goals can only be achieved by interacting with others. The same holds for many computer environments: witness the Internet. Social ability in agents is the ability to interact with other agents (and possibly humans) via cooperation, coordination, and negotiation. At the very least, it means the ability to communicate...

Social Ability: Cooperation. Cooperation is working together as a team to achieve a shared goal. It is often prompted either by the fact that no one agent can achieve the goal alone, or by the fact that cooperation will obtain a better result (e.g., get the result faster).

Social Ability: Coordination. Coordination is managing the interdependencies between activities. For example, if there is a non-sharable resource that both you and I want to use, then we need to coordinate.

Social Ability: Negotiation. Negotiation is the ability to reach agreements on matters of common interest. For example: there is one TV in your house; you want to watch a movie, while your housemate wants to watch football. A possible deal: watch football tonight, and a movie tomorrow. Negotiation typically involves offer and counter-offer, with compromises made by the participants.
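The offer/counter-offer pattern can be sketched with a simple monotonic-concession protocol. This is an illustrative assumption about the strategy (concede a fixed amount each round, settle at the midpoint once the offers cross), not a protocol from the lecture:

```python
# Sketch of negotiation by offer and counter-offer: party A starts high,
# party B starts low, and each concedes a little per round until the
# offers cross, at which point they split the difference.

def negotiate(ask_a, ask_b, concession=1.0, max_rounds=20):
    """Return the agreed value, or None if no deal within max_rounds."""
    for _ in range(max_rounds):
        if ask_a <= ask_b:                 # offers have crossed: deal
            return (ask_a + ask_b) / 2
        ask_a -= concession                # A concedes downwards
        ask_b += concession                # B concedes upwards
    return None                            # negotiation failed
```

With `negotiate(10, 0)` the parties meet after five rounds of mutual concession and agree on 5.0; with too few rounds (e.g., `max_rounds=2`) no agreement is reached, which mirrors how real negotiations can break down if neither side compromises enough.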

Properties of Environments. Since agents are in close contact with their environment, the properties of the environment affect the agents, and also have a big effect on those of us who build agents. It is common to categorise environments along several dimensions: fully observable vs partially observable; deterministic vs non-deterministic; episodic vs non-episodic; static vs dynamic; discrete vs continuous.

Properties of Environments. Fully observable vs partially observable. An accessible, or fully observable, environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment's state. Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible, or partially observable. The more accessible an environment is, the simpler it is to build agents to operate in it.

Properties of Environments. Deterministic vs non-deterministic. A deterministic environment is one in which any action has a single guaranteed effect: there is no uncertainty about the state that will result from performing an action. The physical world can, to all intents and purposes, be regarded as non-deterministic. We will follow Russell and Norvig in calling environments stochastic if we quantify the non-determinism using probability theory. Non-deterministic environments present greater problems for the agent designer.

Properties of Environments. Episodic vs non-episodic. In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between its performance in different episodes. An example of an episodic environment would be an assembly line where an agent has to spot defective parts. Episodic environments are simpler from the agent developer's perspective, because the agent can decide what action to perform based only on the current episode: it need not reason about the interactions between this episode and future ones (this is related to the Markov property). Environments that are not episodic are called non-episodic, or sequential; here the current decision affects future decisions. Driving a car is sequential.

Properties of Environments. Static vs dynamic. A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent. A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent's control. The physical world is a highly dynamic environment. One reason an environment may be dynamic is the presence of other agents.

Properties of Environments. Discrete vs continuous. An environment is discrete if there is a fixed, finite number of actions and percepts in it; otherwise it is continuous. Often we treat a continuous environment as discrete for simplicity.
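Treating a continuous environment as discrete usually means binning continuous sensor readings into a finite set of percepts. The bin edges and percept names below are illustrative assumptions, but the pattern is common in robotics:

```python
# Sketch of discretising a continuous percept: a range-sensor reading
# (a real number, in metres) is mapped to one of three discrete symbols,
# giving the agent a finite percept set to reason over.

def discretise(distance_m, bins=(0.5, 2.0)):
    """Map a continuous range reading to 'near', 'mid', or 'far'."""
    if distance_m < bins[0]:
        return "near"
    if distance_m < bins[1]:
        return "mid"
    return "far"
```

For example, `discretise(0.3)` yields `"near"` and `discretise(5.0)` yields `"far"`. The cost of this simplification is lost precision: readings of 2.1 m and 20 m look identical to the agent.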

Agents as Intentional Systems. When explaining human activity, it is often useful to make statements such as the following: Janine took her umbrella because she believed it was going to rain. Michael worked hard because he wanted to possess a PhD. These statements make use of a folk psychology, by which human behaviour is predicted and explained through the attribution of attitudes, e.g. believing, wanting, hoping, fearing... The attitudes employed in such folk-psychological descriptions are called the intentional notions.

Dennett on Intentional Systems. The philosopher Daniel Dennett coined the term intentional system to describe entities: ... whose behaviour can be predicted by the method of attributing beliefs, desires and rational acumen... Dennett identifies different grades of intentional system: ... A first-order intentional system has beliefs and desires (etc.) but no beliefs and desires about beliefs and desires... ... A second-order intentional system is more sophisticated; it has beliefs and desires (and no doubt other intentional states) about beliefs and desires (and other intentional states), both those of others and its own... Is it legitimate or useful to attribute beliefs, desires, and so on, to computer systems?

McCarthy on Intentional Systems. John McCarthy argued that there are occasions when the intentional stance is appropriate:

McCarthy on Intentional Systems. ... To ascribe beliefs, free will, intentions, consciousness, abilities, or wants to a machine is legitimate when such an ascription expresses the same information about the machine that it expresses about a person. It is useful when the ascription helps us understand the structure of the machine, its past or future behaviour, or how to repair or improve it. It is perhaps never logically required even for humans, but expressing reasonably briefly what is actually known about the state of the machine in a particular situation may require mental qualities, or qualities isomorphic to them. Theories of belief, knowledge and wanting can be constructed for machines in a simpler setting than for humans, and later applied to humans. Ascription of mental qualities is most straightforward for machines of known structure, such as thermostats and computer operating systems, but is most useful when applied to entities whose structure is incompletely known...

What can be described with the intentional stance? As it turns out, more or less anything can... consider a light switch: ... It is perfectly coherent to treat a light switch as a (very cooperative) agent with the capability of transmitting current at will, who invariably transmits current when it believes that we want it transmitted and not otherwise; flicking the switch is simply our way of communicating our desires... (Yoav Shoham) But most adults would find such a description absurd! Why is this?


What can be described with the intentional stance? The answer seems to be that while the intentional stance description is consistent: ... it does not buy us anything, since we essentially understand the mechanism sufficiently to have a simpler, mechanistic description of its behaviour... (Yoav Shoham) Put crudely, the more we know about a system, the less we need to rely on animistic, intentional explanations of its behaviour. But with very complex systems, a mechanistic explanation of behaviour may not be practicable. As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operation; low-level explanations become impractical. The intentional stance is such an abstraction.

Agents as Intentional Systems. So agent theorists start from the (strong) view of agents as intentional systems: systems whose simplest consistent description requires the intentional stance. This intentional stance is an abstraction tool... a convenient way of talking about complex systems, which allows us to predict and explain their behaviour without having to understand how the mechanism actually works. Most important developments in computing are based on new abstractions: procedural abstraction, abstract data types, objects, etc. So why not use the intentional stance as an abstraction tool in computing, to explain, understand, and, crucially, program computer systems, through the notion of agents? Agents, and agents as intentional systems, represent a further, and increasingly powerful, abstraction.

Abstractions. Remember: most important developments in computing are based on new abstractions. Programming has progressed through: machine code; assembly language; machine-independent programming languages; sub-routines; procedures & functions; abstract data types; objects. Just as moving from machine code to higher-level languages brings an efficiency gain, so does moving from objects to agents. The following 2006 paper claims that developing complex applications using agent-based methods leads to an average saving of 350% in development time (and up to 500% over the use of Java): S. Benfield, Making a Strong Business Case for Multiagent Technology, Invited Talk at AAMAS 2006. Agents, as intentional systems, represent a further, and increasingly powerful, abstraction.

Agents as Intentional Systems. There are other arguments in favour of this idea... 1. Characterising Agents: it provides us with a familiar, non-technical way of understanding and explaining agents. 2. Nested Representations: it gives us the potential to specify systems that include representations of other systems. It is widely accepted that such nested representations are essential for agents that must cooperate with other agents. For example: if you think that Agent B knows x, then move to location L. An illustration, from the film North by Northwest: Eve Kendall knows that Roger Thornhill is working for the FBI. Eve believes that Philip Vandamm suspects that she is helping Roger. This, in turn, leads Eve to believe that Philip thinks she is working for the FBI (which is true). By pretending to shoot Roger, Eve hopes to convince Philip that she is not working for the FBI.

Agents as Intentional Systems. There are other arguments in favour of this idea... 3. Post-Declarative Systems: in procedural programming, we say exactly what a system should do; in declarative programming, we state something that we want to achieve, give the system general information about the relationships between objects, and let a built-in control mechanism (e.g., goal-directed theorem proving) figure out what to do; with agents, we give a high-level description of the delegated goal, and let the control mechanism figure out what to do, knowing that it will act in accordance with some built-in theory of rational agency.

Post-Declarative Systems. What is this built-in theory? It is a method of combining what you believe about the world with what you desire to bring about, in order to establish a set of intentions, and then figure out how to make these happen. [Image: Deep Space 1 (DS1), seen 2.3 million miles from Earth]
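The recipe on this slide (beliefs + desires → intentions → means-ends reasoning) can be sketched very crudely in code. All names, the plan library, and the filtering rule here are illustrative assumptions; the real Belief-Desire-Intention model is the subject of the next lecture:

```python
# Crude sketch of the built-in theory: deliberation filters desires
# against beliefs to fix intentions, then means-ends reasoning looks
# up a plan to make each intention happen.

def deliberate(beliefs, desires):
    """Keep only the desires the agent believes it can achieve."""
    return [d for d in desires if beliefs.get(f"{d}_achievable", False)]

def plan(intention):
    """Means-ends reasoning: retrieve a recipe for the intention."""
    library = {"charge_battery": ["find_dock", "dock", "charge"],
               "deliver_parcel": ["pick_up", "navigate", "drop_off"]}
    return library.get(intention, [])

beliefs = {"charge_battery_achievable": True,
           "fly_to_moon_achievable": False}
intentions = deliberate(beliefs, ["charge_battery", "fly_to_moon"])
plans = [plan(i) for i in intentions]
```

Here the unachievable desire is filtered out during deliberation, and only the committed intention gets a plan; the point is the two-stage structure (decide what to achieve, then decide how), not the toy lookup table.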

Summary. This lecture reflected on the idea of an agent. It briefly discussed the properties of the environments in which agents operate. It also introduced the intentional stance, and described why this idea is useful. Next time we will look at practical reasoning and the Belief-Desire-Intention model.