Intelligent Agents. Chapter 2


Intelligent Agents Chapter 2

Outline: agents and environments; rationality; the task environment (PEAS: Performance measure, Environment, Actuators, Sensors); environment types; agent types.

Agents and Environments
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. (Diagram: percepts flow from the environment through sensors into the agent; the agent's chosen actions flow back out through actuators.) Agents include humans, robots, softbots, thermostats, etc. The agent function maps from percept histories to actions: f : P* → A. The agent program runs on a physical architecture to produce f.
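The percept-history-to-action mapping f can be sketched as a tiny Python interface (class and method names are illustrative, not from the slides):

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Maps percept histories to actions: the agent function f : P* -> A."""

    def __init__(self):
        self.percept_history = []  # the percept sequence seen so far

    def __call__(self, percept):
        self.percept_history.append(percept)
        return self.program(tuple(self.percept_history))

    @abstractmethod
    def program(self, percepts):
        """The agent program: choose an action given the percept history."""

class Thermostat(Agent):
    """A trivial agent: acts only on the latest percept (a temperature in C)."""
    def program(self, percepts):
        return "heat" if percepts[-1] < 20 else "off"

agent = Thermostat()
print(agent(18))  # heat
print(agent(22))  # off
```

The distinction the slide draws is visible here: `program` is the agent program (concrete code), while the behavior it induces over all percept histories is the agent function.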

Vacuum-cleaner world
Two locations, A and B. Percepts: location and contents, e.g., [A, Dirty]. Actions: Left, Right, Suck, NoOp.

A vacuum-cleaner agent
Agent function (partial tabulation):
  [A, Clean]             → Right
  [A, Dirty]             → Suck
  [B, Clean]             → Left
  [B, Dirty]             → Suck
  [A, Clean], [A, Clean] → Right
  [A, Clean], [A, Dirty] → Suck
Note: this specifies how the agent should function. It says nothing about how it should be implemented.

A vacuum-cleaner agent
Agent program:
  function Reflex-Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
Ask: what is the right function for implementing a specification? Can it be implemented in a small agent program?
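The pseudocode above translates almost line-for-line into Python (a minimal sketch; percepts are represented as (location, status) pairs):

```python
def reflex_vacuum_agent(percept):
    """Reflex-Vacuum-Agent([location, status]) from the slide, in Python."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note how small the program is compared with tabulating the agent function: the table grows with every possible percept sequence, while the program stays four lines.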

Rationality
Informally, a rational agent is one that does the right thing. How well an agent does is given by a performance measure; a fixed performance measure evaluates the environment sequence. Examples: one point per square cleaned up in time T? One point per clean square per time step, minus one per move? Penalize for > k dirty squares? A rational agent selects an action that maximizes the expected value of the performance measure, given the percept sequence to date and its built-in knowledge. Action selection may range from hardwired (e.g., in an insect or a reflex agent) to involving substantial reasoning.
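"Maximizes the expected value of the performance measure" can be made concrete with a toy calculation (the actions, probabilities, and scores below are invented for illustration):

```python
# Toy illustration: three candidate actions in the vacuum world, each with
# possible outcomes as (probability, performance score) pairs.
outcomes = {
    "Suck":  [(0.9, 10), (0.1, 0)],  # usually cleans the square (+10 points)
    "Right": [(1.0, -1)],            # moving costs one point
    "NoOp":  [(1.0, 0)],             # doing nothing scores nothing
}

def expected_value(action):
    """Expected performance score of an action."""
    return sum(p * score for p, score in outcomes[action])

best = max(outcomes, key=expected_value)
print(best)  # Suck
```

The rational choice here is Suck (expected score 9.0) even though it sometimes fails; rationality is about expected, not guaranteed, performance.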

Notes on rationality
Rational ≠ omniscient: percepts may not supply all the relevant information.
Rational ≠ clairvoyant: action outcomes may not be as expected.
Hence, rational ≠ successful. Full, general rationality requires exploration, learning, and autonomy.

The Task Environment
To design a rational agent, we must specify the task environment, which has the following components: Performance measure, Environment, Actuators, Sensors. Acronym: PEAS.

PEAS example: consider the task of designing an automated taxi.
Performance measure: safety, destination, profits, legality, comfort, ...
Environment: streets/freeways, traffic, pedestrians, weather, ...
Actuators: steering, accelerator, brake, horn, speaker, ...
Sensors: video, accelerometers, gauges, engine sensors, keyboard, GPS, ...

PEAS example: an Internet shopping agent.
Performance measure: price, quality, appropriateness, efficiency.
Environment: current and future WWW sites, vendors, shippers.
Actuators: display to user, follow URL, fill in form.
Sensors: HTML pages (text, graphics, scripts).

Environment types
Fully observable vs. partially observable: whether the agent's sensors give it access to the complete state of the environment.
Deterministic vs. stochastic: deterministic means the next state is completely determined by the current state and the agent's actions (or those of the set of agents in a multiagent environment). Uncertain: not fully observable or not deterministic.
Episodic vs. sequential: episodic means the agent's experience is divided into independent episodes (e.g., classification).
Static vs. dynamic: dynamic means the environment may change while the agent is deliberating.
Discrete vs. continuous.
Single-agent vs. multiagent.

Environment types: examples

                 Crossword   Backgammon   Internet shopping   Taxi
  Observable     Yes         Yes          No                  No
  Deterministic  Yes         No           Partly              No
  Episodic       No          No           No                  No
  Static         Yes         Yes          Semi                No
  Discrete       Yes         Yes          Yes                 No
  Single-agent   Yes         No           Yes                 No

The environment type largely determines the agent design. The real world is partially observable, stochastic, sequential, dynamic, continuous, and multi-agent.

Agent types
There are four basic types, in order of increasing generality: simple reflex agents; reflex agents with state (model-based); goal-based agents; utility-based agents. All of these can have a learning component added.

Simple reflex agents
(Diagram: sensors report what the world is like now; condition-action rules select what action to do now; the action goes to the actuators.)
The action is selected according to the current percept only; the agent keeps no percept history.

A simple reflex agent algorithm
  function Simple-Reflex-Agent(percept) returns an action
    persistent: rules, a set of condition-action rules
    state ← Interpret-Input(percept)
    rule ← Rule-Match(state, rules)
    action ← rule.Action
    return action
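A minimal Python transliteration of Simple-Reflex-Agent for the vacuum world (helper names follow the pseudocode; the rule table is an illustrative stub):

```python
# Condition-action rules for the vacuum world (an illustrative stub).
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def interpret_input(percept):
    """Map the raw percept to an internal state; trivial in this toy world."""
    return tuple(percept)

def rule_match(state, rules):
    """Return the action of the rule whose condition matches the state."""
    return rules[state]

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    return rule_match(state, RULES)

print(simple_reflex_agent(["A", "Dirty"]))  # Suck
```

In richer worlds, Interpret-Input would do real perception and Rule-Match would search a rule base; here a dictionary lookup suffices.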

Example
  function Reflex-Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

Reflex agents with state
Also called a model-based reflex agent. (Diagram: the agent maintains an internal state using knowledge of how the world evolves and of what its actions do; sensors update that state into "what the world is like now", and condition-action rules then select the action sent to the actuators.)
The agent keeps track of what it knows about the world, which is useful under partial observability.

A model-based reflex agent algorithm
  function Reflex-Agent-With-State(percept) returns an action
    persistent: state, the agent's conception of the current world state
                model, the transition model: how the next state depends on the current state and action
                rules, a set of condition-action rules
                action, the most recent action (initially none)
    state ← Update-State(state, action, percept, model)
    rule ← Rule-Match(state, rules)
    action ← rule.Action
    return action
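The persistent variables map naturally onto object attributes. A toy sketch for the vacuum world: the "model" here just remembers each square's last observed status, and the NoOp stopping rule is an addition for illustration, not from the slides.

```python
class ModelBasedReflexAgent:
    """Keeps internal state so it can act sensibly under partial observability."""

    def __init__(self):
        self.state = {"A": "Unknown", "B": "Unknown"}  # believed world state
        self.action = None                             # most recent action

    def update_state(self, percept):
        """Fold the new percept into the believed state; return current location."""
        location, status = percept
        self.state[location] = status
        return location

    def rule_match(self, location):
        if self.state[location] == "Dirty":
            return "Suck"
        # Using the model: if every square is believed clean, stop moving.
        if all(s == "Clean" for s in self.state.values()):
            return "NoOp"
        return "Right" if location == "A" else "Left"

    def __call__(self, percept):
        location = self.update_state(percept)
        self.action = self.rule_match(location)
        return self.action

agent = ModelBasedReflexAgent()
print(agent(("A", "Clean")))  # Right (B's status is still unknown)
print(agent(("B", "Clean")))  # NoOp  (both squares now believed clean)
```

The payoff over the simple reflex agent is visible in the second call: with both squares believed clean, the agent stops instead of bouncing between them forever.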

Goal-based agents
(Diagram: on top of the model-based state, the agent predicts "what it will be like if I do action A" and compares the result against its goals to choose an action.)
The agent's actions are determined in part by its goals. Example: classical planning.

Utility-based agents
(Diagram: as in the goal-based agent, but predicted states are also scored by "how happy I will be in such a state", i.e., a utility function.)
In addition to goals, the agent uses a notion of how good an action sequence is. E.g., a taxi ride to the airport should be safe, efficient, etc.
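The "how happy I will be in such a state" component can be sketched as a utility function over predicted states. All routes, numbers, and weights below are invented for illustration:

```python
# Candidate actions and the states they are predicted to lead to.
predicted = {
    "highway":   {"time_min": 20, "accident_risk": 0.02},
    "back_road": {"time_min": 35, "accident_risk": 0.005},
}

def utility(state):
    """Trade off speed against safety; the weights are illustrative."""
    return -state["time_min"] - 2000 * state["accident_risk"]

best = max(predicted, key=lambda action: utility(predicted[action]))
print(best)  # back_road
```

A pure goal-based agent ("reach the airport") would see both routes as equally goal-satisfying; the utility function is what lets the agent prefer the slower but safer one.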

Learning agents
(Diagram: a critic compares sensory feedback against a performance standard and sends feedback to the learning element; the learning element changes the knowledge used by the performance element, which selects the actions sent to the actuators; a problem generator suggests exploratory actions, guided by learning goals.)

Summary
Agents interact with environments through actuators and sensors. The agent function describes what the agent does in all circumstances; agent programs implement agent functions. The performance measure evaluates the environment sequence, and a rational agent maximizes expected performance. PEAS descriptions define task environments. Environments are categorized along several dimensions: observable? deterministic? episodic? static? discrete? single-agent? Several basic agent architectures exist: reflex, reflex with state, goal-based, utility-based.