Agents. Formalizing Task Environments. What's an agent? The agent and the environment. Environments. Example: the automated taxi driver (PEAS)


What's an agent?
Russell and Norvig: "An agent is anything that can be viewed as perceiving its environment through sensors and acting on that environment through actuators." (p. 32)

The agent and the environment
An agent:
- works in a particular environment
- has goals
- perceives the environment
- performs actions to achieve its goals.

Example: the automated taxi driver (PEAS)
Formalizing the task environment:
- P: Performance measure. This is all-important: it defines the goal. For the taxi: safe, fast, legal, comfortable trip; maximize profit.
- E: Environment. This defines the world the agent lives in. For the taxi: roads, other traffic, pedestrians, customers.
- A: Actuators. This defines how the agent can change the world. For the taxi: steering, accelerator, brake, turn signal, horn.
- S: Sensors. This defines how the agent sees the world (and how much of the world the agent can see). For the taxi: cameras, engine sensors, laser rangefinders, GPS, keyboard, microphone.

Other task environments can be formalized the same way: the Internet shopper, the backgammon player, the chemical plant controller, the spam detector.

Properties of environments
Fully vs. partially observable: can the sensors detect all aspects of the environment that are relevant to the choice of action? (The taxi driver's environment is only partially observable.)
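The PEAS description above is just structured data, so it can be written down directly. A minimal sketch in Python (the `PEAS` class and field names are illustrative, not part of any standard library):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS description of a task environment (names are illustrative)."""
    performance: list  # P: what "doing well" means
    environment: list  # E: the world the agent lives in
    actuators: list    # A: how the agent can change the world
    sensors: list      # S: how the agent perceives the world

automated_taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profit"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "turn signal", "horn"],
    sensors=["cameras", "engine sensors", "laser rangefinders", "GPS",
             "keyboard", "microphone"],
)
```

Writing the four lists out like this is a useful exercise for any new task environment before choosing an agent design.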

Deterministic vs. stochastic: is the next environment state completely determined by the current state and the agent's action?

Episodic vs. sequential: can the agent's experience be divided into episodes, where the agent's action depends only on the current episode?

Static vs. dynamic: can the environment change while the agent is choosing an action?

Discrete vs. continuous: this distinction can be applied to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

Single- vs. multi-agent: does the environment contain other agents who are also maximizing some performance measure that depends on the current agent's actions?

The vacuum world
Environment: squares A and B
Percepts: [location, content], e.g. [A, Dirty]
Actions: Left, Right, Suck, NoOp
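The vacuum world just described is fully observable (locally), deterministic, static, and discrete, which makes it small enough to sketch in a few lines. A minimal, illustrative implementation (the class and method names are this sketch's own, not from AIMA):

```python
# Minimal sketch of the two-square vacuum world described above.
class VacuumWorld:
    def __init__(self, dirt=("Dirty", "Dirty"), location="A"):
        self.status = {"A": dirt[0], "B": dirt[1]}
        self.location = location

    def percept(self):
        # The agent perceives [location, content], e.g. ["A", "Dirty"].
        return [self.location, self.status[self.location]]

    def execute(self, action):
        # Deterministic transitions: each action has exactly one outcome.
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"
        # "NoOp" changes nothing.
```

Any of the agent programs in the rest of this chapter can be run against such an environment by repeatedly calling `percept()` and feeding the result to the agent.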

The vacuum world (cont.)
A simple agent function, given as a table from percept sequences to actions:

Percept sequence              Action
[A, Clean]                    Right
[A, Dirty]                    Suck
[B, Clean]                    Left
[B, Dirty]                    Suck
[A, Clean], [A, Clean]        Right
[A, Clean], [A, Dirty]        Suck
...

The same function as a program:

def REFLEX-VACUUM-AGENT(location, status):
    if status == Dirty: return Suck
    elif location == A: return Right
    elif location == B: return Left

Is this the best agent for the job?

Rational agents
Rationality: a rational agent is one that does the right thing. What is the right thing? Approximation: the most successful agent. What is the measure of success? A performance measure according to what is wanted in the environment (the goal). Possible performance measures for the vacuum world:
- the amount of dirt cleaned per unit time;
- how much energy was spent on moving and cleaning.

What is rational at a given time depends on:
- the performance measure;
- the available actions;
- the built-in knowledge about the environment;
- the percept sequence to date (sensors).

A rational agent chooses an action which maximizes the expected value of the performance measure, given the percept sequence and its built-in knowledge.

Agent structure
Agent = architecture + program. The agent program maps percepts to actions: it receives the current percept as input and returns an action for the agent's actuators.

Agent types
function TABLE-DRIVEN-AGENT(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences
    append percept to the end of percepts
    action <- LOOKUP(percepts, table)
    return action

Why won't this work? The table is far too large: its size grows exponentially with the number of percepts the agent receives over its lifetime.
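TABLE-DRIVEN-AGENT can be made concrete with an ordinary dictionary keyed by percept sequences. A sketch, assuming the vacuum-world table above (tuples are used because dictionary keys must be hashable):

```python
# Sketch of a table-driven agent for the vacuum world. The table entries
# mirror the percept-sequence table in the text; a real table would need
# an entry for every possible percept sequence.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
}

percepts = []  # the growing percept sequence (the agent's entire history)

def table_driven_agent(percept):
    percepts.append(tuple(percept))
    # LOOKUP: index the table by the whole percept sequence so far;
    # returns None when the sequence is missing from the table.
    return table.get(tuple(percepts))
```

The problem is visible immediately: with P possible percepts and a lifetime of T steps, the table needs on the order of P**T entries, so even the tiny vacuum world outgrows any explicit table after a handful of steps.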

Simple reflex agent
Selects an action only on the basis of the current percept; this gives a large reduction in the number of possible percept/action combinations.

function SIMPLE-REFLEX-AGENT(percept) returns an action
    static: rules, a set of condition-action rules
    state <- INTERPRET-INPUT(percept)
    rule <- RULE-MATCH(state, rules)
    action <- RULE-ACTION[rule]
    return action

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status == Dirty then return Suck
    else if location == A then return Right
    else if location == B then return Left

This will only work if the environment is fully observable. Our taxi-driver agent would not work as a simple reflex agent!

Model-based reflex agent
Maintains an internal state, and updates that state using information on how the world works (a model of the world). The taxi-driver agent, for example, needs to keep track of cars in its blind spot.

Goal-based agents
Our taxi-driver agent needs to get somewhere: it has a goal, and it chooses actions to achieve that goal. Search and planning are subfields of AI devoted to finding a sequence of actions that achieves the agent's goals.

Utility-based agents
A utility function maps a state (or a sequence of states) onto a real number. Certain goals can be reached in different ways; some ways are better, i.e. have a higher utility. This improves on goals by:
- selecting between conflicting goals;
- selecting appropriately among several goals that have varying probability of success.

Learning agents
All the previous agent programs describe methods for selecting actions, but they do not explain the origin of those programs. Learning mechanisms can be used to perform this task: teach the agents instead of instructing them. The advantage is the robustness of the program toward initially unknown environments.
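The difference between a simple reflex agent and a model-based one can be shown in a few lines of the vacuum world. A minimal sketch, assuming the percepts and actions defined earlier (the factory function and its belief dictionary are this sketch's own invention):

```python
# Sketch of a model-based reflex agent for the vacuum world. Its internal
# state is a belief about each square; its "model of the world" is the
# fact that Suck makes the current square clean. With this state it can
# stop (NoOp) once it believes both squares are clean -- something a
# simple reflex agent, which sees only the current percept, cannot do.
def make_model_based_vacuum_agent():
    believed = {"A": "Unknown", "B": "Unknown"}  # internal state

    def agent(percept):
        location, status = percept
        believed[location] = status              # update state from the percept
        if status == "Dirty":
            believed[location] = "Clean"         # model: Suck cleans this square
            return "Suck"
        if all(s == "Clean" for s in believed.values()):
            return "NoOp"                        # believes the job is done
        return "Right" if location == "A" else "Left"

    return agent
```

Note that the internal state lives between calls, which is exactly what the simple reflex agent lacks.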

Learning agents (cont.)
- Performance element: selects actions based on percepts; corresponds to the previous agent programs.
- Learning element: introduces improvements in the performance element.
- Critic: provides feedback on the agent's performance, based on a fixed performance standard.
- Problem generator: suggests actions that will lead to new and informative experiences (exploration vs. exploitation).
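The four components can be illustrated with a toy value-learning loop in the vacuum world. This is only a sketch of the architecture, not AIMA's design: all function names, the action-value table, and the update rule are this example's own assumptions.

```python
import random

random.seed(0)
values = {"Suck": 0.0, "Right": 0.0, "Left": 0.0}  # learned action values

def performance_element():
    # Selects actions: exploit whatever the agent currently believes is best.
    return max(values, key=values.get)

def problem_generator():
    # Suggests exploratory actions that may lead to informative experiences.
    return random.choice(list(values))

def critic(action, status):
    # Fixed performance standard: reward sucking up dirt, nothing else.
    return 1.0 if (action == "Suck" and status == "Dirty") else 0.0

def learning_element(action, reward, rate=0.5):
    # Improves the performance element by nudging values toward the feedback.
    values[action] += rate * (reward - values[action])

def step(status, explore_prob=0.1):
    # Exploration vs. exploitation: occasionally defer to the problem generator.
    act = problem_generator() if random.random() < explore_prob else performance_element()
    learning_element(act, critic(act, status))
    return act
```

After a few steps on a dirty square, the value of Suck dominates and the performance element reliably selects it, which is the whole point of the architecture: the action-selection program is produced by learning rather than written by hand.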