Foundations of Artificial Intelligence


2. Rational Agents: Nature and Structure of Rational Agents and Their Environments
Wolfram Burgard, Bernhard Nebel and Martin Riedmiller
Albert-Ludwigs-Universität Freiburg
May 2, 2014

Contents
1. What is an agent?
2. What is a rational agent?
3. The structure of rational agents
4. Different classes of agents
5. Types of environments

Agents
Agents perceive the environment through sensors (→ percepts) and act upon the environment through actuators (→ actions).
[Diagram: sensors deliver percepts from the environment to the agent; actuators carry the agent's actions back into the environment.]
Examples: humans and animals, robots and software agents (softbots), temperature control, ABS, ...
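The percept-to-action loop can be sketched as a minimal interface; the thermostat below is a hypothetical example of a temperature-control agent (all class, method, and action names here are invented for illustration, not taken from the slides):

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent maps percepts (delivered by sensors) to actions (executed by actuators)."""

    @abstractmethod
    def program(self, percept):
        """Return the next action for the given percept."""

class ThermostatAgent(Agent):
    """Hypothetical temperature-control agent: the percept is the current temperature."""

    def __init__(self, target, tolerance=1.0):
        self.target = target
        self.tolerance = tolerance

    def program(self, percept):
        # Compare the sensed temperature with the target and act accordingly.
        if percept < self.target - self.tolerance:
            return "heat"
        if percept > self.target + self.tolerance:
            return "cool"
        return "idle"
```

The environment drives the loop: it repeatedly hands the agent a percept and executes the returned action.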

Rational Agents
Rational agents do the right thing! To evaluate their performance, we have to define a performance measure. For an autonomous vacuum cleaner, possible measures include:
- m² cleaned per hour
- level of cleanliness
- energy usage
- noise level
- safety (behavior towards hamsters/small children)
Optimal behavior is often unattainable:
- not all relevant information is perceivable
- the complexity of the problem is too high

Rationality vs. Omniscience
An omniscient agent knows the actual effects of its actions. A rational agent, in contrast, behaves according to its percepts and knowledge and attempts to maximize its expected performance.
Example: If I look both ways before crossing the street, and am then hit by a meteorite as I cross, I can hardly be accused of lacking rationality.

The Ideal Rational Agent
Rational behavior depends on:
- performance measures (goals)
- percept sequences
- knowledge of the environment
- possible actions
Ideal rational agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Active perception is necessary to avoid trivialization. The ideal rational agent acts according to the function
  Percept Sequence × World Knowledge → Action
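Selecting the action that maximizes expected performance can be illustrated with a small sketch; the function name, the belief representation, and the street-crossing numbers are invented for illustration:

```python
def rational_action(actions, outcomes, performance):
    """Select the action with the highest expected performance.

    outcomes(a) yields (probability, result) pairs describing the agent's
    belief about the effects of action a; performance(result) applies the
    performance measure to each result."""
    def expected(a):
        return sum(p * performance(r) for p, r in outcomes(a))
    return max(actions, key=expected)

# Street-crossing example with made-up numbers: looking first is rational
# even though an unmodeled meteorite could still strike.
beliefs = {
    "look_then_cross": [(1.0, "safe")],
    "dash_across":     [(0.9, "safe"), (0.1, "hit")],
}
scores = {"safe": 1.0, "hit": -100.0}
best = rational_action(beliefs, lambda a: beliefs[a], lambda r: scores[r])
```

Note that the maximization is over *expected* performance under the agent's beliefs, not over the actual outcomes, which only an omniscient agent would know.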

Examples of Rational Agents

Medical diagnosis system
  Performance measure: healthy patient, costs, lawsuits
  Environment: patient, hospital, staff
  Actuators: display questions, tests, diagnoses, treatments, referrals
  Sensors: keyboard entry of symptoms, findings, patient's answers

Satellite image analysis system
  Performance measure: correct image categorization
  Environment: downlink from orbiting satellite
  Actuators: display categorization of scene
  Sensors: color pixel arrays

Part-picking robot
  Performance measure: percentage of parts in correct bins
  Environment: conveyor belt with parts, bins
  Actuators: jointed arm and hand
  Sensors: camera, joint angle sensors

Refinery controller
  Performance measure: purity, yield, safety
  Environment: refinery, operators
  Actuators: valves, pumps, heaters, displays
  Sensors: temperature, pressure, chemical sensors

Interactive English tutor
  Performance measure: student's score on test
  Environment: set of students, testing agency
  Actuators: display exercises, suggestions, corrections
  Sensors: keyboard entry

Structure of Rational Agents
The ideal mapping is realized through an agent program, executed on an architecture that also provides the interface to the environment (percepts, actions):
  Agent = Architecture + Program

The Simplest Design: Table-Driven Agents

function TABLE-DRIVEN-AGENT(percept) returns an action
  persistent: percepts, a sequence, initially empty
              table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action ← LOOKUP(percepts, table)
  return action

The TABLE-DRIVEN-AGENT program is invoked for each new percept and returns an action each time. It retains the complete percept sequence in memory.
Problems: the table can become very large, and it usually takes a very long time for the designer to specify it (or to learn it); in practice this is impossible.
For the two-state vacuum environment, the table can be replaced by a simple reflex program:

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left
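Both pseudocode programs translate almost line for line into Python. This is a sketch, assuming percepts are hashable tuples so that percept sequences can index a dictionary:

```python
class TableDrivenAgent:
    """TABLE-DRIVEN-AGENT: looks up the complete percept sequence in a fully
    specified table.  The design does not scale: the table needs one entry
    per possible percept sequence."""

    def __init__(self, table):
        self.table = table    # dict: tuple of percepts -> action
        self.percepts = []    # the complete percept sequence, initially empty

    def program(self, percept):
        self.percepts.append(percept)                # append percept to percepts
        return self.table.get(tuple(self.percepts))  # LOOKUP(percepts, table)

def reflex_vacuum_agent(percept):
    """REFLEX-VACUUM-AGENT for the two-state vacuum world."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```

The reflex version needs three rules where the table version needs an entry for every percept history, which is exactly the scaling problem the slide points out.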

Simple Reflex Agent
[Diagram: sensors report "what the world is like now"; condition-action rules determine "what action I should do now"; actuators execute it.]
Direct use of perceptions is often not possible due to the large space required to store them (e.g., video images). The input is therefore often interpreted before decisions are made.

Interpretative Reflex Agents
Since the storage space required for raw perceptions is too large, the percept is first interpreted into a state description, which is then matched against the rules:

function SIMPLE-REFLEX-AGENT(percept) returns an action
  persistent: rules, a set of condition-action rules
  state ← INTERPRET-INPUT(percept)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

A simple reflex agent acts according to a rule whose condition matches the current state, as defined by the percept.
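A minimal Python sketch of SIMPLE-REFLEX-AGENT; the pseudocode leaves the rule representation open, so the (predicate, action) pairs tried in order are my assumption:

```python
def simple_reflex_agent(rules, interpret_input):
    """SIMPLE-REFLEX-AGENT: percepts are first interpreted into a state,
    then matched against condition-action rules.

    rules: list of (condition, action) pairs, where condition is a
    predicate over the interpreted state; the first match wins."""
    def program(percept):
        state = interpret_input(percept)      # INTERPRET-INPUT(percept)
        for condition, action in rules:       # RULE-MATCH(state, rules)
            if condition(state):
                return action                 # rule.ACTION
        return None                           # no rule matched
    return program

# Vacuum-world rules expressed over an interpreted state dictionary.
vacuum = simple_reflex_agent(
    rules=[
        (lambda s: s["status"] == "Dirty", "Suck"),
        (lambda s: s["location"] == "A", "Right"),
        (lambda s: s["location"] == "B", "Left"),
    ],
    interpret_input=lambda p: {"location": p[0], "status": p[1]},
)
```

Here interpret_input is trivial; in a real agent it would stand in for, e.g., a vision pipeline that condenses video frames into a compact state.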

Structure of Model-based Reflex Agents
If the agent's history, in addition to the current percept, is required to decide on the next action, it must be represented in a suitable form.
[Diagram: an internal state, updated using knowledge of how the world evolves and of what the agent's actions do, feeds the condition-action rules.]

A Model-based Reflex Agent

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
  persistent: state, the agent's current conception of the world state
              model, a description of how the next state depends on the current state and action
              rules, a set of condition-action rules
              action, the most recent action, initially none
  state ← UPDATE-STATE(state, action, percept, model)
  rule ← RULE-MATCH(state, rules)
  action ← rule.ACTION
  return action

A model-based reflex agent keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the reflex agent.
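A corresponding Python sketch; as a simplification of the pseudocode, the world model is folded into the update_state function instead of being passed as a separate argument:

```python
class ModelBasedReflexAgent:
    """MODEL-BASED-REFLEX-AGENT: maintains an internal state updated from
    the previous state, the most recent action, and the new percept.
    The world model is folded into update_state here."""

    def __init__(self, update_state, rules, initial_state=None):
        self.state = initial_state
        self.action = None          # the most recent action, initially none
        self.update_state = update_state
        self.rules = rules          # list of (condition, action) pairs

    def program(self, percept):
        # state <- UPDATE-STATE(state, action, percept, model)
        self.state = self.update_state(self.state, self.action, percept)
        for condition, action in self.rules:   # RULE-MATCH(state, rules)
            if condition(self.state):
                self.action = action
                return action
        self.action = None
        return None

# Example model: remember the last-seen status of every square visited.
def remember_squares(state, action, percept):
    state = dict(state or {})
    location, status = percept
    state["location"] = location
    state[location] = status
    return state
```

The stored state lets the rules refer to history, e.g. to squares that are no longer in view, which a simple reflex agent cannot do.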

Model-based, Goal-based Agents
Often, percepts alone are insufficient to decide what to do, because the correct action depends on explicitly given goals (e.g., go towards X). Model-based, goal-based agents use an explicit representation of goals and consider them in the choice of actions.

Model-based, Goal-based Agents
[Diagram: the internal model additionally predicts "what it will be like if I do action A"; the goals then determine "what action I should do now".]

Model-based, Utility-based Agents
Usually, several possible actions can be taken in a given situation. In such cases, the utility of the resulting state can be considered to arrive at a decision. A utility function maps a state (or a sequence of states) onto a real number. The agent can also use these numbers to weigh the importance of competing goals.

Model-based, Utility-based Agents
[Diagram: the model predicts "what it will be like if I do action A"; the utility function evaluates "how happy I will be in such a state"; the action with the best predicted utility is chosen.]
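A one-step utility-based choice can be sketched as follows; this assumes a deterministic one-step prediction and simplifies the slides' utility over state sequences to a utility over single states:

```python
def utility_based_action(state, actions, predict, utility):
    """Choose the action whose predicted successor state has the highest
    utility.  predict(state, action) is a deterministic one-step model;
    utility(state) maps a state onto a real number."""
    return max(actions, key=lambda a: utility(predict(state, a)))

# Toy example: an agent on a number line wants to reach position 3.
goal = 3
choice = utility_based_action(
    state=0,
    actions=[-1, +1],
    predict=lambda s, a: s + a,
    utility=lambda s: -abs(goal - s),   # closer to the goal = higher utility
)
```

A goal-based agent only distinguishes goal from non-goal states; the real-valued utility additionally ranks the non-goal states, which is how competing goals can be traded off.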

Learning Agents
Learning agents can become more competent over time. They can start with an initially empty knowledge base, and they can operate in initially unknown environments.

Components of Learning Agents
- Learning element: responsible for making improvements
- Performance element: selects the external actions
- Critic: determines how well the agent is doing
- Problem generator: suggests actions that will lead to informative experiences

Learning Agents
[Diagram: the critic compares sensor feedback against a performance standard and informs the learning element; the learning element changes the performance element's knowledge and sets learning goals for the problem generator; the performance element selects the actions sent to the actuators.]

The Environment of Rational Agents
Accessible vs. inaccessible (fully observable vs. partially observable): Are the relevant aspects of the environment accessible to the sensors?
Deterministic vs. stochastic: Is the next state of the environment completely determined by the current state and the selected action? If only the actions of other agents are nondeterministic, the environment is called strategic.
Episodic vs. sequential: Can the quality of an action be evaluated within an episode (perception + action), or are future developments decisive for the evaluation of quality?
Static vs. dynamic: Can the environment change while the agent is deliberating? If the environment does not change, but the agent's performance score changes as time passes, the environment is called semi-dynamic.
Discrete vs. continuous: Is the environment discrete (chess) or continuous (a robot moving in a room)?
Single-agent vs. multi-agent: Which entities have to be regarded as agents? There are competitive and cooperative scenarios.
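These six dimensions can be captured in a small record type; the class and the is_most_challenging check (based on the summary slide's list of hardest properties) are illustrative, not from the slides:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProperties:
    """One environment classified along the six dimensions above."""
    observable: str       # "fully" or "partially"
    determinism: str      # "deterministic", "stochastic", or "strategic"
    episodic: bool        # True = episodic, False = sequential
    dynamics: str         # "static", "dynamic", or "semi"
    discrete: bool        # True = discrete, False = continuous
    multi_agent: bool     # True = multi-agent, False = single-agent

    def is_most_challenging(self):
        # Partially observable, nondeterministic (stochastic or strategic),
        # sequential, dynamic, continuous, and multi-agent.
        return (self.observable == "partially"
                and self.determinism != "deterministic"
                and not self.episodic
                and self.dynamics == "dynamic"
                and not self.discrete
                and self.multi_agent)

taxi = EnvironmentProperties("partially", "stochastic", False, "dynamic", False, True)
crossword = EnvironmentProperties("fully", "deterministic", False, "static", True, False)
```

Encoding the rows of the next slide as such records makes the classification machine-checkable, e.g. taxi driving comes out as the most challenging combination while the crossword puzzle does not.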

Examples of Environments

Task                       Observable  Deterministic  Episodic    Static   Discrete    Agents
Crossword puzzle           fully       deterministic  sequential  static   discrete    single
Chess with a clock         fully       strategic      sequential  semi     discrete    multi
Poker                      partially   stochastic     sequential  static   discrete    multi
Backgammon                 fully       stochastic     sequential  static   discrete    multi
Taxi driving               partially   stochastic     sequential  dynamic  continuous  multi
Medical diagnosis          partially   stochastic     sequential  dynamic  continuous  single
Image analysis             fully       deterministic  episodic    semi     continuous  single
Part-picking robot         partially   stochastic     episodic    dynamic  continuous  single
Refinery controller        partially   stochastic     sequential  dynamic  continuous  single
Interactive English tutor  partially   stochastic     sequential  dynamic  discrete    multi

Whether an environment has a certain property also depends on the conception of the designer.

Summary
An agent is something that perceives and acts. It consists of an architecture and an agent program.
An ideal rational agent always takes the action that maximizes its performance, given the percept sequence and its knowledge of the environment.
An agent program maps from a percept sequence to an action. There are a variety of designs:
- Reflex agents respond immediately to percepts.
- Goal-based agents work towards goals.
- Utility-based agents try to maximize their reward.
- Learning agents improve their behavior over time.
Some environments are more demanding than others. Environments that are partially observable, nondeterministic, strategic, dynamic, continuous, and multi-agent are the most challenging.