CS 331: Artificial Intelligence: Intelligent Agents
Topics: General Properties of AI Systems, Agent-Related Terms, Question du Jour, Rationality


General Properties of AI Systems

An AI system sits in a loop with its environment: sensors deliver percepts, reasoning selects actions, and actuators carry those actions out in the environment. This loop is the agent.

Agent: anything that perceives its environment through sensors and acts on that environment through actuators.

Example: Vacuum Cleaner Agent

Percept sequence                          Action
[A, Clean]                                Right
[A, Dirty]                                Suck
[B, Clean]                                Left
[B, Dirty]                                Suck
[A, Clean], [A, Clean]                    Right
[A, Clean], [A, Dirty]                    Suck
...                                       ...
[A, Clean], [A, Clean], [A, Clean]        Right
[A, Clean], [A, Clean], [A, Dirty]        Suck
...                                       ...

Agent-Related Terms

- Percept sequence: a complete history of everything the agent has ever perceived. Think of this as the state of the world from the agent's perspective.
- Agent function (or policy): maps a percept sequence to an action (this determines the agent's behavior).
- Agent program: implements the agent function.

Question du Jour

What is the difference between the agent function and the agent program?

Rationality

Rationality: take the action that causes the agent to be most successful. How do you define success? You need a performance measure, e.g., reward the agent with one point for each clean square at each time step (and possibly penalize it for costs and noise).

Important point: design performance measures according to what one wants in the environment, not according to how one thinks the agent should behave.
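The vacuum-world table above can be sketched directly in code. This is an illustrative sketch, not code from the slides (the names `AGENT_FUNCTION` and `vacuum_agent` are my own): the dictionary plays the role of the agent function, mapping a percept to an action, while the Python function is the agent program that implements it.

```python
# Sketch of the vacuum-cleaner agent (illustrative; names are not from the slides).
# The table plays the role of the agent function: it maps the current percept
# (location, status) to an action. The Python function is the agent program.

AGENT_FUNCTION = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def vacuum_agent(percept):
    """Agent program: look the current percept up in the agent function."""
    return AGENT_FUNCTION[percept]

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("B", "Clean")))  # Left
```

The distinction the Question du Jour asks about is visible here: the agent function is the abstract mapping (the dictionary's contents), while the agent program is the concrete code that computes it.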

Rationality

Rationality depends on four things:
1. The performance measure of success
2. The agent's prior knowledge of the environment
3. The actions the agent can perform
4. The agent's percept sequence to date

Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Learning

Successful agents split the task of computing the policy across three periods:
1. Initially, the designers build some prior knowledge into the policy.
2. When deciding its next action, the agent does some computation.
3. The agent learns from experience to modify its behavior.

Autonomous agents learn from experience to compensate for partial or incorrect prior knowledge.

PEAS Descriptions of Agents

PEAS: Performance, Environment, Actuators, Sensors.

Example: automated taxi driver
- Performance measure: safe, fast, legal, comfortable trip; maximize profits
- Environment: roads, other traffic, pedestrians, customers
- Actuators: steering, accelerator, brake, signal, horn, display
- Sensors: cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard

Properties of Environments

- Fully observable: the agent can access the complete state of the environment at each point in time. Partially observable: it cannot, e.g., due to noisy, inaccurate, or incomplete sensor data.
- Deterministic: the next state of the environment is completely determined by the current state and the agent's action. Stochastic: it is not (note that a partially observable environment can appear to be stochastic). Strategic: the environment is deterministic except for the actions of other agents.
- Episodic: the agent's experience is divided into independent, atomic episodes, and the agent perceives and performs a single action in each episode. Sequential: the current decision affects all future decisions.
- Static: the agent doesn't need to keep sensing while it decides what action to take, and doesn't need to worry about the passage of time. Dynamic: the environment changes while the agent is thinking. Semidynamic: the environment doesn't change with time, but the agent's performance score does.
- Discrete vs. continuous: this distinction applies to states, time, percepts, or actions.
- Single agent vs. multiagent: in a multiagent environment, agents affect each other's performance measures, either cooperatively or competitively.

Examples of Task Environments

Task environment          Observable  Deterministic  Episodic    Static   Discrete    Agents
Crossword puzzle          Fully       Deterministic  Sequential  Static   Discrete    Single
Chess with a clock        Fully       Strategic      Sequential  Semi     Discrete    Multi
Poker                     Partially   Stochastic     Sequential  Static   Discrete    Multi
Backgammon                Fully       Stochastic     Sequential  Static   Discrete    Multi
Taxi driving              Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Medical diagnosis         Partially   Stochastic     Sequential  Dynamic  Continuous  Multi
Image analysis            Fully       Deterministic  Episodic    Semi     Continuous  Single
Part-picking robot        Partially   Stochastic     Episodic    Semi     Continuous  Single
Refinery controller       Partially   Stochastic     Sequential  Dynamic  Continuous  Single
Interactive English tutor Partially   Stochastic     Sequential  Dynamic  Discrete    Multi

Agent Programs

The agent program implements the policy. The simplest agent program is a table-driven agent:

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action <- LOOKUP(percepts, table)
  return action

This is a BIG table: clearly not feasible!
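A literal Python rendering of the table-driven agent makes the feasibility problem concrete. This is a sketch under my own packaging (the closure-based `make_table_driven_agent` is not from the slides): the table must be indexed by the entire percept sequence to date, so its size grows exponentially with the agent's lifetime.

```python
# Sketch of TABLE-DRIVEN-AGENT (packaging is illustrative, not from the slides).

def make_table_driven_agent(table):
    percepts = []  # the growing percept sequence (the pseudocode's "static" state)

    def agent(percept):
        percepts.append(percept)
        # Index the table by the ENTIRE percept sequence to date.
        return table.get(tuple(percepts))

    return agent

# Even for the tiny vacuum world, the table needs one row per possible percept
# *sequence*: with 4 distinct percepts and a lifetime of T steps, that is
# 4 + 4**2 + ... + 4**T rows. This is why the table-driven design is infeasible.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
```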

Four Kinds of Agent Programs

1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents

Simple Reflex Agents

A simple reflex agent selects actions using only the current percept, not the entire percept history. It works on condition-action rules: if condition then action.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state <- INTERPRET-INPUT(percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

Advantages:
- Easy to implement
- Uses much less memory than the table-driven agent

Disadvantages:
- Works correctly only if the environment is fully observable
- Can get stuck in infinite loops

Model-based Reflex Agents

A model-based reflex agent maintains internal state that keeps track of the part of the world it can't see now. It needs a model, which encodes knowledge about how the world works.

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- UPDATE-STATE(state, action, percept)
  rule <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action
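The two reflex designs can be sketched in Python for the vacuum world. This is an illustrative sketch (the rule encoding and all names are my own, not from the slides): the simple reflex agent acts on the current percept only, while the model-based variant keeps internal state about the square it cannot currently see.

```python
# Sketches of the two reflex agents (names and rule encoding are illustrative).

# --- Simple reflex agent: condition-action rules over the CURRENT percept only.
RULES = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    location, status = percept
    state = {"location": location, "status": status}  # INTERPRET-INPUT
    for condition, action in RULES:                   # RULE-MATCH
        if condition(state):
            return action                             # RULE-ACTION

# --- Model-based reflex agent: internal state tracks the unseen square.
def make_model_based_agent():
    believed = {"A": None, "B": None}  # believed status of each square (model state)

    def agent(percept):
        location, status = percept
        believed[location] = status        # UPDATE-STATE with the new percept
        if status == "Dirty":
            return "Suck"
        other = "B" if location == "A" else "A"
        if believed[other] != "Clean":     # model: the unseen square may be dirty
            return "Right" if location == "A" else "Left"
        return "NoOp"                      # both squares believed clean

    return agent
```

Note how the model-based agent can stop (NoOp) once its internal state says both squares are clean, something the simple reflex agent cannot do: with no memory, it keeps shuttling between A and B forever, which is exactly the infinite-loop weakness listed above.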

Goal-based Agents

Goal information guides the agent's actions (it looks to the future). Sometimes achieving the goal is simple, e.g., it follows from a single action; other times the goal requires reasoning about long sequences of actions. Goal-based agents are flexible: you can reprogram the agent simply by changing its goals.

Utility-based Agents

What if there are many paths to the goal? A utility function measures which states are preferable to other states: it maps a state to a real number (its utility, or "happiness").

Learning Agents

- Critic: tells the learning element how well the agent is doing with respect to the performance standard (needed because the percepts alone don't tell the agent about its success or failure).
- Learning element: responsible for improving the agent's behavior with experience.
- Problem generator: suggests actions that lead to new and informative experiences.
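Utility-based action selection can be sketched as: predict the state each candidate action leads to, then pick the action whose predicted state has the highest utility. Everything below is an invented illustration (the one-step model, the action set, and the utility numbers are my own assumptions, not from the slides); the utility follows the earlier performance measure of one point per clean square.

```python
# Sketch of utility-based action selection (all names and numbers illustrative).

def predicted_state(state, action):
    """A simple model of how each action changes the vacuum-world state.
    A state is (location, set_of_dirty_squares)."""
    location, dirt = state
    if action == "Suck":
        return (location, dirt - {location})
    if action == "Right":
        return ("B", dirt)
    return ("A", dirt)  # "Left"

def utility(state):
    """One point per clean square (two squares total), per the slides'
    example performance measure."""
    _, dirt = state
    return 2 - len(dirt)

def utility_based_agent(state, actions=("Suck", "Right", "Left")):
    # Choose the action whose predicted resulting state has the highest utility.
    return max(actions, key=lambda a: utility(predicted_state(state, a)))

print(utility_based_agent(("A", {"A"})))  # Suck: cleaning A raises utility
```

This is what distinguishes a utility-based agent from a goal-based one: instead of a binary goal test, it compares states on a numeric scale, so it can trade off between several acceptable outcomes.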

Learning Agents (continued)

- Performance standard: think of this as outside the agent, since you don't want it to be changed by the agent.
- Performance element: maps percepts to actions.

What you should know

- What it means to be rational
- Be able to write a PEAS description of a task environment
- Be able to determine the properties of a task environment
- Know which agent program is appropriate for your task

Exercise

Develop a PEAS description of the task environment for a face-recognition agent:
- Performance measure
- Environment
- Actuators
- Sensors

Describe the task environment:
- Fully observable or partially observable?
- Deterministic or stochastic?
- Episodic or sequential?
- Static or dynamic?
- Discrete or continuous?
- Single agent or multi-agent?

Select a suitable agent design for the face-recognition agent.