Princess Nora University Faculty of Computer & Information Systems ARTIFICIAL INTELLIGENCE (CS 370D) Computer Science Department


CHAPTER 3: INTELLIGENT AGENTS (Course coordinator)

CHAPTER OUTLINE
o What is an agent?
o What is a rational agent?
o Agent task environments
o Different classes of agents

What is an Agent?

1. What is an Agent?
An agent:
o Perceives the environment through sensors (Percepts)
o Acts upon the environment through actuators (Actions)

We use the term percept to refer to the agent's perceptual inputs at any given instant. An agent's percept sequence is the complete history of everything the agent has ever perceived. The agent function maps from percept histories to actions: f: P* → A. Is there a difference between the agent function and the agent program?
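The mapping f: P* → A can be sketched as a lookup table keyed by the percept history. This is a minimal illustration; the table entries below are hypothetical, not from the slides:

```python
# A sketch of an agent function f: P* -> A as a lookup table.
# The percept history (a tuple) is the key; the action is the value.

def make_table_agent(table, default_action):
    """Return an agent program backed by a percept-history table."""
    percepts = []  # the agent's percept sequence so far

    def agent_program(percept):
        percepts.append(percept)
        # Look up the full history; fall back to a default action.
        return table.get(tuple(percepts), default_action)

    return agent_program

table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = make_table_agent(table, "NoOp")
print(agent(("A", "Dirty")))   # first percept -> Suck
print(agent(("A", "Clean")))   # history of two percepts -> Right
```

This also hints at the answer to the question above: the agent function is the abstract mapping (the table), while the agent program is the concrete code that implements it.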

AGENTS
Human agent: sensors = eyes, ears, etc.; actuators = hands, legs, mouth, etc.
Robotic agent: sensors = cameras and infrared range finders; actuators = various motors

AUTONOMOUS AGENT
One whose actions are based on both built-in knowledge and its own experience. Initial knowledge provides an ability to learn. A truly autonomous agent can adapt to a wide variety of environments.

The vacuum-cleaner world (1)
o Environment: squares A and B
o Percepts: [location, content], e.g. [A, Dirty]
o Actions: Left, Right, Suck, NoOp
o Agent's function: a look-up table

The Vacuum-cleaner World (2)
Percept sequence    Action
[A, Clean]          Right
[A, Dirty]          Suck
[B, Clean]          Left
[B, Dirty]          Suck

The vacuum-cleaner world (3)
function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status == Dirty then return Suck
  else if location == A then return Right
  else if location == B then return Left
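The pseudocode above transcribes directly into Python:

```python
# The slide's REFLEX-VACUUM-AGENT pseudocode as a Python function.

def reflex_vacuum_agent(location, status):
    """Return an action for the two-square vacuum world."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("B", "Clean"))  # Left
```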

Rational agent?

2. Rational agent?
A rational agent does the "right thing"! The right action is the one that will cause the agent to be most successful. But how and when do we evaluate the agent's success? To evaluate it, we have to define a performance measure.
Example: autonomous vacuum cleaner
o Area cleaned (m²) per hour
o Level of cleanliness
o Energy usage
o Noise level
o Safety (behavior towards hamsters/small children)

Ideal rational agent: For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Rational behavior depends on:
1. Performance measures (goals)
2. The percept sequence to date
3. The agent's prior knowledge of the environment
4. The possible actions

Optimal behavior is often unattainable (not totally achievable):
o Not all relevant information is perceivable
o The complexity of the problem is too high
Active perception is necessary to avoid trivialization. The ideal rational agent acts according to the function:
Percept Sequence × World Knowledge → Action

Structure of Rational Agents
An agent program, executed on an architecture which also provides an interface to the environment (percepts, actions).

Rationality vs. Omniscience
An omniscient agent knows the actual effects of its actions; omniscience is impossible in the real world. In comparison, a rational agent behaves according to its percepts and knowledge and attempts to maximize the expected performance.
Example: If I look both ways before crossing the street, and then as I cross I am hit by a car, I can hardly be accused of lacking rationality.
(Omniscient = having all the knowledge)

Agent task environments

Task Environment
Before we design an intelligent agent, we must specify its task environment (PEAS):
o Performance measure
o Environment
o Actuators
o Sensors

Task Environment (cont.)
Example: Agent = robot driver in the DARPA Challenge
o Performance measure: time to complete the course
o Environment: roads, other traffic, obstacles
o Actuators: steering wheel, accelerator, brake, signal, horn
o Sensors: optical cameras, lasers, sonar, accelerometer, speedometer, GPS, odometer, engine sensors
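A PEAS description is just structured data, so it can be sketched as a small dataclass; the field values below repeat the DARPA-driver example from the slide:

```python
# A sketch of a PEAS task-environment description as a dataclass.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list  # performance measures
    environment: list  # things in the environment
    actuators: list    # means of acting
    sensors: list      # means of perceiving

darpa_driver = PEAS(
    performance=["time to complete course"],
    environment=["roads", "other traffic", "obstacles"],
    actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
    sensors=["optical cameras", "lasers", "sonar", "GPS", "odometer"],
)
print(darpa_driver.performance[0])  # time to complete course
```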

Examples of Rational Agents
Medical diagnosis system
o Performance measure: healthy patient, costs, lawsuits (court cases)
o Environment: patient, hospital, staff
o Actuators: display questions, tests, diagnoses, treatments
o Sensors: keyboard entry of symptoms, findings, patient's answers
Satellite image analysis system
o Performance measure: correct image categorization
o Environment: downlink from orbiting satellite
o Actuators: display categorization of scene
o Sensors: color pixel arrays
Part-picking robot
o Performance measure: percentage of parts in correct bins
o Environment: conveyor belt with parts, bins
o Actuators: jointed arm and hand
o Sensors: camera, joint angle sensors

Examples of Rational Agents (cont.)
Refinery controller
o Performance measure: purity, safety
o Environment: refinery, operators
o Actuators: valves, pumps, heaters, displays
o Sensors: temperature, pressure, chemical sensors
Interactive English tutor
o Performance measure: student's score on test
o Environment: set of students, testing agency
o Actuators: display exercises, suggestions, corrections
o Sensors: keyboard entry
Web crawling agent
o Performance measure: did you get only the pages you wanted?
o Environment: user, internet
o Actuators: display related info
o Sensors: keyboard entry

Properties (Types) of Task Environments
o Fully observable vs. partially observable
o Deterministic vs. stochastic
o Episodic vs. sequential
o Static vs. dynamic
o Discrete vs. continuous
o Single agent vs. multi-agent

Fully observable vs. partially observable
Fully observable: the agent's sensors give it access to the complete state of the environment at each point in time. The agent need not maintain any internal state to keep track of the world.
An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
Example: a vacuum cleaner with only a local dirt sensor is partially observable.

Deterministic vs. stochastic
Deterministic environment: the next state of the environment is completely determined by the current state and the action executed by the agent.
Ex: the vacuum-cleaner world is deterministic. Why?
Ex: a taxi-driving agent (robot driver) is stochastic. Why? It can never predict the traffic situation.
(Stochastic = connected to random events)

Episodic vs. sequential
Episodic: the agent's experience is divided into atomic episodes. In each episode the agent perceives and then takes a single action, which depends only on that episode; the next episode does not depend on the actions taken in previous ones.
Ex: classification tasks.
Sequential: the current decision could affect all future decisions.
Ex: chess and taxi driving.

Static vs. dynamic
Static: the environment is unchanged while the agent is deliberating. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.
Ex: crossword puzzles are static.
Dynamic environments continuously ask the agent what it wants to do.
Ex: taxi driving is dynamic.

Discrete vs. continuous
Discrete: a limited number of distinct, clearly defined states, percepts and actions.
Ex: chess has a finite number of discrete states, and a discrete set of percepts and actions.
Continuous: not limited.
Ex: taxi driving has continuous states and actions.

Single agent vs. multi-agent
An agent operating by itself in an environment is a single agent.
o Ex: a crossword puzzle is a single-agent environment.
o Ex: chess is a competitive multi-agent environment.

Environment types
The simplest environment is fully observable, deterministic, episodic, static, discrete and single-agent.
Most real situations are partially observable, stochastic, sequential, dynamic, continuous and multi-agent.
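The contrast above can be recorded as a small lookup structure in code. The two classifications below follow the slides' own examples (crossword puzzle and taxi driving, matching AIMA's Figure 2.6):

```python
# Classifying two example environments along the six property axes.
environments = {
    "crossword puzzle": {
        "observable": "fully",
        "deterministic": "deterministic",
        "episodic": "sequential",
        "static": "static",
        "discrete": "discrete",
        "agents": "single",
    },
    "taxi driving": {
        "observable": "partially",
        "deterministic": "stochastic",
        "episodic": "sequential",
        "static": "dynamic",
        "discrete": "continuous",
        "agents": "multi",
    },
}

for name, props in environments.items():
    print(f"{name}: {props}")
```

Note that even the "simple" crossword puzzle is sequential, since earlier answers constrain later ones; the fully episodic case is rarer in practice.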

Different classes of agents
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents

1- Simple Reflex Agents
Reflex agents respond immediately to percepts: they select actions on the basis of the current percept, ignoring the rest of the percept history.
Ex: the vacuum cleaner. Why? Because its decision is based only on the current location and whether it contains dirt or not.

2- Model-based Reflex Agents
The most effective way to handle partial observability. When the agent's perception history, in addition to the current percept, is required to decide on the next action, it must be represented in a suitable form: a model of the world.
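A minimal sketch of the idea, using the vacuum world: the agent keeps an internal state (which squares it believes are clean) so it can decide to stop even though its sensors only see the current square. The model-update rules here are an illustrative assumption, not from the slides:

```python
# A model-based reflex vacuum agent: internal state tracks the
# believed status of both squares under partial observability.

def make_model_based_vacuum():
    state = {"A": "Unknown", "B": "Unknown"}  # internal world model

    def agent_program(percept):
        location, status = percept
        state[location] = status          # update the model from the percept
        if status == "Dirty":
            state[location] = "Clean"     # model the effect of Suck
            return "Suck"
        other = "B" if location == "A" else "A"
        if state[other] != "Clean":       # other square not known clean yet
            return "Right" if location == "A" else "Left"
        return "NoOp"                     # model says everything is clean

    return agent_program

agent = make_model_based_vacuum()
print(agent(("A", "Dirty")))  # Suck (and records A as clean)
print(agent(("A", "Clean")))  # Right (B's status still unknown)
print(agent(("B", "Clean")))  # NoOp (model: both squares clean)
```

A simple reflex agent could never return NoOp here, because "both squares are clean" is not visible in any single percept.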

3- Goal-based Agents
o Goal-based agents work towards goals.
o Often, percepts alone are insufficient to decide what to do.
o This is because the correct action depends on the given explicit goals (e.g., go towards X).
o Goal-based agents use an explicit representation of goals and consider them in the choice of actions.
o Ex: taxi driving to a destination, vacuum cleaner.

4- Utility-based Agents
Usually, there are several possible actions that can be taken in a given situation. Utility-based agents take actions that maximize their reward. A utility function maps a state (or a sequence of states) onto a real number. The agent can also use these numbers to weigh the importance of competing goals.
Ex: in taxi driving, many paths may lead to the goal, but some are quicker, cheaper, or safer.
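The selection rule can be sketched in a few lines: score each candidate action's predicted outcome with the utility function and pick the maximum. The routes, their costs, and the trade-off weights below are hypothetical:

```python
# Utility-based action selection: pick the action whose predicted
# resulting state has the highest utility.

def choose_action(actions, result, utility):
    """Return the action maximizing the utility of its outcome."""
    return max(actions, key=lambda a: utility(result(a)))

# Toy example: three routes to the same destination (made-up numbers).
routes = {
    "highway":  {"time": 20, "risk": 0.2},
    "backroad": {"time": 35, "risk": 0.1},
    "shortcut": {"time": 15, "risk": 0.8},
}

# Utility trades off speed against safety (weights are arbitrary).
utility = lambda s: -s["time"] - 50 * s["risk"]

best = choose_action(list(routes), routes.get, utility)
print(best)  # highway: -30 beats backroad (-40) and shortcut (-55)
```

All three routes reach the goal, so a pure goal-based agent could not distinguish them; the utility function is what expresses "quicker, cheaper, safer".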

Learning Agents
o Learning agents improve their behavior, becoming more competent over time.
o They can start with an initially empty knowledge base.
o They can operate in initially unknown environments.

Learning Agents (cont.)
o Performance element: takes percepts and decides on actions.
o Learning element: responsible for making improvements.
o Critic: determines how well the agent is doing (the percept alone does not indicate how successful the agent is).
o Problem generator: suggests exploratory actions that will lead to new informative experiences.
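A toy sketch wiring the four components together: the agent learns a threshold for when to act. The task, the feedback rule, and all numbers are illustrative assumptions, not from the slides:

```python
# A toy learning agent: act when the percept exceeds a learned threshold.

class LearningAgent:
    def __init__(self):
        self.threshold = 0.0  # knowledge that the learning element tunes

    def performance_element(self, percept):
        # Takes a percept and decides on an action.
        return "act" if percept > self.threshold else "wait"

    def critic(self, percept, action):
        # Supplies the success feedback the percept alone cannot:
        # acting is "correct" only when the percept exceeds 0.5.
        return 1 if (action == "act") == (percept > 0.5) else -1

    def learning_element(self, percept, reward):
        # Makes improvements: nudge the threshold after mistakes.
        if reward < 0:
            self.threshold += 0.1 if percept <= 0.5 else -0.1

    def problem_generator(self, trial):
        # Suggests new exploratory situations (deterministic here).
        return (trial * 0.37) % 1.0

agent = LearningAgent()
for t in range(30):
    p = agent.problem_generator(t)
    a = agent.performance_element(p)
    agent.learning_element(p, agent.critic(p, a))
print(round(agent.threshold, 2))  # the threshold drifts toward 0.5
```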

Thank you. End of Chapter 2.