CS343: Artificial Intelligence

CS343: Artificial Intelligence Introduction: Part 2 Prof. Scott Niekum University of Texas at Austin [Based on slides created by Dan Klein and Pieter Abbeel for CS188 Intro to AI at UC Berkeley. All materials available at http://ai.berkeley.edu.]

Last time AI: Act rationally by maximizing your expected utility!

Today Some examples of modern AI applications How do they all fit together? How do we formally define problems in AI?

Natural Language Speech technologies (e.g. Siri) Automatic speech recognition (ASR) Text-to-speech synthesis (TTS) Dialog systems

Natural Language Video

Natural Language Speech technologies (e.g. Siri) Automatic speech recognition (ASR) Text-to-speech synthesis (TTS) Dialog systems Language processing technologies Question answering Machine translation Web search Text classification, spam filtering, etc.

Natural Language

Deep learning

Natural Language

Vision (Perception) Object and face recognition Scene segmentation Image classification Images from Erik Sudderth (left), wikipedia (right)

Object Tracking

Perception + Natural Language

Perception + Natural Language We won't discuss NLP and perception directly, but we will cover: Bayes nets Supervised learning Deep learning Course outline

Robotics Robotics Part mech. eng. Part AI Reality much harder than simulations! Technologies Vehicles Rescue Soccer! Lots of automation In this class: We ignore mechanical aspects Methods for planning Methods for control Images from UC Berkeley, Boston Dynamics, RoboCup, Google

Robot Laundry

Robot Soccer

Learning from demonstration

Learning from demonstration

Full body control of humanoids

but still a long way to go

Robotics We will cover several topics relevant to robotics: Planning and search Reinforcement learning Time-series analysis State estimation and filtering

Logic Logical systems Theorem provers NASA fault diagnosis Question answering Methods: Deduction systems Constraint satisfaction Satisfiability solvers (huge advances!) Image from Bart Selman

Game Playing Classic Moment: May '97: Deep Blue vs. Kasparov First match won against world champion Intelligent creative play 200 million board positions per second Humans understood 99.9% of Deep Blue's moves Can do about the same now with a PC cluster Open question: How does human cognition deal with the search space explosion of chess? Or: how can humans compete with computers at all?? 1996: Kasparov Beats Deep Blue "I could feel --- I could smell --- a new kind of intelligence across the table." 1997: Deep Blue Beats Kasparov "Deep Blue hasn't proven anything." Huge game-playing advances recently, e.g. in Go! Text from Bart Selman, image from IBM's Deep Blue pages

Course Topics Part I: Making Decisions Fast search / planning Constraint satisfaction Adversarial and uncertain search MDPs and Reinforcement learning Part II: Reasoning under Uncertainty Bayes nets Decision theory and value of information Statistical Machine learning Throughout: Applications Natural language, vision, robotics, games, ...

But how??

Designing Rational Agents An agent is an entity that perceives and acts. A rational agent selects actions that maximize its (expected) utility. Characteristics of the performance measure, environment, actions, and sensing dictate techniques for selecting rational actions By the end of the course you should understand: General AI techniques for a variety of problem types How to recognize when and how a new problem can be solved with an existing technique [Diagram: an agent's sensors receive percepts from the environment; its actuators produce actions that affect the environment]
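The perceive-act cycle in the diagram can be sketched as a minimal agent-environment loop. This is an illustrative sketch, not code from the course: the `Agent`, `observe`, and `step` names are assumptions, and the environment is taken to return a utility for each action.

```python
class Agent:
    """An agent maps a percept (from its sensors) to an action (for its actuators)."""

    def get_action(self, percept):
        raise NotImplementedError

def run(agent, environment, steps):
    """Generic sense-act loop: the agent perceives, chooses, and acts,
    accumulating utility from the environment."""
    total_utility = 0.0
    for _ in range(steps):
        percept = environment.observe()            # sensors -> percept
        action = agent.get_action(percept)         # agent selects an action
        total_utility += environment.step(action)  # actuators change the world
    return total_utility
```

Different agent designs (reflex, planning) differ only in how `get_action` is implemented; the loop itself stays the same.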

Pac-Man as an Agent [Diagram: Pac-Man as an agent: sensors receive percepts from the environment; actuators produce actions] Pac-Man is a registered trademark of Namco-Bandai Games, used here for educational purposes

Pac-Man as an Agent

Reflex Agents Reflex agents: Choose action based on current perceptions May have memory or a model of the world's current state Do not consider the future consequences of their actions Often referred to as a policy Consider how the world IS Can a reflex agent be rational?
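A reflex agent's policy is a direct mapping from the current percept to an action, with no lookahead. A sketch of the "move toward the closest food pellet" policy from the upcoming quiz (the percept format and action names are illustrative assumptions):

```python
def reflex_pacman(percept):
    """Reflex policy: step toward the closest food pellet, ignoring the future.

    percept is assumed to be a dict with Pacman's (x, y) position and a
    list of food pellet positions (illustrative, not the course API).
    """
    px, py = percept["position"]
    # Pick the pellet with the smallest Manhattan distance.
    fx, fy = min(percept["food"],
                 key=lambda f: abs(f[0] - px) + abs(f[1] - py))
    if fx > px:
        return "East"
    if fx < px:
        return "West"
    if fy > py:
        return "North"
    return "South"
```

Note the policy never asks what happens after the step is taken; that is exactly what makes it a reflex agent.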

Planning Agents Plan ahead Ask "what if?" Decisions based on (hypothesized) consequences of actions Must have a model of how the world evolves in response to actions Consider how the world WOULD BE
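A planning agent uses its model to hypothesize futures and search for an action sequence that reaches a goal. A minimal breadth-first sketch (the `model` and `goal_test` callables are illustrative placeholders; search algorithms are covered properly later in the course):

```python
from collections import deque

def plan(start, goal_test, model, actions):
    """Breadth-first search over hypothesized futures.

    model(state, action) returns the successor state; planning only ever
    consults the model, never the real environment ("what WOULD happen?").
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path          # sequence of actions reaching the goal
        for a in actions:
            nxt = model(state, a)
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [a]))
    return None  # no plan found
```

For example, on a number line with actions +1/-1, planning from 0 to 3 returns the three-step plan [1, 1, 1].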

Quiz: Reflex or Planning? Select which type of agent is described: 1. Pacman, where Pacman is programmed to move in the direction of the closest food pellet. 2. Pacman, where Pacman is programmed to move in the direction of the closest food pellet, unless there is a ghost in that direction that is less than 3 steps away. 3. A navigation system that first considers all possible routes to the destination, then selects the shortest route.

Properties of task environment Fully observable vs. partially observable Single-agent vs. multi-agent Deterministic vs. stochastic Episodic vs. sequential Static vs. dynamic Discrete vs. continuous Known vs. unknown

Fully observable vs. partially observable Fully observable: agent's sensors give it access to the complete state of the environment at all times Can be partially observable due to noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data Example: Perfect GPS vs. noisy pose estimation Example: IKEA assembly while blindfolded Almost everything in the real world is partially observable
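The GPS example above can be made concrete: even when the true state is a single number, a noisy sensor only shows a corrupted version of it, so no single reading reveals the state. A sketch with an assumed Gaussian noise model:

```python
import random

def noisy_gps(true_position, sigma=1.0, rng=random):
    """A perfect GPS would return true_position exactly; this noisy sensor
    adds Gaussian noise, making the state only partially observable
    from any one reading."""
    return true_position + rng.gauss(0.0, sigma)

def estimate(true_position, n_readings=1000):
    """Averaging many noisy readings recovers the state approximately,
    previewing the state-estimation/filtering topics later in the course."""
    readings = [noisy_gps(true_position) for _ in range(n_readings)]
    return sum(readings) / len(readings)
```

The point of the sketch: partial observability forces the agent to reason about the state rather than read it off the sensor.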

Single agent vs. multi-agent Not multi-agent if other agents can be considered part of the environment Only considered to be multi-agent if the agents are maximizing a performance metric that depends on other agents behavior Single agent example: Pacman with randomly moving ghosts Multi-agent example: Pacman with ghosts that use a planner to follow him

Deterministic vs. stochastic Deterministic: next state of the environment is completely determined by the current state and the action executed by the agent Stochastic: actions have probabilistic outcomes Strongly related to partial observability: most apparent stochasticity results from partial observation of a deterministic system Example: Coin flip
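The distinction shows up in the shape of the transition function: a deterministic environment maps (state, action) to exactly one next state, while a stochastic one samples the next state from a distribution. An illustrative sketch (the 80% success rate is an assumed example, not from the slides):

```python
import random

def deterministic_step(state, action):
    """Next state is completely determined by the current state and action."""
    return state + action

def stochastic_step(state, action, p_success=0.8, rng=random):
    """With probability p_success the action has its intended effect;
    otherwise the agent stays put, so the outcome is probabilistic."""
    if rng.random() < p_success:
        return state + action
    return state
```

A planning agent can chain `deterministic_step` calls with certainty; with `stochastic_step` it must reason about expected outcomes instead, which is where MDPs enter later in the course.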

Episodic vs. sequential Episodic: agent's experience is divided into atomic episodes; a single percept and action each episode, and the next episode does not depend on the actions taken in previous ones Sequential: current decision could affect all future decisions Example: Image classification vs. Chess

Static vs. dynamic Static: The environment does not change, except for actions taken by the agent Dynamic: The environment continuously evolves without input from the agent Example: Checkers vs. Pacman

Discrete vs. continuous Refers to the state, how time is handled, and the percepts and actions of the agent Discrete: Finite category or integer Continuous: Real-valued quantity Percept example: Discrete colors vs. color spectrum Action example: Chess moves vs. steering angle State example: Pacman's (x, y) position vs. a robot's joint angles

Known vs. unknown Agent's state of knowledge about the rules of the game / laws of physics Known environment: the outcomes for all actions are given Unknown: agent has to learn how the environment works in order to make good decisions Possible to be partially observable but known (solitaire) Possible to be fully observable but unknown (video game)