Programming with Goals (3)


M. Birna van Riemsdijk, TU Delft, The Netherlands
GOAL slides adapted from MAS course slides by Hindriks
4/24/11, Delft University of Technology

Outline
1. GOAL: sensing
2. GOAL: instantaneous & durative actions
3. GOAL: program structure
4. Goal interactions
5. Modularity, multiple agents, organizational reasoning

1. GOAL: Sensing

Sensing
Agents need sensors to:
- explore the environment when they have incomplete information (e.g. Wumpus World)
- keep track of changes in the environment that are not caused by the agent itself
GOAL agents sense the environment through a perceptual interface defined between the agent and the environment. The environment generates percepts. Environment Interface Standard: EIS (Hindriks et al.)

Percept Base
Percepts are received by an agent in its percept base. The reserved keyword percept is wrapped around the percept content, e.g. percept(block(a)). Percepts are not automatically inserted into the beliefs!

Processing Percepts
The percept base is refreshed, i.e. emptied, in every reasoning cycle of the agent. The agent has to decide what to do when it perceives something, i.e. receives a percept. Percepts are used to update the agent's mental state, by:
- ignoring the percept,
- updating the beliefs of the agent, or
- adopting/dropping a goal.

Updating the Agent's Mental State
One way to update beliefs with percepts:
1. First, delete everything the agent believes. Example: remove all block and on facts.
2. Second, insert new information about the current state provided by percepts into the belief base. Example: insert block and on facts for every percept(block(...)) and percept(on(...)).
This assumes that the environment is fully observable with respect to the block and on facts. Downside: not very efficient. A sketch of this approach is given below.
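A minimal sketch of this delete-then-reinsert approach as event rules, using only the constructs introduced in these slides (exact syntax may differ between GOAL versions):

    event module {
      program {
        % Step 1: delete every on/2 and block/1 fact the agent believes.
        forall bel(on(X,Y)) do delete(on(X,Y)).
        forall bel(block(X)) do delete(block(X)).
        % Step 2: reinsert a fact for everything currently perceived
        % (assumes full observability of block and on facts).
        forall bel(percept(on(X,Y))) do insert(on(X,Y)).
        forall bel(percept(block(X))) do insert(block(X)).
      }
    }

Since the event module applies all rules in linear order, the delete rules run before the insert rules.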

Percept Update Pattern
A typical pattern for updating is:
Rule 1: If the agent perceives block X is on top of block Y, and does not believe that X is on top of Y, then insert on(X,Y) into the belief base.
Rule 2: If the agent believes that X is on top of Y, and does not perceive block X is on top of block Y, then remove on(X,Y) from the belief base.

Percepts and Event Module
Percepts are processed in GOAL by means of event rules, i.e. rules in the event module:

    event module {
      program {
        <rules>
      }
    }

The event module is executed every time the agent receives new percepts.

Implementing Pattern Rule 1
Rule 1: If the agent perceives block X is on top of block Y, and does not believe that X is on top of Y, then insert on(X,Y) into the belief base.
Note: the percept base is inspected using the bel operator, e.g. bel(percept(on(X,Y))).
A first attempt, which is INCORRECT because an if rule is applied at most once:

    event module {
      program {
        % assumes full observability.
        if bel(percept(on(X,Y)), not(on(X,Y))) then insert(on(X,Y)).
      }
    }

Implementing Pattern Rule 1
Rule 1: If the agent perceives block X is on top of block Y, and does not believe that X is on top of Y, then insert on(X,Y) into the belief base.
We want to apply this rule for all percept instances that match it! Content of the percept base:

    percept(on(a,table))
    percept(on(b,table))
    percept(on(c,table))
    percept(on(d,table))

A forall rule applies to all of these at once:

    event module {
      program {
        % assumes full observability.
        forall bel(percept(on(X,Y)), not(on(X,Y))) do insert(on(X,Y)).
      }
    }

Implementing Pattern Rule 2
Rule 2: If the agent believes that X is on top of Y, and does not perceive block X is on top of block Y, then remove on(X,Y) from the belief base.

    event module {
      program {
        % assumes full observability.
        forall bel(percept(on(X,Y)), not(on(X,Y))) do insert(on(X,Y)).
        forall bel(on(X,Y), not(percept(on(X,Y)))) do delete(on(X,Y)).
      }
    }

1. We want all rules to be applied! By default, the event module applies all rules in linear order.
2. Note that neither of these rules fires if nothing has changed.

Initially the Agent Knows Nothing
In most environments an agent initially has no information about the state of the environment, e.g. Tower World, Wumpus World. This is represented by an empty belief base:

    beliefs { }

There is no need to include a belief base section in this case in a GOAL agent; it is ok to simply omit it.

Summarizing
Two types of rules:
- if <cond> then <action>. is applied at most once (if multiple instances match, one is chosen randomly)
- forall <cond> do <action>. is applied once for each instantiation of parameters that satisfies the condition
Main module by default:
- checks rules in linear order
- applies the first applicable rule (this also checks the action precondition!)
Event module by default:
- checks rules in linear order
- applies all applicable rules (rules may enable/disable each other!)
Program section modifiers: [order=random], [order=linear], [order=linearall], [order=randomall]
Built-in actions: insert, delete, adopt, drop. An example with a modifier follows below.
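For illustration, a program section with an explicit order modifier might look as follows (a sketch: the move action and the predicates are assumptions in the style of the Blocks World examples, not code from these slides):

    main module {
      % With [order=random] the rules are considered in random order,
      % instead of the default linear order of the main module.
      program [order=random] {
        if goal(on(X,Y)), bel(clear(X), clear(Y)) then move(X,Y).
        if bel(clear(X), not(on(X,table))) then move(X,table).
      }
    }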

2. GOAL: Instantaneous & Durative Actions

Instantaneous versus Durative
Instantaneous actions: actions in the Blocks World environment are instantaneous, i.e. they do not take time. Wumpus World actions are of this type as well.
Durative actions: actions in the Tower World environment take time. When a GOAL agent sends an action to such an environment, the action will not be completed immediately.

Durative Actions and Sensing
While durative actions are being performed, an agent may receive percepts. This is useful to monitor the progress of an action. UT2004 example: another bot is perceived while moving.

Specifying Durative Actions
The delayed effect problem. Solution: specify no postcondition; the results of the action are handled by event rules instead.
The postcondition may be empty: post { }. Better practice is to indicate that you have not forgotten to specify it by writing post { true }. See the sketch below.
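A sketch of what such an action specification might look like for a durative action (the move action, its precondition, and the Tower World setting are illustrative assumptions; the actionspec section itself is listed later in these slides):

    actionspec {
      % Durative action: moving a block takes time in the environment,
      % so its effects are not modelled as a postcondition.
      move(X,Y) {
        pre { clear(X), clear(Y), on(X,Z) }
        % post { true } signals that the empty postcondition is intentional;
        % the actual effects arrive later as percepts, handled by event rules.
        post { true }
      }
    }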

3. GOAL: Program Structure

Structure of a GOAL Program

    init module {
      <initialization of agent>
    }
    main module {
      <action selection strategy>
    }
    event module {
      <percept processing>
    }
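Putting the pieces together, a minimal Blocks World agent in this structure might look as follows (a sketch: the move action, its pre- and postconditions, and the clear/1 knowledge are assumptions in the style of the course examples, not code from these slides):

    init module {
      knowledge {
        % Domain knowledge (Prolog): the table is always clear, and a
        % block is clear if nothing is on top of it.
        clear(table).
        clear(X) :- block(X), not(on(_,X)).
      }
      goals {
        on(a,b).
      }
      actionspec {
        move(X,Y) {
          pre { clear(X), clear(Y), on(X,Z) }
          post { not(on(X,Z)), on(X,Y) }
        }
      }
    }
    main module {
      program {
        % Action selection: work towards an unachieved goal.
        if goal(on(X,Y)), bel(clear(X), clear(Y)) then move(X,Y).
      }
    }
    event module {
      program {
        % Percept processing, following the pattern from the sensing section.
        forall bel(percept(block(X)), not(block(X))) do insert(block(X)).
        forall bel(percept(on(X,Y)), not(on(X,Y))) do insert(on(X,Y)).
        forall bel(on(X,Y), not(percept(on(X,Y)))) do delete(on(X,Y)).
      }
    }

Note that there is no beliefs section, matching the "initially the agent knows nothing" slide.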

GOAL Interpreter Cycle
1. Process events (= apply event rules)
2. Select action (= apply action rules)
3. Perform action (= send it to the environment)
4. Update the mental state (= apply action specifications + commitment strategy)
This cycle is also called the reasoning or deliberation cycle. GOAL's cycle is a classic sense-plan-act cycle.

Sections in Modules
1. knowledge {...}
2. beliefs {...}
3. goals {...}
4. program {...}
5. actionspec {...}
Init module: all sections are optional, and their contents are globally available.
Main & event module: the beliefs section (2) is not allowed; the program section (4) is obligatory; knowledge, goals and actionspec (1, 3, 5) are optional.
At least the event module or the main module should be present.

4. Goal Interactions

Positive and Negative Interactions
Agents may have multiple goals that interact when the agent tries to pursue them simultaneously.
Positive interaction: benefits can be obtained from simultaneous pursuit. See: John Thangarajah, Lin Padgham, and Michael Winikoff. Detecting and exploiting positive goal interaction in intelligent agents. In AAMAS '03: Proceedings of the 2nd International Conference on Autonomous Agents and Multiagent Systems, pages 401-408, 2003.
Negative interaction: pursuit of one goal negatively impacts the possibility to pursue another goal; conflicts between goals. BDI logics, by contrast, assume that goals are consistent...

Negative Interactions (1)
John Thangarajah, Lin Padgham, and Michael Winikoff. Detecting and avoiding interference between goals in intelligent agents. In IJCAI '03: Proceedings of the International Joint Conference on Artificial Intelligence, pages 721-726, 2003.
Goal representation: the goal-plan tree.

Negative Interactions (2)
Interference can occur:
- when an in-condition is made false while a plan or goal is executing, causing the plan or goal to fail;
- when a previously achieved effect is made false before a plan or goal that relies on it begins executing, preventing the goal or plan from being able to execute.

Negative Interactions (3)
Interaction tree: annotate the goal-plan tree (GPT) with interaction summaries.

Goals in Conflict (1)
M. Birna van Riemsdijk, Mehdi Dastani, John-Jules Ch. Meyer. Goals in Conflict: Semantic Foundations of Goals in Agent Programming. Journal of Autonomous Agents and Multi-Agent Systems, 18(3):471-500, 2009. Springer-Verlag.
A logic-based representation of goal conflicts:
- Goal base as a set of propositional formulas, with three semantics for deriving the goals Gφ.
- Goal base as a set of goal inference rules: derive a goal given that the agent believes certain formulas and already has certain goals, provided the derived goal does not conflict with specified other goals. These rules are translated to default rules.

Goals in Conflict (2)
With a goal base consisting of goal inference rules, translated to default rules:
- Gφ follows from the goal base iff φ follows from one of the extensions of the resulting default theory;
- conflicting goals end up in different extensions;
- conflict is user-defined, not only logical inconsistency.
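For readers unfamiliar with default logic: a default rule in Reiter's notation has the shape below; the informal reading sketched here is background, not the paper's exact translation of goal inference rules:

\[
\frac{\alpha : \beta_1, \ldots, \beta_n}{\gamma}
\]

If the prerequisite \(\alpha\) is derivable and each justification \(\beta_i\) is consistent with the extension, the consequent \(\gamma\) may be concluded. Because different rules can have mutually inconsistent consequents, a default theory can have several extensions, which is how conflicting goals can coexist without trivializing the goal base.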

5. Modularity, Multiple Agents, Organizational Reasoning

Modularity in Agent Programming
Modularity is a central issue in software engineering: it increases the understandability of programs.
- Busetta et al. (ATAL '99): capability = a cluster of components of a cognitive agent
- Braubach et al. (ProMAS '05): extension of the capability notion
- Van Riemsdijk et al. (AAMAS '06): goal-oriented modularity; the idea is that modules encapsulate information on how to achieve a goal, and a (sub)goal is dispatched to a module

Modules in GOAL
User-defined modules, next to the init, main and event modules. Idea: focus attention on (part of) the goals.
Use action rules to call a module: if there is a goal condition in the action rule, the corresponding goals become (local) goals of the module.
Different exit policies:
- after performing one action;
- when the local goals have been achieved;
- when no actions can be executed anymore;
- using the explicit exit-module action.
See also Hindriks (ProMAS '07). A sketch follows below.
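A sketch of how a user-defined module might be called from an action rule (the module name, its rules, the move action, and the failure predicate are illustrative assumptions; consult the GOAL manual for the exact module syntax and exit options):

    main module {
      program {
        % Calling the module under a goal condition focuses attention:
        % the matching goal becomes a local goal of the module.
        if goal(on(X,Y)) then buildTower.
      }
    }

    % Hypothetical user-defined module encapsulating how to achieve on/2 goals.
    module buildTower {
      program {
        if goal(on(X,Y)), bel(clear(X), clear(Y)) then move(X,Y).
        if goal(on(X,Y)), bel(on(Z,X), clear(Z)) then move(Z,table).
        % One possible exit policy: leave the module explicitly.
        if bel(failure) then exit-module.
      }
    }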

Multi-Agent Systems in GOAL
- .mas2g file: launch rules to start multiple agents
- action send(Receiver, Content) to send messages
- mailbox semantics: the mailbox is inspected using the bel operator
- declarative, imperative and interrogative moods
Hindriks, Van Riemsdijk (ProMAS '09): communication semantics based on mental models. A sketch follows below.
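A sketch of what sending and receiving might look like (the agent names, the goldAt predicate, and the received(Sender, Content) mailbox fact are assumptions in the spirit of the slide above; the exact .mas2g launch-rule syntax is in the GOAL manual and is not reproduced here):

    main module {
      program {
        % Send a message to another agent in the MAS.
        if bel(goldAt(X)) then send(scout2, goldAt(X)).
      }
    }
    event module {
      program {
        % Received messages are assumed to appear in the mailbox as
        % received/2 facts, inspected with the bel operator.
        forall bel(received(Sender, goldAt(X))) do insert(goldAt(X)).
      }
    }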

Organization-Aware Agent Programming
[Figure: roles in an emergency-response organization, such as fire fighter, ambulance and police officer, with question marks indicating which software agent is to play which role.]
How to program organization-aware agents?

Not Discussed
- Novel goal feature: listall L <- <goal condition> do { <action rules> }.
- Empirical Software Engineering for Agent Programming: see PRIMA '09, PRIMA '10 (Van Riemsdijk & Hindriks)

Summary
- Sensing: percept base, inspected using bel(percept(...))
- Goal interactions: positive & negative interactions; a framework for conflicting goals based on default logic
- Modularity: modules to focus on and achieve goals