LECTURE 5: REACTIVE AND HYBRID ARCHITECTURES


REACTIVE ARCHITECTURES
(An Introduction to MultiAgent Systems, http://www.csc.liv.ac.uk/~mjw/pubs/imas)

There are many unsolved (some would say insoluble) problems associated with symbolic AI. These problems have led some researchers to question the viability of the whole paradigm, and to the development of reactive architectures. Although united by a belief that the assumptions underpinning mainstream AI are in some sense wrong, reactive agent researchers use many different techniques. In this presentation, we start by reviewing the work of one of the most vocal critics of mainstream AI: Rodney Brooks.

BROOKS' BEHAVIOR LANGUAGES

Brooks has put forward three theses:
1. Intelligent behavior can be generated without explicit representations of the kind that symbolic AI proposes.
2. Intelligent behavior can be generated without explicit abstract reasoning of the kind that symbolic AI proposes.
3. Intelligence is an emergent property of certain complex systems.

He identifies two key ideas that have informed his research:
1. Situatedness and embodiment: real intelligence is situated in the world, not in disembodied systems such as theorem provers or expert systems.
2. Intelligence and emergence: intelligent behavior arises as a result of an agent's interaction with its environment. Also, intelligence is "in the eye of the beholder"; it is not an innate, isolated property.

THE SUBSUMPTION ARCHITECTURE

Brooks built a number of agents/robots based on his subsumption architecture. A subsumption architecture is a hierarchy of task-accomplishing behaviors. Each behavior is a rather simple rule-like structure, and each behavior competes with the others to exercise control over the agent. Lower layers represent more primitive kinds of behavior (such as avoiding obstacles) and have precedence over layers further up the hierarchy. The resulting systems are, in terms of the amount of computation they do, extremely simple, yet some of the robots accomplish tasks that would be impressive if achieved by symbolic AI systems.

[Figures, from Brooks, "A Robust Layered Control System for a Mobile Robot", 1985: a traditional decomposition of a mobile robot control system into functional modules; a decomposition based on task-achieving behaviors; layered control in the subsumption architecture.]

[Further figures from Brooks, 1985: action selection in the subsumption architecture; an example module ("Avoid"); schematic of a module; the level 0, 1, and 2 control systems.]
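Action selection in the subsumption architecture can be sketched as a priority-ordered scan over condition-action pairs, with the lowest (highest-priority) layer suppressing all layers above it. This is a minimal illustrative sketch, not Brooks' implementation; all behavior and percept names are assumptions.

```python
# Subsumption-style action selection: behaviors are an ordered list of
# (condition, action) pairs, lowest layer first. The first behavior whose
# condition matches the current percept wins and suppresses the rest.

def select_action(behaviors, percept):
    """Return the action of the highest-priority firing behavior."""
    for condition, action in behaviors:  # ordered: lowest layer first
        if condition(percept):
            return action
    return None  # no behavior fires

behaviors = [
    (lambda p: p.get("obstacle"), "change_direction"),  # layer 0: avoid
    (lambda p: p.get("sample"),   "pick_up_sample"),    # layer 1: collect
    (lambda p: True,              "wander"),            # layer 2: explore
]

print(select_action(behaviors, {"obstacle": True, "sample": True}))  # change_direction
print(select_action(behaviors, {"sample": True}))                    # pick_up_sample
print(select_action(behaviors, {}))                                  # wander
```

Note that the arbitration is fixed at design time by the ordering of the list: no layer reasons about any other layer, which is exactly what keeps the computation so cheap.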

STEELS' MARS EXPLORER

Steels' Mars explorer system, built on the subsumption architecture, achieves near-optimal cooperative performance in a simulated rock-gathering-on-Mars domain: the objective is to explore a distant planet and, in particular, to collect samples of a precious rock. The location of the samples is not known in advance, but it is known that they tend to be clustered.

Gradient field: the mother ship generates a radio signal so that agents can tell in which direction the mother ship lies. The signal weakens as distance from the source increases, so to find the direction of the mother ship an agent need only travel "up the gradient" of signal strength. The signal does not carry any information; it need only exist.

For individual (non-cooperative) agents, the lowest-level behavior (the behavior with the highest priority) is obstacle avoidance:

  if detect an obstacle then change direction                        (1)

Any samples carried by agents are dropped back at the mother ship:

  if carrying samples and at the base then drop samples              (2)

Agents carrying samples will return to the mother ship:

  if carrying samples and not at the base then travel up gradient    (3)

Agents will collect samples they find:

  if detect a sample then pick sample up                             (4)

An agent with nothing better to do will explore randomly:

  if true then move randomly                                         (5)
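The five non-cooperative behaviors form a strict priority ordering, with rule (1) highest and rule (5) as the default. A minimal sketch, assuming simple boolean percept flags (the state keys are illustrative names, not from Steels' system):

```python
# Steels' five non-cooperative Mars explorer behaviors as a
# priority-ordered rule cascade: the first matching rule decides.

def mars_explorer_action(state):
    if state.get("obstacle_detected"):                      # (1) avoid obstacles
        return "change direction"
    if state.get("carrying_samples") and state.get("at_base"):
        return "drop samples"                               # (2) unload at ship
    if state.get("carrying_samples"):                       # (3) not at base:
        return "travel up gradient"                         #     head home
    if state.get("sample_detected"):                        # (4) collect
        return "pick sample up"
    return "move randomly"                                  # (5) default: explore

print(mars_explorer_action({"carrying_samples": True}))     # travel up gradient
print(mars_explorer_action({"sample_detected": True}))      # pick sample up
print(mars_explorer_action({}))                             # move randomly
```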

If samples were distributed randomly, a large number of robots with this simple behavior would perform very well and collect the samples quickly. However, this is not the case: samples tend to cluster in piles, so it would be better for the agents to cooperate. When one finds a cluster, it should inform the others; but direct communication between agents is not allowed.

Solution: when an agent finds a sample, it drops "radioactive crumbs" all the way back along its path to the ship, marking the trail for the other agents to follow.

However, when the pile of samples is exhausted, the trail would remain, leading agents to an empty spot. Solution: when an agent is heading towards the pile, it removes some crumbs to make the trail fainter. If the pile is large, the trail is reinforced by agents carrying samples back to the ship; if the pile is small, the trail is eliminated after a while.

The lowest-level behavior is still obstacle avoidance:

  if detect an obstacle then change direction                        (1)

Any samples carried by agents are dropped back at the mother ship:

  if carrying samples and at the base then drop samples              (2)

Agents carrying samples return to the mother ship, marking the trail to the pile by dropping 2 crumbs:

  if carrying samples and not at the base
  then drop 2 crumbs and travel up gradient                          (3')

Agents will collect samples they find:

  if detect a sample then pick sample up                             (4)

When agents sense crumbs, they follow the trail down the gradient towards the pile, removing some crumbs to make the trail fainter:

  if sense crumbs then pick up 1 crumb and travel down gradient      (5a)

An agent with nothing better to do will explore randomly:

  if true then move randomly                                         (5)

SITUATED AUTOMATA

A more sophisticated approach is that of Rosenschein and Kaelbling. In their situated automata paradigm, an agent is specified in a rule-like (declarative) language, and this specification is then compiled down to a digital machine that satisfies the declarative specification. This digital machine can operate within a provable time bound: reasoning is done offline, at compile time, rather than online at run time.

The logic used to specify an agent is essentially a modal logic of knowledge. The technique depends upon the possibility of giving the "worlds" in possible-worlds semantics a concrete interpretation in terms of the states of an automaton:

  "[An agent] x is said to carry the information that P in world state s, written s |= K(x,P), if for all world states in which x has the same value as it does in s, the proposition P is true." [Kaelbling and Rosenschein, 1990]

An agent is specified in terms of two components: perception and action. Two programs are then used to synthesize agents: RULER is used to specify the perception component of an agent, and GAPPS is used to specify the action component.

[Figure: circuit model of a finite-state machine, from Rosenschein and Kaelbling, "A Situated View of Representation and Control", 1994. f = state update function, s = internal state, g = output function.]

RULER takes as its input three components:

  "[A] specification of the semantics of the [agent's] inputs ('whenever bit 1 is on, it is raining'); a set of static facts ('whenever it is raining, the ground is wet'); and a specification of the state transitions of the world ('if the ground is wet, it stays wet until the sun comes out'). The programmer then specifies the desired semantics for the output ('if this bit is on, the ground is wet'), and the compiler... [synthesizes] a circuit whose output will have the correct semantics... All that declarative knowledge has been reduced to a very simple circuit." [Kaelbling, 1991]

The GAPPS program takes as its input a set of goal reduction rules (essentially, rules that encode information about how goals can be achieved) and a top-level goal. It then generates a program that can be translated into a digital circuit that realizes the goal. The generated circuit does not represent or manipulate symbolic expressions; all symbolic manipulation is done at compile time.
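The circuit model above can be illustrated with a toy machine for Kaelbling's rain example: f updates the internal state from the current state and input, and g reads the output bit off the state. This is only an illustrative sketch of the f/g/s decomposition; the input symbols and state names are assumptions, not RULER's actual output.

```python
# Circuit model of a finite-state machine, in the style of the situated
# automata diagram: s = internal state, f = state update, g = output.

def f(state, inp):
    """State update: the ground becomes wet when it rains and stays
    wet until the sun comes out."""
    if inp == "rain":
        return "wet"
    if inp == "sun":
        return "dry"
    return state  # any other input leaves the state unchanged

def g(state):
    """Output function: a single bit whose semantics is 'the ground is wet'."""
    return 1 if state == "wet" else 0

state = "dry"
for inp in ["rain", "cloud", "sun"]:
    state = f(state, inp)   # one clock tick of the circuit
    print(g(state))         # prints 1, then 1, then 0
```

The point of the compilation step is that f and g end up as fixed combinational logic: the "knowledge" that wet ground stays wet is baked into the transition structure, not represented symbolically at run time.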

  "The key lies in understanding how a process can naturally mirror in its states subtle conditions in its environment, and how these mirroring states ripple out to overt actions that eventually achieve goals." [Rosenschein and Kaelbling, 1994]

The theoretical limitations of the approach are not well understood. Compilation (with propositional specifications) is equivalent to an NP-complete problem, and the more expressive the agent specification language, the harder it is to compile: there are deep theoretical results showing that beyond a certain expressiveness, the compilation simply cannot be done.

ADVANTAGES OF REACTIVE AGENTS
- Simplicity
- Economy
- Computational tractability
- Robustness against failure
- Elegance

LIMITATIONS OF REACTIVE AGENTS
- Agents without environment models must have sufficient information available from the local environment.
- If decisions are based on the local environment, it is unclear how the agent can take non-local information into account (it has a short-term view).
- It is difficult to make reactive agents that learn.
- Since behavior emerges from component interactions plus the environment, it is hard to see how to engineer specific agents (no principled methodology exists).
- It is hard to engineer agents with large numbers of behaviors (the dynamics of the interactions become too complex to understand).

HYBRID ARCHITECTURES

Many researchers have argued that neither a completely deliberative nor a completely reactive approach is suitable for building agents. They have suggested using hybrid systems, which attempt to marry the classical and alternative approaches. An obvious approach is to build an agent out of two (or more) subsystems:
- a deliberative one, containing a symbolic world model, which develops plans and makes decisions in the way proposed by symbolic AI; and
- a reactive one, which is capable of reacting to events without complex reasoning.

Often, the reactive component is given some kind of precedence over the deliberative one. This kind of structuring leads naturally to the idea of a layered architecture, of which TOURINGMACHINES and INTERRAP are examples. In such an architecture, an agent's control subsystems are arranged into a hierarchy, with higher layers dealing with information at increasing levels of abstraction.

A key problem in such architectures is what kind of control framework to embed the agent's subsystems in, to manage the interactions between the various layers:
- Horizontal layering: each layer is directly connected to the sensory input and action output. In effect, each layer itself acts like an agent, producing suggestions as to what action to perform. With m possible actions suggested by each layer and n layers, there are m^n possible interactions, and arbitrating between them introduces a bottleneck in the central control system.
- Vertical layering: sensory input and action output are each dealt with by at most one layer. There are at most m^2(n-1) interactions between adjacent layers, but the design is not fault tolerant to layer failure.

FERGUSON'S TOURINGMACHINES

The TOURINGMACHINES architecture consists of perception and action subsystems, which interface directly with the agent's environment, and three control layers, embedded in a control framework that mediates between the layers.

The reactive layer is implemented as a set of situation-action rules, a la the subsumption architecture. Example:

  rule-1: kerb-avoidance
    if is-in-front(Kerb, Observer) and
       speed(Observer) > 0 and
       separation(Kerb, Observer) < KerbThreshHold
    then change-orientation(KerbAvoidanceAngle)

The planning layer constructs plans and selects actions to execute in order to achieve the agent's goals.

The modeling layer contains symbolic representations of the cognitive state of other entities in the agent's environment.

The three layers communicate with each other and are embedded in a control framework, which uses control rules. Example:

  censor-rule-1:
    if entity(obstacle-6) in perception-buffer
    then remove-sensory-record(layer-R, entity(obstacle-6))
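The reactive layer's situation-action rules, such as kerb-avoidance above, can be sketched as predicates over a percept that return an action when they fire. The threshold and angle constants, and the percept field names, are illustrative assumptions, not values from Ferguson's implementation:

```python
# A TouringMachines-style situation-action rule: fires when a kerb is
# ahead, the agent is moving, and the kerb is closer than a threshold.

KERB_THRESHOLD = 2.0          # metres; assumed value
KERB_AVOIDANCE_ANGLE = 15.0   # degrees; assumed value

def kerb_avoidance(observer):
    """Return an orientation-change action if the rule fires, else None."""
    if (observer["kerb_in_front"]
            and observer["speed"] > 0
            and observer["kerb_separation"] < KERB_THRESHOLD):
        return ("change_orientation", KERB_AVOIDANCE_ANGLE)
    return None

print(kerb_avoidance({"kerb_in_front": True, "speed": 1.5,
                      "kerb_separation": 0.8}))
# ('change_orientation', 15.0)
print(kerb_avoidance({"kerb_in_front": True, "speed": 0.0,
                      "kerb_separation": 0.8}))
# None
```

A control rule like censor-rule-1 would sit above a set of such rules, filtering which percepts each layer is allowed to see before the rules run.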

MÜLLER'S INTERRAP

InteRRaP is a vertically layered, two-pass architecture:

  cooperation layer   --  social knowledge
  plan layer          --  planning knowledge
  behavior layer      --  world model
       world interface
  (perceptual input / action output)

Types of interaction:
- Bottom-up activation occurs when a lower layer passes control to a higher layer because it is not competent to deal with the current situation.
- Top-down execution occurs when a higher layer makes use of the facilities provided by a lower layer to achieve one of its goals.

Layer functions:
- Situation recognition and goal activation: maps the knowledge base (of one of the three layers) and the current goals to a new set of goals.
- Planning and scheduling: responsible for selecting which plans to execute, based on the current plans, goals, and knowledge base of that layer.

LAYERED ARCHITECTURES: SUMMARY

Layered architectures are currently the most popular general class of agent architecture available. Reactive, proactive, and social behavior can easily be generated by the reactive, proactive, and social layers of such an architecture. Their disadvantage is that they lack the conceptual and semantic clarity of unlayered approaches, e.g. logic-based approaches.
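InteRRaP's two-pass control flow can be sketched as an upward pass that finds the lowest competent layer, followed by a downward pass that executes through that layer and the layers below it. The competence test and situation labels here are purely illustrative assumptions, not Müller's actual mechanism:

```python
# Sketch of InteRRaP's two-pass vertical control: bottom-up activation
# hands control upward until some layer is competent; top-down execution
# then runs back down through the lower layers.

LAYERS = ["behavior", "plan", "cooperation"]  # lowest to highest

def competent(layer, situation):
    """Illustrative competence test: routine situations are handled
    reactively, novel ones need planning, joint tasks need cooperation."""
    needs = {"routine": "behavior", "novel": "plan", "joint": "cooperation"}
    return LAYERS.index(layer) >= LAYERS.index(needs[situation])

def control(situation):
    # Upward pass (bottom-up activation): find the lowest competent layer.
    for layer in LAYERS:
        if competent(layer, situation):
            break
    # Downward pass (top-down execution): execute through this layer
    # and every layer below it, highest first.
    return LAYERS[:LAYERS.index(layer) + 1][::-1]

print(control("routine"))  # ['behavior']
print(control("joint"))    # ['cooperation', 'plan', 'behavior']
```

This makes the "two-pass" idea concrete: a routine situation never leaves the behavior layer, while a cooperative task recruits all three layers on the way back down.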