Mixture of Behaviors in a Bayesian Autonomous Driver Model

Claus Möbus, Mark Eilers 1, Malte Zilinski 2, and Hilke Garbe

Keywords: models of human driver behavior and cognition, probabilistic driver model, Bayesian autonomous driver models, mixture-of-experts model, visual attention allocation

Abstract

The Human Centered Design (HCD) of Partial Autonomous Driver Assistance Systems (PADAS) requires Digital Human Models (DHMs) of human control strategies for traffic scenario simulations. We present a probabilistic model architecture for generating descriptive models of human driver behavior: Bayesian Autonomous Driver (BAD) models. They implement the sensory-motor system of human drivers in a psychologically motivated mixture-of-experts (= mixture-of-schemas) architecture with autonomous and goal-based attention allocation processes. Under the assumption of stationary behavioral processes, models are specified across at least two time slices. The learning data are time series of the relevant variables: percepts, goals, and actions. The models can represent individual human or artificial agents as well as groups of them, and they propagate information in various directions. When working top-down, goals emitted by a cognitive layer select a corresponding expert (schema), which propagates actions, the relevance of areas of interest (AoIs), and perceptions. When working bottom-up, percepts trigger AoIs, actions, experts, and goals. When the task or goal is defined and the model has certain percepts, evidence can be propagated simultaneously top-down and bottom-up, and the appropriate expert (schema) and its behavior can be activated. Thus, the model can easily be extended to implement a modified version of the SEEV visual scanning or attention allocation model of Horrey, Wickens, and Consalus. In contrast to Horrey et al., the model can predict the probability of attending to a certain AoI on the basis of single, mixed, and even incomplete evidence (goal priorities, percepts, effort to switch between AoIs). In this paper we present the architecture and a proof of concept with plausible but artificial data.

1 Introduction

The Human or Cognitive Centered Design (HCD) of intelligent transport systems requires digital Models of Human Behavior and Cognition (MHBC) which are embedded, context-aware, personalized, adaptive, and anticipatory. The models and prototypes we propose here are of that type. A special kind of MHBC is developed and used as driver models in traffic scenario simulations (Cacciabue, 2007). In current research projects their usefulness for proving safety assertions and supporting risk-based design is studied intensively (ISi-PADAS, 2009). In both cases it is assumed that the conceptualization and development of MHBCs and ambient intelligent assistance systems are parallel and independent activities. In the near future, with the need for smarter and more intelligent assistance, the problem of transferring human skills (Xu and Lee, 2005) into the envisioned technical systems will become more and more apparent. The conventional approach is the handcrafting of MHBCs on the basis of human behavior traces; an ex post evaluation of their human likeness or empirical validity, with revision-evaluation cycles, is then obligatory. We propose a machine-learning alternative: the estimation of Bayesian MHBCs from behavior traces. The learnt models are empirically valid by construction. We call these models Bayesian Autonomous Driver (BAD) models. An ex post evaluation of BAD models is not necessary.
1 Project Integrated Modeling for Safe Transportation (IMOST), sponsored by the Government of Lower Saxony, Germany, under contracts ZN2245, ZN2253, and ZN2366.
2 Project ISi-PADAS, funded by the European Commission in the 7th Framework Programme, Theme 7 Transport, FP7-218552.

2 Probabilistic Models of Human Behavior and Cognition

A driver, a pedestrian, or a biker is a human agent whose skills and skill acquisition process can be described by a three-stage model with cognitive, associative, and autonomous stages or layers (Anderson, 2002). Accordingly, various modeling approaches are adequate: production-system models for the cognitive and associative stages (e.g., models in a cognitive architecture), and control-theoretic or probabilistic models for the autonomous stage. The advantage of probabilistic models is that they fulfill the above modeling criteria and, above that, offer robustness. This is a great advantage in the face of the irreducible incompleteness of knowledge about the environment and the underlying psychological mechanisms (Bessiere, 2008).

2.1 Bayesian Autonomous Driver (BAD) Models

Due to the variability of human cognition and behavior and the irreducible lack of knowledge about cognitive mechanisms, it seems rational to conceptualize, estimate, and implement probabilistic models when modeling human traffic agents. In contrast to other models, probabilistic models need not be idiosyncratically handcrafted but can be learnt objectively from human behavior traces. BAD models (Möbus, 2008; 2009a; 2009b) are developed in the tradition of Bayesian expert systems and Bayesian robot programming (Bessiere, 2003). They describe phenomena on the basis of the variables of interest and their joint probability distribution (JPD). This is in contrast to models in cognitive architectures (e.g., ACT-R), which try to simulate cognitive algorithms and processes at a finer granularity that is difficult to identify even with, e.g., functional magnetic resonance imaging (fMRI) methods. In (Möbus, 2008) we described first steps towards modeling the lateral and longitudinal control behavior of single drivers and groups of drivers with reactive Bayesian sensory-motor models. Then we included the time domain and reported work in progress with dynamic Bayesian sensory-motor models (Möbus, 2009a; 2009b). The vision is a dynamic BAD model which is able to decompose complex situations into basic situations and to compose complex behavior from basic motor schemas (experts). This new Mixture-of-Experts (MoE) model facilitates the management of sensory-motor schemas in a library. Context-dependent driver behavior can be generated by mixing pure behaviors from different schemas.

2.2 Basic Concepts of Bayesian Programs

A Bayesian Program (BP) is defined as a means of specifying a family of probability distributions. By using such a specification it is possible to construct a BAD model which can effectively control a (virtual) vehicle. The components of a BP are presented in (Bessiere, 2003), where the analogy to a logic program is helpful. An application consists of a (task model) description and a question. A description is constructed from preliminary knowledge and a data set. Preliminary knowledge is constructed from a set of pertinent variables, a decomposition of the JPD, and a set of forms; forms are either parametric forms or BPs. The purpose of a description is to specify an effective method to compute a JPD on a set of variables, given a set of (experimental) data and preliminary knowledge. To specify preliminary knowledge, the modeler must define the set of relevant variables on which the JPD is defined, decompose the JPD into a product of conditional probability distributions (CPDs) according to conditional independence hypotheses (CIHs), and define the forms. Each CPD in the decomposition is a form.
Each form is either a parametric form whose parameters are estimated from batch data (behavior traces) or another application (i.e., another BP). Parameter estimation from batch data is the conventional way of estimating the parameters of a BAD model. The Bayesian estimation procedure uses only a small fraction of the data (cases) for updating the model parameters; this procedure is described below.
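As a rough illustration of this kind of parameter estimation from batch data, the following minimal sketch assumes a toy decomposition P(Goal) P(Percept | Goal) P(Action | Goal, Percept) and estimates its conditional probability tables from invented behavior traces using Laplace-smoothed relative frequencies (a simple Dirichlet-prior Bayesian estimate). The variables, values, and traces are illustrative assumptions, not those of the BAD model.

```python
from collections import Counter

GOALS    = ["pass", "follow"]
PERCEPTS = ["lead_near", "lead_far"]
ACTIONS  = ["accelerate", "decelerate", "keep"]

# Hypothetical behavior traces: one (goal, percept, action) triple per time step.
traces = [
    ("pass",   "lead_near", "decelerate"),
    ("pass",   "lead_far",  "accelerate"),
    ("pass",   "lead_far",  "accelerate"),
    ("follow", "lead_near", "decelerate"),
    ("follow", "lead_far",  "keep"),
    ("follow", "lead_far",  "keep"),
]

def cpt(joint_counts, parent_counts, child_values, alpha=1.0):
    """Laplace-smoothed relative frequencies: a table of P(child | parents)."""
    table = {}
    for parents, n in parent_counts.items():
        z = n + alpha * len(child_values)
        table[parents] = {c: (joint_counts[parents + (c,)] + alpha) / z
                          for c in child_values}
    return table

# Counts needed for each factor of the decomposition.
n_g   = Counter((g,) for g, p, a in traces)        # for P(Goal)
n_gp  = Counter((g, p) for g, p, a in traces)      # for P(Percept | Goal)
n_gpa = Counter((g, p, a) for g, p, a in traces)   # for P(Action | Goal, Percept)

p_goal    = {g: n_g[(g,)] / len(traces) for g in GOALS}
p_percept = cpt(n_gp, n_g, PERCEPTS)               # P(Percept | Goal)
p_action  = cpt(n_gpa, n_gp, ACTIONS)              # P(Action | Goal, Percept)

print(p_action[("pass", "lead_far")])
# -> {'accelerate': 0.6, 'decelerate': 0.2, 'keep': 0.2}
```

In a real BAD model the traces would come from driving-simulator recordings and the decomposition would follow the conditional independence hypotheses chosen for the architecture.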

Given a description, a question is obtained by partitioning the variables into searched, known, and unknown variables. We define a question as the CPD P(Searched | Known, preliminary knowledge, data). The selection of an appropriate action can then be treated as the inference problem P(Action | Percepts, Goals, preliminary knowledge, data). Various policies (Draw, Best, and Expectation) are possible: the concrete action can be drawn at random from this distribution, chosen as the action with the highest probability, or computed as the expected action.

3 Mixture-of-Experts Model

3.1 A Psychologically Motivated Model

Currently we are evaluating the suitability of static and dynamic graphical models. Dynamic models evolve over time. If the model contains discrete time stamps, one can have a model for each unit of time; these local models are called time slices (Jensen, 2007). The time slices are connected through temporal links to give the full model. In the case of identical time slices and temporal links we have a repetitive temporal model, which is called a Dynamic Bayesian Network (DBN). In our current research we strive for the realization of a dynamic Bayesian Autonomous Driver model (Fig. 1). The model is suited to represent the sensory-motor system of individuals or groups of human or artificial agents in the functional autonomous layer or stage of Anderson (2002). It is a psychologically motivated mixture-of-experts (= mixture-of-schemas) model with autonomous and goal-based attention allocation processes. It is distributed across two time slices and tries to avoid the latent-state assumptions of Hidden Markov Models (HMMs). The learning data are time series or case data of the relevant variables: percepts, goals, and actions. Goals are the only latent variables; they can be set by commands issued by the associative layer.

Fig. 1: Mixture-of-Experts (= Mixture-of-Schemas) Model with Visual Attention Allocation Extension (mapping ideas of Horrey et al. (2006) into the Dynamic Bayesian Network modeling framework)

The model propagates information in various directions. When working top-down, goals emitted by the associative layer select a corresponding expert (schema), which propagates actions, the relevance of areas of interest (AoIs), and perceptions. When working bottom-up, percepts trigger AoIs, actions, experts, and goals. When the task or goal is defined and the model has certain percepts, evidence can be propagated simultaneously top-down and bottom-up, and the appropriate expert (schema) and its behavior can be activated.
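The question-and-policy mechanism of Section 2.2 and the bi-directional propagation just described can be illustrated with a minimal sketch. The joint distribution below is a toy example with invented variables and probabilities (in a learnt BAD model it would be given by the estimated decomposition); a question P(Searched | Known) is answered by summing out the unknown variables and renormalizing, and the Draw, Best, and Expectation policies turn the resulting distribution into a concrete action.

```python
import random
from itertools import product

VARS     = ("goal", "percept", "action")
GOALS    = ["pass", "follow"]
PERCEPTS = ["lead_near", "lead_far"]
ACTIONS  = {"decelerate": -1.0, "keep": 0.0, "accelerate": +1.0}  # numeric coding

# Illustrative factors of an assumed decomposition P(G) P(P|G) P(A|G,P).
p_g = {"pass": 0.4, "follow": 0.6}
p_p_given_g = {("pass", "lead_near"): 0.7, ("pass", "lead_far"): 0.3,
               ("follow", "lead_near"): 0.5, ("follow", "lead_far"): 0.5}
p_a_given_gp = {
    ("pass", "lead_near"):   {"decelerate": 0.2, "keep": 0.2, "accelerate": 0.6},
    ("pass", "lead_far"):    {"decelerate": 0.1, "keep": 0.2, "accelerate": 0.7},
    ("follow", "lead_near"): {"decelerate": 0.6, "keep": 0.3, "accelerate": 0.1},
    ("follow", "lead_far"):  {"decelerate": 0.1, "keep": 0.7, "accelerate": 0.2},
}
joint = {(g, p, a): p_g[g] * p_p_given_g[(g, p)] * p_a_given_gp[(g, p)][a]
         for g, p, a in product(GOALS, PERCEPTS, ACTIONS)}

def question(searched, known):
    """P(Searched | Known): sum the joint over assignments consistent with
    'known' (unknown variables are marginalized out) and renormalize."""
    idx = VARS.index(searched)
    dist = {}
    for assignment, pr in joint.items():
        if all(assignment[VARS.index(v)] == val for v, val in known.items()):
            dist[assignment[idx]] = dist.get(assignment[idx], 0.0) + pr
    z = sum(dist.values())
    return {v: w / z for v, w in dist.items()}

# Bottom-up with incomplete evidence: which goal is plausible given only a percept?
print(question("goal", {"percept": "lead_near"}))

# Top-down and bottom-up combined: action selection given goal and percept.
dist = question("action", {"goal": "pass", "percept": "lead_near"})
print("Draw:       ", random.choices(list(dist), weights=list(dist.values()))[0])
print("Best:       ", max(dist, key=dist.get))
print("Expectation:", sum(pr * ACTIONS[a] for a, pr in dist.items()))
```

Because any subset of the variables can play the role of the known part, the same mechanism covers single, mixed, and incomplete evidence, which is exactly the property exploited for the SEEV-style attention allocation extension discussed next.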

Thus, the model can easily be extended to implement a modified version of the SEEV visual scanning or attention allocation model of Horrey, Wickens, and Consalus (2006). In contrast to Horrey et al., the model can predict the probability of attending to a certain AoI on the basis of single, mixed, and even incomplete evidence (goal priorities, percepts, effort to switch between AoIs). In Section 3.2 we want to demonstrate that this architecture is feasible.

3.2 A Proof of Concept

In our current research we used partial inverted Markov models for modeling the sensory-motor system of the human driver. Now we want to discuss what kinds of DBNs are worth considering next when driver state variables (e.g., lateral and longitudinal de-/acceleration) are included and when a psychologically motivated mixture-of-experts (= mixture-of-schemas) model with autonomous and goal-based attention allocation processes is the ultimate goal. For the proof of concept we develop a Bayesian model for a simple scenario with three maneuvers (Fig. 2-4) and three areas of interest (AoIs) (Fig. 5). The driver and the BAD model are sitting in the ego vehicle (ev). Sometimes an alter vehicle (av) or the roadside occupies the AoIs, depending on the state of the car (State = left, middle, or right lane).

Fig. 2: Left Lane Change Maneuver
Fig. 3: Right Lane Change Maneuver
Fig. 4: Pass Vehicle Maneuver
Fig. 5: Areas of Interest

We call the model the Reactive State-Based Expert-Role Model (RSRM) (Fig. 7), because the AoIs directly influence the actions. The model embeds two naive Bayes classifiers, one for the expert roles and one for the states; this simplifies the structure of the architecture. Time slices are chosen such that in each new time slice a new expert role is active. Expert roles are contained in the top layer of nodes. We have an expert for each main part of a maneuver (Fig. 2-4): left_lane_in, left_lane_out, pass_in, pass_mid_in, pass_mid_out,
pass_out, right_lane_in, right_lane_out. The next layer of nodes describes the actions the model is able to generate: left_check_lane, left_signal, left_turn, middle_straight_accelerate, middle_straight_decelerate, middle_straight_look, right_check_lane, right_signal, right_turn. Below that, a layer of nodes describes the state of the vehicle (is_in_left_lane, is_in_middle_lane, is_in_right_lane); these state nodes should be augmented in the future by states describing the driver. The three bottom layers contain nodes describing the activation of the three AoIs: is_occupied and is_empty.

When the model is urged into the left_lane_in role, e.g., by goal setting from the associative layer, we expect primarily, in the same time slice, that the left lane is checked and that the driver decelerates the vehicle. For the AoIs we expect that the middle AoI is occupied and the left AoI is empty. For the next time slice we expect the vehicle to be in the left or middle lane; the expected behavior is that of the left_lane_out expert. This role-specific behavior in the next time slice is somewhat different from before: we expect more acceleration, more attention forward, and more checking of the right lane. When the state is known (e.g., S = is_in_middle_lane), we can include this evidence in the model and infer the appropriate expectations (e.g., left and right lane checks, looking forward, and both acceleration and deceleration).

Fig. 7: Expectations when the BAD RSRM model perceives a combination of AoI evidence

When the model perceives a combination of AoI evidence (Fig. 7), we can infer the maneuvers (= expert roles). For instance, in Fig. 7 the left AoI is empty and the middle AoI is occupied. We expect that the vehicle is in the middle or right lane, that the expert roles left_lane_in and pass_in are active, and that their appropriately mixed behavior is generated. In the case where all AoIs are occupied, the model decelerates with its main attention on the middle AoI (middle_straight_look).

What happens if a goal (= expert role) is blocked? In Fig. 8 this situation is modeled by the appropriate evidence: the lane-in goal is set and, at the same time, the percepts in the left and middle AoIs are set to is_occupied. This situation blocks the left-lane-in and pass-vehicle-in maneuvers. The expected behavior is deceleration and looking forward.

4 Outlook

We believe that the proof of concept is convincing: Bayesian Driver Models with mixtures of experts are expressive enough to describe and predict a wide range of phenomena. In our future research we will implement the model with real driving data.
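The inference pattern of Section 3.2 can be made concrete with a minimal sketch. The code below models only the expert-role half of the RSRM (the state classifier is analogous) as a naive Bayes classifier over the three AoIs, with a reduced set of expert roles and invented conditional probabilities; given the Fig. 7 evidence (left AoI empty, middle AoI occupied, right AoI unobserved) it computes the posterior over expert roles and the resulting mixture of behaviors.

```python
# Naive Bayes over AoIs for a reduced set of expert roles (illustrative numbers,
# not the probabilities learnt in the paper).
EXPERTS = ["left_lane_in", "pass_in", "right_lane_in"]

p_expert = {e: 1.0 / len(EXPERTS) for e in EXPERTS}   # uniform prior over roles

# P(AoI = is_occupied | ExpertRole)
p_occupied = {
    "left_lane_in":  {"left": 0.1, "middle": 0.8, "right": 0.3},
    "pass_in":       {"left": 0.2, "middle": 0.9, "right": 0.2},
    "right_lane_in": {"left": 0.7, "middle": 0.4, "right": 0.1},
}

# P(Action | ExpertRole)
p_action = {
    "left_lane_in":  {"left_check_lane": 0.5, "middle_straight_decelerate": 0.4,
                      "right_check_lane": 0.1},
    "pass_in":       {"left_check_lane": 0.3, "middle_straight_accelerate": 0.6,
                      "right_check_lane": 0.1},
    "right_lane_in": {"left_check_lane": 0.1, "middle_straight_look": 0.4,
                      "right_check_lane": 0.5},
}

def posterior_expert(evidence):
    """P(ExpertRole | AoI evidence); evidence maps AoI name -> True (is_occupied)
    or False (is_empty); unobserved AoIs are simply left out."""
    scores = {}
    for e in EXPERTS:
        s = p_expert[e]
        for aoi, occupied in evidence.items():
            p = p_occupied[e][aoi]
            s *= p if occupied else (1.0 - p)
        scores[e] = s
    z = sum(scores.values())
    return {e: s / z for e, s in scores.items()}

def mixed_behavior(evidence):
    """Mixture of behaviors: each expert's action distribution weighted by its posterior."""
    post = posterior_expert(evidence)
    mix = {}
    for e, w in post.items():
        for a, pr in p_action[e].items():
            mix[a] = mix.get(a, 0.0) + w * pr
    return mix

# Fig. 7-like evidence: left AoI empty, middle AoI occupied, right AoI unobserved.
evidence = {"left": False, "middle": True}
print(posterior_expert(evidence))
print(mixed_behavior(evidence))
```

With these illustrative numbers, left_lane_in and pass_in come out as the clearly most probable roles, and the mixed action distribution is dominated by left_check_lane together with acceleration and deceleration components, matching the qualitative expectations described for Fig. 7.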

Fig. 8: Expectations when the BAD RSRM model perceives a blocking of goals or expert roles by a combination of occupied AoIs

5 References

Cacciabue, P.C. (ed.): Modelling Driver Behaviour in Automotive Environments. London: Springer, ISBN-10 1-84628-617-4 (2007)
ISi-PADAS: http://www.offis.de/projekte/v/240/isi-padas%20_e.php (visited 08.08.2009)
Xu, Y. and Lee, K.K.C.: Human Behavior Learning and Transfer. CRC Press (2005)
Anderson, J.R.: Learning and Memory. John Wiley (2002)
Bessiere, P., Laugier, C., and Siegwart, R. (eds.): Probabilistic Reasoning and Decision Making in Sensory-Motor Systems. Berlin: Springer, ISBN 978-3-540-79006-8 (2008)
Möbus, C. and Eilers, M.: First Steps Towards Driver Modeling According to the Bayesian Programming Approach. Symposium Cognitive Modeling, p. 59, in: Urbas, L., Goschke, T., and Velichkovsky, B. (eds.): KogWis 2008. Christoph Hille, Dresden, ISBN 978-3-939025-14-6 (2008)
Möbus, C. and Eilers, M.: Further Steps Towards Driver Modeling According to the Bayesian Programming Approach. In: Conference Proceedings HCII 2009, Digital Human Modeling, pp. 413-422, LNCS (LNAI), Springer, San Diego, ISBN 978-3-642-02808-3 (2009a)
Möbus, C., Eilers, M., Garbe, H., and Zilinski, M.: Probabilistic and Empirical Grounded Modeling of Agents in (Partial) Cooperative Traffic Scenarios. In: Conference Proceedings HCII 2009, Digital Human Modeling, pp. 423-432, LNCS (LNAI), Springer, San Diego, ISBN 978-3-642-02808-3 (2009b)
Bessiere, P.: Survey: Probabilistic Methodology and Techniques for Artifact Conception and Development. Rapport de Recherche No. 4730, INRIA (2003)
Jensen, F.V. and Nielsen, T.D.: Bayesian Networks and Decision Graphs (2nd edition). Springer, ISBN 0-387-68281-3 (2007)
Horrey, W.J., Wickens, C.D., and Consalus, K.P.: Modeling Drivers' Visual Attention Allocation While Interacting With In-Vehicle Technologies. Journal of Experimental Psychology: Applied, 12, 67-78 (2006)