Semiotics and Intelligent Control


Morten Lind
Ørsted-DTU: Section of Automation, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark. mli@oersted.dtu.dk

Abstract: The overall purpose of this paper is to demonstrate the relevance of semiotic concepts to the analysis of intelligent control systems. Semiotics has only a minor impact on research within intelligent control and robotics; these areas are currently dominated by the mathematical concepts of control theory and the information-processing concepts of artificial intelligence. This situation is unfortunate, because the understanding of a complex control problem requires an analysis of the sign relations between sensory data and their meanings, and of the sign relations between the physical result of a control action and the intentions of control agents. These problems of sign interpretation, inherent in all control situations, are relevant for the design of automated controls and of the interaction between human operators and the automation. The relevance of semiotics to control problems is demonstrated by applying the semiotics of action developed by Charles Morris to various aspects of a control situation. We use Morris's definition of three dimensions of signifying, which arises from a decomposition of an action into perceptual, appraisive and prescriptive stages. These distinctions identify the types of knowledge that a control agent must have in order to cope with a control situation in the environment. Selected examples from the domains of robotics and process control demonstrate that Morris's semiotics of action is valuable for conceptual analysis of control situations and for systems design.

Key words: Intelligent Control, Interpretation, Robots, Architecture

(The original version of this chapter was revised: the copyright line was incorrect. This has been corrected. The erratum to this chapter is available at DOI: 10.1007/978-0-387-35611-2_22. K. Liu et al. (eds.), Organizational Semiotics, IFIP International Federation for Information Processing, 2002.)

1. INTRODUCTION

A control agent interacting with a physical environment interprets and reacts to external events and happenings in order to produce or maintain a state of affairs in the environment. The efficiency of a control action is highly dependent on a proper interpretation of events, i.e. of their significance in the actual situation. One would therefore expect semiotics to play a key role in the analysis and design of control systems, especially in areas dealing with complex problems such as intelligent control and robotics. This is actually not the case. Semiotics has only a minor impact on research in these areas, which are currently dominated by the mathematical concepts of control theory and the information-processing concepts of artificial intelligence. This situation is unfortunate because complex control problems cannot be analysed properly without an understanding of the sign relations between sensory data and their meanings, and of the sign relations between the physical result of a control action and the intentions of control agents. These problems of sign interpretation, inherent in all control situations, are relevant for the design of automated controls and for the interaction between human operators and the automation.

The relevance of semiotics to control problems is demonstrated in the following by applying the semiotics of action developed by Charles Morris to various aspects of a control situation. We use Morris's definition of three dimensions of signifying, which arises from a decomposition of an action into three stages of perception, appraisal and prescription. These distinctions identify the types of knowledge that a control agent must have in order to cope with a control situation in the environment. Selected examples from the domains of robotics and process control demonstrate that Morris's semiotics of action is valuable in conceptual analyses of control situations and in systems design.

2. A SEMIOTIC ANALYSIS OF ACTIONS

Morris's (1985) semiotics of action describes how an agent interacting with an environment associates meaning with events.
It is assumed in the analysis that the actions of the agent are aimed at the achievement of goals which are valuable or desirable for the agent. The courses of action chosen by the agent depend both on these goals and on observations of the state of the environment. Morris's semiotics is based on Mead's analysis of the act (Mead 1938) and distinguishes three dimensions of sign signification, namely a designative, an appraisive and a prescriptive dimension. These three dimensions of signification are derived from Mead's decomposition of an act into three phases. Morris and Mead discuss the behaviour of biological species, i.e. they talk about the acts of an organism. We will generalize their analysis to agents, which could be biological organisms (e.g. humans) or artefacts (robots). It should be stressed that we do not associate symbolic activity with artefacts. The relations between signs in the environment and their meaning or signification should be understood as modes of interpretation that are prescribed by the designer and implemented in the hardware or software of the control system. Morris's analysis is summarized in Table 1.

Table 1. Morris's three dimensions of signification (1985)

  Action                      Dimension of     Interpretant              Signification
                              signification    requirements
  --------------------------  ---------------  ------------------------  ------------------------
  Obtaining information       Designative      Sense organs              Stimulus properties of
                                                                         the object
  Selection of objects for    Appraisive       Object preferences of     Reinforcing properties
  preferential behaviour                       the agent                 of the object
  Action on object by         Prescriptive     Behaviour preferences     Act as instrumental
  specific behaviour                           of the agent

Morris's dimensions of sign signification define three ways in which an agent can associate meaning with objects or events (signs) in the environment, and therefore identify the types of knowledge that the agent must have of the environment in order to act. Advantage can be taken of the behavioural consequences of different sign interpretations; for example, critical events can be made to prescriptively signify avoidance reactions or counteractions by proper design of the robot software.

3. SEMIOTICS AND INTELLIGENT ROBOTS

Even though semiotic concepts are seldom used explicitly in the analysis or design of intelligent robots, it is quite easy to find examples of Morris's dimensions of signification in this domain. The purpose of robots is to perform tasks in complex physical environments such as underwater inspection, missions in space or materials handling in manufacturing. These robots interact with environments of great complexity and can only operate successfully if they are able to detect changes (i.e. signs) in the environment, to collect sensory information, to interpret the situation and to plan and execute appropriate actions.
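Morris's three dimensions (Table 1) can be read as three distinct mappings from a sign to a meaning, each drawing on a different kind of knowledge. The following minimal sketch makes this concrete; all names and data structures are ours, introduced purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Sign:
    """Any detectable event or object in the environment."""
    stimulus: str  # raw observable property

@dataclass
class Agent:
    """What the agent brings to interpretation (hypothetical encoding)."""
    preferences: dict  # object preferences -> appraisive knowledge
    behaviours: dict   # behaviour repertoire -> prescriptive knowledge

def designative(sign: Sign) -> str:
    """Obtaining information: meaning fixed by the sensory apparatus alone."""
    return f"feature:{sign.stimulus}"

def appraisive(sign: Sign, agent: Agent) -> float:
    """Selection for preferential behaviour: meaning fixed by the agent's preferences."""
    return agent.preferences.get(sign.stimulus, 0.0)

def prescriptive(sign: Sign, agent: Agent) -> str:
    """Action by specific behaviour: the sign directly selects a behaviour."""
    return agent.behaviours.get(sign.stimulus, "no-op")
```

The point of the sketch is only that the same `Sign` acquires three different kinds of meaning depending on which knowledge it is interpreted against.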
The processing of information in a robot can be divided into several stages, each of which can be regarded as a step of interpretation of the primary sensory information. Although it is possible to subdivide these processes into subordinate processes at almost arbitrary levels of detail, it is customary to distinguish between three overall processes: 1) perception, 2) situation assessment and 3) execution of actions. These three processes correspond closely to Morris's three dimensions of sign signification.

The first, perceptual stage of interpretation is the extraction of features from sensory data. At this stage the significations of signs in the environment are determined by the properties of the sensory systems, e.g. the video hardware and software and the algorithms for feature extraction in vision-based robots. The meanings associated with the signs are in this case not derived from the goals or the action capabilities of the robot. The significance or meaning derived from the signs in the first stage belongs to the designative dimension in Morris's typology.

In the second stage of situation assessment, significance is associated with an event (sign) in the environment through correlations with the goals of the robot (actually of its designer). The sign accordingly signifies the desirability or value of the environmental situation for the robot. The significance or meaning derived in this stage therefore belongs to the appraisive dimension in Morris's typology. Note that the interpretation may be based on a combination of raw sensory inputs and features extracted in the perceptive stage. These alternatives in the interpretation of signs are, as seen below, highly relevant to recent discussions of functional architectures of robots.

In the third stage of action execution, signs in the environment acquire meaning by being associated with an action or response. The action may be a direct reaction to the sensory input, i.e. not involving any kind of reflection on the desirability of the situation or on the range of possible actions. In this case objects or events in the environment would belong to Morris's third category of prescriptive signs.

Figure 1. The basic principle of deliberation

3.1 Deliberation and subsumption architectures

Signs of different types of signification are often combined in robots and other industrial control systems. Observed features are used to assess the desirability of an action or the state of the environment. The features and the assessment together define the significance of the situation and direct the associated response. The situation becomes in this way a complex sign. Such complex interpretations of sensory data characterize robots that deliberate about actions. A paradigm example of an industrial robotic architecture implementing the principle of deliberation is the RCS architecture (Albus & Proctor, 1996).

Figure 2. The basic principle of subsumption (layered behaviours between sensors and actuation: wander; explore; build maps; monitor changes; identify objects; plan changes to the world; reason about behaviour of objects)

The deliberation principle, illustrated in Figure 1, is often contrasted with the so-called subsumption principle proposed by Brooks (1986) and shown in Figure 2. Here the idea is to associate actions directly with sensory events, i.e. to interpret happenings in the environment as prescriptive signs. Robots based on these two diverse principles have different behavioural characteristics. Robots that deliberate spend time reflecting on complex interpretations of events in their environment; they therefore tend to be slow, but are able to accommodate changing conditions. In contrast, robots working according to the subsumption principle respond fast but cannot cope with major changes in the environmental conditions because of their stereotyped interpretations of events.

Semiotic views of the deliberation and subsumption architectures are shown in Figure 3. Three elements of semiosis are involved in deliberation and only two in subsumption; the arrows indicate their temporal sequence. It is thus quite meaningful to analyse robots within the conceptual framework of Morris's semiotics.
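The contrast between the two principles can be caricatured in a few lines of code. This is only an illustrative sketch under our own naming assumptions, not the RCS or subsumption implementations: deliberation passes a sensory event through all three interpretive stages, while subsumption treats the event itself as a prescriptive sign and maps it straight to a behaviour (the default "wander" behaviour is our assumption, echoing Brooks's lowest layer):

```python
def deliberate(sensor_event, goals, goal_value, plan):
    """Deliberation: perceive -> appraise against goals -> prescribe an action."""
    features = f"feature:{sensor_event}"                      # designative stage
    goal = max(goals, key=lambda g: goal_value(g, features))  # appraisive stage
    return plan(goal, features)                               # prescriptive stage

def subsume(sensor_event, rules):
    """Subsumption: the sensor event is itself a prescriptive sign; react directly."""
    return rules.get(sensor_event, "wander")  # fall back to the lowest-level behaviour
```

The sketch also shows why the behavioural characteristics differ: `subsume` is a single lookup and hence fast but fixed, while `deliberate` performs three interpretive steps and can change its response by changing `goals` or `goal_value`.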

Figure 3. The semiotics of deliberation and subsumption (deliberation links designative, appraisive and prescriptive signification in sequence; subsumption links designative and prescriptive signification directly)

4. SIGN INTERPRETATION AND KNOWLEDGE IN ACTION

An agent will, according to Morris, interpret events and objects in relation to its sensory apparatus, its goals and values, and its possible courses of action. These interpretations are behavioural skills resulting from the adaptation of the agent to the environment through design or evolution. Another way to describe this adaptation is to say that the agent has acquired knowledge about how to behave in the environment. Morris's three dimensions of sign signification therefore correspond to three basic types of knowledge in action.

The knowledge of an agent interacting with a natural world (i.e. an environment not created or shaped to fulfil human purposes) is derived from sensory data and from the values and action potential of the agent. However, an agent adapted to an artificial environment must in addition have knowledge about the intentions of the agents that have formed the environment. Take as an example a mobile robot in a laboratory. Such a robot must obviously have knowledge about the physical dimensions and positions of objects in the laboratory in order to avoid collisions. But in addition, it must also know about the proper use of the objects, i.e. their purpose or function. If the robot did not know about intentions it would not exhibit the traits we normally associate with intelligent behaviour. It would, for example, not be able to distinguish between an empty living room with many doors and a corridor with identical dimensions; the living room and the corridor can only be distinguished by a difference in function or use, not by their physical shape or dimensions. This knowledge about purposes or intentions can only be acquired through communication, by coding or by learning. Agents adapted to artificial environments must therefore have knowledge about the intentions of, or share values with, other agents, i.e. be part of a social context.

5. MULTIPLE INTERPRETATIONS

The same sign may have several interpretations depending on the context. As an example, the appraisive signification of an object or event will depend on the goals or intentions of the agent. The relevance of this phenomenon for the analysis of complex control situations can be demonstrated by the following example of controlling a ship engine cooling system (Figure 4).

  SENSOR VALUE  MEANING                   PLANT GOAL            REMEDIAL TASK       REMEDIAL ACTION
  ------------  ------------------------  --------------------  ------------------  -------------------------
  dp is zero    Engine cooling is lost    Maintain engine       Restore energy      Reduce energy
                                          temperature within    balance             production
                                          limits
                No flow of seawater       Maintain coolant      Restore coolant     Reconfigure and start
                                          flow                  flow                auxiliary cooling system
                Pump P2 is not doing      Produce pressure      Restore pressure    Reconfigure and start P3
                mechanical work
                The pump motor has        Keep pump running     Restore pump        Repair pump motor
                stopped                                         revolutions

Figure 4. Multiple interpretations of the ship engine cooling system state (system components: water tank, engine, seawater inlet and outlet, auxiliary cooling system)
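The goal-dependence of the appraisive interpretation in Figure 4 can be expressed as a simple lookup: the same reading dp = 0 maps to four different meaning/action pairs, indexed by the goal the agent is currently pursuing. A hypothetical encoding (the dictionary keys and strings paraphrase the figure; the "normal operation" branch is our assumption):

```python
# The four appraisive interpretations of "dp is zero", keyed by plant goal.
INTERPRETATIONS = {
    "maintain engine temperature":
        ("engine cooling is lost", "reduce energy production"),
    "maintain coolant flow":
        ("no flow of seawater", "reconfigure and start auxiliary cooling system"),
    "produce pressure":
        ("pump P2 is not doing mechanical work", "reconfigure and start P3"),
    "keep pump running":
        ("the pump motor has stopped", "repair pump motor"),
}

def interpret(dp: float, goal: str):
    """The same sensor value signifies different deviations under different goals."""
    if dp == 0.0:
        return INTERPRETATIONS[goal]
    return ("normal operation", "no action")
```

The ambiguity discussed in the text is visible in the code: nothing in the sensor value itself selects among the four entries; only the goal does.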

The purpose of the cooling system is to keep the engine temperature within limits. The cooling is provided by a heat exchanger transferring the heat produced by the engine to a seawater circulation loop. For the sake of the present discussion we assume that the only parameter that can be observed by the control agent is the differential pressure dp across the seawater pump. The task of the control agent is to ensure that the cooling system is serving its purpose and that it is safe, i.e. the system is operated according to multiple goals. Note that a reading of the differential pressure dp will have different interpretations or meanings depending on the goals of the control agent (human or machine). The four different meanings of the differential pressure shown in Figure 4 represent deviations from a desirable condition (the associated goal). Since the system is operated according to four goals, we have four different interpretations, all corresponding to an appraisive signification of the pressure reading. Such ambiguity in the interpretation of the state of the object under control is problematic for a control agent, because the remedial action to choose in the situation depends on the interpretation, and because each remedy may have undesirable consequences. The agent should obviously analyse these multiple interpretations in order to act intelligently (see Lind (1996) for a more detailed analysis of this example). It is often possible to circumvent such interpretation problems by providing additional independent measurements. However, since instrumentation is costly, it can be tempting to reduce the number of sensor readings to a minimum. Semiotic analyses along these lines could thus be used in control system design to reduce the risk of plant operations by identifying potential interpretation problems. Semiotics may in this way be used constructively to design the meaning environment of control agents.
The difference between control by deliberation and control by subsumption can also be illustrated with the cooling system example. Using subsumption, the control agent reacts directly to sensor events with a prescribed action. Since there are four different interpretations (actions) of the sensor value, the agent must use action preferences to decide which of the interpretations to use. The four interpretations are illustrated in Figure 5 by the horizontal boxes.
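The two selection policies at work in this example can be sketched as follows: under subsumption a fixed ranking over the four prescribed actions decides, whereas under deliberation a ranking over goals is applied first and the chosen goal then dictates a single remedial action. This is an illustrative sketch; the ranking functions are our assumptions, not part of the paper:

```python
def act_by_action_preference(prescribed_actions, action_rank):
    """Subsumption: all interpretations prescribe an action at once;
    a fixed ranking over actions picks the one to execute."""
    return min(prescribed_actions, key=action_rank)

def act_by_goal_preference(goals, goal_rank, remedy_for_goal):
    """Deliberation: rank the goals first; the chosen goal then
    determines a single remedial action."""
    chosen_goal = min(goals, key=goal_rank)
    return remedy_for_goal[chosen_goal]
```

Both policies can end up executing the same action; the difference lies in where the preference knowledge resides, over actions or over goals.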

  SENSOR VALUE  MEANING                   PLANT GOAL            REMEDIAL TASK       REMEDIAL ACTION
  ------------  ------------------------  --------------------  ------------------  -------------------------
  dp is zero    Engine cooling is lost    Maintain engine       Restore energy      Reduce energy
                                          temperature within    balance             production
                                          limits
                No flow of seawater       Maintain coolant      Restore coolant     Reconfigure and start
                                          flow                  flow                auxiliary cooling system
                Pump P2 is not doing      Produce pressure      Restore pressure    Reconfigure and start P3
                mechanical work
                The pump motor has        Keep pump running     Restore pump        Repair pump motor
                stopped                                         revolutions

Figure 5. Using subsumption there would be four different ways to act.

An agent deliberating about action will interpret the sensor value in view of the four possible goals to achieve and use goal preferences to select the actual goal to pursue in the situation. In Figure 6 the agent has chosen to maintain coolant flow, and therefore to remedy the situation by reconfiguring and starting the auxiliary cooling system.

Figure 6. Using deliberation there would be only one way to act (the same table as Figure 5, with the "no flow of seawater" interpretation and the remedial action "reconfigure and start auxiliary cooling system" selected).

6. CONCLUSIONS

We have demonstrated the relevance of semiotic concepts for the analysis of complex control situations. It has been shown that the semiotics of action developed by Morris can be applied to analyse the interpretation of sensor information in control situations. It has also been shown that Morris's semiotics provides a new perspective on recent discussions of alternative software architectures for robotics.
Finally, a process plant example demonstrates that semiotics can be used to identify ambiguities in the interpretation of sensor information. It is clear from these examples that semiotics offers valuable new insights in robotics and intelligent control that supplement the more traditional logical or mathematical approaches. It is concluded that semiotics is of particular value in qualitative analyses of complex control problems. It is an open question, however, whether semiotics can also be used constructively, i.e. in the synthesis of intelligent controls.

ACKNOWLEDGMENTS

The present work is supported by The Danish Foundation for Basic Research through a contract with the Centre for Human-Machine Interaction (CHMI).

REFERENCES

Albus, J. & Proctor, F. G. (1996) "A Reference Model Architecture for Intelligent Hybrid Control Systems", Proc. of the International Federation of Automatic Control, San Francisco, CA, June 30-July 5.

Brooks, R. A. (1986) "A Robust Layered Control System for a Mobile Robot", IEEE Journal of Robotics and Automation, Vol. RA-2, No. 1, March, pp. 14-23.

Lind, M. (1996) "Interpretation Problems in Modelling Complex Artefacts for Diagnosis", Proc. Cognitive Systems Engineering for Process Control, Kyoto, Japan, Nov. 12-15.

Mead, G. H. (1938) "The Philosophy of the Act", The University of Chicago Press, Chicago.

Morris, C. (1985) "Signs and the Act", in: R. E. Innis (ed.), Semiotics: An Introductory Anthology, Indiana University Press, Bloomington.