Combining Logical with Emotional Reasoning in Natural Argumentation


V. Carofiglio
Intelligent Interfaces, Department of Informatics, University of Bari, Italy
carofiglio@di.uniba.it

F. de Rosis
Intelligent Interfaces, Department of Informatics, University of Bari, Italy
derosis@di.uniba.it

Abstract

We discuss how emotions may affect shallow and inner forms of intelligence by considering, in particular, the case of argumentation. It has been argued that a 'natural' argumentation system should be endowed with the ability to provide burdens of proof as well as dialectical arguments, that is, arguments that are not necessarily based on the rationality and validity of proofs. We describe an ongoing project in which we model argumentation knowledge and emotion activation in intelligent agents with the formalism of belief networks. The two knowledge sources are combined to assess the rational and emotional strength of candidate strategies and to adapt selection of the 'most promising strategy' to the scale of values and the personality of the message receiver.

1. Introduction

Affective factors influence argumentation in several directions. They may influence argument strength by appealing to the receiver's emotions and highly placed values (Sillince and Minors, 1991) and may affect, at the same time, the way argument structures are formulated by the proponent (Wegman, 1988). A natural argumentation system should therefore be endowed with the ability to provide burdens of proof as well as dialectical arguments, that is, arguments that are not necessarily based on the rationality and validity of proofs. This view is compatible with the Elaboration Likelihood Model, according to which the effectiveness of persuasive communication is due to a process that follows two different paths induced by communication (Petty and Cacioppo, 1986). The first is the central route: it assumes that people are more likely to be persuaded if they are able to elaborate extensively on the message received. If they are motivated to think about the message and the message is a strong one, they will be persuaded accordingly: in this case, the rational aspect of argumentation is the central topic. Argumentation through the peripheral route assumes, instead, that even if a person is unable to elaborate on the message extensively, he may still be persuaded by factors that are only indirectly related to the content of the message itself. These factors (for instance, the prospect of consequences) induce some attitude change in the receiver of the argumentation message, as a result of the activation of some emotion or the recall of some highly placed value. This view is elaborated in Poggi's (1998) goal and belief model of persuasion, in which she describes two ways in which the receiver's goals may be influenced: emotion triggering and goal hooking.

Let us consider a still topical claim: "In the closing stages of the Gulf War between a UN-backed force and Iraq, the UN-backed force should have pressed on to Baghdad and thus ensured the complete overthrow of Saddam Hussein." (Sillince and Minors, 1991). The cited paper lists arguments for and against this statement that appeal to various kinds of emotions. Two examples:

Ex 1: an argument for, with appeal to fear: "If we leave Saddam with a nuclear and chemical weapons capability he may hit back at some time in the future."
Ex 2: again an argument for, with appeal to hope: "Once Saddam has been replaced and democratic institutions set up in Baghdad, the problems of the region will be over."

Theorists of argumentation do not agree on how this form of argumentation should be considered. To Sillince, emotions and highly placed values are among the factors that influence the strength of arguments: the manipulation and processing of emotions and values should therefore be part of the automatic generation of good arguments. Marcu (2000) mentions treating the hearer as a purely logical agent as one of the fallacies of present persuasive communication attempts.

Gilbert (1994) claims that arguments invariably include non-logical components essential to their proper understanding. Others, on the contrary, tag these arguments as logically irrelevant, although they may succeed in evoking an attitude of approval for oneself and what one says (Walton, 1999). This is, in our view, one of the domains in which purely logical reasoning must be integrated with consideration of emotional and value factors, to achieve some degree of naturalness in the generated message (Lisetti and Gmytrasiewicz, 2002). In this short paper, we show how a natural argumentation system may be built by appropriately combining the two forms of reasoning and by selecting the arguments that best suit the user's characteristics. We describe the formalism that we employ to represent the knowledge involved and to simulate argument selection strategies. We present the first results of this ongoing project and organize our contribution around two questions proposed for the Workshop.

2. Argumentation Modeling

Several authors have dealt with the problem of simulating the reasoning behind argumentation by applying the model proposed by Toulmin long ago (Toulmin, 1958). According to this model, a claim may be supported by presenting one or more data: these data act as variables that may be accepted in the scope of a warrant; they are evidence supporting the claim with a given degree of strength (specified by a qualifier). The power of the warrant may be decreased or increased by introducing a rebuttal or a backing of warrant. Some argumentation systems employed logic to represent knowledge and reasoning (for instance, Grasso et al., 2000). Others proposed representing uncertainty, to measure the strength of arguments and to model selection of optimal strategies in particular contexts (Zukerman et al., 1999). Recently, interest in natural argumentation has encouraged considering affective factors and the interlocutor's scale of values when modeling selection of the best strategy (Reed and Grasso, 2001). Consideration of these factors requires defining a method for representing the cognitive and affective state of the interlocutor. It requires, as well, enriching argumentation graphs with the knowledge needed to consider the affective impact of arguments.
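To make Toulmin's scheme concrete, a minimal data-structure rendering of its components might look as follows; the field names and example values are ours, introduced purely for illustration (they are not part of the model or of our system):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ToulminArgument:
    claim: str                    # the statement argued for
    data: list                    # evidence offered in support of the claim
    warrant: str                  # general rule licensing the step from data to claim
    qualifier: float = 1.0        # degree of strength of the support (0..1)
    backing: Optional[str] = None # grounds supporting the warrant itself
    rebuttals: list = field(default_factory=list)  # conditions weakening the warrant

# Example instantiated from the dialog discussed later in the paper.
arg = ToulminArgument(
    claim="You should eat more vegetables",
    data=["Vegetables protect against a number of serious illnesses"],
    warrant="Appeal to expert opinion",
    qualifier=0.8,
    backing="The British Food Standard Agency is a competent, trusted source",
    rebuttals=["My father is a vegetarian and he is not in good health"],
)
print(arg.claim, arg.qualifier)
```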
3. Some Answers to the Workshop Questions

Q1: How to model emotional states and attitudes

We justified elsewhere our choice of dynamic belief networks as a formalism for representing personality- and context-based emotion activation (de Rosis et al., in press). In the cited paper, we showed how the emotional state of an Embodied Animated Agent may be manifested through face and voice expression. In another paper, we showed how emotions may influence the dialog between the agent and a user, by contributing to dynamic revision of the priority of discourse goals (Cavalluzzi et al., 2003). In this contribution, we describe an ongoing development of that research project, aimed at simulating, in particular, appeal to emotions and to highly placed values in the persuasive section of the dialog. We first summarize the main aspects of our emotion modeling method and then describe the argument selection strategy. Let us denote with P the proponent of an argument and with R its receiver, that is, the person to whom the argument is addressed.

Our model of the activation of emotional states in R is represented with a Dynamic Belief Network (DBN). As proposed by Nicholson and Brady (1994), we use DBNs as goal-monitoring systems that employ the observation data in the time interval (T_i, T_{i+1}) to generate a probabilistic model of the receiver's mind at time T_{i+1} from the model that was built at time T_i. We employ this model to reason hypothetically about the consequences of an argument on the monitored goals of the receiver.

Let us consider the triggering of hope shown in figure 1 [1]. Hope is a positive emotion that is triggered by a change in the belief that the goal of Getting-future-good-of-self will be achieved. The intensity of this emotion is influenced by the following cognitive components:
- the receiver's belief that an event x will occur to self in the future: Bel_R Ev(Occ_R x);
- the belief that this event is desirable: Bel_R (Desirable x);
- the belief that this situation favours achieving the agent's goal: Bel_R (Ach-FutGoodSelf).

[1] The network includes two symmetrical components, at times T_i and T_{i+1}. Due to space limits, we show in the figure only the component at time T_{i+1}, with the link to the monitored goal at time T_i.

The emotion of fear is triggered by a similar cognitive mechanism, when some of the BN nodes involved have the opposite sign because the occurring event is undesirable. The goal involved is, in this case, Preserving-self-from-future-bad. Following utility theory, we calculate the variation in the intensity of an emotion in the receiver R as the product of the change in the probability of achieving a given goal and the utility that achieving this goal brings to R.
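As a concrete sketch of this mechanism, the fragment below combines the three cognitive components into a belief that the monitored goal will be achieved and applies the stated product rule for intensity. The multiplicative combination of components and all numeric values are our illustrative assumptions; the paper does not publish its conditional probability tables.

```python
# Sketch of the hope-activation slice of the DBN (illustrative only).

def p_goal_achieved(p_occurs: float, p_desirable: float, p_favours: float) -> float:
    """Belief that Getting-future-good-of-self will be achieved, combined
    here as a simple product of the three component beliefs (our assumption)."""
    return p_occurs * p_desirable * p_favours

def intensity_variation(p_goal_before: float, p_goal_after: float, utility: float) -> float:
    """Variation in emotion intensity between T_i and T_{i+1}:
    Delta I = Delta P(goal achieved) * U_R(goal), as stated in the text."""
    return (p_goal_after - p_goal_before) * utility

# Before the argument, R barely believes the desirable event will occur.
before = p_goal_achieved(p_occurs=0.2, p_desirable=0.9, p_favours=0.8)
# The appeal-to-hope argument raises Bel_R Ev(Occ_R x).
after = p_goal_achieved(p_occurs=0.7, p_desirable=0.9, p_favours=0.8)
# The utility encodes how highly R places this goal (personality-dependent).
print(intensity_variation(before, after, utility=0.9))  # ~0.32
```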

Q2: How should user emotions and attitudes be used to enhance HCI?

To achieve internal consistency in the representation of the knowledge sources employed in our argumentation system, we model the argumentation knowledge, too, by means of a belief network. To formalize the various aspects of Toulmin's model, we add some semantics to these networks by representing in their nodes not only data and claims but also warrants (in warrant-nodes). This gives us the opportunity to represent backings of warrant by chaining back on these nodes. Figure 2 shows, for instance, the warrant of appeal to expert opinion (Kienpointner, 1992) with its backing. The backing of warrant is a domain-dependent rule. Unification of a general rule (the warrant) with an instantiated one (the backing of warrant) allows chaining and defines a degree of applicability of the warrant in the considered context. Our argumentation-BN also includes proof-nodes, which enable us to represent multiple warrants concluding on the same claim. By associating weights with warrant-nodes according to the receiver's scale of values, we can simulate selection of argumentation strategies that appeal to the receiver's highly placed values. Explicit representation of warrants may be useful, as well, for translating selected strategies into natural language messages.

When logical and emotional argumentation are combined, the strength of an argument depends not only on its rational impact and its plausibility, but also on the strength of the emotions the argument triggers in the receiver. For example, arguments aimed at getting a receiver to adopt a recommended course of action by appealing to hope or to fear have the structure of arguments from (positive or negative) consequences. They try to persuade the receiver to accept (or reject) the truth of a proposition by citing the consequences, for R, of accepting (or rejecting) that proposition (Walton, 1992). Figure 3 shows the BN representing this argument when a positive consequence g of doing an action a is mentioned, and the purpose is therefore to persuade Ag (R, in our case) to perform a (as in Ex 1 in the Introduction).

Modeling knowledge about emotions and about argumentation strategies by means of BNs enables us to evaluate whether an argument is plausible and whether it produces, at the same time, the desired rational and emotional impact. In the example about the Iraq war, we may assess whether the conditions for triggering fear or hope (at a desired degree of intensity) are satisfied in a given context and with the given interlocutor. To evaluate every candidate argument the system might present to the receiver, simulative reasoning is applied to the argument graph that is in focus in the current phase of the dialogue, by guessing the effect this candidate might produce on the receiver. We call IM-S-R the second-order belief network that the system employs in this simulative reasoning.
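By way of illustration, a candidate strategy could be scored along the following lines. The scalar blend below is our own simplification of the two-network evaluation (the paper propagates beliefs through the argumentation-BN and IM-S-R rather than mixing fixed scores), and all names and numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CandidateStrategy:
    name: str
    rational_impact: float        # belief propagated to the claim in the argumentation-BN
    warrant_applicability: float  # how well the backing instantiates the warrant
    emotional_impact: float       # intensity variation predicted on IM-S-R

def strength(s: CandidateStrategy, value_weight: float) -> float:
    """Blend rational and emotional strength; value_weight stands in for the
    receiver's scale of values (an assumed scalar, not the authors' formula)."""
    rational = s.rational_impact * s.warrant_applicability
    return (1 - value_weight) * rational + value_weight * s.emotional_impact

candidates = [
    CandidateStrategy("appeal to hope (positive consequences)", 0.8, 0.9, 0.5),
    CandidateStrategy("appeal to fear (negative consequences)", 0.8, 0.9, 0.3),
]
# A receiver who weighs values and emotions heavily favours the hope-based strategy.
best = max(candidates, key=lambda s: strength(s, value_weight=0.6))
print(best.name)
```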
4. An Example Dialog

Let us consider an example dialog in the domain of advice about eating disorders [2], and let us denote with Si the system moves and with Uj the user moves.

S1: You should eat more vegetables. Eating sufficient quantities of vegetables may help to protect you against a number of dreadful illnesses, keeping you free to enjoy a long life.
U1: Who says that?
S2: The British Food Standard Agency.
U2: But is the FSA a good source of expertise in this domain?
S3: Yes it is! It is a competent and trustworthy source.
U3: But my father is a vegetarian and he is not in good health.
S4: Of course, there are exceptions. But the chances are increased considerably.
U4: But I like salami and don't care about being in good health!
S5: I'm surprised! Still, you don't smoke and you exercise regularly!
U5: True. But I'm doing it for aesthetic reasons.
S6: Oh, I understand now! But eating vegetables is also important to have an attractive appearance.
etc.

In this dialog, the selected strategy is appeal to positive consequences; figure 4 shows the emotion- and argument-BNs that may be employed to simulate the reasoning behind it.

[2] This is the domain we considered in the scope of the EC-funded Project MAGICSTER (IST-1999.29078), which partially supported this work.

The system wants to persuade the user to eat vegetables: ShouldDo_R EatVeg. It selects, among the strategies it knows, those having a good rational impact, by exploiting data in which the user presumably believes. It then tests the emotional impact of the arguments it might employ (for instance, appeal to hope, as in our example, or to fear) by applying simulative reasoning on IM-S-R, as shown in figure 4. That is, it asks itself: "Is U more sensitive to arguments evoking positive or negative consequences of eating vegetables?". The answer to this question depends on the presumed personality of the user, which is represented (as we said) in terms of the utility assigned to achieving the goal node in the figure. Once (as in our example dialog) appeal to hope has been selected, the system further decides whether to appeal, in its argumentation, to the value of having a good appearance or to that of being in good health. By propagating in the network the data it knows about the user (she does not smoke and exercises regularly), it selects the second alternative (Proof 2 in the figure), as this appears to be more likely.

Users may respond to the message produced by the system with critical questions of various kinds: about the data, about the qualifier or about the backing of warrant. They may also introduce rebuttals to weaken the system's argument. In our example, at move S1 the system introduces the argument supporting Proof 2. At moves S2 and S3, it answers further questions by exploring the BN backward. At move S5, it understands that its initial hypothesis about the user's scale of values was not correct and that appearance is more important to the user than health. It then explores the strategy leading to Proof 1 and carries on the dialog with this new strategy. A strategy based on appeal to fear would lead, on the contrary, to a move S1 of the type:

S1: You should eat more vegetables. Not eating appropriate daily amounts of fruits and vegetables significantly increases the risk of potentially fatal and tragic illnesses such as heart disease and cancer. You are also less protected against flu and other infections.
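The proof-selection step described above can be rendered as a toy posterior computation over the user's presumed highest-placed value, given the two observations available before move U5. We use a naive-Bayes shortcut here in place of full propagation in the BN, and all priors and likelihoods are illustrative assumptions, not values from the paper:

```python
# Toy naive-Bayes guess of the user's highest-placed value from the
# observations the system holds early in the dialog (illustrative numbers).

prior = {"health": 0.5, "appearance": 0.5}

# P(observation | value) -- assumed likelihoods.
likelihood = {
    "health":     {"does_not_smoke": 0.9, "exercises": 0.7},
    "appearance": {"does_not_smoke": 0.5, "exercises": 0.8},
}

observations = ["does_not_smoke", "exercises"]

score = {value: prior[value] for value in prior}
for value in score:
    for obs in observations:
        score[value] *= likelihood[value][obs]

total = sum(score.values())
posterior = {value: s / total for value, s in score.items()}
print(posterior)
# -> health ~0.61: the system argues from Proof 2 first; after move U5 it
#    revises this guess and switches to the appearance-based Proof 1.
```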
5. Conclusions

We still have to clarify a number of problems before producing a system that simulates the subtleties of emotional argumentation. We suspect that even the line separating rational from emotional arguments is not clear-cut and depends on the proponent's beliefs about the receiver: in some cases, we found it difficult to categorize an argument as rational or emotional. But we believe that this is a good reason to claim that a natural argumentation system cannot do without considering emotional factors.

Acknowledgements

After we submitted this paper, we had extended discussions about emotional persuasion with Cristiano Castelfranchi, Maria Miceli and Isabella Poggi: we owe to these discussions, and to the insightful comments of one of the reviewers, a number of hints on how to revise our original ideas.

6. References

Cavalluzzi, A., De Carolis, B., Carofiglio, V. and Grassano, G. (2003). Emotional dialogs with an embodied agent. Proceedings of User Modeling '03. Springer LNCS.

de Rosis, F., Pelachaud, C., Poggi, I., Carofiglio, V. and De Carolis, B. (in press). From Greta's mind to her face: modelling the dynamics of affective states in a conversational embodied agent. International Journal of Human-Computer Studies.

Gilbert, M.A. (1994). What is an emotional argument? Or why do argument theorists quarrel with their mates? ISSA Conference, June.

Grasso, F., Cawsey, A. and Jones, R. (2000). Dialectical argumentation to solve conflicts in advice giving: a case study in the promotion of healthy nutrition. International Journal of Human-Computer Studies, 53(6).

Kienpointner, M. (1992). How to classify arguments. In F.H. van Eemeren, R. Grootendorst, J.A. Blair and C.A. Willard (eds.), Argumentation Illuminated, pp. 178-188.

Lisetti, C.L. and Gmytrasiewicz, P. (2002). Can a rational agent afford to be affectless? A formal approach. Applied Artificial Intelligence, 16(7-8).

Marcu, D. (2000). Perlocutions: the Achilles' heel of speech acts. Journal of Pragmatics.

Nicholson, A.E. and Brady, J.M. (1994). Dynamic belief networks for discrete monitoring. IEEE Transactions on Systems, Man and Cybernetics, 24(11).

Petty, R.E. and Cacioppo, J.T. (1986). The Elaboration Likelihood Model of persuasion. In L. Berkowitz (ed.), Advances in Experimental Social Psychology, 19, pp. 123-205. Academic Press.

Picard, R. (1997). Affective Computing. MIT Press.

Poggi, I. (1998). A goal and belief model of persuasion. 6th International Pragmatics Conference.

Reed, C.A. and Grasso, F. (2001). Computational models of natural language argument. In V. Alexandrov et al. (eds.), Proceedings of the International Conference on Computational Science (ICCS 2001). Springer Verlag, San Francisco.

Sillince, J.A. and Minors, R.H. (1991). What makes a strong argument? Emotions, highly-placed values and role-playing. Communication and Cognition, 24(3-4).

Toulmin, S.E. (1958). The Uses of Argument. Cambridge University Press.

Walton, D.N. (1992). The Place of Emotions in Argument. Penn State University Press.

Walton, D.N. (1999). Dialectical relevance in persuasion dialogue. Informal Logic, 19(2-3).

Wegman, C. (1988). Emotion and argumentation in expression of opinion. In V. Hamilton, G.H. Bower and N.H. Frijda (eds.), Cognitive Perspectives on Emotion and Motivation. Kluwer.

Zukerman, I., McConachy, R., Korb, K. and Pickett, D. (1999). Exploratory interaction with a Bayesian argumentation system. Proceedings of the 16th IJCAI, pp. 1294-1299, Stockholm, Sweden. Morgan Kaufmann.

Fig. 1: Activation of hope

Fig. 2: The appeal to expert opinion warrant, with its backing

Fig. 3: The appeal to positive consequences warrant

Fig. 4: Combining an argumentation-BN with an emotion-activation-BN