Visual book review
Safe and Sound: AI in Hazardous Applications, by John Fox and Subrata Das
Boris Kovalerchuk, Dept. of Computer Science, Central Washington University, Ellensburg, WA 98926-7520, borisk@cwu.edu
An abbreviated version will appear in IEEE Engineering in Medicine and Biology Magazine, 2001. 01.19.2001

The recent book Safe and Sound: AI in Hazardous Applications by John Fox and Subrata Das (AAAI Press / The MIT Press, 2000, 293 p., ISBN 0-262-06211-9) draws the attention of the research community and practitioners to the problem of safety in traditional and computer-aided medical diagnosis and treatment. The authors use medical applications as a focal point for the general safety problem across a variety of hazardous applications. At first glance, the problem is already well known. However, the authors show that discovering and analyzing the sources of danger in hazardous applications is far from having rigorous solutions. The book takes an artificial intelligence (AI) approach, which allows one to express different types of statements in a consistent logical fashion. Specifically, the authors consider two main types of statements for making medical decisions: claims and their grounds. In addition, a confidence value is assigned to a claim on the basis of its ground statements. For instance, the claim can be that Mr. P has a gastric ulcer, and the grounds can be that Mr. P has pain after meals and that ulcers are painful because of an increase in acidity. The word "supports" can express the level of confidence. In the book, the safety issue in defining diagnosis and treatment is viewed as adding restrictions on inference; the goal of the restrictions is to avoid actions that are dangerous for patients. This review should help a reader to see the conditions for successful application of the logic of argument (LA), which is the central theme of the book. The book contains three parts (see Figure 1). Part 1: a method for building software agents that produce Rigorously Engineered Decisions.
Part 2: a technique for deploying agents in general and hazardous environments, with medical examples. Part 3: the formal and logical aspects of the method. The spread of the central theme over the book is presented in Figure 1. Chapter 4 provides an informal description of LA, Chapters 13 and 15 contain its formalisms, and Chapter 17 presents the implementation of LA in the Prolog language. The critical component of LA is exposed in Axiom 8 (Chapter 13). This axiom is the central assumption of LA: it formalizes the requirement that the arguments used in LA for inferring a diagnosis or treatment recommendation be independent. This property is also known as truth-functionality in the artificial intelligence literature [1] and has been a subject of intensive discussion in the AI community for years [1,2,3].
Figure 2 presents the main concepts introduced in the book:
- The Domino autonomous agent model. The model includes several components: goals, situation (patient data), actions (clinical orders), candidate solutions, decisions (diagnosis, treatment), and plans (treatment and care plans) (Chapter 6).
- PROforma, the development environment for the Domino model. PROforma includes techniques for reasoning under uncertainty, formulating goals, and identifying and arguing solutions (Chapter 5).
- The guardian agent and the safety bag. These components watch out for hazards and safety (Chapter 7).
- The safety protocol. This protocol allows one to communicate safety conditions in terms of modalities such as safe, authorized, preferred, permitted, and obligatory (Chapter 10).
- The reasoning procedure. This procedure permits explicit reasoning about safety in terms of actions such as anticipate, alert, avoid, augment, diminish, monitor, schedule, react, and authorize (Chapter 8).

Figure 1. Map of the book with the central theme: Part 1, a method for building software agents that produce Rigorously Engineered Decisions (RED); Part 2, deploying agents in general and hazardous environments, with medical examples; Part 3, the formal and logical aspects of the method. The central theme is the logic of argument, with its central assumption in Axiom 8 (truth functionality, i.e., independence of arguments), its formal version, and its implementation in the Prolog language.

The authors coined the term RED (Rigorously Engineered Decisions) to distinguish diagnostic and treatment decisions made in a disciplined way from less controlled decisions. The RED approach is based on making decisions from the claims, grounds, and confidence values mentioned above; specifically, the authors suggest using the logic of argument. The modern AI terminology of autonomous software agents is used throughout the book.
The figures below analyze LA, an inherently mathematical subject, using a conceptual visualization approach. The technical results presented in the book have two components: the general logic of argument, covered in Chapters 12-15 and 17, and a formal logic of safety, covered in Chapter 16 (see Figure 3). Two languages are introduced for LA: the language R²L (Chapter 12) for rigorously engineered decisions, and its formal equivalent, the Temporal
Propositional Language R2L (Chapter 13). The transformation procedure from R²L to R2L is given in Chapter 14. The first language is convenient for describing tasks, and the second for implementation in Prolog (Chapter 17).

Figure 2. Main concepts introduced in the book: the Domino autonomous agent model with its components (goals, patient data, clinical orders, candidate solutions, decisions, and treatment and care plans); PROforma, the development environment for the Domino model; the guardian agent and safety bag watching out for hazards and safety; the modalities of the safety protocol (safe, authorized, preferred, permitted, obligatory); and explicit reasoning about safety in terms of actions (anticipate, alert, avoid, augment, diminish, monitor, schedule, react, authorize).

Figure 3. Main technical results: the language R²L for rigorously engineered decisions (RED); the Temporal Propositional Language R2L; the procedure for equivalent knowledge transformation from R²L to R2L; the formal logic of argument and the formal logic of safety; and an interpreter for R2L in the Prolog language, with medical examples.
The authors see AI and cognitive science professionals as their primary audience, but people working in medical applications can also gain a greater appreciation of the potential of rigorous analysis of safety issues. Unfortunately, medical funding agencies in the US have underestimated this potential. The book provides enough successful medical examples to convince a reader that Rigorously Engineered Decisions make sense and have strong potential in medical applications. However, as with any technique, the RED approach is not a panacea; it has its own restrictions. The rest of this review is devoted to a description of the central theme of the book, the logic of argument (Figures 4 and 5), and to an analysis of the restrictions of the approach (Figures 6 and 7).

The term argument is defined as a triple <Claim: Grounds: Qualifier>, where the Qualifier is the confidence value assigned to the Claim on the basis of the Grounds. For example: Claim = "Mr. P has a gastric ulcer"; Grounds = "Mr. P has pain after meals" and "ulcers are painful because of an increase in acidity"; Qualifier = supports. The authors emphasize that an argument has the form of a logical rule, Grounds =>supports Claim, but does not have its force; that is, an argument does not sanction a definite conclusion, and, in contrast with a conventional proof, its conclusion cannot be guaranteed to be correct, since we may not be able to assume that all premises are true (see Figure 4). The basic logic underlying the logic of argument can be classical (0/1) logic or another logic. Three types of qualifiers are distinguished: 1) binary (supports/opposes); 2) numerical, drawn from [0,1] (probabilistic or other measures); 3) linguistic signs (e.g., "presumably" and other terms for levels of consistency between the grounds of the argument).

Figure 4. The central theme: the logic of argument, the argument triple, and the types of qualifiers.
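To make the triple concrete, here is a minimal Python sketch of arguments as <Claim: Grounds: Qualifier> triples with binary qualifiers aggregated by simple counting. All names and data are the reviewer's hypothetical illustration, not the book's PROforma or Prolog implementation.

```python
# Illustrative sketch (not the book's implementation): an argument as the
# triple <Claim: Grounds: Qualifier>, with binary qualifiers aggregated
# by counting. All claims and grounds below are hypothetical examples.
from collections import namedtuple

Argument = namedtuple("Argument", ["claim", "grounds", "qualifier"])

def net_support(arguments, claim):
    """Count binary qualifiers: +1 for 'supports', -1 for 'opposes'."""
    score = 0
    for a in arguments:
        if a.claim == claim:
            score += 1 if a.qualifier == "supports" else -1
    return score

args = [
    Argument("gastric ulcer",
             ["pain after meals", "ulcers are painful due to acidity"],
             "supports"),
    Argument("gastric ulcer", ["normal endoscopy"], "opposes"),
    Argument("gastric ulcer", ["positive H. pylori test"], "supports"),
]

print(net_support(args, "gastric ulcer"))  # 2 - 1 = 1
```

Counting is only the crudest binary aggregation; numerical and linguistic qualifiers would need a combination function, which is exactly where Axiom 8 enters below.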
Figure 5. The central assumption: independence. Axiom 8 (the truth-functionality axiom) states that if d1 and d2 are supports for claims F and G respectively, then ⊕(d1, d2) (d1 ⊕ d2 in infix notation) is a derived support for F&G:

Axiom 8: <sup d1>F & <sup d2>G → <sup d1 ⊕ d2>(F&G),

where ⊕: D × D → D is the function for computing support for assertions derived through material implication. Truth functionality means that the support of F&G depends only on d1 (the support of F) and d2 (the support of G) and does not depend on any interaction between F and G; absence of interaction means independence of F and G. In probabilistic inference this assumption is not accepted in general. Example: in a lattice-structured dictionary, d1 ⊕ d2 can simply be taken as the greatest lower bound of d1 and d2. The authors note that "the scope of unforeseen and unforeseeable interactions is vast" (p. 127); the reviewer's comment is that independence of F and G should be provided and tested.

The authors state that the conclusion of an argument process cannot be guaranteed to be correct, in contrast with a conventional proof. This, of course, is the typical situation in medical arguments. Several types of qualifiers are considered to express the level of support for a claim: binary (supports/opposes); numerical, drawn from [0,1] (probabilistic or other measures); and linguistic signs such as "presumably" and other terms for expressing levels of consistency between the grounds of the argument. The authors base their assumption of independence (Axiom 8) on the observation that in many applications there are not enough data for estimating dependencies in the form of conditional probabilities P(F|Q) and P(Q|F). This is obviously true. Therefore, the authors select tasks where arguments are most likely independent. For instance, the argument A = "Mr. P has pain after meals" and the argument B = "ulcers are painful because of an increase in acidity" are relatively independent: ulcers can be painful independently of the pain suffered by Mr. P after meals.
However, this is not something that can be guaranteed for all possible medical statements.
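The point about truth-functionality can be made concrete with a small Python sketch, assuming (as in the book's lattice example) that the derived support of F&G is taken as the greatest lower bound of the individual supports; the probabilistic counterexample is the reviewer's illustration.

```python
# Reviewer's sketch of why Axiom 8 amounts to an independence assumption.
# A truth-functional rule computes the support of F&G from the supports of
# F and G alone, e.g. as their greatest lower bound (min on the chain [0,1]).
def glb_support(d1, d2):
    return min(d1, d2)

d_F, d_G = 0.5, 0.5
print(glb_support(d_F, d_G))   # 0.5, regardless of how F and G interact

# Probabilistically, the same marginal supports admit different joints:
p_if_independent = d_F * d_G   # F, G independent: P(F & G) = 0.25
p_if_identical = d_F           # F and G are the same event: P(F & G) = 0.5
print(p_if_independent, p_if_identical)  # 0.25 0.5

# No single function of (0.5, 0.5) can return both 0.25 and 0.5, so a
# truth-functional combination is sound only when the interaction between
# F and G can be neglected: exactly the independence in question.
```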
Figure 5 explains the logic of Axiom 8 and its way of expressing independence of arguments through truth-functionality. Figure 6 analyzes the future of the central assumption of the book. The authors mention that "the scope of unforeseen and unforeseeable interactions is vast" (p. 127). For some tasks, designers can provide independent (non-interacting) arguments, but this is not a universally rigorous design approach. The success of fuzzy logic control justifies this statement [3]: control engineers were able to find thousands of applications with minor interactions between arguments and/or to adjust logical connectives and operators using neural networks and other learning techniques to accommodate interactions. The medical field needs similar techniques. A study of breast cancer diagnosis shows many interactions between arguments that cannot be ignored. For instance, in [4] these interactions were recovered by the interactive monotone Boolean function approach and used for developing diagnostic rules.

Figure 6. Discussion of the central assumption. The authors' position: in many applications there are no data for estimating dependencies as conditional probabilities P(F|Q) and P(Q|F); the approach can work if F and Q are constructed as independent entities (a practical approach), and independence was actually provided in the book's examples of successful medical tasks. The reviewer's position: without testing independence, the RED approach may not produce rigorously engineered decisions.
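What an "interaction between arguments" can look like is sketched below with a toy monotonicity check in Python; this is only the reviewer's hypothetical illustration of a non-monotone interaction between findings, not the published interactive monotone Boolean function procedure.

```python
# Toy illustration (hypothetical findings and rules): an "interaction"
# between two findings shows up as non-monotonicity of a diagnostic rule,
# so the findings' supports cannot be combined truth-functionally one by one.
from itertools import product

def is_monotone(f, n):
    """Brute-force check: flipping any input from 0 to 1 never flips f from 1 to 0."""
    for x in product([0, 1], repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = list(x)
                y[i] = 1
                if f(*x) and not f(*y):
                    return False
    return True

# Plain conjunction of findings: monotone, no reversing interaction.
print(is_monotone(lambda pain, acidity: pain and acidity, 2))    # True

# Hypothetical interacting rule: relief after antacids reverses the meaning
# of pain, so the rule is non-monotone and the findings interact.
print(is_monotone(lambda pain, relief: pain and not relief, 2))  # False
```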
Figure 7. The future of the independence assumption. For some tasks designers can provide independent (non-interacting) arguments, but this is not a universally rigorous design approach: "the scope of unforeseen and unforeseeable interactions is vast" (p. 127). The success of fuzzy logic control justifies this statement. A special technique is needed to express interactions rigorously; such a technique has been developed for some hazardous medical applications (the breast cancer example). The logic of hazardous applications can be especially sensitive to the independence assumption.

Concluding remarks. The problem of safety in hazardous applications is a growing area of research and applications. The authors' approach is fruitful, and the book can be a useful reference for medical informatics professionals.

References
[1] Russell, S., Norvig, P., Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
[2] Bellman, R., Giertz, M., On the analytic formalism of the theory of fuzzy sets, Information Sciences, vol. 5, 1973, pp. 149-156.
[3] Kovalerchuk, B., Context spaces as necessary frames for correct approximate reasoning, International Journal of General Systems, vol. 25, no. 1, 1996, pp. 61-80.
[4] Kovalerchuk, B., Vityaev, E., Ruiz, J.F., Consistent knowledge discovery in medical diagnosis, IEEE Engineering in Medicine and Biology Magazine, vol. 19, no. 4, 2000, pp. 26-37.