
Can Bayesian models have normative pull on human reasoners?

Frank Zenker 1,2,3
1 Lund University, Department of Philosophy & Cognitive Science, LUX, Box 192, 22100 Lund, Sweden
2 Universität Konstanz, Philosophie, Postfach D9, 78457 Konstanz, Germany
3 Slovak Academy of Sciences, Institute of Philosophy, Klemensova 19, 81364 Bratislava, Slovak Republic
frank.zenker@fil.lu.se

Abstract: Rejecting the claim that human reasoning can approximate generally NP-hard Bayesian models, in the sense that the mind's actual computations come close to, or are like, the inferences that Bayesian models dictate, this paper addresses whether a Bayesian model can have normative pull on human reasoners. For such normative pull to arise, we argue, a well-defined and empirically supported approximation relation is required, but is broadly absent, between (i) human reasoning on the ground and (ii) the behavior of a non-NP-hard model. We discuss a responsible stance on this issue.

Extended abstract: Cognitive science has been moving away from logical models and towards probabilistic ones, particularly Bayesian models (Johnson-Laird et al., 2015). These models have generally featured in accounts of human reasoning at the algorithmic level (Marr, 1982). Regular slippage between Marr's three levels (Oaksford, 2015), however, may have contributed to the virulence of the claim that Bayesian models have normative pull on actual reasoners, and so may serve as a normative yardstick for rational human behavior.

The slippage here is between the use of "rational" in Anderson's (1990) program of rational analysis (Griffiths et al., 2012), where "rational" denotes behavior adapted optimally to an organism's representation of a task and an environment, on one hand, and the more traditional, fuller sense of "rational", namely normatively correct inferential choice-behavior, on the other. This latter sense is invoked, for instance, when preferences are said to remain transitive and thus to save agents who (must) make choices from being Dutch-booked. Since choice broadly includes choosing what to believe and how to act, the slippage also occurs in recent work on reconstructing and evaluating natural language argumentation (e.g., Korb, 2003; Hahn & Oaksford, 2007; Hahn & Hornikx, 2016; Zenker, 2013). Slippage may also have productive aspects, of course. But it remains at best unclear how the normative pull of a Bayesian model properly arises, if it does (Woods, 2013; Oaksford & Chater, 2012).

The paper's first part reviews the more technical side of an ongoing debate regarding the claim that human reasoning can approximate a Bayesian model, in the sense that the mind's actual computations come close to, or may be similar to, the behavior that the formal model dictates. Theoretical work in computer science shows that Bayesian reasoning is generally NP-hard (Roth, 1996), and can even remain so as the complexity of a Bayesian network is reduced. By contrast, under suitable assumptions regarding, for instance, the parameter space, the network size, and the relevance of intermediate variables, some Bayesian algorithms may deliver outputs at time-scales that are realistic for human reasoners (Kwisthout, 2011; Kwisthout et al., 2011; van Rooij et al., 2012; van Rooij, 2015; van Rijn et al., 2013). But these assumptions pertain neither to Bayes' update rule nor to its role in making the network run.
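Since the argument turns on what Bayes' update rule demands computationally, a minimal sketch may help fix ideas. The Python snippet below (all probabilities are hypothetical, chosen purely for illustration) computes an exact posterior in a two-cause toy network by brute-force enumeration of the joint distribution. The enumeration loop visits every assignment of the unobserved variables; for n binary variables this means 2**n states, the exponential blow-up behind the NP-hardness results cited above.

```python
import itertools

# Toy network: Rain -> WetGrass <- Sprinkler. Numbers are invented
# for illustration only.
p_rain = 0.2
p_sprinkler = 0.3

def p_wet(rain, sprinkler):
    # P(WetGrass = True | Rain, Sprinkler), a simple hand-set table.
    if rain and sprinkler:
        return 0.99
    if rain or sprinkler:
        return 0.9
    return 0.05

def posterior_rain_given_wet():
    """Exact inference by brute-force enumeration of the joint.

    For n binary variables the loop below visits 2**n states;
    nothing in Bayes' rule itself caps that number.
    """
    num = 0.0  # accumulates P(Rain = True, Wet = True)
    den = 0.0  # accumulates P(Wet = True)
    for rain, sprinkler in itertools.product([True, False], repeat=2):
        prior = (p_rain if rain else 1 - p_rain) * \
                (p_sprinkler if sprinkler else 1 - p_sprinkler)
        joint = prior * p_wet(rain, sprinkler)
        den += joint
        if rain:
            num += joint
    # Bayes' update rule: P(Rain | Wet) = P(Rain, Wet) / P(Wet)
    return num / den

print(round(posterior_rain_given_wet(), 3))
```

In this three-variable case the enumeration is trivially fast; the point is that the update rule itself does not bound the state space, which is why tractability must be bought with extra assumptions about the network.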
Therefore, some so-modelled episode of reasoning R1 may plausibly be an algorithmic-level candidate for a formal model of actual reasoning, while this may not hold for another such episode R2. What obtains, or not, thus depends on the particulars of a case. While knowledge of the exact condition(s) under which a Bayesian model is not NP-hard is subject to ongoing research, extant results indicate that smaller, more local episodes of reasoning could plausibly be Bayesian inferences. How close outputs from such episodes then come to those delivered by heuristics is an open question (see Blackmond Laskey & Martignon, 2014). It nevertheless seems clear that a responsibly meaningful sense of approximation has not been forthcoming. Hence, the general claim that human reasoning may approximate Bayesian inference is currently not readily meaningful.

As the paper's second part argues, against this background it may seem that Bayesian models of inference can after all have normative pull on human reasoners, namely for those local cases which are computationally tractable. For the sake of argument, however, one can even grant the (counterfactual) claim that a well-defined and mathematically demonstrated approximation relation holds between a computationally tractable model and an NP-hard model. A related question now is whether one has a well-defined and empirically supported approximation between human behavior on the ground and the behavior of this tractable model (Fig. 1).

[Figure 1: a schematic with three nodes, "computationally non-tractable model", "computationally tractable model", and "human reasoning on the ground", linked by two arrows.]
Fig. 1: Granting the approximation relation between a non-tractable and a tractable model (left arrow) shifts the issue towards ascertaining whether an approximation relation holds between the tractable model and actual human reasoning (right arrow).

This new question obviously shifts the problem and makes empirical facts relevant. Since computationally tractable models can easily, and massively, outrun what humans can do even when they perform at their best, this places some weight on the ascertainability of an approximation relation between actual behavior and the tractable model.
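The open question of how closely heuristic outputs track Bayesian ones can be made concrete with a toy comparison, in the spirit of (though much simpler than) the fast-and-frugal-tree analyses of Blackmond Laskey and Martignon (2014). The Python sketch below (all cue validities are invented for illustration) classifies items as "risky" or "safe" either by a naive-Bayes posterior or by a three-cue fast-and-frugal tree, and counts how often the two decisions coincide.

```python
from itertools import product

# Hypothetical cue validities for a binary risk classification:
# P(cue_i = 1 | risky) = hit[i], P(cue_i = 1 | safe) = fa[i].
hit = [0.9, 0.8, 0.7]
fa  = [0.2, 0.3, 0.4]
prior_risky = 0.5

def bayes_decision(cues):
    """Naive-Bayes posterior for 'risky', thresholded at 0.5."""
    odds = prior_risky / (1 - prior_risky)
    for c, h, f in zip(cues, hit, fa):
        odds *= (h / f) if c else ((1 - h) / (1 - f))
    posterior = odds / (1 + odds)
    return posterior >= 0.5

def fft_decision(cues):
    """Fast-and-frugal tree: cue 1 can exit to 'risky', cue 2 can
    exit to 'safe', cue 3 decides. Cues are inspected in a fixed
    order; later cues are ignored once an exit fires."""
    if cues[0]:
        return True   # first cue present -> classify as risky
    if not cues[1]:
        return False  # second cue absent -> classify as safe
    return bool(cues[2])

# Agreement rate over all eight cue patterns.
patterns = list(product([0, 1], repeat=3))
agree = sum(bayes_decision(p) == fft_decision(p) for p in patterns)
print(f"{agree}/{len(patterns)} patterns classified identically")
```

With these particular numbers the two rules agree on six of the eight cue patterns; on the remaining two, the lexicographic tree exits before the later cues can overturn its verdict, which is exactly where the approximation question bites.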

The paper's third part hence discusses the reasonability of assuming that human reasoning can approximate the non-tractable model, an issue that Earman (1992) has aptly summarized as follows: "Eschewing the descriptive in favor of the normative does not erase all difficulties. For ought is commonly taken to imply can, but actual inductive agents can't, since they lack the logical and computational powers required to meet the Bayesian norms. The response that Bayesian norms should be regarded as goals toward which we should strive even if we always fall short is idle puffery unless it is specified how we can take steps to bring us closer to the goals" (p. 56). Rather than wishing to pass a final verdict, we primarily seek to make audiences (more) sensitive to this issue.

References

Anderson, J. R. (1990). The adaptive character of thought. Hillsdale, NJ: Erlbaum.
Blackmond Laskey, K., & Martignon, L. (2014). Comparing fast and frugal trees and Bayesian networks for risk assessment. In K. Makar, B. de Sousa, & R. Gould (Eds.), Sustainability in statistics education. Proceedings of the Ninth International Conference on Teaching Statistics (ICOTS9, July 2014), Flagstaff, Arizona, USA (pp. 1-6). Voorburg, The Netherlands: International Statistical Institute.
Earman, J. (1992). Bayes or bust? A critical examination of Bayesian confirmation theory. Cambridge, MA: MIT Press.
Griffiths, T. L., Tenenbaum, J. B., & Kemp, C. (2012). Bayesian inference. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning (pp. xx-yy). Oxford: Oxford University Press.
Hahn, U., & Oaksford, M. (2007). The rationality of informal argumentation: A Bayesian approach to reasoning fallacies. Psychological Review, 114, 704-732.
Hahn, U., & Hornikx, J. (2016). A normative framework for argument quality: Argumentation schemes with a Bayesian foundation. Synthese, 193(6), 1833-1873.
Johnson-Laird, P. N., Khemlani, S. S., & Goodwin, G. P. (2015). Logic, probability, and human reasoning. Trends in Cognitive Sciences, 19(4), 201-214.
Korb, K. (2003). Bayesian informal logic and fallacy. Informal Logic, 23, 41-70.
Kwisthout, J. (2011). Most probable explanations in Bayesian networks: Complexity and tractability. International Journal of Approximate Reasoning, 52, 1452-1469.
Kwisthout, J., Wareham, T., & van Rooij, I. (2011). Bayesian intractability is not an ailment that approximation can cure. Cognitive Science, 35, 779-784.
Marr, D. (1982). Vision. San Francisco, CA: W. H. Freeman.
van Rijn, H., Kwisthout, J., & van Rooij, I. (2013). Bridging the gap between theory and practice of approximate Bayesian inference. Cognitive Systems Research, 24, 2-8.
van Rooij, I., Wright, C. D., & Wareham, T. (2012). Intractability and the use of heuristics in psychological explanations. Synthese, 187, 471-487.

van Rooij, I. (2015). How the curse of intractability can be cognitive science's blessing. In D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, & P. P. Maglio (Eds.), Proceedings of the 37th Annual Meeting of the Cognitive Science Society (pp. 2839-2840). Austin, TX: Cognitive Science Society.
Roth, D. (1996). On the hardness of approximate reasoning. Artificial Intelligence, 82(1), 273-302.
Oaksford, M., & Chater, N. (2012). Normative systems: Logic, probability, and rational choice. In K. J. Holyoak & R. G. Morrison (Eds.), The Oxford handbook of thinking and reasoning (pp. xx-yy). Oxford: Oxford University Press.
Oaksford, M. (2015). Imaging deductive reasoning and the new paradigm. Frontiers in Human Neuroscience, 9, 101-114.
Woods, J. (2013). Epistemology mathematicized. Informal Logic, 33(2), 292-331.
Zenker, F. (2013). Bayesian argumentation: The practical side of probability. In F. Zenker (Ed.), Bayesian argumentation: The practical side of probability (pp. 1-11). Dordrecht: Springer.