1 CISC453 Winter 2010 Probabilistic Reasoning Part B: AIMA3e Ch 14

2 Overview 2
a roundup of approaches from AIMA3e
14.5 a survey of approximate methods
  alternatives to directly computing conditional probability tables for Bayesian networks
  using sampling to estimate CPTs for queries
  specifics of several sampling approaches
14.6 relational & first-order probability approaches
  expanding the reach of Bayesian network models
  moving toward first-order logic semantics
14.7 alternative uncertain reasoning approaches
  rule-based, Dempster-Shafer, fuzzy sets & fuzzy logic
14.8 summary

3 Rationale for non-exact approaches 3
the complexity of Bayesian network calculations: even the best algorithms are, in the worst case, exponential in the number of variables, though efficient calculation is possible for simple networks & queries about individual variables
but alternatives to exact calculation are feasible: we have available the priors and conditional probability tables associated with the nodes of the Bayesian network, acquired as part of constructing the network
given these, we can get approximate solutions for query probabilities through randomized sampling (Monte Carlo) algorithms: by repeated sampling we estimate the desired posterior probabilities & can control the accuracy of the approximation by the number of samples generated
of the multiple techniques available, we look at two: direct sampling & Markov chain sampling

4 Probability estimation by sampling 4
(1) direct sampling
given (a) a known probability distribution & (b) a source of uniformly distributed random numbers, we can simulate the sampling of actual events by sampling the distribution appropriately, as we do to simulate dice rolls, for example
we can apply this to a Bayesian net by using the prior and conditional probabilities associated with the nodes
we start at the top & sample in the order given by the topology of the network, using the sampled results from parent nodes to select the distribution for sampling at subsequent nodes
at the end this produces a sampled event - a set of sampled values, one for each variable in the net
see the algorithm on the next slide & examples for the "wet grass" Bayesian net on subsequent slides

5 Direct Sampling Algorithm 5
it generates events from a Bayesian network by passing over the topology & using the distributions selected by the values generated for parent nodes

function PRIOR-SAMPLE(bn) returns an event sampled from the prior specified by bn
  inputs: bn, a Bayesian network specifying joint distribution P(X1, ..., Xn)
  x ← an event with n elements
  foreach variable Xi in X1, ..., Xn do
    x[i] ← a random sample from P(Xi | parents(Xi))
  return x
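to make the procedure concrete, here is a minimal runnable Python sketch of PRIOR-SAMPLE for the "wet grass" network; the CPT numbers are not reproduced on these slides, so the standard values from the AIMA example (Cloudy 0.5; Sprinkler 0.1/0.5 given Cloudy; Rain 0.8/0.2 given Cloudy; WetGrass 0.99/0.9/0.9/0.0 given Sprinkler, Rain) are assumed

import random

# CPTs for the "wet grass" network, assuming the standard AIMA values.
# Each table gives P(variable = true) keyed by the parents' sampled values.
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.10, False: 0.50}                    # keyed by Cloudy
P_RAIN      = {True: 0.80, False: 0.20}                    # keyed by Cloudy
P_WETGRASS  = {(True, True): 0.99, (True, False): 0.90,    # keyed by
               (False, True): 0.90, (False, False): 0.00}  # (Sprinkler, Rain)

def prior_sample():
    """PRIOR-SAMPLE: sample each variable in topological order, each from
    the CPT row selected by its parents' already-sampled values."""
    cloudy    = random.random() < P_CLOUDY
    sprinkler = random.random() < P_SPRINKLER[cloudy]
    rain      = random.random() < P_RAIN[cloudy]
    wetgrass  = random.random() < P_WETGRASS[(sprinkler, rain)]
    return {"Cloudy": cloudy, "Sprinkler": sprinkler,
            "Rain": rain, "WetGrass": wetgrass}

each call returns one complete event; the walkthrough on the next few slides traces exactly one such call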

6 Direct Sampling Algorithm 6 the sample network for the "wet grass" problem

7 Direct Sampling Algorithm 7 the next few slides illustrate the operation of the algorithm on the "wet grass" network yellow means sample from this distribution

8 Direct Sampling Algorithm 8 the sampled value for Cloudy is true, shown in green

9 Direct Sampling Algorithm 9 sample from the corresponding rows of the CPTs of the child nodes

10 Direct Sampling Algorithm 10 for the Sprinkler variable, the sampled value is false, shown in red

11 Direct Sampling Algorithm 11 for the Rain variable, the sampled value is true

12 Direct Sampling Algorithm 12 sample from the F T row (Sprinkler = false, Rain = true) of the WetGrass CPT

13 Direct Sampling Algorithm 13 for the WetGrass variable, the sampled value is true

14 Direct Sampling Algorithm 14
this process yields sample events that reflect the probabilities in the joint distribution given by the Bayesian net
for a given total number of sample events, the fraction matching a specific event is the answer to the query about that event - an estimate of its probability
in the large-sample limit this fraction converges to the actual probability of the event, so the process produces consistent estimates
for partially specified events (on m variables, where m <= n), the estimate is just the fraction of all complete events produced by the algorithm that match the partially specified event
the REJECTION-SAMPLING algorithm computes conditional probabilities using PRIOR-SAMPLE to generate events: it rejects sampled events that are not consistent with the evidence, then counts, among the remaining events, the proportion taking each value of the query variable

15 Rejection Sampling 15
the REJECTION-SAMPLING algorithm

function REJECTION-SAMPLING(X, e, bn, N) returns an estimate of P(X | e)
  inputs: X, the query variable
          e, observed values for variables E
          bn, a Bayesian network
          N, the total number of samples to be generated
  local variables: N, a vector of counts for each value of X, initially zero
                   (boldface in AIMA3e, distinct from the sample count N)
  for j = 1 to N do
    x ← PRIOR-SAMPLE(bn)
    if x is consistent with e then
      N[x] ← N[x] + 1 where x is the value of X in x
  return NORMALIZE(N)
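a minimal Python sketch of rejection sampling, building on the prior_sample definition above; the query P(Rain | Sprinkler = true) and the sample count are illustrative choices, not from the slides

from collections import Counter

def rejection_sampling(query_var, evidence, n_samples=100_000):
    """Estimate P(query_var | evidence): generate prior samples, discard
    those inconsistent with the evidence, and normalize the counts."""
    counts = Counter()
    for _ in range(n_samples):
        event = prior_sample()
        if all(event[var] == val for var, val in evidence.items()):
            counts[event[query_var]] += 1
    total = sum(counts.values())            # samples that survived rejection
    return {value: c / total for value, c in counts.items()}

# P(Rain | Sprinkler = true); the exact answer under the assumed CPTs is 0.3
print(rejection_sampling("Rain", {"Sprinkler": True}))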

16 Rejection Sampling 16
the rejection sampling approach yields estimates that converge to the true probabilities as the number of samples increases, with the standard deviation of the error proportional to 1/sqrt(n), where n is the number of samples
but as E (the set of evidence variables) grows, the proportion of samples consistent with e decreases exponentially & the process becomes impractical for complex problems: it begins to look like counting (rare) real-world events to estimate conditional probabilities
a partial solution to the inefficiency of rejection sampling is offered by likelihood weighting

17 Likelihood Weighting 17
likelihood weighting improves the efficiency of sampling: it avoids generating large numbers of events that are irrelevant to the conditional probability of interest by producing only events consistent with the evidence values e
so, it fixes the evidence variables E and samples only the nonevidence variables: then every event generated is consistent with the evidence
but fixing the evidence variables means it isn't enough just to count events; rather, we weight each sampled event by the likelihood it accords to the evidence
the weight is the product of the conditional probabilities of each evidence variable, given its parents
the result is that events in which the actual evidence appears unlikely are given less weight
see AIMA3e pp for the weight calculation procedure
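AIMA3e's WEIGHTED-SAMPLE / LIKELIHOOD-WEIGHTING pseudocode is not reproduced on these slides, so here is a hand-rolled sketch for the wet-grass network only, continuing from the definitions above: evidence variables are fixed and contribute a likelihood factor to the weight, nonevidence variables are sampled as before

def _fix_or_sample(var, p_true, event, weight):
    """Evidence variable: multiply in its likelihood given the sampled parents.
    Nonevidence variable: sample it and leave the weight unchanged."""
    if var in event:
        return weight * (p_true if event[var] else 1.0 - p_true)
    event[var] = random.random() < p_true
    return weight

def likelihood_weighting(query_var, evidence, n_samples=100_000):
    """Estimate P(query_var | evidence) from weighted, always-consistent samples."""
    totals = Counter()
    for _ in range(n_samples):
        event, w = dict(evidence), 1.0      # evidence variables start fixed
        w = _fix_or_sample("Cloudy", P_CLOUDY, event, w)
        w = _fix_or_sample("Sprinkler", P_SPRINKLER[event["Cloudy"]], event, w)
        w = _fix_or_sample("Rain", P_RAIN[event["Cloudy"]], event, w)
        w = _fix_or_sample("WetGrass",
                           P_WETGRASS[(event["Sprinkler"], event["Rain"])], event, w)
        totals[event[query_var]] += w       # accumulate weight, not a count
    z = sum(totals.values())
    return {value: wt / z for value, wt in totals.items()}

# the query discussed on the next slide; under the assumed CPTs the
# exact answer for Rain = true works out to about 0.98
print(likelihood_weighting("Rain", {"Cloudy": True, "WetGrass": True}))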

18 Likelihood Weighting 18
issues with likelihood weighting
for a nonevidence variable Z i, the sampled values will be influenced by evidence among its ancestor nodes, but Z i will not be influenced by evidence variables that are not its ancestors
given the query P(Rain | Cloudy = true, WetGrass = true), the sampled Sprinkler & Rain values will include some samples with both false, though the (non-ancestor) evidence WetGrass = true makes these events impossible (they receive zero weight)

19 Likelihood Weighting 19
as with rejection sampling, the likelihood weighting estimates can be shown to be consistent
since all generated samples are used, the algorithm can be more efficient
problems arise as the number of evidence variables grows, since most samples will have very low weights, with only a small fraction agreeing appreciably with the evidence
it is even worse if the evidence variables happen to be late in the variable ordering, so that the nonevidence variables have no evidence among their parents & ancestors to guide the sample generation: then the samples will be mostly unrelated to the evidence of the query

20 Markov Chain Monte Carlo (MCMC) 20
another algorithmic approach to estimating posterior probabilities by sampling: note that this class of algorithms includes the WALKSAT and simulated annealing examples seen in earlier chapters
it operates by generating samples "incrementally", each from the previously sampled state, by randomly applying changes
one specific MCMC algorithm is Gibbs sampling:
1. fix the evidence variables
2. assume the other variables are in some arbitrary state
3. update by sampling a value for some nonevidence variable X i, conditioned on the current values of the variables in its Markov blanket (its parents + its children + the parents of its children - see the next slide for an example)
4. repeat, moving in the space of complete assignments, always keeping the evidence variables fixed & sampling a new value for a nonevidence variable

21 Markov Chain Monte Carlo (MCMC) 21 illustration a partial Bayes network showing the Markov blanket for X

22 The Gibbs Sampling Algorithm 22
here's the algorithm

function GIBBS-ASK(X, e, bn, N) returns an estimate of P(X | e)
  inputs: X, the query variable; e, observed values for variables E
          bn, a Bayesian network; N, the total number of samples to be generated
  local variables: N, a vector of counts for each value of X, initially zero
                   Z, the nonevidence variables in bn
                   x, the current state of the network, initially copied from e
  initialize x with random values for the variables in Z
  for j = 1 to N do
    foreach Zi in Z do
      set the value of Zi in x by sampling from P(Zi | mb(Zi))
      N[x] ← N[x] + 1 where x is the value of X in x
  return NORMALIZE(N)
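a Python sketch of Gibbs sampling for the same wet-grass network, continuing from the definitions above; rather than assembling the Markov-blanket product explicitly, it renormalizes the full joint over the two values of the variable being resampled, which is proportional to P(Zi | mb(Zi)) and cheap in a four-variable network

def _joint(e):
    """Full joint probability of a complete assignment (product of CPT entries)."""
    p = P_CLOUDY if e["Cloudy"] else 1.0 - P_CLOUDY
    for var, p_true in (("Sprinkler", P_SPRINKLER[e["Cloudy"]]),
                        ("Rain", P_RAIN[e["Cloudy"]]),
                        ("WetGrass", P_WETGRASS[(e["Sprinkler"], e["Rain"])])):
        p *= p_true if e[var] else 1.0 - p_true
    return p

def gibbs_ask(query_var, evidence, n_samples=100_000):
    """GIBBS-ASK: walk over complete assignments, resampling one nonevidence
    variable at a time and counting every state visited."""
    variables = ("Cloudy", "Sprinkler", "Rain", "WetGrass")
    nonevidence = [v for v in variables if v not in evidence]
    state = dict(evidence)
    for v in nonevidence:                    # arbitrary initial state
        state[v] = random.random() < 0.5
    counts = Counter()
    for _ in range(n_samples):
        for v in nonevidence:
            p = {}
            for value in (True, False):      # joint under each value of v
                state[v] = value
                p[value] = _joint(state)
            # for this query the denominator is always positive;
            # a general version would guard against a zero sum
            state[v] = random.random() < p[True] / (p[True] + p[False])
            counts[state[query_var]] += 1
    return {value: c / sum(counts.values()) for value, c in counts.items()}

# e.g. P(Rain | Sprinkler = true, WetGrass = true); under the assumed CPTs
# the exact answer for Rain = true is about 0.32
print(gibbs_ask("Rain", {"Sprinkler": True, "WetGrass": True}))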

23 Markov Chain Monte Carlo (MCMC) 23
properties of Gibbs sampling MCMC
each state visited is a sample, so the query variable's posterior probability is estimated from the proportions of states visited
it can be shown that Gibbs sampling returns consistent probability estimates: this relies on the sampling process reaching a condition of dynamic equilibrium in which the long-run fraction of time spent in each state is exactly proportional to its posterior probability
there is a rather long & detailed proof in AIMA3e, conveniently omitted here

24 Relational & First-Order Probability 24
Bayesian nets are "propositional" in their basics: a fixed, finite set of random variables, each with a fixed domain of values
we could extend their use to many more problems if we somehow included first-order capabilities: capturing relations among objects, along with quantification over variables that stand for objects
the example: an online book seller wishes to capture customer evaluations of books, summarized as a posterior distribution over book quality, given the evidence
any simple summary statistic fails to capture variations in the kindness &/or honesty of evaluators
examining a Bayesian net representation for expressing the recommendation relationships shows that it becomes impractical once the number of customers & books is non-trivial: see the figure on the next slide

25 Relational & First-Order Probability 25
the book recommendation problem as Bayes nets: (a) shows the net for 1 book & 1 customer, (b) the corresponding net for 2 books & 2 customers
for both, Honest(C i ) is Boolean, & the other variables are assumed to be on a 1 to 5 integer scale
the net structure in (b) shows repetition for Recommendation(c, b) & indicates that the CPTs for all Recommendation(c, b) variables will be identical (as will those for Honest(c), and so on)
our goal: capture this commonality in a first-order-like way

26 Relational & First-Order Probability 26
we return to the idea of possible worlds, for both probability & first-order representations
if we could assign probabilities to the possible worlds of FOL - that is, to the models that result from an interpretation & a mapping of constants to objects, predicates to relations, & function symbols to functional relations - then the probability of a FOL sentence φ could be calculated as in Bayes nets, by summing over the corresponding possible worlds:
P(φ) = Σ_{ω : φ is true in ω} P(ω)

27 Relational & First-Order Probability 27
relational & first-order models: as we've seen before, one problem is that with function symbols, the set of FOL models is infinite
a solution may be feasible by adopting alternative semantics, such as those used for database systems - in particular, the unique names assumption & domain closure
unique names: distinct constant names denote distinct objects
domain closure: the only objects are those that are explicitly named
(figure: top, FOL possible-worlds semantics; bottom, database semantics)

28 Relational & First-Order Probability 28
these database semantics yield relational probability models (RPMs)
RPMs do not make the CWA (closed world assumption), which says that unknown facts are false, since it is just those things we want to reason about probabilistically
remaining points are referenced on later slides
also note that, unfortunately, RPMs fail when their assumptions don't hold
examples from the book recommendation problem: multiple ISBNs for the same "logical" book; multiple customer IDs for a single customer, particularly one who wants to skew a recommendation system (in what is called a sybil attack)
both existence uncertainty (what are the real objects?) & identity uncertainty (which symbols really refer to the same object?) mean that we'll eventually need to adopt a fuller version of FOL semantics

29 Relational Probability Models 29
RPMs include constant, function & predicate symbols; we treat predicates as functions that return a Boolean value
we assume there is a type signature for each function, giving the type of each argument & of the function's value; this eliminates spurious possible worlds, provided we know the type of each object
we define an RPM in terms of the types & type signatures; here are examples for the book-recommendation problem
types: Customer, Book
signatures: Honest: Customer → {t, f}
            Kindness: Customer → {1, 2, 3, 4, 5}
            Quality: Book → {1, 2, 3, 4, 5}
            Recommendation: Customer x Book → {1, 2, 3, 4, 5}
constants: the customer & book names that the retailer records: C1, C2, B1, B2
the random variables of the RPM are derived from these

30 Relational Probability Models 30
the random variables: instantiate each function with each possible combination of objects
write out the dependencies that govern the random variables:
Honest(c) ~ <0.99, 0.01>
Kindness(c) ~ <0.1, 0.1, 0.2, 0.3, 0.3>
Quality(b) ~ <0.05, 0.2, 0.4, 0.2, 0.15>
Recommendation(c, b) ~ RecCPT(Honest(c), Kindness(c), Quality(b))
where RecCPT is a conditional distribution with 2 x 5 x 5 = 50 rows of 5 entries each
the semantics of the RPM are given by instantiating these dependencies for all known constants (see part (b) of the earlier figure), forming a Bayesian network that defines a joint distribution over the RPM's random variables
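as a small illustration of how the ground random variables arise, here is a hypothetical Python encoding of the types and signatures above; it performs only the grounding step (one variable per function/argument combination), not the CPT wiring of the unrolled network

import itertools

# hypothetical plain-data encoding of the book-recommendation RPM's signatures
OBJECTS = {"Customer": ["C1", "C2"], "Book": ["B1", "B2"]}
SIGNATURES = {
    "Honest":         ("Customer",),
    "Kindness":       ("Customer",),
    "Quality":        ("Book",),
    "Recommendation": ("Customer", "Book"),
}

def ground_variables(objects, signatures):
    """One ground random variable per function and argument combination."""
    rvs = []
    for fn, arg_types in signatures.items():
        for args in itertools.product(*(objects[t] for t in arg_types)):
            rvs.append((fn, args))
    return rvs

# 2 + 2 + 2 + 4 = 10 ground variables for two customers and two books
print(ground_variables(OBJECTS, SIGNATURES))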

31 Relational Probability Models 31
RPMs: context-specific independence - a variable is independent of some parents, given certain values of others
so Recommendation(c, b) is independent of Kindness(c) & Quality(b) when Honest(c) = false
we can also capture the idea that a fan of a particular author will always give that author's books a 5, independent of quality
these dependencies are expressed in a form that resembles a programming-language conditional statement, but the inference algorithm does not "know" the value of the conditional test; rather, the posterior probabilities will reflect a high probability that a customer is a fan of an author when the customer gives only 5s to that author's books and is otherwise not particularly kind

32 Relational Probability Models 32
how do you do inference in an RPM?
convert to an equivalent Bayes net by a process called "unrolling": the methods that unroll an RPM into a Bayesian network are analogous to the propositionalization process for FOL inference
just as FOL resolution instantiates logical variables only as needed for inference, lifting the inference process above the level of ground sentences, similar techniques can be applied to the random variables of RPMs when ground random variables differ only in the constant symbols used to generate them

33 Open Universe Probability Models 33
another acronym: OUPMs
many real-world problems do not allow the unique names and domain closure assumptions and require OUPMs, which adopt standard FOL semantics
these in turn extend the "generative" property of the system: Bayes nets generate possible worlds as assignments of values to variables; RPMs generate sets of events through instantiation of logical variables in predicates/functions; OUPMs generate possible worlds through the addition of objects
there are inference algorithms that can derive consistent posterior probabilities for FOL queries, given the revised representation

34 FOL & Probabilistic Reasoning 34
the state of the art? first-order probabilistic reasoning is a relatively new research area & its techniques are not well established
they may be applicable to many real-world problems that involve uncertain information: AIMA3e cites example domains including computer vision, text understanding, & military intelligence analysis - even more generally, the interpretation of sensor data

35 More Uncertain Reasoning: Other Approaches
though using probability to model uncertainty has a long history in many sciences, AI was slow to adopt probabilistic techniques; one reason was probability's numeric character, when some in AI have seen the field as requiring symbolic & qualitative approaches
alternative AI approaches have included:
default reasoning systems: conclusions don't have a degree of belief, but can be superseded when a better reason for some alternative is found; these have had a degree of success
rule-based systems: the rules have some associated numeric uncertainty property; these were popular in expert systems
Dempster-Shafer theory: it uses interval-valued degrees of belief to capture knowledge about the probability of a proposition
probability & logic share the ontological commitment that propositions are true or false, though an agent may be uncertain about which holds; fuzzy logic's ontology, by contrast, allows vagueness about the degree of truth of a proposition

36 Rule-Based Uncertain Reasoning 36
rule-based systems have some desirable properties, including truth-functionality: the truth of a complex logical sentence depends only on the truth of its components
but this holds for probability only when strong independence assumptions are made
over the history of AI there have been attempts to develop uncertain reasoning schemes that retain the advantages of logical representation, but simple examples show that the truth-functional property is not appropriate for general uncertain reasoning
it is only successful with highly restricted tasks & carefully engineered rule bases; as the rule base expands, it is difficult to avoid undesirable interactions among rules
as a result, Bayesian networks have mostly replaced rule-based approaches

37 Dempster-Shafer theory 37
Dempster-Shafer theory takes another approach: it deals not with the probabilities of propositions but with the probability that the evidence supports propositions
the measure of belief is a belief function, written Bel(A)
Bel(A) and Bel(¬A) need not sum to 1.0, depending on the evidence available - the difference is a "gap"
when there is a gap, decision problems can be defined so that a Dempster-Shafer system cannot reach a decision
the Bayesian model can handle these cases: as shown in the text's biased-coin example, it uses the evidence of the outcomes of coin flips to compute a posterior distribution over a Bias random variable, allowing beliefs to change as further information is gathered

38 Fuzzy Sets, Fuzzy Logic 38
vagueness in fuzzy sets & fuzzy logic
fuzzy set theory provides a method of specifying how well some object matches a vague description; it captures not uncertainty about the world, but degrees of matching to some linguistically vague term like "tall"
Tall becomes a fuzzy predicate that, for a specific object, has a truth value between 0 & 1, and thus defines a set whose membership boundaries are not sharp
fuzzy logic provides tools for reasoning over expressions about membership in fuzzy sets
fuzzy logic is truth-functional (the truth of a complex sentence depends only on the truth values of its components); this causes problems because there may be interactions among the components that are not accounted for in the formalism
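a minimal Python illustration of truth-functional fuzzy operators, using the common min/max/complement definitions; the membership degree is a made-up number, and the last lines show the kind of component interaction the formalism cannot see

def fuzzy_and(a, b): return min(a, b)       # common t-norm choice
def fuzzy_or(a, b):  return max(a, b)       # common s-norm choice
def fuzzy_not(a):    return 1.0 - a

tall = 0.7                                  # made-up degree for Tall(john)
print(fuzzy_and(tall, fuzzy_not(tall)))     # prints 0.3, not 0:
# "tall and not tall" should be impossible, but truth-functionality
# evaluates it from the component degrees alone, ignoring their dependence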

39 Fuzzy Sets, Fuzzy Logic 39
fuzzy control systems use fuzzy rules to map from real-valued input parameters to output parameters
they are used successfully & commercially in many systems, but that may just reflect the concise, intuitive form of the rules rather than any general applicability to uncertain reasoning
when accounts of fuzzy logic appeal to probability theory, they may give up the truth-functional property
fuzzy approaches are not fully developed or understood in the sense of properly representing the relationship between linguistic observations & continuous quantities

40 Summary 40
a summary:
stochastic sampling techniques (likelihood weighting, Markov chain Monte Carlo) provide consistent estimates of posterior probabilities even for large networks (too large for the exact algorithms)
we can combine probability theory & FOL representational tools to get systems that reason under uncertainty
relational probability models adopt semantics that allow a well-defined probability distribution, with an equivalent Bayesian network
open universe probability models relax the semantic restrictions to allow existence & identity uncertainty, & define probability distributions over an infinite space of first-order possible worlds
among alternative systems proposed for reasoning under uncertainty, truth-functional systems typically fail to capture interactions among components
