Learning from just one or a few examples, and mostly unlabeled examples (semi-supervised learning).
1 Outline for lectures: Introduction; Cognition as probabilistic inference; Learning concepts from examples (continued); Learning and using intuitive theories (more structured systems of knowledge).
2 tufa tufa tufa Learning from just one or a few examples, and mostly unlabeled examples (semi-supervised learning).
3 Simple model of concept learning This is a blicket. Can you show me the other blickets?
4 Learning to learn: what object features count for word learning? 24-month-olds show the shape bias with simple novel objects; 20-month-olds do not (Landau, Smith & Jones, 1988). This is a dax. Show me the dax. Smith et al. (2002) trained 17-month-olds on labels for 4 artificial categories (wib, zup, lug, div). After 8 weeks of training (20 min/week), these 19-month-olds showed the shape bias.
5 Transfer to real-world vocabulary The intuition: Learn that shape varies across categories but is relatively constant within nameable categories. The puzzle: The shape bias is a powerful inductive constraint, yet can be learned from very little data.
6 Learning about feature variability (Kemp, Perfors & Tenenbaum, Dev. Science 2007)
7 Learning about feature variability (Kemp, Perfors & Tenenbaum, Dev. Science 2007)
8 A hierarchical Bayesian model Level 3: Prior expectations on bags in general Level 2: Bags in general Level 1: Bag proportions Data. All levels are inferred simultaneously.
9 A hierarchical Bayesian model Level 3: Prior expectations on bags in general Level 2: Bags in general: α = 0.1 (within-bag variability), β (overall population distribution) Level 1: Bag proportions Data
10 A hierarchical Bayesian model Level 3: Prior expectations on bags in general Level 2: Bags in general: α = 5 (within-bag variability), β (overall population distribution) Level 1: Bag proportions Data
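The two α settings on these slides can be simulated directly. A minimal sketch of the Level 1-2 generative process, assuming a Dirichlet-multinomial with concentration α and overall color distribution β (variable names are mine, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bags(alpha, beta, n_bags=4, n_draws=20):
    """Draw marble counts for several bags: each bag's color proportions
    theta come from Dirichlet(alpha * beta). Small alpha (e.g. 0.1) gives
    nearly pure bags; large alpha (e.g. 5) gives bags that mirror beta."""
    bags = []
    for _ in range(n_bags):
        theta = rng.dirichlet(alpha * beta)           # bag proportions (Level 1)
        bags.append(rng.multinomial(n_draws, theta))  # observed marbles (Data)
    return np.array(bags)

beta = np.array([0.5, 0.3, 0.2])     # overall distribution over 3 colors (Level 2)
pure_bags = sample_bags(0.1, beta)   # low within-bag variability: near-pure bags
mixed_bags = sample_bags(5.0, beta)  # high alpha: bags resemble the population mix
```

Printing the two arrays makes the contrast visible: with α = 0.1 nearly all 20 marbles in a bag share one color, while with α = 5 each bag's counts roughly track β.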
11 Learning the shape bias (Kemp, Perfors & Tenenbaum, Dev. Science 2007) Assuming independent Dirichlet-multinomial models for each dimension (wib, zup, lug, div), from training we learn that: shape varies across categories but not within categories; texture, color, and size vary both across and within categories.
12 Second-order generalization test (Kemp, Perfors & Tenenbaum, Dev. Science 2007) 1st-order vs. 2nd-order generalization. This is a dax. Show me the dax. Training then test: the blessing of abstraction.
13 A more realistic model (Perfors & Tenenbaum, Proc. Cog. Sci. 2009) Prior expectations on categories in general Categories in general Individual categories Data (unobserved)
14 A more realistic model (Perfors & Tenenbaum, Proc. Cog. Sci. 2009) Prior expectations on categories in general Categories in general Individual categories Data (labeled examples y, z)
15 A more realistic model (Perfors & Tenenbaum, Proc. Cog. Sci. 2009) Prior expectations on categories in general Categories in general Individual categories Data (labeled examples y, z) 2nd-order generalization test: which category does a new example belong to?
16 A more realistic model (Perfors & Tenenbaum, Proc. Cog. Sci. 2009) Prior expectations on categories in general: learning the base distribution of a DP mixture Categories in general Individual categories Data (labeled examples y, z) 2nd-order generalization test: which category does a new example belong to?
17 Model vs. People (Perfors & Tenenbaum, Proc. Cog. Sci. 2009): % correct generalization for 1st-order and 2nd-order generalization, with category labels absent vs. present, plotted at each % coherence level.
18 Towards more natural concepts
19 Towards more natural concepts CRP mixture:
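The CRP prior behind such a mixture can be written in a few lines. A minimal sketch of sequential seating (the parameters and names are my own, not from the slides):

```python
import numpy as np

def crp_partition(n_items, alpha, rng):
    """Chinese restaurant process: each item joins an existing cluster k
    with probability proportional to the cluster's size, or starts a new
    cluster with probability proportional to the concentration alpha."""
    assignments, counts = [], []
    for _ in range(n_items):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

rng = np.random.default_rng(1)
z = crp_partition(10, alpha=1.0, rng=rng)  # a random partition of 10 objects
```

Because the number of clusters is drawn rather than fixed, model complexity grows with the data, which is the nonparametric flexibility the slides appeal to.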
20 How many different ways to structure a domain? (Shafto, Kemp, Mansinghka, Tenenbaum, 2006; submitted) CrossCat: nonparametric clustering over features, with a different clustering of objects for each feature-cluster.
21 How many different ways to structure a domain? (Shafto, Kemp, Mansinghka, Tenenbaum, 2006; submitted) CrossCat: nonparametric clustering over features, with a different clustering of objects for each feature-cluster. Evidence for CrossCat-like learning in humans: - Sorting natural categories - Sorting artificial categories - Predicting values for novel features and novel objects
22 Learning relational concepts CONCEPTS Professors Graduate students Undergraduates RELATIONSHIPS Professors give advice to Grad students and Undergrads. Grad students give advice to Undergrads. Undergrads give advice to no one.
23 Learning relational concepts CONCEPTS Magnets Magnetic objects Non-magnetic objects RELATIONSHIPS Magnets interact with each other. Magnets and Magnetic objects interact. Magnetic objects do not interact with each other. Non-magnetic objects do not interact with anything.
24 Learning relational concepts: a people × people matrix; e.g., person 1 gives advice to person 9.
25 Learning relational concepts: sorting the people × people "gives advice to" matrix by cluster (Profs, Grads, Ugrads) reveals block structure.
26 Infinite Relational Model (IRM) (Kemp, Griffiths, Tenenbaum, Yamada, & Ueda, 2006)
27 Learning algorithm Continuous parameters (weights/probabilities) are integrated out analytically; inference over cluster assignments uses Gibbs sampling with split-merge moves (iterations 1-6 shown).
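The quantity such a sampler scores can be sketched for a single binary relation: with Beta priors on each block's link probability, the continuous parameters integrate out to a Beta-Bernoulli marginal. The toy relation below (loosely mimicking "gives advice to") and all names are illustrative, not the authors' code:

```python
import math

def betaln(a, b):
    """Log of the Beta function, via log-gamma."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def irm_relation_score(R, z, a=1.0, b=1.0):
    """Collapsed log-likelihood of binary relation R under partition z:
    each cluster pair (p, q) has a Bernoulli link probability that is
    integrated out analytically under a Beta(a, b) prior."""
    n, K = len(z), max(z) + 1
    score = 0.0
    for p in range(K):
        for q in range(K):
            rows = [i for i in range(n) if z[i] == p]
            cols = [j for j in range(n) if z[j] == q]
            ones = sum(R[i][j] for i in rows for j in cols)
            total = len(rows) * len(cols)
            score += betaln(a + ones, b + total - ones) - betaln(a, b)
    return score

# Toy relation with two clear groups: {0, 1} advise {2, 3}, nothing else.
R = [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
good = irm_relation_score(R, [0, 0, 1, 1])  # matches the block structure
bad = irm_relation_score(R, [0, 1, 0, 1])   # cuts across it
```

A Gibbs sweep would simply compare such scores for each object's candidate cluster assignments; the partition that respects the block structure scores higher.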
28 The causal blocks world (Tenenbaum and Niyogi, 2003)
29 The causal blocks world (Tenenbaum and Niyogi, 2003)
30 Training and test trials; learning curves plot the judged probability of lighting up against the number of objects observed (pre vs. post, same group vs. different group), alongside model predictions.
31 Constructing semantic networks (Collins & Quillian, 1969)
32 Upper level medical ontology
33 Upper level medical ontology Biomedical predicate data used to construct the ontology in UMLS (McCray et al.): 134 concepts (e.g., enzyme, hormone, organ, disease, cell function); predicates such as affects(hormone, organ), complicates(enzyme, cell function), treats(drug, disease), diagnoses(procedure, disease).
34 Learning semantic networks with IRM: concept × concept × predicate. Biomedical predicate data from UMLS (McCray et al.): 134 concepts (e.g., enzyme, hormone, organ, disease, cell function); predicates such as affects(hormone, organ), complicates(enzyme, cell function), treats(drug, disease), diagnoses(procedure, disease).
35 Learning semantic networks with IRM e.g., Diseases affect Organisms Chemicals interact with Chemicals Chemicals cause Diseases
36 Learning semantic networks with IRM Relations between clusters: e.g., Diseases affect Organisms Chemicals interact with Chemicals Chemicals cause Diseases
37 Extracting semantic networks from text via relational clustering (Kok & Domingos 2008) Tested several algorithms for relational clustering on TextRunner data: ~2 million triples of the form R(x, y), e.g., upheld(court, ruling), named_after(jupiter, Roman_god); 10,214 R symbols, 8,942 x symbols, 7,995 y symbols (each appearing >25 times).
38 Annotated hierarchies model (Roy, Kemp, Mansinghka & Tenenbaum, 2007)
39 Annotated hierarchies model (Roy, Kemp, Mansinghka & Tenenbaum, 2007)
40 The Mondrian Process (Roy & Teh, 2008; in prep)
41 The Mondrian Process (Roy & Teh, 2008; in prep)
42 tufa tufa tufa Learning from just one or a few examples, and mostly unlabeled examples (semi-supervised learning).
43 Learning words for objects tufa tufa tufa (Collins & Quillian, 1969) (IT population responses: Hung et al., 2005; cf. Kiani et al., 2007)
44 Learning words for objects Bayesian inference over a tree-structured hypothesis space (Xu & Tenenbaum, Psych. Review 2007; Schmidt & Tenenbaum, in prep) tufa tufa tufa
45 Learning words for objects Bayesian inference over a tree-structured hypothesis space (Xu & Tenenbaum, Psych. Review 2007; Schmidt & Tenenbaum, in prep) People vs. Model tufa tufa tufa
46 Hierarchical Bayesian framework F: form (tree); S: structure (tufa tufa tufa); D: data (features F1-F4).
47 Learning to learn: what is the right form of structure for the domain? F: form Tree Space Order S: structure X1 X2 X3 X4 X5 X6 X2 X5 X4 X1 X3 X6 X1 X2 X3 X4 X5 X6 Features D: data X1 X2 X3 X4 X5 X6
48 The value of structural form knowledge: inductive constraints (bias). Mystery city: average annual temperature 66°F / 19°C; voted 60% for George W. Bush; popular foods are fried and BBQ.
49 Property induction (Kemp & Tenenbaum, Psych. Review 2009; Shafto et al., Cognition 2008) Given that {X1, ..., Xn} have property P, how likely is it that Y does? Compared structures: tree (e.g., horses → all mammals) vs. 2D space (e.g., Minneapolis → Houston).
50 Property induction (Kemp & Tenenbaum, Psych. Review 2009) Given that {X1, ..., Xn} have property P, how likely is it that Y does? Tree: property ~ N(0, (Δ + σ⁻² I)⁻¹) (Zhu, Lafferty & Ghahramani, 2003); 2D space: property ~ N(0, exp(−‖x_i − x_j‖/σ)) (cf. Lawrence, 2004).
51 Property induction (Kemp & Tenenbaum, Psych. Review 2009; Shafto et al., Cognition 2008) Given that {X1, ..., Xn} have property P, how likely is it that Y does? Property ~ N(0, (Δ + σ⁻² I)⁻¹): smooth properties make P(D|S) high; non-smooth properties make P(D|S) low.
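The smoothness prior behind these slides is easy to compute directly. A small sketch (my variable names) of the Zhu, Lafferty & Ghahramani covariance on a four-node chain graph, showing that nearby nodes covary more:

```python
import numpy as np

def smoothness_covariance(A, sigma=1.0):
    """Covariance (Delta + I / sigma^2)^{-1} of the property prior on a
    graph with adjacency matrix A, where Delta = D - A is the graph
    Laplacian; properties drawn from N(0, K) vary smoothly over edges."""
    D = np.diag(A.sum(axis=1))
    delta = D - A
    return np.linalg.inv(delta + np.eye(len(A)) / sigma**2)

# chain graph 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
K = smoothness_covariance(A)
# covariance decays with graph distance: K[0,1] > K[0,2] > K[0,3] > 0
```

Conditioning this Gaussian on the observed examples then yields graded generalization to Y, with strength falling off along the tree or the 2D space.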
52 Learning structural forms. People can discover structural forms: scientists (Linnaeus: Kingdom Animalia, Phylum Chordata, Class Mammalia, Order Primates, Family Hominidae, Genus Homo, Species Homo sapiens; Darwin; Mendeleev) and children (e.g., hierarchical structure of category labels, cyclical structure of seasons or days of the week, clique structure of social networks). But standard learning algorithms assume fixed forms: principal components analysis assumes low-dimensional spatial structure; hierarchical clustering assumes tree structure; k-means clustering and mixture models assume a flat partition.
53 Hypothesis space of structural forms (Kemp & Tenenbaum, PNAS 2008): each structural form paired with its generative process.
54 A hierarchical Bayesian approach (Kemp & Tenenbaum, PNAS 2008): F: form, with prior P(F); S: structure, with P(S|F) favoring simplicity; D: data (features), with P(D|S) favoring smoothness (fit to data).
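Written out, the three probabilities on this slide compose into one posterior over forms and structures, and scoring a form marginalizes over structures of that form:

```latex
P(F, S \mid D) \;\propto\; P(D \mid S)\, P(S \mid F)\, P(F)
\qquad\text{and}\qquad
F^{*} = \arg\max_{F}\; P(F) \sum_{S} P(D \mid S)\, P(S \mid F)
```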
55 Example datasets: judges × cases; animals × features.
56 Example data: objects × objects similarities.
57 Development of structural forms as more data are observed: the blessing of abstraction.
58 Causal learning and reasoning Causal model Event data (Griffiths & Tenenbaum; Kemp, Goodman, Tenenbaum)
59 Causal learning and reasoning Causal schema: Behaviors (high-fat diet, working in factory) → Diseases (heart disease, lung cancer) → Symptoms (coughing, chest pain). Causal model, Event data. The schema cuts down the hypothesis space from size 521,939,651,343,829,405,020,504,063 to 131,072 (Griffiths & Tenenbaum; Kemp, Goodman, Tenenbaum).
60 Causal schema Causal model Event data (Patients 1-5) Laws and classes: infinite relational model over behaviors (B), diseases (D), and symptoms (S); belief net with Dirichlet-multinomial parameterization (Mansinghka, Kemp, Tenenbaum, Griffiths, UAI 2006)
61 Ground-truth causal network Causal model Event data: recovered schema and recovered model vs. # samples (blessing of abstraction) (Mansinghka, Kemp, Tenenbaum, Griffiths, UAI 2006)
62 Conclusions How does the mind get so much from so little, in learning about objects, classes, causes, scenes, sentences, thoughts, social systems? A toolkit for studying the nature, use and acquisition of abstract knowledge: Bayesian inference in probabilistic generative models. Probabilistic models defined over a range of structured representations: spaces, graphs, grammars, predicate logic, schemas, programs. Hierarchical models, with inference at multiple levels of abstraction. Nonparametric models, adapting their complexity to the data and balancing constraint with flexibility. An alternative to classic either-or dichotomies: Nature versus Nurture, Logic (Structure, Rules, Symbols) versus Probability (Statistics). How can domain-general mechanisms of learning and representation build domain-specific abstract knowledge? How can structured symbolic knowledge be acquired by statistical learning? A different way to think about the development of a cognitive system. Powerful abstractions can be learned surprisingly quickly, together with or prior to learning the more concrete knowledge they constrain. Structured representations need not be rigid, static, hand-wired, brittle. Embedded in a probabilistic framework, they can grow dynamically and robustly in response to the sparse, noisy data of experience.
63 Open directions and challenges More precise relation to psychology How does human cognitive processing perform approximate probabilistic inference (i.e., approximately implement rational methods of approximate inference)? Relation to the brain How to implement structured probabilistic models in neural architectures? Probabilistic models for richly structured knowledge How to formalize an intuitive theory of physics or psychology? Effective learning of structured probabilistic models How to balance expressiveness/learnability tradeoff?
64 Goal-directed action (production and comprehension) (Wolpert et al., 2003)
65 Goal inference as inverse probabilistic planning (Baker, Tenenbaum & Saxe, Cognition, in press) (cf. Gergely, Csibra et al.): constraints + goals → rational planning (MDP) → agent actions.
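Inverse planning can be illustrated with a one-dimensional stand-in for the MDP planner: a softmax-rational agent prefers actions that reduce its distance to the goal, and Bayes' rule inverts the planner to infer the goal from observed moves. All states, goals, and parameters below are invented for illustration:

```python
import math

def action_probs(state, goal, beta=2.0):
    """Softmax-rational choice between moving left (-1) and right (+1):
    actions with lower distance-to-goal cost are exponentially preferred."""
    costs = {-1: abs((state - 1) - goal), +1: abs((state + 1) - goal)}
    weights = {a: math.exp(-beta * c) for a, c in costs.items()}
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

def goal_posterior(trajectory, goals, beta=2.0):
    """P(goal | observed actions) by Bayes' rule over the softmax planner."""
    post = {g: 1.0 / len(goals) for g in goals}
    state = trajectory[0]
    for nxt in trajectory[1:]:
        action = nxt - state
        for g in goals:
            post[g] *= action_probs(state, g, beta)[action]
        state = nxt
    total = sum(post.values())
    return {g: p / total for g, p in post.items()}

# an agent at 5 steps right twice; the goal at 10 becomes far more likely
post = goal_posterior([5, 6, 7], goals=[0, 10])
```

Each observed action multiplies in its likelihood under each candidate goal, so even two steps make the rightward goal dominate the posterior.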
66 Inferring social goals (Baker, Goodman & Tenenbaum, Cog Sci 2008; Ullman, Baker, Evans, Macindoe & Tenenbaum, submitted) (cf. Hamlin, Kuhlmeier, Wynn & Bloom): constraints + goals → rational planning (MDP) → agent actions; model predictions compared against subject ratings.
67 The really big questions Where does it all end? Across different domains and tasks, and different levels of abstraction, our probabilistic models are starting to look increasingly complex and to differ in almost arbitrarily many ways. This seems unsatisfying. The brain appears to have a uniform circuitry. Other cognitive modeling paradigms adopt a single unifying representational primitive (production rules, predicate logic, synaptic strengths, tensors). Is there a single universal Bayesian primitive? How does it all begin? Can all these different kinds of representations be learned? What is the ultimate hypothesis space of innate primitives, or is it simply turtles all the way up? Could a universal hypothesis space for all probabilistic models possibly be searched effectively? (cf. Kolmogorov complexity theory, Chater & Vitanyi)
68 Church: a universal probabilistic language (Goodman et al., UAI 2008)
69 Church: a universal probabilistic language (Goodman et al., UAI 2008) Causal networks
70 Church: a universal probabilistic language (Goodman et al., UAI 2008) Causal networks Relational schema
71 Church: a universal probabilistic language (Goodman et al., UAI 2008) Causal networks Relational schema Physical objects
72 Church: a universal probabilistic language (Goodman et al., UAI 2008) Causal networks Relational schema Physical objects Rational agents
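Church expresses each of these domains as a stochastic program plus a query operator. A rough Python analogue (not Church syntax; the toy causal network and its probabilities are invented for illustration) of defining a generative model and conditioning on evidence by rejection sampling:

```python
import random

random.seed(0)

def flip(p):
    """A Church-style elementary random primitive."""
    return random.random() < p

def causal_model():
    """A toy causal network written as a generative program."""
    smokes = flip(0.2)
    cancer = flip(0.3 if smokes else 0.01)
    cough = flip(0.8 if cancer else 0.1)
    return smokes, cancer, cough

def rejection_query(model, condition, n=10000):
    """Church-style query via rejection sampling: keep the samples where
    the condition holds, then read off the marginal of interest."""
    kept = [s for s in (model() for _ in range(n)) if condition(s)]
    return sum(s[1] for s in kept) / len(kept)  # P(cancer | condition)

# probability of cancer given that a cough is observed
p = rejection_query(causal_model, condition=lambda s: s[2])
```

The same program-plus-query idiom extends to the relational schemas, physical objects, and rational agents on these slides, which is the sense in which Church is "universal".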
73 Open directions and challenges More precise relation to psychology How does human cognitive processing perform approximate probabilistic inference (i.e., approximately implement rational methods of approximate inference)? Relation to the brain How to implement structured probabilistic models in neural architectures? Probabilistic models for richly structured knowledge How to formalize an intuitive theory of physics or psychology? Effective learning of structured probabilistic models How to balance expressiveness/learnability tradeoff?
More informationBetween-word regressions as part of rational reading
Between-word regressions as part of rational reading Klinton Bicknell & Roger Levy UC San Diego CUNY 2010: New York Bicknell & Levy (UC San Diego) Regressions as rational reading CUNY 2010 1 / 23 Introduction
More informationTheory-Based Induction
Theory-Based Induction Charles Kemp (ckemp@mit.edu) Joshua B. Tenenbaum (jbt@mit.edu) Department of Brain and Cognitive Sciences Massachusetts Institute of Technology Cambridge, MA 02139-4307 USA Abstract
More informationArtificial Intelligence Lecture 7
Artificial Intelligence Lecture 7 Lecture plan AI in general (ch. 1) Search based AI (ch. 4) search, games, planning, optimization Agents (ch. 8) applied AI techniques in robots, software agents,... Knowledge
More informationUsing Bayesian Networks to Direct Stochastic Search in Inductive Logic Programming
Appears in Proceedings of the 17th International Conference on Inductive Logic Programming (ILP). Corvallis, Oregon, USA. June, 2007. Using Bayesian Networks to Direct Stochastic Search in Inductive Logic
More informationAPPROVAL SHEET. Uncertainty in Semantic Web. Doctor of Philosophy, 2005
APPROVAL SHEET Title of Dissertation: BayesOWL: A Probabilistic Framework for Uncertainty in Semantic Web Name of Candidate: Zhongli Ding Doctor of Philosophy, 2005 Dissertation and Abstract Approved:
More informationVision: Over Ov view Alan Yuille
Vision: Overview Alan Yuille Why is Vision Hard? Complexity and Ambiguity of Images. Range of Vision Tasks. More 10x10 images 256^100 = 6.7 x 10 ^240 than the total number of images seen by all humans
More informationCISC453 Winter Probabilistic Reasoning Part B: AIMA3e Ch
CISC453 Winter 2010 Probabilistic Reasoning Part B: AIMA3e Ch 14.5-14.8 Overview 2 a roundup of approaches from AIMA3e 14.5-14.8 14.5 a survey of approximate methods alternatives to the direct computing
More informationMS&E 226: Small Data
MS&E 226: Small Data Lecture 10: Introduction to inference (v2) Ramesh Johari ramesh.johari@stanford.edu 1 / 17 What is inference? 2 / 17 Where did our data come from? Recall our sample is: Y, the vector
More informationA Brief Introduction to Bayesian Statistics
A Brief Introduction to Statistics David Kaplan Department of Educational Psychology Methods for Social Policy Research and, Washington, DC 2017 1 / 37 The Reverend Thomas Bayes, 1701 1761 2 / 37 Pierre-Simon
More informationUtility Maximization and Bounds on Human Information Processing
Topics in Cognitive Science (2014) 1 6 Copyright 2014 Cognitive Science Society, Inc. All rights reserved. ISSN:1756-8757 print / 1756-8765 online DOI: 10.1111/tops.12089 Utility Maximization and Bounds
More informationRunning head: RATIONAL APPROXIMATIONS TO CATEGORY LEARNING. Rational approximations to rational models: Alternative algorithms for category learning
Rational Approximations to Category Learning 1 Running head: RATIONAL APPROXIMATIONS TO CATEGORY LEARNING Rational approximations to rational models: Alternative algorithms for category learning Adam N.
More informationUsing Bayesian Networks to Analyze Expression Data. Xu Siwei, s Muhammad Ali Faisal, s Tejal Joshi, s
Using Bayesian Networks to Analyze Expression Data Xu Siwei, s0789023 Muhammad Ali Faisal, s0677834 Tejal Joshi, s0677858 Outline Introduction Bayesian Networks Equivalence Classes Applying to Expression
More informationWason's Cards: What is Wrong?
Wason's Cards: What is Wrong? Pei Wang Computer and Information Sciences, Temple University This paper proposes a new interpretation
More informationComputational Cognitive Neuroscience
Computational Cognitive Neuroscience Computational Cognitive Neuroscience Computational Cognitive Neuroscience *Computer vision, *Pattern recognition, *Classification, *Picking the relevant information
More informationThe Trajectory of Psychology within Cognitive Science. Dedre Gentner Northwestern University
The Trajectory of Psychology within Cognitive Science Dedre Gentner Northwestern University 1. How has Psychology fared within Cognitive Science? 2. How have areas within Psychology risen and fallen? 3.What
More informationCognition, Learning and Social Change Conference Summary A structured summary of the proceedings of the first conference
Cognition, Learning and Social Change Conference Summary A structured summary of the proceedings of the first conference The purpose of this series of three conferences is to build a bridge between cognitive
More informationMemory-Augmented Active Deep Learning for Identifying Relations Between Distant Medical Concepts in Electroencephalography Reports
Memory-Augmented Active Deep Learning for Identifying Relations Between Distant Medical Concepts in Electroencephalography Reports Ramon Maldonado, BS, Travis Goodwin, PhD Sanda M. Harabagiu, PhD The University
More informationBayes and blickets: Effects of knowledge on causal induction in children and adults. Thomas L. Griffiths. University of California, Berkeley
Running Head: BAYES AND BLICKETS Bayes and blickets: Effects of knowledge on causal induction in children and adults Thomas L. Griffiths University of California, Berkeley David M. Sobel Brown University
More informationNeuro-Inspired Statistical. Rensselaer Polytechnic Institute National Science Foundation
Neuro-Inspired Statistical Pi Prior Model lfor Robust Visual Inference Qiang Ji Rensselaer Polytechnic Institute National Science Foundation 1 Status of Computer Vision CV has been an active area for over
More informationWeak Supervision. Vincent Chen and Nish Khandwala
Weak Supervision Vincent Chen and Nish Khandwala Outline Motivation We want more labels! We want to program our data! #Software2.0 Weak Supervision Formulation Landscape of Noisy Labeling Schemes Snorkel
More informationDeSTIN: A Scalable Deep Learning Architecture with Application to High-Dimensional Robust Pattern Recognition
DeSTIN: A Scalable Deep Learning Architecture with Application to High-Dimensional Robust Pattern Recognition Itamar Arel and Derek Rose and Robert Coop Department of Electrical Engineering and Computer
More informationBrain mapping 2.0: Putting together fmri datasets to map cognitive ontologies to the brain. Bertrand Thirion
Brain mapping 2.0: Putting together fmri datasets to map cognitive ontologies to the brain Bertrand Thirion bertrand.thirion@inria.fr 1 Outline Introduction: statistical inference in neuroimaging and big
More informationOn The Scope of Mechanistic Explanation in Cognitive Sciences
On The Scope of Mechanistic Explanation in Cognitive Sciences Anna-Mari Rusanen (anna-mari.rusanen@helsinki.fi) Department of Philosophy, History, Art and Culture Studies, PO BOX 24 00014 University of
More informationThe Development and Application of Bayesian Networks Used in Data Mining Under Big Data
2017 International Conference on Arts and Design, Education and Social Sciences (ADESS 2017) ISBN: 978-1-60595-511-7 The Development and Application of Bayesian Networks Used in Data Mining Under Big Data
More informationChristopher Guy Lucas. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy. Psychology.
Acquired Abstract knowledge in causal induction: a hierarchical Bayesian approach by Christopher Guy Lucas A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor
More informationStructured Correlation from the Causal Background
Structured Correlation from the Causal Background Ralf Mayrhofer 1, Noah D. Goodman 2, Michael R. Waldmann 1, and Joshua B. Tenenbaum 2 1 {rmayrho, mwaldma}@uni-goettingen.de; Department of Psychology,
More informationSubjective randomness and natural scene statistics
Psychonomic Bulletin & Review 2010, 17 (5), 624-629 doi:10.3758/pbr.17.5.624 Brief Reports Subjective randomness and natural scene statistics Anne S. Hsu University College London, London, England Thomas
More informationCausal learning in humans. Alison Gopnik Dept. of Psychology UC-Berkeley
Causal learning in humans Alison Gopnik Dept. of Psychology UC-Berkeley Knowledge as an inverse problem East Coast Solutions Structured Abstract Innate Domain-specific Modularity West Coast Solutions Distributed
More informationArtificial Intelligence and Human Thinking. Robert Kowalski Imperial College London
Artificial Intelligence and Human Thinking Robert Kowalski Imperial College London 1 Artificial Intelligence and Human Thinking The Abductive Logic Programming (ALP) agent model as a unifying framework
More informationBehavioural Economics University of Oxford Vincent P. Crawford Michaelmas Term 2012
Behavioural Economics University of Oxford Vincent P. Crawford Michaelmas Term 2012 Introduction to Behavioral Economics and Decision Theory (with very large debts to David Laibson and Matthew Rabin) Revised
More informationArtificial Cognitive Systems
Artificial Cognitive Systems David Vernon Carnegie Mellon University Africa vernon@cmu.edu www.vernon.eu Artificial Cognitive Systems 1 Carnegie Mellon University Africa Lecture 2 Paradigms of Cognitive
More informationHierarchical Bayesian Modeling of Individual Differences in Texture Discrimination
Hierarchical Bayesian Modeling of Individual Differences in Texture Discrimination Timothy N. Rubin (trubin@uci.edu) Michael D. Lee (mdlee@uci.edu) Charles F. Chubb (cchubb@uci.edu) Department of Cognitive
More informationGrounding Ontologies in the External World
Grounding Ontologies in the External World Antonio CHELLA University of Palermo and ICAR-CNR, Palermo antonio.chella@unipa.it Abstract. The paper discusses a case study of grounding an ontology in the
More informationArtificial Intelligence Programming Probability
Artificial Intelligence Programming Probability Chris Brooks Department of Computer Science University of San Francisco Department of Computer Science University of San Francisco p.1/25 17-0: Uncertainty
More information