Exploiting Similarity to Optimize Recommendations from User Feedback
Exploiting Similarity to Optimize Recommendations from User Feedback
Hasta Vanchinathan, Andreas Krause (Learning and Adaptive Systems Group, D-INF, ETH Zurich)
Collaborators: Isidor Nikolic (Microsoft, Zurich), Fabio De Bona (Google, Zurich)
A Recommendation Example
Many real-world instances. (Disclaimer: all trademarks belong to their respective owners.)
Common Thread
To do well, we need a model. Popular techniques include:
- Content-based filtering
- Collaborative filtering
- Hybrid recommendation systems
All aim to predict reward given a fixed data set.
Challenges
- Many items, and dynamic!
- Preferences change
- Estimating all combinations is both hard and wasteful!
- We only need to identify high-reward items!
Multi-Armed Bandits
Early approaches require k << T. They give strong guarantees for a finite set of actions: Gittins indices, ε-greedy, UCB1 (Auer et al.). As the number of arms increases, performance degrades. For dynamic, web-scale recommendations, k >> T.
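The finite-armed baseline mentioned above can be sketched with UCB1 (play every arm once, then pick the arm with the best optimistic index). The Bernoulli arms and their means below are purely illustrative.

```python
import math
import random

def ucb1(pull, k, T):
    """UCB1: after one pull per arm, pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_i)."""
    counts = [0] * k
    sums = [0.0] * k
    total = 0.0
    for t in range(1, T + 1):
        if t <= k:
            arm = t - 1          # initialization: try every arm once
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        total += r
    return total, counts

# Illustrative Bernoulli arms; arm 2 has the highest mean reward.
random.seed(0)
means = [0.2, 0.4, 0.7]
total, counts = ucb1(lambda i: 1.0 if random.random() < means[i] else 0.0,
                     k=3, T=2000)
```

Note that the initialization alone costs k pulls: when k >> T, as in web-scale recommendation, the budget is exhausted before learning even starts, which is exactly the degradation the slide points to.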
Learning meets bandits
Exploit similarity information to predict rewards for new items. We must make assumptions on the reward function, e.g.:
- Linear (LinUCB; Li et al. '10)
- Lipschitz (Bubeck et al. '08)
- Low RKHS norm (GP-UCB; Srinivas et al. '12)
This is the approach we pursue in this work!
[Figure: reward f(x) plotted against choice x]
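Spelled out, the three reward-function assumptions are (notation mine):

```latex
\text{Linear (LinUCB):}\quad f(x) = \theta^{\top} x \ \text{for some unknown } \theta \\
\text{Lipschitz:}\quad |f(x) - f(x')| \le L \,\lVert x - x'\rVert \\
\text{Low RKHS norm (GP-UCB):}\quad \lVert f\rVert_{k} \le B \ \text{in the RKHS of kernel } k
```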
Problem Setup
Context = user attributes. We want to maximize the cumulative reward; equivalently, minimize the regret.
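The objective survives only as equation images in the slides; presumably it is the standard bandit formulation (notation assumed):

```latex
\max_{x_1,\dots,x_T} \; \sum_{t=1}^{T} f(x_t)
\qquad\Longleftrightarrow\qquad
\min \; R_T = \sum_{t=1}^{T} \bigl( f(x^{*}) - f(x_t) \bigr),
\quad x^{*} = \arg\max_{x} f(x).
```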
Our Approach
We propose CGPRank, which uses a Bayesian model for the rewards. CGPRank efficiently shares reward feedback across:
- Items
- Users
- Positions
Demuxing Feedback
We still need to predict per-item rewards. Assume: items do not influence the rewards of other items. Observed clicks then decompose into item relevance and position CTR.
CGPRank: Sharing across positions
Position weights are independent of items and are estimated from logs.
[Figure: example per-position weights, e.g. 0.16 for a lower-ranked position]
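A toy version of the position demultiplexing, under the factored-model assumption (mine, matching the slide): observed click probability = item relevance × position weight, with the weights estimated from historical logs. The weight values are illustrative.

```python
# Toy demux: observed CTR factors into item relevance x position weight.
# Position weights are item-independent and estimated from logs.

# Hypothetical weights (position 0, the top slot, is clicked most).
position_weights = [1.00, 0.55, 0.30, 0.16]   # illustrative values

def expected_ctr(relevance, position):
    """Predicted click probability of an item shown at a given position."""
    return relevance * position_weights[position]

def demux_click_feedback(clicked, position):
    """Convert positional click feedback (0/1) into a relevance
    observation by dividing out the position weight."""
    return clicked / position_weights[position]

# An item with relevance 0.4 shown at position 3:
ctr = expected_ctr(0.4, 3)   # ≈ 0.4 * 0.16 = 0.064
```

Because the weights are item-independent, a click observed at a low position can be rescaled and shared with the model exactly like a click at the top slot.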
CGPRank: Sharing across items/users
Sharing across items/users with Gaussian processes
Gaussian processes are Bayesian models over functions:
- Prior P(f)
- Likelihood P(data | f)
- Posterior P(f | data)
Closed-form Bayesian posterior inference is possible, and the posterior represents the uncertainty in each prediction.
[Figure: sample functions f(x) (reward vs. choice x); curves consistent with the data are likely under the posterior, others unlikely]
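The closed-form posterior inference mentioned above takes a few lines of NumPy. A generic squared-exponential kernel stands in here for whatever item-similarity kernel is actually used; the data points are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential covariance K(x, x') = exp(-|x - x'|^2 / (2 l^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def gp_posterior(X, y, Xstar, noise=0.1, lengthscale=1.0):
    """Closed-form GP posterior mean and variance at test points Xstar."""
    K = rbf_kernel(X, X, lengthscale) + noise ** 2 * np.eye(len(X))
    Ks = rbf_kernel(Xstar, X, lengthscale)
    Kss = rbf_kernel(Xstar, Xstar, lengthscale)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Two observed choices; predict at a nearby and a faraway choice.
X = np.array([[0.0], [1.0]])
y = np.array([0.5, 0.9])
mean, var = gp_posterior(X, y, np.array([[0.0], [5.0]]))
```

The posterior variance is small near observed choices and reverts to the prior far away, which is exactly the uncertainty-sharing across similar items that the deck relies on.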
Predictive confidence in GPs
Typically we only care about the marginals, i.e., P(f(x*)). The GP is parameterized by a covariance function K(x, x') = Cov(f(x), f(x')). Many recommendation tasks can be captured using an appropriate covariance function.
[Figure: posterior over f(x), with the marginal at a test point x*]
Intuition: Explore-Exploit using GPs
Selection rule: pick the choice that maximizes an upper confidence bound under the posterior.
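The rule itself survives only as an image; given the GP-UCB citation earlier in the deck, it is presumably the upper-confidence-bound rule

```latex
x_t = \arg\max_{x \in \mathcal{X}} \; \mu_{t-1}(x) + \beta_t^{1/2}\,\sigma_{t-1}(x),
```

where \(\mu_{t-1}\) and \(\sigma_{t-1}\) are the GP posterior mean and standard deviation and \(\beta_t\) trades off exploitation against exploration.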
CGPRank Selection Rule
At t = 0, with no prior observations, the posterior is just the prior; with some prior observations, uncertainty shrinks, and not only at the observation: it also shrinks at other locations, based on similarity!
Suppose the list size is 2. The first item is selected by maximizing the upper confidence bound. The secret sauce? A time-varying tradeoff parameter. We then hallucinate the posterior mean as the observation, which shrinks the uncertainties, update the model, and pick the next item using the same rule.
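The list-building loop just described (UCB pick, hallucinate the posterior mean as the observed value, shrink uncertainties, repeat) can be sketched as below. This is a generic reconstruction rather than the paper's exact algorithm, and a squared-exponential kernel again stands in for the true item kernel.

```python
import numpy as np

def kernel(A, B, ls=1.0):
    """Stand-in squared-exponential item kernel."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def select_list(items, X_obs, y_obs, list_size, beta, noise=0.1):
    """Greedy UCB list selection with hallucinated feedback: after each
    pick, pretend we observed the posterior mean there, which leaves the
    mean unchanged but shrinks the variance for the next pick."""
    X, y = list(map(np.asarray, X_obs)), list(y_obs)
    chosen = []
    for _ in range(list_size):
        Xa = np.array(X)
        K = kernel(Xa, Xa) + noise ** 2 * np.eye(len(Xa))
        Ks = kernel(items, Xa)
        mu = Ks @ np.linalg.solve(K, np.array(y))
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        ucb = mu + np.sqrt(beta) * np.sqrt(np.maximum(var, 0.0))
        ucb[chosen] = -np.inf            # never repeat an item in the list
        i = int(np.argmax(ucb))
        chosen.append(i)
        X.append(items[i])               # hallucinate: "observe" the mean
        y.append(mu[i])
    return chosen

# Nine candidate items on a line; one real prior observation at x = 0.
items = np.linspace(0, 4, 9)[:, None]
picked = select_list(items, X_obs=[[0.0]], y_obs=[1.0], list_size=2, beta=4.0)
```

Hallucination is what makes the picks diverse: the second slot is pushed away from the first pick's neighborhood because its uncertainty (and hence its UCB) has already been shrunk.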
Theorem 1 (CGPRank guarantees). If we choose the tradeoff parameter appropriately, then running CGPRank for T rounds incurs regret sublinear in T; the bound grows strongly sublinearly for typical kernels.
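The exact bound is not transcribed; guarantees of this type (GP-UCB and its variants) take the form

```latex
R_T = O^{*}\!\left( \sqrt{T \,\beta_T\, \gamma_T} \right),
```

where \(\gamma_T\) is the maximum information gain of the kernel, which grows strongly sublinearly for typical kernels (e.g., polylogarithmically in T for the squared-exponential kernel).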
Experiments: Datasets
Google books store logs: 42 days of user logs. Given a key book, suggest a list of related books. The kernel is computed from the "related" graph on books.
Yahoo! Webscope R6B: 10 days of user logs on the Yahoo! front page; an unbiased method to test bandit algorithms; 45 million user interactions with 271 articles. Feedback is available only for a single selection, so we simulated list selection.
Experiments: Questions
How much does principled sharing of feedback help, across items/contexts and across positions? Can CGPRank outperform an existing, tuned recommendation system?
[Result figures: sharing across items; sharing across contexts; effect of increasing list size; boost over an existing algorithm]
Conclusions
CGPRank is an efficient algorithm with strong theoretical guarantees. It can generalize from sparse feedback across items, contexts, and positions. Experiments suggest statistical and computational efficiency.
More informationA Hierarchical Adaptive Approach to the Optimal Design of Experiments
A Hierarchical Adaptive Approach to the Optimal Design of Experiments Woojae Kim 1 (kim.1124@osu.edu), Mark Pitt 1 (pitt.2@osu.edu), Zhong-Lin Lu 1 (lu.535@osu.edu), Mark Steyvers 2 (mark.steyvers@uci.edu),
More informationRemarks on Bayesian Control Charts
Remarks on Bayesian Control Charts Amir Ahmadi-Javid * and Mohsen Ebadi Department of Industrial Engineering, Amirkabir University of Technology, Tehran, Iran * Corresponding author; email address: ahmadi_javid@aut.ac.ir
More informationFeedback-Controlled Parallel Point Process Filter for Estimation of Goal-Directed Movements From Neural Signals
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 21, NO. 1, JANUARY 2013 129 Feedback-Controlled Parallel Point Process Filter for Estimation of Goal-Directed Movements From Neural
More informationNonlinear, Nongaussian Ensemble Data Assimilation with Rank Regression and a Rank Histogram Filter
Nonlinear, Nongaussian Ensemble Data Assimilation with Rank Regression and a Rank Histogram Filter Jeff Anderson, NCAR Data Assimilation Research Section pg 1 Schematic of a Sequential Ensemble Filter
More informationThe Simulacrum. What is it, how is it created, how does it work? Michael Eden on behalf of Sally Vernon & Cong Chen NAACCR 21 st June 2017
The Simulacrum What is it, how is it created, how does it work? Michael Eden on behalf of Sally Vernon & Cong Chen NAACCR 21 st June 2017 sally.vernon@phe.gov.uk & cong.chen@phe.gov.uk 1 Overview What
More informationThe Outlier Approach How To Triumph In Your Career As A Nonconformist
The Outlier Approach How To Triumph In Your Career As A Nonconformist We have made it easy for you to find a PDF Ebooks without any digging. And by having access to our ebooks online or by storing it on
More informationSample size calculation a quick guide. Ronán Conroy
Sample size calculation a quick guide Thursday 28 October 2004 Ronán Conroy rconroy@rcsi.ie How to use this guide This guide has sample size ready-reckoners for a number of common research designs. Each
More informationInformation-theoretic stimulus design for neurophysiology & psychophysics
Information-theoretic stimulus design for neurophysiology & psychophysics Christopher DiMattina, PhD Assistant Professor of Psychology Florida Gulf Coast University 2 Optimal experimental design Part 1
More informationHybrid HMM and HCRF model for sequence classification
Hybrid HMM and HCRF model for sequence classification Y. Soullard and T. Artières University Pierre and Marie Curie - LIP6 4 place Jussieu 75005 Paris - France Abstract. We propose a hybrid model combining
More informationProbability-Based Protein Identification for Post-Translational Modifications and Amino Acid Variants Using Peptide Mass Fingerprint Data
Probability-Based Protein Identification for Post-Translational Modifications and Amino Acid Variants Using Peptide Mass Fingerprint Data Tong WW, McComb ME, Perlman DH, Huang H, O Connor PB, Costello
More informationForgetful Bayes and myopic planning: Human learning and decision-making in a bandit setting
Forgetful Bayes and myopic planning: Human learning and decision-maing in a bandit setting Shunan Zhang Department of Cognitive Science University of California, San Diego La Jolla, CA 92093 s6zhang@ucsd.edu
More informationYou must answer question 1.
Research Methods and Statistics Specialty Area Exam October 28, 2015 Part I: Statistics Committee: Richard Williams (Chair), Elizabeth McClintock, Sarah Mustillo You must answer question 1. 1. Suppose
More informationUsing Bayesian Networks to Analyze Expression Data. Xu Siwei, s Muhammad Ali Faisal, s Tejal Joshi, s
Using Bayesian Networks to Analyze Expression Data Xu Siwei, s0789023 Muhammad Ali Faisal, s0677834 Tejal Joshi, s0677858 Outline Introduction Bayesian Networks Equivalence Classes Applying to Expression
More informationIntroduction. Chapter 1
1 Chapter 1 Introduction In a number of problems, including Brain-Computer Interfaces (BCI), deep brain stimulation (DBS), sensory prosthetics, and spinal cord injury (SCI) therapy, complex electronic
More informationInstitutional Ranking. VHA Study
Statistical Inference for Ranks of Health Care Facilities in the Presence of Ties and Near Ties Minge Xie Department of Statistics Rutgers, The State University of New Jersey Supported in part by NSF,
More informationIntroduction to Bayesian Analysis 1
Biostats VHM 801/802 Courses Fall 2005, Atlantic Veterinary College, PEI Henrik Stryhn Introduction to Bayesian Analysis 1 Little known outside the statistical science, there exist two different approaches
More informationUsing AUC and Accuracy in Evaluating Learning Algorithms
1 Using AUC and Accuracy in Evaluating Learning Algorithms Jin Huang Charles X. Ling Department of Computer Science The University of Western Ontario London, Ontario, Canada N6A 5B7 fjhuang, clingg@csd.uwo.ca
More informationEmpirical game theory of pedestrian interaction for autonomous vehicles
Empirical game theory of pedestrian interaction for autonomous vehicles Fanta Camara 1,2, Richard Romano 1, Gustav Markkula 1, Ruth Madigan 1, Natasha Merat 1 and Charles Fox 1,2,3 1 Institute for Transport
More informationAnalysis of acgh data: statistical models and computational challenges
: statistical models and computational challenges Ramón Díaz-Uriarte 2007-02-13 Díaz-Uriarte, R. acgh analysis: models and computation 2007-02-13 1 / 38 Outline 1 Introduction Alternative approaches What
More informationDopamine enables dynamic regulation of exploration
Dopamine enables dynamic regulation of exploration François Cinotti Université Pierre et Marie Curie, CNRS 4 place Jussieu, 75005, Paris, FRANCE francois.cinotti@isir.upmc.fr Nassim Aklil nassim.aklil@isir.upmc.fr
More informationOutline. What s inside this paper? My expectation. Software Defect Prediction. Traditional Method. What s inside this paper?
Outline A Critique of Software Defect Prediction Models Norman E. Fenton Dongfeng Zhu What s inside this paper? What kind of new technique was developed in this paper? Research area of this technique?
More informationEvidence-Based Filters for Signal Detection: Application to Evoked Brain Responses
Evidence-Based Filters for Signal Detection: Application to Evoked Brain Responses M. Asim Mubeen a, Kevin H. Knuth a,b,c a Knuth Cyberphysics Lab, Department of Physics, University at Albany, Albany NY,
More information