
1 Methods of Randomization. Lupe Bedoya, Development Impact Evaluation Field Coordinator Training, Washington, DC, April 22-25, 2013

2 Content 1. Important Concepts 2. What vs. Why 3. Some Practical Issues 4. Select Randomization Methods

3 Content 1. Important Concepts 2. What vs. Why 3. Some Practical Issues 4. Select Randomization Methods

4 Important Concepts. Outcomes: what we observe, measure, and want to affect. Counterfactual outcomes: the potential outcomes that would have taken place if the individual had not been exposed to the program. Impact: the change in outcomes caused by the intervention. On the use of the word "cause": when we say "more schooling causes higher earnings," we mean that a person with more schooling has higher earnings relative to the earnings that same person would have had with less schooling.

5 The Evaluation Problem. Basically a missing-data problem: we do not observe the counterfactual outcomes for the same people, and using an inappropriate counterfactual yields biased estimates of the impact. Impact estimate: $\hat{D}_i = Y_i^t - Y_j^c = D_i + x_{i,j}$, where $D_i$ is the true impact and $x_{i,j}$ is the matching error from comparing individuals $i$ and $j$, i.e., the selection bias. Example: individuals with more ability tend to study more years, so an estimate that compares those who study more (treatment) with those who study less (comparison) will carry a significant matching error, or selection bias.
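To make the matching error concrete, here is a minimal simulation (not from the slides; all variable names and numbers are illustrative) in which a hypothetical "ability" variable drives both schooling and earnings, so the naive treatment-comparison difference overstates the true impact:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process: ability raises both the chance of
# studying more (self-selection into "treatment") and earnings directly.
ability = rng.normal(0, 1, n)
studies_more = ability + rng.normal(0, 1, n) > 0
true_impact = 2.0
earnings = 10 + true_impact * studies_more + 3 * ability + rng.normal(0, 1, n)

# Naive comparison of self-selected groups: the matching error x_ij shows
# up as the gap between the estimate and the true impact.
naive = earnings[studies_more].mean() - earnings[~studies_more].mean()
print(f"true impact:    {true_impact:.2f}")
print(f"naive estimate: {naive:.2f} (selection bias = {naive - true_impact:.2f})")
```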

6 The Statistical Solution. Randomization balances out the selection problem; it does not eliminate it. Definition: the purposeful manipulation of a social program or policy, randomly assigning groups to treatment and control status. $E[\hat{D}_i] = E[D_i]$ if and only if $E[x_{i,j}] = 0$. Independence assumption: the two groups will (on average) have ALL the same characteristics, observable and unobservable; the only difference is the treatment. If the randomization is not well implemented, we are back to selection problems.
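Continuing the same hypothetical setup, but with treatment assigned by coin flip rather than self-selection: assignment is now independent of ability, so the simple difference in means recovers the true impact (a sketch, with all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Same population as before, but treatment is randomly assigned, so it is
# independent of ability: E[x_ij] = 0 and the difference in means is an
# unbiased estimate of the true impact.
ability = rng.normal(0, 1, n)
treat = rng.random(n) < 0.5
true_impact = 2.0
earnings = 10 + true_impact * treat + 3 * ability + rng.normal(0, 1, n)

estimate = earnings[treat].mean() - earnings[~treat].mean()
print(f"randomized estimate: {estimate:.2f}")  # close to the true 2.00
```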

7 In a nutshell. We observe outcomes of policies and programs. We want to say that these policies or programs caused outcomes to change; we define these changes as an impact. We do not observe counterfactual outcomes, so we create a counterfactual. We must convincingly solve the selection problem, and we need to make sure the randomization and the intervention are implemented as planned.

8 Content 1. Important Concepts 2. What vs. Why 3. Some Practical Issues 4. Select Randomization Methods

9 Example: A pencil sharpener

10 Example: A pencil sharpener, with its mechanism hidden from the researcher

11 Two approaches: What vs. How/Why. Approach A (What): come up with a treatment that might affect pencils (windows) and run an RCT. Conclusion: windows affect pencils. Approach B (How/Why): build an economic theory about an economic mechanism (smoke gives opossums an incentive to leave home), find a testable implication (opening windows affects pencils), and run an RCT. Conclusion: we can't reject that smoke incentivizes opossums. Implication for policy: creating targeted incentives is a solution if we want to affect pencil sharpening (and of course the same logic applies to many real problems).

12 We need to focus on the most interesting questions: Why does financial literacy affect growth? Through which mechanism (how) does it affect growth, if at all? These questions affect the generalizability of the model. That is why DIME focuses on the How/Why: treatment variations to understand mechanisms, test hypotheses, and contribute to scale-up and learning for other settings.

13 Content 1. Important Concepts 2. What vs. Why 3. Some Practical Issues 4. Select Randomization Methods

14 What potential problems do randomized designs face? Ethical issues: denying services (randomization can only be used when not everyone has a right to the program). Program/operational concerns: adequate participant flow; contamination; being unable to randomize at the correct unit of analysis. External validity.

15 Where Do Ethical Concerns Arise? Voluntary programs that can enroll all applicants. Mandatory programs that can enroll all eligibles. Entitlement programs, where the control group would be made worse off.

16 ...And When Can They Be Addressed? Voluntary programs: there is often far more interest than can be served, and programs often simply enroll first-comers. Mandatory programs: capacity may be limited relative to eligibility, or the program is a demonstration. Entitlement programs: difficult to justify; one possibility is compensation to the control group, another is an encouragement design.

17 Need to Define Subgroups Prior to RA. Subgroups defined prior to random assignment (exogenous), e.g., demographics or baseline behaviors and outcome values, can generate unbiased subgroup estimates. Subgroups defined by events or actions after random assignment (endogenous), e.g., program dosage or stayers vs. leavers, are problematic.

18 Illustration 1: Mandatory Testing Program Variants. Eligible population (e.g., all children in public high school) → RA → Treatment 1, Treatment 2, Control. Implications: we can evaluate treatment variations (but cannot evaluate the "What"); the impact can be extrapolated to the entire eligible population.

19 Illustration 2: Voluntary Program. Eligible population (e.g., the unemployed) → outreach → applicants (e.g., unemployed people who apply to job-training programs), with non-applicants excluded → RA → Treatment, Control. Implications: we could evaluate the "What"; the impact can only be extrapolated to applicants within the eligible population.

20 Implications for internal validity as well as generalizability. Internal validity: does the design get the causal effect right in the sample you are studying (e.g., the children in public high school who volunteer to participate in the experiment)? External validity: can the result be generalized to the entire population and to other populations (e.g., all children in public high school)?

21 Content 1. Important Concepts 2. What vs. Why 3. Some Practical Issues 4. Select Randomization Methods Simple Random Design Stratified Random Design Clustered Random Design

22 Simple Random Design. Eligible population (all students) → RA at the student level → Treatment, Control. Unit of randomization = unit of analysis. A random sample from the universe (with or without replacement). Results can be extrapolated to the eligible population.
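A minimal sketch of how such student-level assignment might be implemented, assuming a pandas roster of students (all names, counts, and seeds illustrative):

```python
import pandas as pd

# Hypothetical roster: one row per eligible student.
students = pd.DataFrame({"student_id": range(1_000)})

# Simple random assignment at the student level: shuffle the roster with
# a documented seed, then split it 50/50 into treatment and control.
students = students.sample(frac=1, random_state=42).reset_index(drop=True)
students["treatment"] = (students.index < len(students) // 2).astype(int)

print(students["treatment"].value_counts())  # 500 treatment, 500 control
```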

23 Stratified Random Design: oversampling a stratum. Eligible: 1,500 students (400 women, 1,100 men). Within each stratum, RA: 150T/150C among the sampled women and 150T/150C among the sampled men, for 300 treatment and 300 control overall. Why? We think the impact of the program differs across diverse groups (i.e., strata), and we may be able to measure the estimates by stratum more precisely if we randomize at that level. For instance, if we are interested in the impact of a program on women, who are under-represented in the eligible population (e.g., women in male-dominated careers), we need to oversample women. Implication: we need to use weights to estimate the overall impact because women are oversampled.
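A sketch of the stratified assignment and the re-weighting it implies, using the slide's numbers (400 women, 1,100 men; 300 sampled per stratum); column names and seeds are assumptions:

```python
import pandas as pd

# Hypothetical roster matching the slide: 400 women, 1,100 men.
roster = pd.DataFrame({"gender": ["F"] * 400 + ["M"] * 1_100})

# Draw 300 per stratum (women are deliberately oversampled), then
# randomize 150 treatment / 150 control within each stratum.
parts = []
for gender, seed in [("F", 1), ("M", 2)]:
    stratum = roster[roster["gender"] == gender].sample(n=300, random_state=seed)
    stratum = stratum.reset_index(drop=True)  # sample() already shuffled the rows
    stratum["treatment"] = (stratum.index < 150).astype(int)
    parts.append(stratum)
sample = pd.concat(parts, ignore_index=True)

# Weights undo the oversampling: each sampled person stands in for
# (stratum population / stratum sample size) people when estimating
# the overall impact.
pop_sizes = roster["gender"].value_counts()
sample["weight"] = sample["gender"].map(pop_sizes / 300)

print(sample.groupby("gender").agg(treated=("treatment", "sum"),
                                   weight=("weight", "first")))
```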

24 Cluster Random Design. Eligible population: 900 villages. Unit of randomization: the village. RA → 450 treatment villages and 450 control villages, with 3,000 households in each arm. Unit of analysis: the household; outcomes are measured at this level.
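A sketch of village-level assignment in which every household inherits its village's arm; the household counts come out only roughly 3,000 per arm here, and all identifiers are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Hypothetical frame matching the slide: 900 villages (the clusters).
villages = pd.DataFrame({"village_id": range(900)})

# Randomize at the village level: 450 treatment, 450 control.
villages["treatment"] = (rng.permutation(len(villages)) < 450).astype(int)

# Households (the units of analysis) simply inherit their village's arm.
households = pd.DataFrame({"village_id": rng.integers(0, 900, size=6_000)})
households = households.merge(villages, on="village_id")

print(households["treatment"].value_counts())  # roughly 3,000 per arm
```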

25 Cluster Random Design. The unit of randomization (e.g., a village) differs from the unit of analysis (e.g., the household): primary and secondary sampling units. Outcomes and treatment effects may be correlated within a village, which usually decreases precision. In the worst-case scenario, units within a cluster are identical and you are effectively left with only the number of clusters.

26 Individuals within clusters tend to be more similar to one another than elements selected at random. Intra-cluster correlation therefore strongly reduces precision, so more sample size is needed: we get more information about the impact on the whole population by randomizing at the individual level than at the cluster level. Clustered designs are also easy to leave unbalanced.
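One standard way to quantify the cost of clustering, implied by the slides but not stated on them, is the design effect DEFF = 1 + (m - 1) * ICC for clusters of size m: it inflates the variance and shrinks the effective sample size. A minimal sketch, with illustrative numbers:

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Variance inflation from clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

# Illustrative numbers: 6,000 households in clusters of about 7 each.
n_total, m = 6_000, 7
for icc in (0.0, 0.05, 0.20, 1.0):
    deff = design_effect(m, icc)
    print(f"ICC={icc:.2f}: DEFF={deff:.2f}, effective n ~ {n_total / deff:,.0f}")
```

At ICC = 1 the effective sample size collapses to roughly the number of clusters, which is exactly the worst case described on the previous slide.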

27 What can go wrong? Common problems, and they can be serious: potentially underpowered designs (small samples, clustered samples, high-variance outcomes); high non-participation or low dosage (a serious concern in voluntary programs); control-group crossover and other contamination; sample attrition, especially attrition that differs by treatment status (the control group can be harder to retain and locate); and potential selection problems.

28 Critical when conducting experiments. Design experiments that help answer how/why things work (e.g., test predictions of theories), especially if your purpose is to claim applicability beyond the specific context of your experiment. Recognize the limitations of your particular experiment (e.g., what you are estimating and what population your results apply to). Ensure the randomization is well implemented and the intervention goes as planned; failure here is a very frequent cause of low-quality IEs, and FCs are crucial for this. Make the appropriate and feasible corrections (e.g., standard errors, selection problems).
