Econometric & other impact assessment approaches to policy analysis, Part 1 [1]

[1] Notes developed by Nicole Mason, September 2012, drawing on the references listed below and class notes from EC 850, Development Economics (taught by Songqing Jin, Spring 2010, Michigan State University Department of Economics).

The problem of causality in policy analysis: internal vs. external validity

Key questions when starting an econometric project (Angrist & Pischke, 2009):

1. What is the causal relationship that I want to estimate? E.g., what is the causal effect of a change in policy x on outcome y?
2. If I could design the ideal experiment to capture this causal effect, what would that experiment look like? What factors would I hold constant and which would I vary? This often involves random assignment of treatment, and it is useful to think through even if the experiment would be impossible to carry out in practice.
3. What is my identification strategy? That is, how can I use observational data to approximate the ideal experiment?
4. What type of statistical inference will I use? E.g., population of interest, sample, type of standard errors.

The potential outcomes framework for binary treatments

Let y_{1i} be the outcome for individual i with treatment (D_i = 1), and y_{0i} be the outcome for individual i without treatment (D_i = 0). Then the causal effect of treatment (the "treatment effect") for individual i is y_{1i} - y_{0i}. Note that this framework is "with vs. without," not "before vs. after."

The challenge is that we observe each person in only one state (treated or untreated); no one can be in both states at the same time. We observe only y_i, where:

  y_i = y_{1i} if D_i = 1, and y_{0i} if D_i = 0
      = y_{1i} D_i + (1 - D_i) y_{0i}
      = (y_{1i} - y_{0i}) D_i + y_{0i}

Effects are likely to differ across individuals/units of analysis. It is the expected value of the effect (i.e., the mean effect) that is of interest for policy analysis:

  E(y_{1i} - y_{0i})  =  the average treatment effect (ATE)
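The potential-outcomes notation above can be made concrete with a small simulation (all numbers hypothetical). Each unit carries both potential outcomes, but the observed outcome identity y_i = y_{1i} D_i + (1 - D_i) y_{0i} reveals only one of them; the ATE is computable below only because the simulation, unlike real data, knows both.

```python
import random

random.seed(0)

n = 10_000
units = []
for _ in range(n):
    y0 = random.gauss(10, 2)      # potential outcome without treatment
    y1 = y0 + 3.0                 # potential outcome with treatment (effect = 3 for everyone)
    d = random.randint(0, 1)      # treatment status D_i
    units.append((y0, y1, d))

# Observed outcome: y_i = y_{1i} D_i + (1 - D_i) y_{0i} -- only one state is ever seen
observed = [y1 * d + (1 - d) * y0 for (y0, y1, d) in units]

# The ATE, E(y_{1i} - y_{0i}), uses both potential outcomes per unit
ate = sum(y1 - y0 for (y0, y1, _) in units) / n
print(round(ate, 2))  # -> 3.0
```

In real data the `units` list is never available, only `observed` and the treatment indicators; the rest of these notes are about when comparisons built from observed data recover the ATE.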
What are we measuring if we compare the outcomes of the treated to those of the untreated? Is this the causal effect we want (i.e., the effect of treatment on the outcome)?

Eq. 1:
  E(y_i | D_i = 1) - E(y_i | D_i = 0)
    = [E(y_{1i} | D_i = 1) - E(y_{0i} | D_i = 1)]   <- average treatment effect on the treated (ATT)
    + [E(y_{0i} | D_i = 1) - E(y_{0i} | D_i = 0)]   <- selection bias

We can also write the ATT as:
  ATT = E(y_{1i} | D_i = 1) - E(y_{0i} | D_i = 1) = E(y_{1i} - y_{0i} | D_i = 1)

If there is no selection bias, then the comparison in Eq. 1 gives the ATT. The ATT is of interest, but note that it is not the same as the ATE (except in special cases). The selection bias term is the difference in no-treatment outcomes between individuals who are treated and those who are not. If there is selection bias (or self-selection), an estimate of the ATE based on comparing the average outcomes of the treated to the untreated will be misleading (biased). The goal is to minimize or eliminate selection bias.

Self-selection: "In most cases, individuals at least partly determine whether they receive treatment, and their decisions may be related to the benefits of treatment, y_{1i} - y_{0i}" (Wooldridge, 2003: 606).

EX) The causal effect of going to the hospital on health: if we compare the average health outcomes of those who go to the hospital to the average health outcomes of those who do not, we are likely to have a selection bias problem. People who are sick are more likely than healthy people to go to the hospital, and sick people have worse health outcomes to start with. Selection bias is an issue.

[Example from Zambia and/or the agricultural sector?]

Example: the effect of providing fertilizer to farmers on maize yields. The intervention: provide fertilizer to select farmers in a poor region of the country (region A) in 2011. The objective is to raise maize yields.
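The decomposition in Eq. 1 can be checked numerically. The sketch below (hypothetical numbers) mimics the hospital example: sicker people have worse no-treatment outcomes and select into treatment, so the selection bias term is negative and the naive treated-vs-untreated comparison badly understates the true effect.

```python
import random

random.seed(1)

n = 100_000
y0s, y1s, ds = [], [], []
for _ in range(n):
    baseline = random.gauss(0, 1)        # latent baseline health
    y0s.append(baseline)                 # outcome without a hospital visit
    y1s.append(baseline + 2.0)           # a visit improves health by 2 for everyone
    ds.append(1 if baseline < 0 else 0)  # the sick self-select into treatment

def mean(xs):
    return sum(xs) / len(xs)

treated = [i for i in range(n) if ds[i] == 1]
control = [i for i in range(n) if ds[i] == 0]

# Naive comparison: average observed outcome, treated vs. untreated
naive = mean([y1s[i] for i in treated]) - mean([y0s[i] for i in control])

# ATT and the selection-bias term from Eq. 1
att = mean([y1s[i] - y0s[i] for i in treated])
selection_bias = mean([y0s[i] for i in treated]) - mean([y0s[i] for i in control])

# Eq. 1 holds exactly: naive comparison = ATT + selection bias
assert abs(naive - (att + selection_bias)) < 1e-9
print(round(att, 2), round(selection_bias, 2), round(naive, 2))
```

Here the ATT is 2, but the selection bias term is strongly negative, so the naive difference in means is far below 2 even though treatment helps everyone equally.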
  - The program targets poor areas.
  - Farmers have to enroll at the local extension office to receive the fertilizer.
  - There was no fertilizer program before 2011.

We have data on maize yields for both 2009 (before the program) and 2011 (the year of the program) for farmers in region A and in another region that did not get the fertilizer (region B). We observe that farmers who were provided fertilizer have lower yields in 2011 than in 2009. Does this mean that the program did not work?
Not necessarily:

  - There was a national drought in 2011, so everyone's yields were lower in 2011 than in 2009 (a failure of the reflexive, i.e., before-and-after, comparison).
  - Suppose we compare farmers in the program region (region A) to those in the non-program region (region B) and find that treatment farmers in region A have a larger decline in yields than farmers in region B. Did the program have a negative impact? Not necessarily:
      - Farmers in region B may have better quality soil (unobservable).
      - Farmers in region B may have more irrigation, which is key in a drought year (observable).

What if we compare treatment farmers in region A with their neighbors?

  - We think the treated farmers' and comparison farmers' soils are approximately the same.
  - Suppose we observe that the treatment farmers' yields decline by less than the comparison farmers'. Did the program work? Not necessarily. Suppose that the farmers who went to register for the program have more farming ability, and so could manage the drought better than their neighbors, and the fertilizer was irrelevant (individual ability is unobservable).
  - Suppose we observe no difference between the two groups. Did the program not work? Not necessarily. Rain could have caused the fertilizer to run off onto the neighbors' fields (spillover/contamination).

Random assignment of treatment solves the selection bias problem (and makes ATE = ATT)

If treatment is randomly assigned across the population, then treatment status (D_i) is independent of the potential outcomes. Implications:

1. ATE = ATT; in other words, ATT = E(y_{1i} - y_{0i} | D_i = 1) = E(y_{1i} - y_{0i}) = ATE.
2. There is no selection bias, so we can easily estimate ATE = ATT using Eq. 1 above (i.e., compare the average outcomes of the treated to the untreated).

With randomization, the average initial (pre-treatment) characteristics of individuals should not be statistically different between the treated and untreated groups.
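A sketch of why random assignment works (hypothetical numbers): when treatment is assigned by coin flip, independently of the potential outcomes, pre-treatment characteristics such as "farming ability" are balanced across groups, and the simple difference in observed means recovers the ATE (= ATT).

```python
import random

random.seed(2)

n = 100_000
y_treat, y_ctrl = [], []          # observed outcomes by group
x_treat, x_ctrl = [], []          # a pre-treatment characteristic by group
for _ in range(n):
    ability = random.gauss(0, 1)  # pre-treatment "farming ability"
    y0 = 5.0 + ability
    y1 = y0 + 2.0                 # true effect = 2 for everyone
    if random.randint(0, 1):      # coin-flip assignment, independent of (y0, y1)
        y_treat.append(y1)
        x_treat.append(ability)
    else:
        y_ctrl.append(y0)
        x_ctrl.append(ability)

def mean(xs):
    return sum(xs) / len(xs)

diff_in_means = mean(y_treat) - mean(y_ctrl)   # estimates ATE = ATT
balance_gap = mean(x_treat) - mean(x_ctrl)     # should be near zero

print(round(diff_in_means, 2), round(balance_gap, 3))
```

The second quantity is the informal "balance check": under randomization the average pre-treatment characteristics differ only by sampling noise, which is exactly what implication 2 above relies on.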
Regression analysis of experimental data

If randomization was successful, then we can estimate the ATE using the regression:

  y_i = α + β D_i + ε_i

We may get more precise estimates (smaller standard errors) if we include other (exogenous) control variables, X:

  y_i = α + β D_i + X_i γ + ε_i

If we have observational data (rather than experimental data with randomized treatment), we can, in some cases, use regression analysis to approximate the ideal experiment. We will spend much of the rest of the course on this issue.

Randomization: great if you can get it, but not always an option

Randomization may not be an option in certain cases because it is:
1. Too costly
2. Unethical
3. Too slow to yield results
4. Otherwise infeasible

When randomization is not an option, our challenge is to find a natural experiment (quasi-experiment) that comes as close as possible to the experimental ideal. We need to be able to change the variable of interest while holding other key variables constant.

Natural experiment (or quasi-experiment): "A situation where the economic environment (sometimes summarized by an explanatory variable) exogenously changes, perhaps inadvertently, due to a policy or institutional change" (Wooldridge, 2002: 799).
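With a single binary regressor, the OLS slope in y_i = α + β D_i + ε_i reduces to cov(D, y)/var(D), which equals exactly the treated-minus-untreated difference in means. A minimal sketch on simulated experimental data (hypothetical numbers):

```python
import random

random.seed(3)

n = 100_000
d = [random.randint(0, 1) for _ in range(n)]            # randomized treatment
y = [1.0 + 2.0 * di + random.gauss(0, 1) for di in d]   # true alpha = 1, beta = 2

def mean(xs):
    return sum(xs) / len(xs)

dbar, ybar = mean(d), mean(y)

# OLS with one regressor: beta_hat = cov(D, y) / var(D)
beta_hat = (sum((di - dbar) * (yi - ybar) for di, yi in zip(d, y))
            / sum((di - dbar) ** 2 for di in d))
alpha_hat = ybar - beta_hat * dbar

# With a binary regressor, the OLS slope equals the difference in group means
diff = (mean([yi for di, yi in zip(d, y) if di == 1])
        - mean([yi for di, yi in zip(d, y) if di == 0]))
assert abs(beta_hat - diff) < 1e-6

print(round(beta_hat, 2), round(alpha_hat, 2))
```

So regressing y on D in a successful experiment is just a convenient way to compute the comparison of means, with the regression framework then making it easy to add exogenous controls X for precision.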
Research methodology spectrum (Roe & Just, 2009, p. 1267):

  Lab experiment -> Field experiment -> Natural experiment -> Field/market data

Lab experiment: the researcher purposefully imposes one or more exogenous changes (i.e., treatments) on a randomly chosen subset of subjects while holding all other elements of the context identical for a control group.

Field experiment: the researcher manipulates a naturally occurring context to induce relevant exogenous variation. Compared to laboratory experiments, in which the researcher has control over nearly all aspects of the economic and institutional context, field experiments allow for less researcher control because much of the context is independent of the researcher's effort.

Natural experiment: as with field data, the researcher cannot manipulate the stimulus or influence the data-generation process. Rather, the researcher takes advantage of a change in context or setting that occurs for some subjects due to natural causes or social change beyond the researcher's and subjects' influence. A natural experiment generates a control group (i.e., a group of similar individuals who avoid the natural treatment) that can be compared to the treatment group.
Field/market data: naturally occurring or uncontrolled data in which the researcher observes market or field behavior that transpires regardless of the researcher's existence and largely independent of the researcher's activity.

Validity: external, internal, & ecological (Roe & Just, 2009)

Validity: whether a particular conclusion or inference represents a good approximation to the true conclusion or inference (i.e., whether our methods of research and subsequent observations provide an adequate reflection of the truth).

Internal validity: the ability of a researcher to argue that observed correlations are causal.

External validity: the ability to generalize the relationships found in a study to other persons, times, and settings.

Ecological validity: a study has ecological validity to the extent that the context in which subjects cast decisions is similar to the context of interest.
[Figure: research methodology spectrum. Source: Roe & Just (2009).]

Threats to internal validity (Roe & Just, 2009, p. 1268):
1. Lack of temporal clarity
2. Systematic differences in treatment groups
3. Concurrent third elements that confound the outcome
4. Maturation over time / structural change

Threats to external validity (Roe & Just, 2009):
1. Potential interactions with elements and context not found in the study's setting
2. Limited variation within stimulus or response
3. Systematic differences between the groups for which the result will be applied

Tradeoffs:
- Threats to internal validity are greatest for field/market data and least for lab experiment data.
- Threats to external validity are least for field/market data and greatest for lab experiment data.
- Field experiments and natural experiments are a middle ground, but not a panacea.
- Ideally, use multiple research approaches.
References

Angrist, J.D., and J.S. Pischke. 2009. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton, NJ: Princeton University Press. (Ch. 1: "Questions about Questions"; Ch. 2: "The Experimental Ideal.")

Roe, B.E., and D.R. Just. 2009. "Internal and External Validity in Economics Research: Tradeoffs between Experiments, Field Experiments, Natural Experiments, and Field Data." American Journal of Agricultural Economics 91.

Wooldridge, J.M. 2003. Introductory Econometrics: A Modern Approach, 2nd ed. Cincinnati, OH: South-Western College Publishers.

Wooldridge, J.M. 2002. Econometric Analysis of Cross Section and Panel Data, 1st ed. Cambridge, MA: MIT Press.
Use of the Quantitative-Methods Approach in Scientific Inquiry Du Feng, Ph.D. Professor School of Nursing University of Nevada, Las Vegas The Scientific Approach to Knowledge Two Criteria of the Scientific
More informationApplied Econometrics for Development: Experiments II
TSE 16th January 2019 Applied Econometrics for Development: Experiments II Ana GAZMURI Paul SEABRIGHT The Cohen-Dupas bednets study The question: does subsidizing insecticide-treated anti-malarial bednets
More informationMethodology for Non-Randomized Clinical Trials: Propensity Score Analysis Dan Conroy, Ph.D., inventiv Health, Burlington, MA
PharmaSUG 2014 - Paper SP08 Methodology for Non-Randomized Clinical Trials: Propensity Score Analysis Dan Conroy, Ph.D., inventiv Health, Burlington, MA ABSTRACT Randomized clinical trials serve as the
More informationYou can t fix by analysis what you bungled by design. Fancy analysis can t fix a poorly designed study.
You can t fix by analysis what you bungled by design. Light, Singer and Willett Or, not as catchy but perhaps more accurate: Fancy analysis can t fix a poorly designed study. Producing Data The Role of
More informationExperimental design (continued)
Experimental design (continued) Spring 2017 Michelle Mazurek Some content adapted from Bilge Mutlu, Vibha Sazawal, Howard Seltman 1 Administrative No class Tuesday Homework 1 Plug for Tanu Mitra grad student
More informationRandomized Evaluations
Randomized Evaluations Introduction, Methodology, & Basic Econometrics using Mexico s Progresa program as a case study (with thanks to Clair Null, author of 2008 Notes) Sept. 15, 2009 Not All Correlations
More information26:010:557 / 26:620:557 Social Science Research Methods
26:010:557 / 26:620:557 Social Science Research Methods Dr. Peter R. Gillett Associate Professor Department of Accounting & Information Systems Rutgers Business School Newark & New Brunswick 1 Overview
More informationExperimental and Quasi-Experimental designs
External Validity Internal Validity NSG 687 Experimental and Quasi-Experimental designs True experimental designs are characterized by three "criteria for causality." These are: 1) The cause (independent
More informationCarrying out an Empirical Project
Carrying out an Empirical Project Empirical Analysis & Style Hint Special program: Pre-training 1 Carrying out an Empirical Project 1. Posing a Question 2. Literature Review 3. Data Collection 4. Econometric
More informationQUASI-EXPERIMENTAL HEALTH SERVICE EVALUATION COMPASS 1 APRIL 2016
QUASI-EXPERIMENTAL HEALTH SERVICE EVALUATION COMPASS 1 APRIL 2016 AIM & CONTENTS Aim to explore what a quasi-experimental study is and some issues around how they are done Context and Framework Review
More information2013/4/28. Experimental Research
2013/4/28 Experimental Research Definitions According to Stone (Research methods in organizational behavior, 1978, pp 118), a laboratory experiment is a research method characterized by the following:
More informationDesign of Experiments & Introduction to Research
Design of Experiments & Introduction to Research 1 Design of Experiments Introduction to Research Definition and Purpose Scientific Method Research Project Paradigm Structure of a Research Project Types
More informationResearch Design. Source: John W. Creswell RESEARCH DESIGN. Qualitative, Quantitative, and Mixed Methods Approaches Third Edition
Research Design Source: John W. Creswell RESEARCH DESIGN Qualitative, Quantitative, and Mixed Methods Approaches Third Edition The Three Types of Designs Three types Qualitative research Quantitative research
More informationBIOSTATISTICAL METHODS
BIOSTATISTICAL METHODS FOR TRANSLATIONAL & CLINICAL RESEARCH Designs on Micro Scale: DESIGNING CLINICAL RESEARCH THE ANATOMY & PHYSIOLOGY OF CLINICAL RESEARCH We form or evaluate a research or research
More informationSampling for Impact Evaluation. Maria Jones 24 June 2015 ieconnect Impact Evaluation Workshop Rio de Janeiro, Brazil June 22-25, 2015
Sampling for Impact Evaluation Maria Jones 24 June 2015 ieconnect Impact Evaluation Workshop Rio de Janeiro, Brazil June 22-25, 2015 How many hours do you expect to sleep tonight? A. 2 or less B. 3 C.
More informationSurvival Skills for Researchers. Study Design
Survival Skills for Researchers Study Design Typical Process in Research Design study Collect information Generate hypotheses Analyze & interpret findings Develop tentative new theories Purpose What is
More informationMBACATÓLICA JAN/APRIL Marketing Research. Fernando S. Machado. Experimentation
MBACATÓLICA JAN/APRIL 2006 Marketing Research Fernando S. Machado Week 5 Experimentation 1 Experimentation Experimental Research and Causality Experimental Designs Lab versus Field Experiments Test Marketing
More informationIdentifying Peer Influence Effects in Observational Social Network Data: An Evaluation of Propensity Score Methods
Identifying Peer Influence Effects in Observational Social Network Data: An Evaluation of Propensity Score Methods Dean Eckles Department of Communication Stanford University dean@deaneckles.com Abstract
More informationCritical dilemmas in the methodology of economics facing the crisis
Critical dilemmas in the methodology of economics facing the crisis Alessio Moneta Institute of Economics Scuola Superiore Sant Anna, Pisa amoneta@sssup.it 28 November 2014 Seminar Series Critical economics
More informationMeasuring Impact. Conceptual Issues in Program and Policy Evaluation. Daniel L. Millimet. Southern Methodist University.
Measuring mpact Conceptual ssues in Program and Policy Evaluation Daniel L. Millimet Southern Methodist University 23 May 2013 DL Millimet (SMU) Measuring mpact May 2013 1 / 25 ntroduction Primary concern
More information