Internal Validity and Experimental Design

Internal Validity and Experimental Design
February 18

Outline
1. Share examples from TESS
2. Talk about how to write experimental designs
3. Internal validity: why randomization works; threats to internal validity

Examples from TESS
What was the most interesting study you found on TESS?
What was the topic (outcome concept and research question)?
What was the design?

Protocol: writing up experimental designs
Once we know our hypotheses, the experimental conditions are easy.

Examples
What experimental conditions do we need?
1. Individuals exposed to expert endorsements are more likely to support a policy than individuals exposed to partisan endorsements.
2. Providing conditional cash transfers to women in rural Uganda is more effective at increasing their children's educational attainment than either microfinance loans to start businesses or unconditional grants of cash or goods (e.g., food).
3. A public health intervention is more effective for native speakers of Danish than for nonnative speakers of Danish.

Protocol: writing up experimental designs
But there are still lots of decisions to make!

Random assignment
Why do we need it? Why can't we just compare t2 − t1 (pre/post) changes?

Random assignment
Breaks the selection process. This has benefits:
1. Balances covariates between groups
2. Balances potential outcomes between groups
3. No confounding
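The balancing property is easy to see by simulation: with a purely hypothetical pre-treatment covariate (age), the treatment and control group means differ only by chance, and the gap shrinks as n grows. A minimal sketch in Python:

```python
import random

random.seed(1)
n = 10_000
age = [random.gauss(40, 10) for _ in range(n)]  # hypothetical pre-treatment covariate

# Complete random assignment: shuffle unit indices, treat exactly half
idx = list(range(n))
random.shuffle(idx)
treat = set(idx[: n // 2])

mean_t = sum(age[i] for i in treat) / (n // 2)
mean_c = sum(age[i] for i in range(n) if i not in treat) / (n - n // 2)
print(abs(mean_t - mean_c))  # small, and shrinks toward 0 as n grows
```

The same argument applies to the potential outcomes themselves, which is why randomization licenses the causal comparison.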

"Perfect Doctor"
True potential outcomes (unobservable in reality)

Unit  Y(0)  Y(1)
1     13    14
2     6     0
3     4     1
4     5     2
5     6     3
6     6     1
7     8     10
8     8     9
Mean  7     5

"Perfect Doctor"
Observational data with strong selection bias

Unit  Y(0)  Y(1)
1     ?     14
2     6     ?
3     4     ?
4     5     ?
5     6     ?
6     6     ?
7     ?     10
8     ?     9
Mean  5.4   11
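The two tables make the danger concrete: the true average treatment effect is 5 − 7 = −2 (the treatment is harmful on average), yet the naive comparison of observed means is 11 − 5.4 = +5.6. A minimal sketch that reproduces both numbers from the tables:

```python
# "Perfect Doctor" example: the doctor gives each unit whichever
# treatment is best for that unit, so the observed groups are
# strongly self-selected.
y0 = [13, 6, 4, 5, 6, 6, 8, 8]      # potential outcomes under control
y1 = [14, 0, 1, 2, 3, 1, 10, 9]     # potential outcomes under treatment
treated = [1, 0, 0, 0, 0, 0, 1, 1]  # the doctor's (selected) assignment

# True average treatment effect: needs both potential outcomes
ate = sum(y1) / len(y1) - sum(y0) / len(y0)

# Naive difference in observed means under selection
obs_t = [y1[i] for i in range(8) if treated[i] == 1]
obs_c = [y0[i] for i in range(8) if treated[i] == 0]
naive = sum(obs_t) / len(obs_t) - sum(obs_c) / len(obs_c)

print(ate)    # -2.0: harmful on average
print(naive)  # 5.6: selection makes it look beneficial
```

Selection does not just bias the estimate; here it flips its sign.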

Random assignment
We have to do it. But how do we randomize?
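Three standard answers can be sketched in a few lines each (Python; the strata variable is a hypothetical blocking covariate, not from the slides): simple randomization (independent coin flips), complete randomization (fixed group sizes), and block randomization (complete randomization within strata).

```python
import random

units = list(range(8))
strata = ["F", "F", "F", "F", "M", "M", "M", "M"]  # hypothetical blocking variable
random.seed(0)

# Simple: an independent coin flip per unit (group sizes vary)
simple = [random.randint(0, 1) for _ in units]

# Complete: shuffle, then treat exactly half (group sizes fixed)
shuffled = units[:]
random.shuffle(shuffled)
complete = set(shuffled[: len(units) // 2])

# Blocked: complete randomization within each stratum
blocked = {}
for s in sorted(set(strata)):
    members = [u for u in units if strata[u] == s]
    random.shuffle(members)
    for u in members[: len(members) // 2]:
        blocked[u] = 1
    for u in members[len(members) // 2:]:
        blocked[u] = 0
```

Blocking guarantees exact balance on the blocking variable in every realized assignment, not just in expectation.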

Definition
The observation of units after, and possibly before, a randomly assigned intervention in a controlled setting, which tests one or more precise causal expectations.

Design considerations
Single-factor versus crossed designs
Control groups
Pretest measurement
Crossover (within-subjects) designs
Follow-up outcome measurement
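For a crossed design, the condition list is just the Cartesian product of the factor levels. A minimal sketch (the two factors here are hypothetical, echoing the endorsement example earlier):

```python
from itertools import product

# Hypothetical factors: endorsement source crossed with message frame
source = ["expert", "partisan"]
frame = ["economic", "moral", "none"]

conditions = list(product(source, frame))
print(len(conditions))  # 6 cells in a 2 x 3 crossed design
for cell in conditions:
    print(cell)
```

A single-factor version would instead use only one of these lists, with one condition per level.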

Threats to validity
"Falsificationist" strategy
No experiment is perfect
SCC pp. 41-42

Threats to internal validity
1. Ambiguous temporal precedence
2. Selection
3. History
4. Maturation
5. Regression
6. Attrition
7. Testing (exposure to the test affects subsequent scores; measurement itself has an effect)
8. Instrumentation
SCC Table 2.4 (p. 55)

Internal validity in experiments
Which of these threats is solved by randomized experimentation? All but attrition, testing, and instrumentation.
Assuming a well-designed protocol (which holds instrumentation constant), we just have to deal with testing and attrition.

Threats to statistical conclusion validity
1. Low power
2. Violated statistical assumptions
3. Fishing
4. Measurement error
5. Restriction of range
6. Protocol violations
7. Loss of control
8. Unit heterogeneity (on the DV)
9. Statistical artefacts
SCC Table 2.2 (p. 45)
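The first threat, low power, can be assessed by simulation before fielding the study. A minimal sketch using only the standard library (the effect size and sample size are hypothetical):

```python
import random
import statistics

def power_sim(n_per_arm=50, effect=0.5, sims=2000, z_crit=1.96):
    """Share of simulated two-arm experiments whose two-sample z-test rejects."""
    hits = 0
    for _ in range(sims):
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        treated = [random.gauss(effect, 1) for _ in range(n_per_arm)]
        se = (statistics.variance(control) / n_per_arm
              + statistics.variance(treated) / n_per_arm) ** 0.5
        z = (statistics.fmean(treated) - statistics.fmean(control)) / se
        if abs(z) > z_crit:
            hits += 1
    return hits / sims

random.seed(42)
print(power_sim())  # roughly 0.70 for d = 0.5, n = 50 per arm
```

If the simulated power is well below the conventional 0.80 target, increase n per arm (or strengthen the treatment) before running the study.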

Measurement and operationalization
Content validity: does it include everything it is supposed to measure?
Construct validity: does the instrument actually measure the particular dimension of interest?
Predictive validity: does it predict what it is supposed to?
Face validity: does it make sense?

Pretesting
The best way to figure out whether a measure or a treatment serves its intended purpose is to pretest it before implementing the full study.

Ethics of randomization
Equipoise*
Treatment preferences**
When to randomize
* Freedman; SCC pp. 272-273
** SCC pp. 273-274

Next week
Continue our discussion of designs
Talk about experimental analysis
No example study for next week (because there is a lot to cover in class)
Do not read the Splawa-Neyman text; it's just there in case you're really interested in the statistics of experiments