Threats to Validity in Experiments. The John Henry Effect


Office Hours: M 11:30-12:30, W 10:30-12:30, SSB 447

Housekeeping
- Homework 3 due Tuesday
- A great paper, but far from an experiment
- Midterm review guide, part 2, will be sent around tonight
- No terms will be on the midterm that are not on the midterm review guides

Sample midterm answer
Term: Ordinal measure
Definition: A measure in which the values indicate only the order of cases.
Significance: Ordinal measures allow less-than or greater-than comparisons, but the distance between points on the scale may vary: the distance between 1 and 2 on an ordinal scale may differ from the distance between 3 and 4 on the same scale. Ordinal measures contain less information than interval or ratio measures.
Example: If the concept being measured is annual income, an ordinal scale could take a value of 0 for incomes under $10,000 per year, 1 for incomes of $10,000-25,000, and 2 for incomes over $25,000 per year.
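The income bracketing in that example can be sketched in code. A minimal illustration (the bracket cutoffs follow the example above; the function and variable names are invented for this sketch):

```python
def income_to_ordinal(income):
    """Map annual income in dollars to an ordinal code: 0, 1, or 2."""
    if income < 10_000:
        return 0
    elif income <= 25_000:
        return 1
    else:
        return 2

incomes = [8_000, 15_000, 40_000]
codes = [income_to_ordinal(x) for x in incomes]  # [0, 1, 2]

# Order comparisons are meaningful (0 < 1 < 2), but differences are not:
# the "distance" between codes 0 and 1 need not equal that between 1 and 2.
```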

Treatment = Independent Variable
- Might be a dummy variable: tutoring vs. no tutoring
- Might be a ratio variable: number of hours of tutoring
- Might be a categorical variable: nothing vs. tutoring vs. online study vs. lucky rabbit's foot
A dummy variable means two groups, treatment and control; the other variable types will have multiple groups.
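A quick sketch of how the three treatment-variable types might be represented (all data and names here are hypothetical):

```python
# Dummy (two groups): 1 = tutoring, 0 = no tutoring
treatment_dummy = [1, 0, 1, 0]

# Ratio (continuous "dose"): hours of tutoring received
treatment_hours = [0.0, 2.5, 5.0, 0.0]

# Categorical (multiple groups): one label per condition
treatment_category = ["nothing", "tutoring", "online_study", "rabbits_foot"]

# A dummy variable yields exactly two groups; a categorical variable
# yields one group per level.
n_groups_dummy = len(set(treatment_dummy))            # 2
n_groups_categorical = len(set(treatment_category))   # 4
```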

Threats to internal validity in experiments (1)
Selection bias (in assignment)
- As soon as you set up a randomization, subjects go to work trying to break it. (Ex: Progresa)
Differential attrition
- People drop out of one group for different reasons than they drop out of the other.
- Hypothesis: People begin to like classical music if they listen to it all the time.
- Treatment group: has to listen to classical music for 1 hour a day for a month.
- Control group: listens to whatever music they want.
- Which group would experience more attrition? What type of people would drop out of the treatment group? How would that affect your findings?
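The classical-music example can be made concrete with a toy simulation. This is a hypothetical sketch, not data from any study: assume the treatment truly has zero effect on liking classical music, but treatment-group subjects who dislike the music drop out before the post-test. Comparing completers only then produces a spurious positive "effect".

```python
import random

random.seed(0)

def simulate(n=1000):
    # Liking scores drawn from the same distribution in both groups:
    # the true treatment effect is zero.
    treatment = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    # Differential attrition: treatment subjects who dislike the music
    # (low scores) quit before the post-test; controls all stay.
    treatment_completers = [x for x in treatment if x > -0.5]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_completers) - mean(control)

observed_gap = simulate()
# observed_gap comes out clearly positive even though the true effect is
# zero: only music-tolerant subjects remained in the treatment group.
```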

Threats to internal validity in experiments (2)
Endogenous change: subjects change between pretest and post-test for reasons other than the treatment
- Testing (see the Solomon four-group design)
- Maturation
- Regression toward the mean
History effects / external events: other events (besides the treatment) occur between pretest and post-test
Neither is a problem in a true experiment, where the context for both groups is the same.
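Regression toward the mean can likewise be shown with a toy simulation (all numbers hypothetical): when subjects are selected because of extreme pretest scores, their post-test scores drift back toward the mean even with no treatment at all.

```python
import random

random.seed(1)

def simulate(n=5000):
    # Each score = stable ability + transient noise; no treatment anywhere.
    ability = [random.gauss(0, 1) for _ in range(n)]
    pretest = [a + random.gauss(0, 1) for a in ability]
    posttest = [a + random.gauss(0, 1) for a in ability]
    # Select subjects with extreme (very low) pretest scores.
    selected = [(pre, post) for pre, post in zip(pretest, posttest)
                if pre < -1.5]
    mean = lambda xs: sum(xs) / len(xs)
    pre_mean = mean([p for p, _ in selected])
    post_mean = mean([q for _, q in selected])
    return pre_mean, post_mean

pre_mean, post_mean = simulate()
# post_mean sits well above pre_mean: the low scorers "improved" without
# any intervention, because their extreme pretest scores were partly noise.
```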

Threats to internal validity in experiments (3)
Contamination effects
- Diffusion: particularly problematic where the treatment is some form of information or education
- Demoralization
- Compensatory rivalry, AKA the John Henry Effect

Threats to internal validity in experiments (4)
Treatment misidentification: the treatment isn't what we think it is
- Hawthorne effect: What did researchers think the treatment was? What else was part of the treatment?
- Experimenter expectations: the need (in some cases) for double-blind experiments
Example: voter education in Georgia (the country)
- Villages are randomly assigned to treatment and control.
- Educators go door to door in the treatment-group villages and provide residents with information on the election date, the issues, and the candidates. No one goes door to door in the control-group villages.
- Results: treatment villages experience lower voter turnout than the control villages.
- What did researchers think the treatment was? What else might have been part of the treatment, and how might this have negatively affected voter turnout?

Selection bias in sampling
- Experimenters often overstate the external validity of their results.
- Experiment participants are rarely a true random sample of the population.
- Key question: How do people who are willing to participate in the study differ from those who are unwilling to participate?

Group Work
Groups of 2-3, one paper per group. Put each student's name on the paper.
My theory is that, for families in Haiti, having a parent migrate to the US increases educational attainment by the children left back home.
IV = parent migrated to the US or not (dummy variable)
DV = years of schooling completed by the child
1. Describe (in 1-3 sentences) a causal mechanism that would produce a positive effect of parental migration on child educational attainment.
2. Describe (in 1-3 sentences) a causal mechanism that would produce a negative effect of parental migration on child educational attainment.
3. Name at least two extraneous variables that might confound our results if we used a cross-sectional design with observational data:
   a) One must be observable and measurable.
   b) One must be unobservable.
4. Imagine we were able to run a true experiment with two years between treatment and post-test.
   a) How would we draw our sample and assign our groups?
   b) What is the treatment condition and what is the control condition?
   c) Discuss the possibility of selection bias, differential attrition, and endogenous change.
5. Can you think of any as-if randomization we might exploit to approximate a randomized control trial?