Threats to validity in intervention studies: potential problems and issues to consider in planning

An important distinction (credited to Campbell & Stanley, 1963):
- Threats to internal validity: problems we might encounter in this particular study. Was it really the new program (the intervention) that caused a difference, or was it something else?
- Threats to external validity: problems that might limit the generalizability of the results. Okay, the program seemed to work in this study, but would it work elsewhere? Next time? With new students, teachers, etc.?

Threats to internal validity:
- Outside events (history)
- Attrition
- Maturation
- Selection
- Practice (testing)
- Instrumentation
- Regression (statistical regression)
- Selection-maturation
- Experimenter effects
- Subject expectations
- Diffusion

Outside events (history)
- Events that happen out in the real world might influence the outcome we are studying, even though they are not part of our new treatment (e.g., news events; the Sesame Street example).
- There are almost always many factors influencing things, so other factors can affect our participants.
- Control groups can help deal with this.

Attrition (drop-outs)
- A big issue in longitudinal studies; college studies are a classic example.
- Differential attrition can make new programs look good (e.g., if weaker students drop out) or look bad (e.g., if weaker students are retained, or if weaker students drop out of the control group).
- Not always a problem in short-term studies.

Maturation
- Normal development may occur while the new treatment is being studied.
- An important issue with young children.
- Comparison groups can help.

Selection effects
- In comparative studies: how are people selected for the treatment and non-treatment conditions? Volunteers? Intact groups? What biases might be present?
- This is the classic problem we encounter when we do not have a randomized experiment and have to use an observational study or a quasi-experimental design.
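The volunteer problem can be sketched with a toy simulation (all quantities are invented for illustration). Here motivation both raises the outcome and makes people more likely to enroll, while the program itself does nothing; the naive group comparison still favors the program:

```python
import random

random.seed(2)

# Hypothetical: motivation boosts the outcome by 10 points AND makes
# people more likely to volunteer. The program itself has zero effect.
def person():
    motivated = random.random() < 0.5
    outcome = random.gauss(50, 8) + (10 if motivated else 0)
    p_volunteer = 0.8 if motivated else 0.2
    return outcome, random.random() < p_volunteer

people  = [person() for _ in range(10_000)]
prog    = [o for o, v in people if v]        # self-selected into the program
no_prog = [o for o, v in people if not v]

mean = lambda xs: sum(xs) / len(xs)
print(f"program group mean:    {mean(prog):.1f}")
print(f"no-program group mean: {mean(no_prog):.1f}")
# The program group scores higher even though the program did nothing:
# the comparison is confounded by who chose to enroll.
```

Random assignment breaks the link between motivation and group membership, which is exactly the bias this sketch demonstrates.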

Practice effects
- Can occur with tests; also a potential problem with interviews, observations, and other situations.
- Can be tricky to control.

Instrumentation: changes in the instruments
- Different tests
- Different questions (in interviews or questionnaires)
- Different interviewers
- Different observers
- Different scoring systems
- Different raters

Regression to the mean (regression artifacts)
- High scores tend to be not quite as high on a second measurement; low scores tend to be not quite so low.
- Only an issue in special circumstances, but it can have important effects (the Head Start example).
- A nice discussion of regression to the mean: http://onlinestatbook.com/2/regression/regression_toward_mean.html
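A short simulation shows why this matters for programs that select extreme scorers (the numbers here are arbitrary, chosen only to illustrate the artifact). Each observed score is a stable ability plus independent measurement noise; students picked for their low pretest scores improve on the posttest even with no intervention at all:

```python
import random

random.seed(1)

# Each observed score = stable "true ability" + independent noise.
ability  = [random.gauss(100, 10) for _ in range(10_000)]
pretest  = [t + random.gauss(0, 10) for t in ability]
posttest = [t + random.gauss(0, 10) for t in ability]

# Select the lowest-scoring 10% at pretest (e.g., admitted to a
# hypothetical remedial program that is never actually delivered).
cutoff = sorted(pretest)[len(pretest) // 10]
low = [i for i, s in enumerate(pretest) if s <= cutoff]

pre_mean  = sum(pretest[i]  for i in low) / len(low)
post_mean = sum(posttest[i] for i in low) / len(low)

print(f"pretest mean of low group:  {pre_mean:.1f}")
print(f"posttest mean of low group: {post_mean:.1f}")
# The low group's mean rises with NO treatment: part of their low
# pretest was unlucky noise that does not repeat at posttest.
```

This is the mechanism behind the Head Start critique: a selected-low group will show gains from regression alone, so a comparison group selected the same way is needed.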

Selection-maturation interaction
- Even if two groups appear to be similar at the beginning of the study (at pretest), they may be growing at different rates. (Or their future growth may be different.)
- This can be a challenge for quasi-experimental designs.
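A worked example with invented growth rates shows how this mimics a treatment effect. Two intact groups are matched at pretest, but one happens to be maturing faster; a year later the gap looks like a program effect even though no program did anything:

```python
# Hypothetical growth rates: two intact classrooms matched at pretest,
# but group A (which gets the program) happens to be growing faster.
pretest  = 50.0        # both groups start at the same mean
growth_a = 8.0         # points gained per year, group A
growth_b = 5.0         # points gained per year, group B (comparison)

post_a = pretest + growth_a    # one year later
post_b = pretest + growth_b

apparent_effect = post_a - post_b
print(f"apparent 'treatment effect': {apparent_effect:.1f} points")
# 3.0 points, even though the program itself did nothing: the groups
# were matched at pretest but maturing at different rates.
```

This is why a single matched pretest is weak evidence in quasi-experiments; designs with multiple pretests can reveal differing growth trajectories before the treatment begins.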

Expectancy effects
- Experimenter effects: even unintentional influences can play a role, and one group may be influenced more than another.
- Subject expectations (participant expectations): Hawthorne effects (just being in a study can be an influence); knowing about the new treatment may be an influence; compensatory rivalry in the control group; demoralized controls (knowing you are in the control group may be an influence); placebo effects in medical trials.
- How can we control these?

Diffusion of treatment
- Some of the new treatment ideas might diffuse (spread) to others: teachers may share ideas with other teachers, so the control group is not quite what you wanted.
- People may find alternative ways to gain access to a new program or idea. (This might be desirable in some ways, but it can certainly be a limitation for research studies.)

Treatment implementation
- Was the new treatment really implemented? (What was the fidelity of the treatment implementation?)
- Sometimes the control group implements some of the new ideas (treatment diffusion); other times the treatment classrooms may not fully implement the new ideas.
- A tricky issue in many studies, especially those that take place over long periods of time: things can get better over time, or they might get worse when the novelty wears off.

Threats to external validity:
- Limited samples
- Special arrangements
- Expectancy effects
- Sensitization
- Multiple treatments