Quantitative Methods. Lonnie Berger. Research Training Policy Practice

Defining Quantitative and Qualitative Research. Quantitative methods: systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques, generally with the purpose of uncovering generalizable and/or causal patterns or relations; can be descriptive or explanatory in nature; emphasizes objective measurement, highly structured data collection and analytic techniques, and data that are analyzed in numerical form. Qualitative methods: systematic empirical investigation that can be exploratory or explanatory in nature and is intended to gain understanding of (often) subjective phenomena (e.g., underlying reasons, ideas, opinions, and motivations) that may be difficult to directly observe and/or quantify; data collection tends to be semi-structured, often with a nonrandom sample of limited size, and data are typically analyzed in textual form (though some aspects may be counted/quantified).

Quantitative Research. Quantitative research is most often focused on describing or explaining a phenomenon in terms of direction, magnitude, and precision of estimation. Descriptive: identifying and quantifying trends, group differences, correlations, and associations. Explanatory: implies identifying and quantifying causal effects. Language is important: causal language (effects, impacts, etc.) should only be used in the context of rigorous causal identification; descriptive language (correlations, associations) should be the default. Determining whether a relation is correlational or causal is crucial for informing policy.

What do we mean by causality? An increased probability of some outcome as a direct result of some condition (treatment). What is a causal effect? Ideally, the difference in some outcome for the exact same individual observed in two different conditions at the exact same time, all else equal. Rubin-Rosenbaum(-Holland) theory (in short): assume a causal variable or treatment with two possible values, experimental (E) and control (C), and an outcome (Y). If an individual receives E, we observe Y_i(E); if the individual receives C, we observe Y_i(C). The causal effect of the experimental treatment for that person (relative to the control condition) is the difference between these two potential outcomes: δ_i = Y_i(E) − Y_i(C).
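To make the notation concrete, here is a minimal simulation sketch (not from the original slides; all values are hypothetical) in Python that generates both potential outcomes for a few individuals and computes each individual causal effect δ_i = Y_i(E) − Y_i(C). In real data, only one of the two potential outcomes would ever be observed for a given person.

import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical potential outcomes for individuals i = 1..n
y_c = rng.normal(loc=10, scale=2, size=n)   # Y_i(C): outcome under the control condition
delta = rng.normal(loc=3, scale=1, size=n)  # individual-specific treatment effects
y_e = y_c + delta                           # Y_i(E): outcome under the experimental condition

# Individual causal effects, computable only because this is a simulation
print("delta_i =", y_e - y_c)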

Rubin-Rosenbaum(-Holland) theory: key take-aways. The causal effect (δ_i) is defined uniquely for each individual (i.e., the impact of the treatment varies across individuals); that is, it is the difference between how that individual would respond under a given condition versus under another condition, and it therefore varies across individuals. The causal effect cannot be observed (or directly computed) because an individual can only be assigned to one condition (the counterfactual outcome is the one that is not observed). It must be possible to imagine a scenario in which the individual could have received either E or C; thus, a fixed attribute of an individual cannot typically be a cause (we cannot realistically know how a particular girl would be different in some regard if she were a boy, etc.).

Causal estimation (1). According to RRH, the problem associated with causal inference is basically a missing data problem: we only observe one potential outcome per individual (if we observed both, we could simply calculate the causal effect), such that the counterfactual is always missing. The primary barrier to producing causal estimates is establishing a plausible and observable counterfactual. Randomized studies ensure that the missing counterfactual is missing completely at random, because the decision about which outcome will be observed (Y_i(E) or Y_i(C)) is determined by chance alone, assuming randomization worked and the sample size is large enough.
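As an illustration of this point (a minimal simulation sketch, not part of the original slides), the code below randomly determines which potential outcome is revealed for each individual; because the counterfactual is then missing completely at random, the simple E-C difference in observed means recovers the true average causal effect up to sampling error.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

y_c = rng.normal(10, 2, size=n)        # Y_i(C)
y_e = y_c + rng.normal(3, 1, size=n)   # Y_i(E); the true average effect is 3

# Random assignment: each person reveals only one potential outcome
treated = rng.binomial(1, 0.5, size=n).astype(bool)
y_obs = np.where(treated, y_e, y_c)

ate_true = np.mean(y_e - y_c)                              # knowable only in a simulation
ate_hat = y_obs[treated].mean() - y_obs[~treated].mean()   # difference in observed group means
print(ate_true, ate_hat)                                   # the two should be very close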

Causal estimation (2). In a perfectly implemented randomized experiment, we cannot estimate the causal effect for each individual, but we can calculate an unbiased estimate of the average causal effect: δ̄ = E[δ_i] = E[Y_i(E) − Y_i(C)], the population average difference between potential outcomes (i.e., the difference between the sample means of the E and C groups). The E-C difference is an intent-to-treat effect and may be heavily influenced by take-up; we therefore often also estimate a treatment-on-the-treated effect. Randomization is often impossible; thus, we often rely on quasi-experimental methods, such as natural experiments, for causal inference. The focus is on isolating the effect of the treatment variable by establishing an appropriate counterfactual.
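One common way to move from the intent-to-treat contrast to a treatment-on-the-treated effect when take-up is incomplete is the Bloom/Wald adjustment: divide the ITT estimate by the take-up rate among those assigned to treatment. The sketch below uses hypothetical values and assumes one-sided noncompliance (no one assigned to C receives the treatment); it is one illustration, not the only estimator used in practice.

import numpy as np

rng = np.random.default_rng(2)
n = 100_000

assigned = rng.binomial(1, 0.5, size=n).astype(bool)   # random assignment to E
takes_up = assigned & (rng.random(n) < 0.6)            # only 60% of those assigned to E take up

y_c = rng.normal(10, 2, size=n)
effect = 3.0                                           # effect accrues only to those actually treated
y_obs = y_c + effect * takes_up

itt = y_obs[assigned].mean() - y_obs[~assigned].mean()   # intent-to-treat effect (diluted by take-up)
take_up_rate = takes_up[assigned].mean()
tot = itt / take_up_rate                                 # Bloom/Wald treatment-on-the-treated estimate
print(itt, tot)                                          # ITT is near 1.8; TOT is near 3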

Observational studies. In observational (non-experimental) studies, we are worried about confounders: pretreatment characteristics that are related to both the propensity to receive a treatment and the potential outcome(s). This is sometimes characterized as a social selection or endogeneity problem. Valid causal inference requires that the propensity to receive the treatment is independent of the potential outcomes conditional on observable characteristics; in this way, we can presumably isolate the effect of the treatment. Randomized studies ensure independence; nonrandomized studies do not. In non-randomized studies, knowledge of the predictors of the propensity to receive the treatment is essential for generating causal estimates. Thus, in observational studies it is the researcher's burden to show that relevant confounders have been accounted for.
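The following sketch (hypothetical, with a single observed binary confounder x) illustrates the problem and one simple fix: when selection into treatment depends on x and x also affects the outcome, the raw E-C comparison is biased, whereas comparing treated and untreated cases within levels of x (i.e., conditioning on the observable confounder) recovers the true effect.

import numpy as np

rng = np.random.default_rng(3)
n = 200_000

x = rng.binomial(1, 0.5, size=n)              # observed pretreatment confounder
p_treat = np.where(x == 1, 0.8, 0.2)          # propensity to receive treatment depends on x
treated = rng.random(n) < p_treat

y = 10 + 3 * treated + 5 * x + rng.normal(0, 1, size=n)   # true treatment effect is 3

naive = y[treated].mean() - y[~treated].mean()             # biased upward by confounding (near 6)

# Stratify on x and average the within-stratum differences (the two strata are equal-sized here)
within = [y[treated & (x == v)].mean() - y[~treated & (x == v)].mean() for v in (0, 1)]
adjusted = np.mean(within)
print(naive, adjusted)                                     # adjusted estimate is near 3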

Non-experimental approaches. Things to think about: What kinds of variation can you leverage (between-unit or within-unit; exogenous/random or only endogenous)? What kinds of confounders can you adjust for (observable; unobservable)? Approaches: controlling for observable confounders; adjusting for unobservables (often with longitudinal data or multiple observations within a unit, such as family, state, year, etc.); leveraging exogenous variation via a natural experiment; using multiple identification strategies and sensitivity analyses/robustness checks. See Duncan et al. (2004); Berger et al. (2009); Berger et al. (2017) on the website.
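As one example of adjusting for unobservables with multiple observations per unit, here is a minimal fixed-effects sketch on assumed, simulated data: demeaning the outcome and treatment within units removes any time-invariant unobserved unit-level confounder, while a naive pooled comparison remains biased.

import numpy as np

rng = np.random.default_rng(4)
n_units, n_periods = 5_000, 4
n = n_units * n_periods

unit = np.repeat(np.arange(n_units), n_periods)
alpha = np.repeat(rng.normal(0, 5, size=n_units), n_periods)   # unobserved, time-invariant unit effect

# Treatment take-up is correlated with the unobserved unit effect (selection on unobservables)
treated = (rng.random(n) < 1 / (1 + np.exp(-alpha))).astype(float)
y = 10 + 3 * treated + alpha + rng.normal(0, 1, size=n)        # true treatment effect is 3

def demean(v):
    # Subtract each unit's own mean: wipes out any time-invariant unit-level confounder
    means = np.bincount(unit, weights=v) / np.bincount(unit)
    return v - means[unit]

y_w, t_w = demean(y), demean(treated)
beta_fe = (t_w * y_w).sum() / (t_w * t_w).sum()            # within-unit (fixed-effects) estimate
naive = y[treated == 1].mean() - y[treated == 0].mean()    # pooled comparison, biased by alpha
print(naive, beta_fe)                                      # naive is well above 3; FE is near 3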