
Propensity Score Analysis and Strategies for Its Application to Services Training Evaluation
Shenyang Guo, Ph.D.
School of Social Work, University of North Carolina at Chapel Hill
June 14, 2011
For the 14th Annual National Human Services Training Evaluation Symposium, Cornell University

Overview
1. Types of evaluation problems
2. Why and when propensity score analysis is needed
3. Conceptual frameworks and assumptions
4. Overview of corrective methods
5. Greedy matching
6. Strategies to apply PSA to human services training evaluation

1. Types of evaluation problems

Three Different Types of Evaluation Problems
1. Policy evaluation
2. Program evaluation
3. Training evaluation

Policy Evaluation
Heckman (2005, pp. 7-9) distinguished three broad classes of policy evaluation questions:
1. evaluating the impact of previous interventions on outcomes (internal validity);
2. forecasting the impacts of interventions implemented in one environment for other environments (external validity); and
3. forecasting the impacts of interventions never historically experienced for other environments (using history to forecast the consequences of new policies).

Program Evaluation The field of program evaluation is distinguished principally by cause-effect studies that aim to answer a key question: To what extent can the net difference observed in outcomes between treated and nontreated groups be attributed to an intervention, given that all other things are held constant?

Program Evaluation: Internal Validity and Threats
Internal validity is the validity of inferences about whether the relationship between two variables is causal (Shadish, Cook, & Campbell, 2002). In program evaluation, and in observational studies in general, researchers are concerned about threats to internal validity: factors other than the intervention or focal stimuli that affect outcomes. There are nine types of threats,* and selection bias is the most problematic one!
*These include differential attrition, maturation, regression to the mean, instrumentation, testing effects, and more.

Fisher's Randomized Experiment: A Gold Standard of Program Evaluation
"A theory of observational studies must have a clear view of the role of randomization, so it can have an equally clear view of the consequences of its absence" (Rosenbaum, 2002). Fisher's book, The Design of Experiments (1935/1971), introduced the principles of randomization. The randomized clinical trial is a gold standard.

Selection Bias in Evaluation of Quasi-Experimental Programs
[Flow diagram: the total sample divides first by the individual's decision to participate or not to participate in the experiment, then by the administrator's decision to select or not to select, yielding treatment and control groups; members of each group then either drop out or continue.]
Source: Maddala, 1983, p. 266

Training Evaluation
Cindy Parry & Robin Leak: training evaluation concerns the extent to which knowledge, attitudes, and skills learned in a training context are applied to the job. Evaluators aim to evaluate the effectiveness of transfer of learning (TOL), that is, how learned behaviors are generalized to the job context AND maintained over a period of time. In my view, if we view training as an intervention, then training evaluation is analogous to program evaluation where randomization is infeasible. In this context, PSA is applicable.

2. Why and when propensity score analysis is needed

Why and when is propensity score analysis needed? (1)
Need 1: Remove selection bias. The randomized clinical trial is the gold standard in outcome evaluation. However, in social and health research, RCTs are not always practical, ethical, or even desirable. Under such conditions, evaluators often use quasi-experimental designs, which in most instances are vulnerable to selection bias. Propensity score models help to remove selection bias. Example: in an evaluation of the effect of Catholic versus public schooling on learning, Morgan (2001) found that the Catholic school effect is strongest among Catholic school students who are less likely to attend Catholic schools.

Why and when is propensity score analysis needed? (2)
Need 2: Analyze causal effects in observational studies. Observational data are data not generated by the mechanisms of randomized experiments, such as surveys, administrative records, and census data. To analyze such data, an ordinary least squares (OLS) regression model using a dichotomous indicator of treatment does not work, because in such a model the error term is correlated with the explanatory variables. This violation of the OLS assumption produces an inflated and asymptotically biased estimate of the treatment effect.

The Problem of Contemporaneous Correlation in Regression Analysis
Consider a routine regression equation for the outcome, Y_i:
Y_i = α + τW_i + X_iβ + e_i
where W_i is a dichotomous variable indicating intervention, and X_i is the vector of covariates for case i. In this approach, we wish to estimate the effect (τ) of treatment (W) on Y_i by controlling for observed confounding variables (X_i). When randomization is compromised or not used, the correlation between W and e may not be equal to zero. As a result, the ordinary least squares estimator of the effect of intervention (τ) may be biased and inconsistent: W is not exogenous.
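A minimal R simulation, not from the slides, makes this concrete: selection into treatment depends on an unobserved confounder u that also raises the outcome, so the coefficient on w overstates the true treatment effect of 1. All variable names here are illustrative.

set.seed(1)
n <- 10000
u <- rnorm(n)                        # unobserved confounder
x <- rnorm(n)                        # observed covariate
w <- as.numeric(u + rnorm(n) > 0)    # selection into treatment depends on u
y <- 1 * w + 0.5 * x + u + rnorm(n)  # true treatment effect tau = 1
coef(lm(y ~ w + x))["w"]             # roughly 2.1: inflated, not 1

Controlling for the observed covariate x does nothing to remove the bias, because the problem lies in the correlation between w and the error term, not in x.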

How Big Is This Problem? Very big! The majority of nonrandomized studies that have used statistical controls to balance treatment and nontreatment groups may have produced erroneous findings. Note: the amount of error in findings will be related to the degree to which the error term is NOT independent of the explanatory measures, including the treatment indicator. This problem applies to any statistical model in which independence of the error term is assumed.

Consequence of Contemporaneous Correlation: Inflated (Steeper) Slope and Asymptotic Bias
[Scatterplot of the dependent variable against the independent variable: the OLS estimating line is steeper than the line of the true relationship.]
Source: Kennedy (2003), p. 158

3. Conceptual frameworks and assumptions

The Neyman-Rubin Counterfactual Framework (1)
Counterfactual: what would have happened to the treated subjects had they not received treatment? One of the seminal developments in the conceptualization of program evaluation is the Neyman (1923) - Rubin (1978) counterfactual framework. The key assumption of this framework is that individuals selected into treatment and nontreatment groups have potential outcomes in both states: the one in which they are observed and the one in which they are not observed. This framework is expressed as:
Y_i = W_i Y_1i + (1 - W_i) Y_0i
The key message conveyed in this equation is that to infer a causal relationship between W_i (the cause) and Y_i (the outcome), the analyst cannot directly link Y_1i to W_i under the condition W_i = 1; instead, the analyst must check the outcome Y_0i under the condition W_i = 0 and compare Y_0i with Y_1i.

The Neyman-Rubin Counterfactual Framework (2)
There is a crucial problem in the above formulation: Y_0i is not observed. Holland (1986, p. 947) called this issue the "fundamental problem of causal inference." The Neyman-Rubin counterfactual framework holds that a researcher can estimate the counterfactual by examining the average outcome of the treatment participants (i.e., E(Y_1|W=1)) and the average outcome of the nontreatment participants (i.e., E(Y_0|W=0)) in the population. Because both outcomes are observable, we can then define the treatment effect as a mean difference:
τ = E(Y_1|W=1) - E(Y_0|W=0)
Under this framework, the evaluation of E(Y_1|W=1) - E(Y_0|W=0) can be thought of as an effort that uses E(Y_0|W=0) to estimate the counterfactual E(Y_0|W=1). The central interest of the evaluation is not in E(Y_0|W=0), but in E(Y_0|W=1).

The Neyman-Rubin Counterfactual Framework (3)
With sample data, evaluators can estimate the average treatment effect as:
τ̂ = Ê(y_1|w=1) - Ê(y_0|w=0)
The real debate about the classical experimental approach centers on the question: does E(Y_0|W=0) really represent E(Y_0|W=1)? In a series of papers, Heckman and colleagues criticized this assumption. Consider E(Y_1|W=1) - E(Y_0|W=0). Adding and subtracting E(Y_0|W=1), we have
{E(Y_1|W=1) - E(Y_0|W=1)} + {E(Y_0|W=1) - E(Y_0|W=0)},
that is, the treatment effect on the treated plus a selection bias term. The standard estimator provides unbiased estimation if and only if E(Y_0|W=1) = E(Y_0|W=0). In many empirical projects, E(Y_0|W=1) ≠ E(Y_0|W=0).
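A short potential-outcomes simulation, again illustrative rather than from the slides, shows this decomposition numerically: because treated units have systematically higher Y_0, the naive mean difference equals the treatment effect on the treated plus a nonzero selection bias term.

set.seed(2)
n  <- 100000
u  <- rnorm(n)
y0 <- u + rnorm(n)                   # potential outcome without treatment
y1 <- y0 + 1                         # constant treatment effect of 1
w  <- as.numeric(u + rnorm(n) > 0)   # treated units tend to have higher u
naive <- mean(y1[w == 1]) - mean(y0[w == 0])  # standard estimator
bias  <- mean(y0[w == 1]) - mean(y0[w == 0])  # E(Y0|W=1) - E(Y0|W=0)
naive - bias                         # treatment effect on the treated, about 1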

The Neyman-Rubin Counterfactual Framework (4)
Heckman & Smith (1995): four important questions.
1. What are the effects of factors such as subsidies, advertising, local labor markets, family income, race, and sex on program application decisions?
2. What are the effects of bureaucratic performance standards, local labor markets, and individual characteristics on administrative decisions to accept applicants and place them in specific programs?
3. What are the effects of family background, subsidies, and local market conditions on decisions to drop out of a program and on the length of time taken to complete a program?
4. What are the costs of various alternative treatments?

The Fundamental Assumption: Strongly Ignorable Treatment Assignment (Rosenbaum & Rubin, 1983)
(Y_0, Y_1) ⊥ W | X
Different versions: unconfoundedness and ignorable treatment assignment (Rosenbaum & Rubin, 1983), selection on observables (Barnow, Cain, & Goldberger, 1980), conditional independence (Lechner, 1999), and exogeneity (Imbens, 2004).
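For completeness, Rosenbaum and Rubin's full definition of strong ignorability pairs this conditional independence with an overlap condition (the second condition below is part of their definition, though it is not spelled out on the slide):

\[
(Y_0, Y_1) \perp W \mid X, \qquad 0 < \Pr(W = 1 \mid X) < 1 .
\]

Together these imply that, within strata of X, the nontreated cases can stand in for the missing counterfactuals of the treated cases.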

The SUTVA assumption (1)
To evaluate program effects, statisticians also make the Stable Unit Treatment Value Assumption, or SUTVA (Rubin, 1986), which says that the potential outcomes for any unit do not vary with the treatments assigned to any other units, and that there are no different versions of the treatment. Imbens (on his Web page) uses an aspirin example to interpret this assumption: the first part of the assumption says that my taking aspirin has no effect on your headache, and the second part rules out differences in outcomes due to different aspirin tablets.

The SUTVA assumption (2)
According to Rubin, SUTVA is violated when there is interference between units or there are unrepresented versions of treatments. The SUTVA assumption imposes exclusion restrictions on outcome differences. For this reason, economists underscore the importance of analyzing average treatment effects for the subpopulation of treated units, which is frequently more important than the effect on the population as a whole. This is especially a concern when evaluating a narrowly targeted program, e.g., a labor-market intervention. What statisticians and econometricians call evaluating "average treatment effects for the treated" is similar to the efficacy subset analysis found in the intervention research literature.

Two traditions (1)
There are two traditions in modeling causal effects when random assignment is not possible or is compromised: the econometric versus the statistical approach.
- The econometric approach emphasizes the structure of selection and therefore underscores direct modeling of selection bias.
- The statistical approach assumes that selection is random conditional on covariates.
- Both approaches emphasize direct control of observed covariates by using the conditional probability of receiving treatment.
- The two approaches rest on different assumptions for their correction models and differ in how restrictive those assumptions are.

Two traditions (2)
Heckman's econometric model of causality (2005), contrasted with the statistical causal model:
- Sources of randomness: implicit (statistical); explicit (econometric).
- Models of conditional counterfactuals: implicit (statistical); explicit (econometric).
- Mechanism of intervention: hypothetical randomization for determining counterfactuals (statistical); many mechanisms of hypothetical interventions, including randomization, with the mechanism explicitly modeled (econometric).
- Treatment of interdependence: recursive (statistical); recursive or simultaneous systems (econometric).
- Social/market interactions: ignored (statistical); modeled in general equilibrium frameworks (econometric).
- Projections to different populations: does not project (statistical); projects (econometric).
- Parametric?: nonparametric (statistical); becoming nonparametric (econometric).
- Range of questions answered: one focused treatment effect (statistical); in principle, answers many possible questions (econometric).
Source: Heckman (2005, p. 87)

4. Overview of Corrective Methods

Four Models Described by Guo & Fraser (2010) 1. Heckman's sample selection model (Heckman, 1976, 1978, 1979) and its revised version estimating treatment effects (Maddala, 1983)

Four Models Described by Guo & Fraser (2010) 2. Propensity score matching (Rosenbaum & Rubin, 1983), optimal matching (Rosenbaum, 2002), propensity score weighting, modeling treatment dosage, and related models

Four Models Described by Guo & Fraser (2010) 3. Matching estimators (Abadie & Imbens, 2002, 2006)

Four Models Described by Guo & Fraser (2010) 4. Propensity score analysis with nonparametric regression (Heckman, Ichimura, & Todd, 1997, 1998)

General Procedure for Propensity Score Matching Summarized by Guo & Fraser (2010)
Step 1: Logistic regression. Dependent variable: log odds of receiving treatment. Search for an appropriate set of conditioning variables (boosted regression, etc.). Estimated propensity scores: predicted probability (p) or log[(1-p)/p].
Step 2 takes one of three forms (a brief R sketch follows this outline):
- Analysis using propensity scores: analysis of weighted mean differences using kernel or local linear regression (the difference-in-differences model of Heckman et al.);
- Analysis using propensity scores: multivariate analysis using propensity scores as weights; or
- Matching: greedy matching (nearest neighbor with or without calipers), Mahalanobis matching with or without propensity scores, or optimal matching (pair matching, matching with a variable number of controls, full matching).
Step 3: Post-matching analysis. Multivariate analysis based on the matched sample, or stratification (subclassification) based on the matched sample.
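As orientation, here is a minimal sketch of the matching route through these steps using the R package MatchIt (one of the packages catalogued in the next slide). The variables treat, age, educ, income, outcome, and the data frame mydata are hypothetical, and a recent version of MatchIt is assumed.

library(MatchIt)

# Steps 1-2: logistic regression of treatment on conditioning variables,
# then greedy 1-to-1 nearest-neighbor matching within a caliper of
# .25 SD of the estimated propensity score.
m.out <- matchit(treat ~ age + educ + income, data = mydata,
                 method = "nearest", distance = "glm", caliper = 0.25)

# Step 3: post-matching multivariate analysis on the matched sample.
matched <- match.data(m.out)
summary(lm(outcome ~ treat + age + educ + income, data = matched))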

Computing Software Packages for Running the Four Models (Stata & R)

Chapter 4: Heckman (1978, 1979) sample selection model & Maddala (1983) treatment effect model.
- Stata: heckman, treatreg (StataCorp, 2003)
- R: sampleSelection (Toomet & Henningsen, 2008)

Chapter 5:
- Rosenbaum & Rubin's (1983) propensity score matching. Stata: psmatch2 (Leuven & Sianesi, 2003). R: cem (Dehejia & Wahba, 1999; Iacus, King, & Porro, 2008), Matching (Sekhon, 2007), MatchIt (Ho, Imai, King, & Stuart, 2004), PSAgraphics (Helmreich & Pruzek, 2008), WhatIf (King & Zeng, 2006; King & Zeng, 2007), USPS (Obenchain, 2007)
- Generalized boosted regression. Stata: boost (Schonlau, 2007). R: gbm (McCaffrey, Ridgeway, & Morral, 2004)
- Optimal matching (Rosenbaum, 2002a). R: optmatch (Hansen, 2007)
- Post-matching covariate imbalance check (Haviland, Nagin, & Rosenbaum, 2007). Stata: imbalance (Guo, 2008a)
- Hodges-Lehmann aligned rank test after optimal matching (Haviland, Nagin, & Rosenbaum, 2007; Lehmann, 2006). Stata: hl (Guo, 2008b)

Chapter 6: Abadie & Imbens (2002, 2006) matching estimators.
- Stata: nnmatch (Abadie, Drukker, Herr, & Imbens, 2004)
- R: Matching (Sekhon, 2007)

Chapter 7: Kernel-based matching (Heckman, Ichimura, & Todd, 1997, 1998).
- Stata: psmatch2 (Leuven & Sianesi, 2003)

Chapter 8: Rosenbaum's (2002a) sensitivity analysis.
- Stata: rbounds (Gangl, 2007)
- R: rbounds (Keele, 2008)

Other Corrective Models
- Regression discontinuity designs
- Instrumental variables approaches (Guo & Fraser [2010] review this method)
- Interrupted time series designs
- Bayesian approaches to inference for average treatment effects

The Companion Website of Guo & Fraser (2010) All data and syntax files of the examples used in the book are available in the following website: http://ssw.unc.edu/psa/

5. Greedy propensity score matching (Rosenbaum & Rubin, 1983)

Rosenbaum and Rubin PSM (1) Greedy matching
Nearest neighbor: C(P_i) = min_j |P_i - P_j|, j ∈ I_0. The nonparticipant with the value of P_j closest to P_i is selected as the match, and A_i is a singleton set.
Caliper: a variation of nearest neighbor. A match for person i is selected only if |P_i - P_j| < ε, j ∈ I_0, where ε is a pre-specified tolerance. Recommended caliper size: .25 σ_p (a quarter of a standard deviation of the estimated propensity scores).
1-to-1 nearest neighbor within caliper (this is a common practice)
1-to-n nearest neighbor within caliper
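To make the nearest-neighbor-within-caliper rule concrete, here is a base-R sketch of the greedy algorithm, matching without replacement; the variable names are hypothetical, and in practice one would use psmatch2 (Stata) or MatchIt (R) instead.

# Step 1: propensity scores from a logistic regression (illustrative model).
ps  <- glm(treat ~ age + educ, data = mydata, family = binomial)$fitted.values
eps <- 0.25 * sd(ps)                      # recommended caliper: .25 SD

treated   <- which(mydata$treat == 1)
avail     <- which(mydata$treat == 0)     # nonparticipant pool I_0
match_for <- rep(NA_integer_, length(treated))
for (k in seq_along(treated)) {
  if (length(avail) == 0) break           # pool exhausted
  d <- abs(ps[treated[k]] - ps[avail])    # |P_i - P_j| over j in I_0
  j <- which.min(d)
  if (d[j] < eps) {                       # accept only within the caliper
    match_for[k] <- avail[j]
    avail <- avail[-j]                    # matching without replacement
  }
}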

Rosenbaum and Rubin PSM (2) Mahalanobis metric matching
Mahalanobis without p-score: randomly ordering subjects, calculate the distance between the first participant and all nonparticipants. The distance d(i,j) can be defined by the Mahalanobis distance:
d(i,j) = (u - v)' C⁻¹ (u - v)
where u and v are the values of the matching variables for participant i and nonparticipant j, and C is the sample covariance matrix of the matching variables from the full set of nonparticipants.
Mahalanobis metric matching with p-score added (to u and v).
Nearest available Mahalanobis metric matching within calipers defined by the propensity score (this requires your own programming).
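A base-R sketch of this distance, under the same hedges about variable names; note that base R's mahalanobis() returns the squared distance.

# Matching variables for all cases (illustrative columns).
X <- as.matrix(mydata[, c("age", "educ", "income")])
C <- cov(X[mydata$treat == 0, ])     # covariance from the full set of nonparticipants
i <- which(mydata$treat == 1)[1]     # the first (randomly ordered) participant
d2 <- mahalanobis(X[mydata$treat == 0, ], center = X[i, ], cov = C)
best <- which(mydata$treat == 0)[which.min(d2)]  # nearest nonparticipant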

Rosenbaum and Rubin PSM (3) Multivariate analysis at Step 3
One may perform routine multivariate analysis. These analyses may include:
- multiple regression
- generalized linear models
- survival analysis
- structural equation modeling with multiple-group comparison, and
- hierarchical linear modeling (HLM)
As usual, we use a dichotomous variable indicating treatment versus control in these models.

Sample Syntax Running Stata psmatch2 for Greedy Matching

// Nearest neighbor within caliper (caliper size = .25*SD = .401)
psmatch2 aodserv, pscore(logit1) caliper(0.401) ///
    noreplacement descending

Here psmatch2 is the program name, aodserv is the name of the treatment variable, and logit1 is the name of the propensity score variable saved from the logistic regression.

Example of Greedy Matching (1) Research Questions
The association between parental substance abuse and child welfare system involvement is well known but little understood. This study aims to address the following questions: Are these children living in a safe environment? Does substance abuse treatment for caregivers affect the risk of child maltreatment re-report?

Example of Greedy Matching (2) Data and Study Sample
A secondary analysis of the National Survey of Child and Adolescent Well-Being (NSCAW) data. It employed two waves of NSCAW: baseline information collected between October 1999 and December 2000, and the 18-month follow-up. The sample for this study was limited to 2,758 children who lived at home (i.e., were not in foster care) and whose primary caregivers were female.

Example of Greedy Matching (3) Measures
The choice of explanatory variables (i.e., matching variables) in the logistic regression model is crucial. We chose these variables based on a review of the substance abuse literature to determine which characteristics were associated with treatment receipt. We found that these characteristics fall into four categories: demographic characteristics, risk factors, prior receipt of substance abuse treatment, and need for substance abuse services.

Example of Greedy Matching (4) Analytic Plan: 3 x 2 x 2 design = 12 matching schemes
- Three logistic regression models (each specifying a different set of matching variables);
- Two matching algorithms (nearest neighbor within caliper and Mahalanobis); and
- Two matching specifications (for nearest neighbor, two different caliper sizes; for Mahalanobis, one with and one without the propensity score as a covariate in calculating the Mahalanobis metric distances).
Outcome analysis: a survival model using the Kaplan-Meier estimator to evaluate differences in the survivor curves (an R sketch of such an analysis follows).
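As a sketch of this kind of outcome analysis in R: aodserv is the treatment variable named in the slides, while time, rereport, and matched are hypothetical names for the follow-up time, the re-report indicator, and the matched sample.

library(survival)
fit <- survfit(Surv(time, rereport) ~ aodserv, data = matched)
plot(fit, lty = 1:2, xlab = "Analysis time", ylab = "Proportion not re-reported")
survdiff(Surv(time, rereport) ~ aodserv, data = matched)  # log-rank test of curve difference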

Example of Greedy Matching (5) Findings
Children of substance abuse service users appear to live in an environment that elevates the risk of maltreatment and warrants continued protective supervision. The analysis based on the original sample, without controlling for heterogeneity of service receipt, masked the fact that substance abuse treatment may be a marker for greater risk.
[Kaplan-Meier survivor curves for aodserv = 0 versus aodserv = 1 over analysis time, based on the Matching Scheme 9 sample, N = 482.]

Example of Greedy Matching (6)
For more information about this example, see Guo & Fraser, 2010, pp. 175-186.
Guo, S., Barth, R.P., & Gibbons, C. (2006). Propensity score matching strategies for evaluating substance abuse services for child welfare clients. Children and Youth Services Review, 28, 357-383.
Barth, R.P., Gibbons, C., & Guo, S. (2006). Substance abuse treatment and the recurrence of maltreatment among caregivers with children living at home: A propensity score analysis. Journal of Substance Abuse Treatment, 30, 93-104.

Example of Greedy Matching (7)
Additional examples of child welfare research employing greedy matching:
Barth, R.P., Lee, C.K., Wildfire, J., & Guo, S. (2006). A comparison of the governmental costs of long-term foster care and adoption. Social Service Review, 80(1), 127-158.
Barth, R.P., Guo, S., & McCrae, J. (2008). Propensity score matching strategies for evaluating the success of child and family service programs. Research on Social Work Practice, 18, 212-222.
Weigensberg, E.C., Barth, R.P., & Guo, S. (2009). Family group decision making: A propensity score analysis to evaluate child and family services at baseline and after 36 months. Children and Youth Services Review, 31, 383-390.

6. Strategies to apply PSA to human services training evaluation

Strategy I: Yoke Comparison Group Using National Survey Data (1)
[Diagram: under the experimental approach, a randomized controlled trial (RCT) assigns subjects to treated and control groups. Under the nonexperimental approach, the training group is yoked with a comparison group drawn from a survey or panel study based on a representative sample: a small-to-large match.]

Strategy I: Yoke Comparison Group Using National Survey Data (2)
An example: Glisson, Dukes, & Green (2006) employed randomized blocks to evaluate the effects of a one-year training (the Availability, Responsiveness, and Continuity [ARC] organizational intervention). If one replicates the ARC study but has training data only for the intervention group, one can employ the National Survey of Child and Adolescent Well-Being (NSCAW) as a big pool from which to form a comparison group. That is, no additional effort is required to collect data from a control group.

Strategy I: Yoke Comparison Group Using National Survey Data (3)
Key requirement: both the training and national survey datasets should contain common outcome variables and key matching variables. The ARC study aims to implement an organizational intervention to reduce caseworker turnover and to create positive organizational climate and culture in a child welfare and juvenile justice system. NSCAW is a nationally representative panel study designed to address a range of questions about the outcomes of children who are involved in the child welfare system across the country. It now has two cohorts of study children from 92 counties in 36 states.

Strategy I: Yoke Comparison Group Using National Survey Data (4)
NSCAW contains many outcome variables developed by Glisson to measure organizational climate and culture. To evaluate ARC training effects, evaluators should collect these outcome variables from their own training group; this is often not a problem. To make a valid evaluation, researchers should make sure that both the training data and the NSCAW data contain key variables to control for selection bias (e.g., urban/rural, caseload, type of maltreatment, etc.). Evaluators then use PSA to create a comparison group from NSCAW and evaluate the outcome differences between the training and comparison groups. The net difference in outcomes after PSA may be a better measure of transfer of learning.

Strategy II: Use Data from Any Nonequivalent Comparison Group
The same strategy can be applied to a comparison between the training group and any nonequivalent comparison group that is readily available to the evaluator. These nonequivalent comparison groups may be agencies in similar counties or states that did not receive the targeted training. Again, the key requirement is that both the training and comparison groups contain key variables on outcomes as well as matching variables. PSA is a strategy to make the two groups as comparable as possible (i.e., free of selection bias on the observed measures) so that researchers can discern the effectiveness of transfer of learning.

References (1)
Abadie, A., & Imbens, G. W. (2002). Simple and bias-corrected matching estimators [Technical report]. Department of Economics, University of California, Berkeley. Retrieved August 8, 2008, from http://ideas.repec.org/p/nbr/nberte/0283.html
Abadie, A., & Imbens, G. W. (2006). Large sample properties of matching estimators for average treatment effects. Econometrica, 74, 235-267.
Barnow, B. S., Cain, G. G., & Goldberger, A. S. (1980). Issues in the analysis of selectivity bias. In E. Stromsdorfer & G. Farkas (Eds.), Evaluation studies review annual (Vol. 5). San Francisco: Sage.
Fisher, R. A. (1935/1971). The design of experiments. Edinburgh, UK: Oliver & Boyd.
Glisson, C., Dukes, D., & Green, P. (2006). The effects of the ARC organizational intervention on caseworker turnover, climate, and culture in children's service systems. Child Abuse & Neglect, 30, 855-880.
Guo, S., & Fraser, M. (2010). Propensity score analysis: Statistical methods and applications. Thousand Oaks, CA: Sage.
Heckman, J. J. (1976). Simultaneous equations model with continuous and discrete endogenous variables and structural shifts. In S. M. Goldfeld & R. E. Quandt (Eds.), Studies in nonlinear estimation. Cambridge, MA: Ballinger.
Heckman, J. J. (1978). Dummy endogenous variables in a simultaneous equations system. Econometrica, 46, 931-960.
Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153-161.
Heckman, J. J. (2005). The scientific model of causality. Sociological Methodology, 35, 1-97.
Heckman, J. J., Ichimura, H., & Todd, P. E. (1997). Matching as an econometric evaluation estimator: Evidence from evaluating a job training programme. Review of Economic Studies, 64, 605-654.
Heckman, J. J., Ichimura, H., & Todd, P. E. (1998). Matching as an econometric evaluation estimator. Review of Economic Studies, 65, 261-294.

References (2)
Heckman, J., & Smith, J. (1995). Assessing the case for social experiments. The Journal of Economic Perspectives, 9, 85-110.
Holland, P. (1986). Statistics and causal inference (with discussion). Journal of the American Statistical Association, 81, 945-970.
Imbens, G. W. (2004). Nonparametric estimation of average treatment effects under exogeneity: A review. The Review of Economics and Statistics, 86, 4-29.
Kennedy, P. (2003). A guide to econometrics (5th ed.). Cambridge, MA: The MIT Press.
Lechner, M. (1999). Earnings and employment effects of continuous off-the-job training in East Germany after unification. Journal of Business and Economic Statistics, 17, 74-90.
Maddala, G. S. (1983). Limited-dependent and qualitative variables in econometrics. Cambridge, UK: Cambridge University Press.
Morgan, S. L. (2001). Counterfactuals, causal effect heterogeneity, and the Catholic school effect on learning. Sociology of Education, 74, 341-374.
Neyman, J. S. (1923). Statistical problems in agricultural experiments. Journal of the Royal Statistical Society, Series B, 2, 107-180.
Rosenbaum, P. R. (2002). Observational studies (2nd ed.). New York: Springer.
Rosenbaum, P. R., & Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70, 41-55.
Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6, 34-58.
Rubin, D. B. (1986). Which ifs have causal answers? Journal of the American Statistical Association, 81, 961-962.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.

Thank you! Questions and Discussion