Odds Ratio, Delta, ETS Classification, and Standardization Measures of DIF Magnitude for Binary Logistic Regression


Journal of Educational and Behavioral Statistics, March 2007, Vol. 32, No. 1, pp. 92-109. DOI: 10.3102/1076998606298035. © AERA and ASA. http://jebs.aera.net

Patrick O. Monahan, Indiana University
Colleen A. McHorney, Merck & Co., Inc.
Timothy E. Stump and Anthony J. Perkins, Regenstrief Institute, Inc., and Indiana University

Previous methodological and applied studies that used binary logistic regression (LR) for detection of differential item functioning (DIF) in dichotomously scored items either did not report an effect size or did not employ several useful measures of DIF magnitude derived from the LR model. Equations are provided for these effect size indices. Using two large data sets, the authors demonstrate the usefulness of these effect sizes for judging practical importance: the LR adjusted odds ratio and its conversions to the delta metric, the Educational Testing Service (ETS) classification system, and the p metric; the LR model-based standardization indices, using various weights for averaging stratum-specific differences in fitted probabilities; and a p metric classification system. Pros and cons of these effect sizes are discussed. Recommendations are offered. These LR effect sizes will be valuable to practitioners, particularly for preventing flagging of statistically significant but practically unimportant DIF in large samples.

Keywords: differential item functioning; logistic regression; effect sizes

In differential item functioning (DIF) analyses, groups are compared on item performance after adjusting for overall performance on the measured trait (Holland & Wainer, 1993). Since Swaminathan and Rogers (1990) applied the binary logistic regression (LR) procedure to the detection of DIF in dichotomous test items, the LR method has become increasingly popular for this purpose.
(This research was supported by NIA Grant R01 AG022067, NCI Grant R03 CA113099-01, and the Mary Margaret Walther Program for Cancer Care Research. Suggestions by the editor and two anonymous reviewers led to improved presentation.)

However, Swaminathan and colleagues focused on hypothesis testing (Narayanan & Swaminathan, 1996; Rogers & Swaminathan, 1993; Swaminathan & Rogers, 1990). It is important to incorporate an effect size into flagging rules, especially in large samples, because high power can yield significance for practically unimportant effect sizes (e.g., Kirk, 1996). Several methodological and applied studies investigating binary LR for DIF have flagged items for DIF based only on statistical significance (Clauser, Nungester, Mazor, & Ripkey, 1996; Huang & Dunbar, 1998; Kwak, Davenport, & Davison, 1998; Marshall, Mungas, Weldon, Reed, & Haan, 1997; Mazor, Kanjee, & Clauser, 1995; Whitmore & Schumacker, 1999; Woodard, Auchus, Godsall, & Green, 1998).

Previous attempts to report effect sizes for binary LR have included using the LR Wald chi-square value (Huang & Dunbar, 1998), reporting raw or standardized LR coefficients on the log odds scale (Borsboom, Mellenbergh, & Heerden, 2002; Clauser & Mazor, 1998; Millsap & Everson, 1993; Swanson, Clauser, Case, Nungester, & Featherman, 2002), presenting R^2-like measures (Swanson et al., 2002; Zumbo, 1999), calculating the partial gamma (Groenvold, Bjorner, Klee, & Kreiner, 1995), listing eta-squared (Whitmore & Schumacker, 1999), adopting a chance-corrected proportion of correct classification (Hess, Olejnik, & Huberty, 2001), and plotting fitted probabilities or fitted logits (Schmitt, Holland, & Dorans, 1993). These attempts contributed to the DIF literature. However, none of these works focused on several intuitive effect sizes that can easily be derived from binary LR: the adjusted odds ratio, the delta statistic, the Educational Testing Service (ETS) classification system, the adjusted odds ratio reported on the p metric, and model-based standardization indices of conditional differences in proportions. We found only one DIF study that reported odds ratios for binary LR (Volk, Cantor, Steinbauer, & Cass, 1997).
The purposes of this article are to (a) provide and explain the equations for obtaining these useful effect sizes for the LR procedure, (b) demonstrate the application of these effect sizes, and (c) present the pros and cons of these effect sizes and offer guidance on how to use them. We focus here on effect sizes for uniform DIF; we are investigating LR effect sizes for nonuniform DIF. Although a strength of LR is powerful detection of nonuniform DIF, corresponding effect sizes require more research because the choice of weights for averaging stratum-specific measures is especially critical when interactions are present (e.g., Mosteller & Tukey, 1977).

Theoretical Foundation and Effect Size Formulas

The Logistic Regression (LR) Procedure for DIF Detection

In the binary LR model, the probability of endorsing a dichotomously scored item is

$$P(u = 1 \mid x, g, xg) = \frac{1}{1 + e^{-(b_0 + b_1 x + b_2 g + b_3 xg)}}, \quad (1)$$
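Equation 1 is the standard logistic function applied to a linear predictor; a minimal sketch in Python (the function and variable names are ours, not from the article):

```python
import math

def p_endorse(x, g, b0, b1, b2, b3):
    """Equation 1: probability of endorsing the item, given total score x
    and group membership g (1 = reference, 0 = focal)."""
    logit = b0 + b1 * x + b2 * g + b3 * (x * g)  # xg is the interaction term
    return 1.0 / (1.0 + math.exp(-logit))
```

When b2 = b3 = 0, reference and focal examinees with the same total score have identical endorsement probabilities, that is, no DIF.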

and the log odds (or logit) of endorsing the item is modeled as

$$\ln\!\left(\frac{P}{1-P}\right) = b_0 + b_1 x + b_2 g + b_3 xg, \quad (2)$$

where ln is the natural logarithm, x is a measure of overall proficiency (usually total score), g is a dummy variable representing group membership (traditionally, 1 = reference group, 0 = focal group), xg is the interaction term between total score and group membership, and b_0 is the intercept (Swaminathan & Rogers, 1990). The 1 df test of b_3 = 0 is a test of nonuniform DIF. If nonuniform DIF is absent, the xg term can be deleted from the model, and the 1 df test of b_2 = 0 then provides a test of uniform DIF. Effect sizes complement these tests. We describe two categories of effect sizes, distinguished by the metric used to define departures from the null hypothesis: conditional log odds ratios versus conditional differences in proportions.

Effect Sizes for LR Based on the Conditional-Log-Odds-Ratio Definition of DIF

A natural effect size for LR is the odds ratio. LR coefficients (b̂_j) are estimated on the log odds scale. The exponential of b̂_j [i.e., exp(b̂_j)] yields the maximum likelihood estimated odds ratio of the event of interest for every one-unit increase in the jth predictor, adjusted for other covariates in the LR model (Hosmer & Lemeshow, 2000). Thus, the exponential of b̂_2 provides the reference-to-focal odds ratio of endorsing the item, conditional on proficiency:

$$\hat{\alpha}_{LR} = \exp(\hat{b}_2). \quad (3)$$

Odds ratios range from 0 to ∞. Values of α̂_LR farther from 1.0 represent greater DIF magnitude. An odds ratio and its reciprocal are equivalent in strength but not symmetrical in distance from the null value of 1.0 (e.g., 4.0 and 0.25). Another option for an effect size is to transform α̂_LR to the logistic definition of the delta scale, used by ETS to measure item difficulty.
We use the formula Holland and Thayer (1988) used to convert the Mantel-Haenszel (MH) odds ratio (α̂_MH) to the MH delta-DIF statistic (MH-D-DIF, or Δ̂_MH):

$$\text{D-DIF, or } \hat{\Delta}_{LR} = -2.35 \ln(\hat{\alpha}_{LR}) = -2.35\,\hat{b}_2. \quad (4)$$

It is apparent that D-DIF is a simple linear rescaling of the regression coefficient, b̂_2.
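Equations 3 and 4 chain together directly; a sketch (the input value below is illustrative):

```python
import math

def odds_ratio_and_delta(b2):
    """Equations 3-4: adjusted odds ratio and its delta-metric rescaling,
    from the uniform-DIF LR coefficient b2 (log odds scale)."""
    alpha = math.exp(b2)             # Eq. 3: reference-to-focal adjusted odds ratio
    delta = -2.35 * math.log(alpha)  # Eq. 4: equivalently -2.35 * b2
    return alpha, delta
```

For example, b2 = ln(2.94) ≈ 1.078 gives α̂_LR = 2.94 and Δ̂_LR ≈ −2.53, the values reported for the telephone item in Table 1.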

In addition, one can calculate the ETS classification system (Dorans & Holland, 1993):

Category A. Items with negligible or nonsignificant DIF. Defined by D-DIF not significantly different from zero or absolute value less than 1.0.

Category B. Items with slight to moderate magnitude of statistically significant DIF. Defined by D-DIF significantly different from zero and absolute value of at least 1.0 and either less than 1.5 or not significantly greater than 1.0.

Category C. Items with moderate to large magnitude of statistically significant DIF. Defined by absolute value of D-DIF of at least 1.5 and significantly greater than 1.0.

Notice that assigning Categories A and B entails using LR to test H_0: b_2 = 0. Assigning Categories B and C requires testing H_0: |D-DIF| = 1.0. Practitioners can perform the latter test in LR by testing H_0: |b_2| = .4255 (i.e., a |Δ̂_LR| of 1.0 equals a |b_2| of .4255).

It is also possible, for reporting purposes, to convert a conditional-log-odds-ratio-based index to the metric of differences in item proportion correct, called the p metric. We use the formula that Dorans and Holland (1993) used to convert α̂_MH to MH-P-DIF:

$$\text{P-DIF} = P_f - P_r^C, \quad (5)$$

where

$$P_r^C = \frac{\hat{\alpha}_{LR} P_f}{(1 - P_f) + \hat{\alpha}_{LR} P_f}. \quad (6)$$

The term P_r^C is the predicted proportion of examinees endorsing the item in the reference group based on α̂_LR, and P_f is the proportion of examinees endorsing the item in the focal group.

Effect Sizes for LR Based on the Conditional-Difference-in-Proportions Definition of DIF

The contingency-table standardization (STD) procedure defines departures from the null DIF hypothesis with conditional differences in proportions, and the resulting measure is usually reported in the p metric (STD-P-DIF) (Dorans & Holland, 1993; Dorans & Kulick, 1986). One could estimate an LR model-based STD measure of DIF:

$$\text{STD-P-DIF} = \frac{\sum_m w_m \left(P_{fm}^{LR} - P_{rm}^{LR}\right)}{\sum_m w_m}, \quad (7)$$

where P_fm^LR and P_rm^LR are predicted from the LR model.
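The ETS rule and the two p-metric quantities above translate directly into code. A sketch (ours, not the authors'); the two boolean arguments stand in for the significance tests of H_0: b_2 = 0 and H_0: |D-DIF| = 1.0, which are not reimplemented here:

```python
def ets_class(d_dif, sig_zero, sig_one):
    """ETS A/B/C classification (Dorans & Holland, 1993).

    sig_zero: True if D-DIF is significantly different from zero.
    sig_one:  True if |D-DIF| is significantly greater than 1.0.
    """
    if abs(d_dif) >= 1.5 and sig_one:
        return "C"  # moderate to large, statistically significant DIF
    if sig_zero and abs(d_dif) >= 1.0:
        return "B"  # slight to moderate, statistically significant DIF
    return "A"      # negligible or nonsignificant DIF

def p_dif(alpha_lr, p_f):
    """Equations 5-6: convert the adjusted odds ratio to the p metric."""
    p_r = (alpha_lr * p_f) / ((1.0 - p_f) + alpha_lr * p_f)  # Eq. 6
    return p_f - p_r                                          # Eq. 5

def std_p_dif(p_fm, p_rm, w):
    """Equation 7: weighted average of stratum-specific differences in
    fitted focal- and reference-group proportions (one weight per stratum)."""
    return sum(w_m * (pf - pr) for w_m, pf, pr in zip(w, p_fm, p_rm)) / sum(w)
```

The weights w may be focal counts N_fm (the traditional choice), total counts N_tm, Cochran's c_m from Equation 8, or all ones for an unweighted average.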

This index is reminiscent of item response theory (IRT) model-based standardization (Wainer, 1993), except that instead of integrating over θ, averaging occurs over total scores. Historically, absolute values between .05 and .10 are inspected to ensure that no possible DIF is overlooked, and absolute values above .10 are considered more unusual and should be examined (Dorans & Kulick, 1986). One could implement a p metric classification system for LR, applicable to STD-P-DIF or P-DIF:

Category A. Items with negligible or nonsignificant DIF. Defined by p index not significantly different from zero or 0 ≤ |p index| ≤ .05.

Category B. Items with marginal magnitude of statistically significant DIF. Defined by p index significantly different from zero and .05 < |p index| ≤ .10.

Category C. Items with definite magnitude of statistically significant DIF. Defined by p index significantly different from zero and |p index| > .10.

The weights (w_m) for averaging conditional differences in proportions in the STD procedure have traditionally been based on intuitive rationale. In DIF studies, w_m is often chosen to be the number of focal group examinees at each stratum (N_fm) (Dorans & Kulick, 1986). Other plausible weights include (Dorans & Holland, 1993; Mosteller & Tukey, 1977) (a) the number of reference group examinees at each stratum (N_rm), (b) the total number of examinees at each stratum (N_tm), or (c) the relative frequency of some real or hypothetical standard group. One could also use Cochran's (1954) statistically driven weights (Dorans & Holland, 1993):

$$c_m = \frac{N_{rm} N_{fm}}{N_{tm}}. \quad (8)$$

Another option, available in the STDIF software (Robin, 2001; Zenisky, Hambleton, & Robin, 2003), is the equal weight (w_m = 1), which yields an unweighted average.

Examples

We performed a gender (1 = male, 0 = female) DIF analysis in two data sets. The first data set was the Supplement on Aging (SOA) to the 1984 National Health Interview Survey (U.S. Department of Health and Human Services, 1997). This study was designed to assess the future needs of the elderly in the United States. Participants were 55 and older (n = 12,943). We analyzed 23 dichotomous functional status items. Each item measured whether participants reported a problem (1 = yes, 0 = no) performing an activity. The second data set was from the Established Populations for Epidemiologic Studies of the Elderly (EPESE). Persons (age ≥ 65) were interviewed to identify predictors of mortality (Taylor, Wallace, Ostfeld, & Blazer, 1998). We analyzed the 20 dichotomous items of the Center for Epidemiologic Studies Depression Scale (CES-D; Radloff, 1977) obtained at baseline on 3,401 participants from the Duke site. The CES-D is a widely used self-report measure of depressive symptomatology for the general population. Each item was scored for presence (1) or absence (0) of a depressive symptom.

Statistical Methods

We modeled Equation 2, without the interaction term, using binary LR with the SAS LOGISTIC procedure. The matching score was the total score including the studied item (Holland & Thayer, 1988). For the purposes of this article, we examined only uniform DIF. Graphs of empirical logits and LOWESS smoothed curves indicated that the LR assumption of linearity of the logit was reasonably satisfied for all items (Monahan, 2004). The SOA and EPESE items were approximately unidimensional because the cross-validated DETECT index was .16 and .28, respectively (Monahan, Stump, Finch, & Hambleton, in press; Roussos & Ozbek, 2006). In SOA data, there were 7,822 women and 5,121 men. In EPESE data, there were 2,203 women and 1,198 men. For both data sets, total score was skewed right (less skewed for EPESE), and men had a slightly lower mean and variance than women.

Result of Using the LR Statistical Test Alone to Detect Uniform DIF

Table 1 displays DIF effect sizes for functional status items from the SOA data set. Each row (studied item) in Table 1 represents a different LR model. The right-most column shows the observed significance of the two-sided LR Wald chi-square test of uniform DIF. Based on this test alone, even if we used a conservative Bonferroni-adjusted significance level of .00217 (i.e., .05/23), 11 of 23 items would be flagged for DIF (Table 1). Most DIF studies using LR have relied on Wald tests alone. We now illustrate the importance of effect sizes.

Effect Sizes for LR Based on the Conditional-Log-Odds-Ratio Definition of DIF

We will interpret the effect sizes in Table 1 from left to right, beginning with the LR odds ratio (α̂_LR).
By sorting on ascending D-DIF (equivalently, descending α̂_LR), items were conveniently grouped by direction and magnitude of DIF. Thus, for the 11 statistically significant items, the 6 items at the top were more greatly endorsed by men and the 5 items at the bottom were more greatly endorsed by women, after adjusting for total score. The α̂_LR for these 11 items varied in strength from 1.55 to 2.94 (men displayed greater functional problems after adjustment) and from 0.74 to 0.21 (women displayed greater functional problems after adjustment) (Table 1). For example, the LR model estimated that the odds that men reported having a problem with lifting and carrying 25 pounds were about one fifth (0.21) the odds that women reported this problem, adjusted for overall functional status. The estimated odds of having a problem with using the telephone were almost 3 (2.94) times greater for men than for women, controlling for overall functional status (Table 1). However, the odds of having a problem with walking were only 1½ times greater for men. Therefore, α̂_LR indicates that not all 11 statistically significant items exhibited equally important DIF magnitude.

Likewise, D-DIF (Δ̂_LR) for the 11 statistically significant items varied in strength from −1.02 to −2.53 and from 0.71 to 3.63 (Table 1). According to the delta-based ETS classification system (ETS-Class in Table 1), only 5 of the 11 items displayed moderate to large magnitude of statistically significant DIF (C), and 5 items showed moderate DIF (B). We also used the transformation of α̂_LR to the p metric (P-DIF in Table 1) to classify items according to the p metric system described earlier (P-Class in Table 1). This P-DIF classification system indicated weaker DIF than the ETS classification system for 5 items and stronger DIF than the ETS classification for 2 items (Table 1). This was because the nonlinear relationship between D-DIF and P-DIF depends on the proportion of focal group examinees endorsing the item (P_f). Notice that P_f for Items 2, 3, 9, 11, and 23 was closer to zero than P_f for Items 14 and 16 (Table 1). Using Equations 4, 5, and 6, it can be shown that for a given value of Δ̂_LR or α̂_LR, P-DIF will be smaller for lowly or highly endorsed items than for moderately endorsed items. Likewise, for a given value of P-DIF, Δ̂_LR and α̂_LR will be greater for P_f near zero or one than for P_f near .50 (see Discussion).

Effect Sizes for LR Based on the Conditional-Difference-in-Proportions Definition of DIF

Using the traditional weight for DIF analyses (w_m = N_fm), the magnitudes of the LR model-based standardization index (STD-P-Focal in Table 1) for the 11 statistically significant items were even smaller than the log-odds-ratio-based p metric magnitudes of P-DIF (Table 1).
A p metric classification system based on STD-P-Focal (i.e., STD-P-Focal-Class in Table 1) resulted in flagging only 1 item as definite DIF (C) and only 1 item as marginal DIF (B). For both data sets, the LR standardization effect size was very similar when two other weights were used [total group distribution (N_tm) and Cochran (c_m)], differing at most from STD-P-Focal by .02 for any item but usually by .01 or less (data not shown). The standardization index using w_m = 1 (STD-P-Equal in Table 1) differed from the other three STD-P-DIF indices (w_m = N_fm, N_tm, c_m), generally displaying slightly greater absolute values, resulting in 1 item demonstrating definite DIF (C) and 5 items revealing marginal DIF (B).

Abbreviated Results for EPESE Data

Table 2 shows effect sizes for the CES-D depression items. Again, fewer items were flagged when effect sizes supplemented the LR Wald test. Of five statistically significant items, only one item displayed moderate to large DIF (C)

TABLE 1
Logistic Regression (LR) Effect Sizes for Measuring Uniform Differential Item Functioning (DIF): Gender DIF in Supplement on Aging (SOA) Functional Status Items

Item  Description                     OR     D-DIF  ETS  P-DIF   P-Cl  P_f  STD-P-F  Cl  STD-P-E  Cl  Wald p
11*   Using the telephone             2.94  -2.53   C   -.0495  A     .03  -.03     A   -.12     C   <.0001
 2*   Dressing                        2.51  -2.16   C   -.07    B     .05  -.02     A   -.08     B   <.0001
 3*   Eating                          2.43  -2.09   B   -.02    A     .01  -.01     A   -.08     B   <.0001
14*   Walking quarter mile            1.83  -1.42   B   -.13    C     .25  -.04     A   -.03     A   <.0001
16*   Standing 2 hours                1.55  -1.03   B   -.101   C     .31  -.03     A   -.02     A   <.0001
 5*   Walking                         1.55  -1.02   B   -.07    B     .16  -.02     A   -.03     A   <.0001
13    Light housework                 1.40  -0.79   A   -.02    A     .06  -.01     A   -.02     A   .03
10    Managing money                  1.34  -0.69   A   -.01    A     .04  -.01     A   -.03     A   .03
 8    Preparing meals                 1.28  -0.59   A   -.02    A     .06  -.01     A   -.02     A   .09
 1    Bathing                         1.28  -0.58   A   -.02    A     .08  -.01     A   -.02     A   .052
 7    Using the toilet                1.08  -0.18   A   -.00    A     .04  -.00     A   -.01     A   .68
20    Reaching out                    1.06  -0.14   A   -.00    A     .02  -.00     A   -.01     A   .71
19    Reaching over head              1.01  -0.02   A   -.00    A     .16  -.00     A    .00     A   .88
18    Stooping, crouching, kneeling   0.97   0.07   A    .01    A     .37   .00     A    .00     A   .62
17    Sitting 2 hours                 0.95   0.11   A    .00    A     .11   .00     A    .01     A   .51
15    Walk up 10 steps                0.90   0.24   A    .02    A     .23   .01     A    .01     A   .18
 4    Getting in and out of bed       0.89   0.27   A    .01    A     .08   .00     A    .01     A   .34
 6    Getting outside                 0.80   0.53   A    .02    A     .09   .01     A    .02     A   .12
21*   Using fingers to grasp          0.74   0.71   A    .03    A     .11   .02     A    .05     A   .0002
 9*   Shopping                        0.64   1.06   B    .04    A     .10   .01     A    .03     A   .001
23*   Lifting, carrying 10 pounds     0.41   2.09   C    .08    B     .15   .04     A    .08     B   <.0001
12*   Heavy housework                 0.40   2.16   C    .14    C     .26   .0504   B    .051    B   <.0001
22*   Lifting, carrying 25 pounds     0.21   3.63   C    .26    C     .38   .11     C    .08     B   <.0001

Column key: OR = LR odds ratio (α̂_LR); D-DIF = Δ̂_LR; ETS = ETS-Class; P-Cl = P-Class; STD-P-F/Cl = STD-P-Focal and its class; STD-P-E/Cl = STD-P-Equal and its class; Wald p = p value of the LR Wald test of H_0: b_2 = 0.
Note: Items where men displayed greater functional problems than women, adjusted for overall functional problems, are indicated by LR odds ratio > 1.0 and negative D-DIF, P-DIF, and STD-P-DIF. The Bonferroni-corrected significance level = .05/23 = .00217 (* = significant uniform DIF). Items are sorted by D-DIF.

TABLE 2
Logistic Regression (LR) Effect Sizes for Measuring Uniform Differential Item Functioning (DIF): Gender DIF in Established Populations for Epidemiologic Studies of the Elderly (EPESE) Depression Items

Item  Description                        OR     D-DIF  ETS  P-DIF  P-Cl  P_f  STD-P-F  Cl  STD-P-E  Cl  Wald p
 9*   Life had been a failure            1.78  -1.35   B   -.04   A     .06  -.03     A   -.07     B   .001
12*   Was not happy                      1.71  -1.27   B   -.054  B     .09  -.04     A   -.08     B   .0002
16    Did not enjoy life                 1.46  -0.90   A   -.03   A     .06  -.02     A   -.051    B   .02
13    Talked less than usual             1.30  -0.61   A   -.03   A     .14  -.02     A   -.04     A   .04
 7    Felt everything an effort          1.22  -0.46   A   -.04   A     .28  -.03     A   -.03     A   .05
15    People were unfriendly             1.21  -0.45   A   -.01   A     .07  -.01     A   -.03     A   .24
19    Felt people disliked me            1.17  -0.37   A   -.01   A     .06  -.01     A   -.02     A   .39
 4    Felt not as good as others         1.17  -0.37   A   -.01   A     .06  -.01     A   -.02     A   .30
 8    Did not feel hopeful about future  1.06  -0.14   A   -.01   A     .23  -.01     A   -.01     A   .53
 2    Appetite was poor                  0.93   0.17   A    .01   A     .20   .01     A    .01     A   .53
11    Sleep was restless                 0.88   0.29   A    .02   A     .26   .02     A    .02     A   .22
 1    Bothered by things usually do not  0.88   0.30   A    .02   A     .16   .01     A    .02     A   .32
20    I could not get going              0.87   0.32   A    .02   A     .22   .01     A    .02     A   .25
 6    Felt depressed                     0.81   0.48   A    .04   A     .27   .02     A    .02     A   .10
 3    Could not shake off the blues      0.78   0.59   A    .03   A     .16   .02     A    .03     A   .09
10    Felt fearful                       0.74   0.70   A    .03   A     .13   .02     A    .04     A   .03
18    Felt sad                           0.74   0.71   A    .06   B     .28   .03     A    .03     A   .01
 5*   Trouble keeping mind on doing      0.70   0.83   A    .06   B     .24   .04     A    .054    B   .001
14*   Felt lonely                        0.68   0.91   A    .07   B     .30   .047    A    .04     A   .001
17*   Had crying spells                  0.29   2.89   C    .07   B     .10   .06     B    .15     C   <.0001

Column key: OR = LR odds ratio (α̂_LR); D-DIF = Δ̂_LR; ETS = ETS-Class; P-Cl = P-Class; STD-P-F/Cl = STD-P-Focal and its class; STD-P-E/Cl = STD-P-Equal and its class; Wald p = p value of the LR Wald test of H_0: b_2 = 0.
Note: Items where men displayed greater depressive symptoms than women, adjusted for overall depression, are indicated by LR odds ratio > 1.0 and negative D-DIF, P-DIF, and STD-P-DIF. The Bonferroni-corrected significance level = .05/20 = .0025 (* = significant uniform DIF). Items are sorted by D-DIF.

by any classification system (Table 2). The main difference between Table 1 and Table 2 is that, compared to functional status items (Table 1), depression items (Table 2) showed less difference between the log-odds-ratio-based p index (P-DIF) and the focal-group standardization p index (STD-P-Focal). This was because depression items revealed less DIF magnitude and less skewness of total score. Specifically, using Equations 5, 6, and 7, where w_m = N_fm, it can be shown that the difference between P-DIF and STD-P-Focal depends on α̂_LR, P_fm^LR, and N_fm (P_f is a function of N_fm and P_fm^LR; P_rm^LR is a function of α̂_LR and P_fm^LR because the odds ratio is assumed to be constant across strata in uniform-DIF LR). In short, the magnitude of STD-P-Focal increases as P_fm^LR values near .50 receive greater weight (i.e., larger N_fm) than P_fm^LR values near zero or one.

Sensitivity of Results

Results were very similar after deleting examinees at the floor and ceiling. We computed Cochran's (1954) test criterion by specifying w_m = c_m in STD-P-DIF and by using predicted proportions in the standard error; the observed significance was extremely similar to the LR Wald observed significance for all items in both data sets (differing at most by .008). This is not surprising given that Cochran (1954) derived these weights for a test criterion that would be powerful for detecting an alternative hypothesis of a constant difference on either the logit or probit scale. Thus, in LR, although Cochran weights are an option when computing STD-P-DIF, the Cochran test might be an unnecessary adjunct to the LR Wald test.

Discussion

Choosing an Effect Size: Pros and Cons

The effect sizes can be contrasted on a number of dimensions. First, as for ease of interpretation, indices reported on the delta and p metrics are symmetrical around their null value of zero, which facilitates interpreting DIF in opposite directions.
However, those experienced with interpreting odds ratios may find ^a_LR easier to interpret than D-DIF, which is on the ETS-preferred delta metric. For data conforming to the two-parameter logistic (2PL) IRT model, one advantage of D-DIF is that the MH-D-DIF parameter (D_2PL) can be written as a linear rescaling of the difference between b parameters (Roussos, Schnipke, & Pashley, 1999; see Note 4): D_2PL = 4a(b_R - b_F). The a_MH parameter also shares the advantage of being related, although nonlinearly, to IRT b-DIF (Roussos et al., 1999). The p metric is probably the most universally understood metric, conveniently connected to the total and true score metrics. Practitioners should choose an effect size that they and their readership can easily interpret.

Second, practitioners should choose an effect size whose metric for defining departures from the null hypothesis supplies the most valid definition of DIF for

their purpose. Specifically, relative to conditional odds ratios (^a_LR) or log odds ratios (^D_LR), conditional differences between proportions (STD-P-DIF) will be compressed for items with low or high endorsement rates. However, from another perspective, odds ratios and log odds ratios magnify small differences in proportions that are near zero or near one. P-DIF is based on the conditional log-odds-ratio definition but is reported on the p metric. Therefore, P-DIF shares some properties with STD-P-DIF (ease of interpretation and, relative to odds ratios, lower magnitude for items with low or high endorsement).

Third, in terms of fundamental connections to the LR model, ^a_LR is a natural estimator, fundamentally connected to a parameter of the LR model. D-DIF is also a simple rescaling of an estimated LR parameter and therefore is fundamentally connected to the LR model. A disadvantage of the STD-P-DIF index is that it is the most removed from the LR parameter estimation method. However, this does not invalidate it as a descriptive measure of DIF.

Fourth, as far as ease of programming, standard software for LR automatically provides ^a_LR. D-DIF can easily be calculated. P-DIF requires slightly more programming to convert ^a_LR to P-DIF. STD-P-DIF requires the most additional programming because predicted proportions at each total score level must first be computed and then weighted.

Fifth, the purpose of weights in standardization is not only to standardize according to the distribution of interest but also to give smaller weight to sparse strata that provide less precise information. Using equal weights could be dangerous if one or more strata are sparse. (None of the values for total score were sparse for the present data sets.) If one chooses an outside (real or hypothetical) standard distribution, one must be careful not to combine large weights with ill-determined differences in proportions (Mosteller & Tukey, 1977).
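To illustrate the relative programming effort, the sketch below converts a fitted conditional odds ratio into D-DIF and into a focal-weighted standardized difference in proportions. The stratum counts, proportions, and odds ratio are invented for illustration; the weighting w_m = N_fm follows the focal-group standardization described above, and the sign conventions follow D-DIF = -2.35 ln(^a_LR) with differences taken as focal minus reference. This is a plausible reading of the indices, not the authors' exact code.

```python
import numpy as np

# Hypothetical per-stratum summaries for one item (strata = total-score levels):
# focal-group sample sizes and model-predicted focal endorsement proportions.
N_f = np.array([30, 80, 150, 200, 150, 80, 30])
P_f = np.array([0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.95])

alpha = 1.55                     # fitted conditional odds ratio ^a_LR (assumed)

# D-DIF: ETS delta-metric rescaling of the log odds ratio.
D_DIF = -2.35 * np.log(alpha)    # about -1.03 here

# Implied reference-group proportions under a constant odds ratio
# (alpha = reference odds / focal odds in each stratum).
odds_f = P_f / (1.0 - P_f)
P_r = (alpha * odds_f) / (1.0 + alpha * odds_f)

# Standardized difference in proportions with focal weights w_m = N_fm.
STD_P = np.sum(N_f * (P_f - P_r)) / np.sum(N_f)   # about -0.08 here
print(round(D_DIF, 2), round(STD_P, 3))
```

Note how the per-stratum differences P_f - P_r are largest where P_f is near .50, which is why mid-range strata dominate the weighted index.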
Recommendations on How to Use the Effect Sizes

First, we recommend using both an effect size and a statistical test when deciding whether items exhibit DIF. The effect size prevents flagging unimportant differences in large samples, and the statistical test prevents flagging noise in small samples. As demonstrated here, unimportant differences can be significant when a statistical test is used alone, even if statistical tests are conservatively adjusted for multiple comparisons. In addition, effect sizes and their classifications help distinguish between levels of nonnegligible DIF (e.g., B vs. C).

Second, practitioners must decide what values of the effect size represent negligible, moderate, and large magnitudes for the intended purpose. For example, ETS uses thresholds of 1.0 and 1.5 on the absolute value of the delta metric, which are equivalent to odds ratios greater than 1.53 (or less than 0.65) and greater than 1.89 (or less than 0.53), respectively. Users of STD procedures often use .05 and .10 thresholds on the absolute value of the p metric. In the medical sciences, ^a_LR thresholds of 1.5 and 2.0 are common because of their convenient interpretations as one and one-half times and twice the odds, respectively.
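The threshold equivalences quoted above follow from the rescaling D-DIF = -2.35 ln(alpha), so alpha = exp(|D|/2.35); the short sketch below (function names are ours) checks them numerically.

```python
import math

def delta_to_or(delta):
    """Odds ratio whose delta-metric magnitude equals `delta` (D = -2.35 ln alpha)."""
    return math.exp(delta / 2.35)

def or_to_delta(alpha):
    """Delta-metric magnitude of an odds ratio."""
    return 2.35 * abs(math.log(alpha))

# ETS thresholds of 1.0 and 1.5 on |delta| ...
print(round(delta_to_or(1.0), 2))   # 1.53 (reciprocal ~0.65)
print(round(delta_to_or(1.5), 2))   # 1.89 (reciprocal ~0.53)
# ... sit near the medical odds-ratio thresholds of 1.5 and 2.0:
print(round(or_to_delta(1.5), 2))   # 0.95
print(round(or_to_delta(2.0), 2))   # 1.63
```

This confirms the article's observation that the medical thresholds of 1.5 and 2.0 on the odds ratio are nearly equivalent to the ETS delta thresholds of 1.0 and 1.5.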

However, smaller thresholds (e.g., 1.1) are used if the exposure is prevalent and the disease is serious (e.g., when determining whether risk of heart disease is associated with hormone supplements). Interestingly, ^a_LR thresholds of 1.5 and 2.0 are nearly equivalent to the delta thresholds used in the ETS classification system.

Third, one can take steps to facilitate interpretations. One could calculate the reciprocal of odds ratios less than one: by calculating 1/.74 = 1.35, one can readily see that .74 for Item 21 in Table 1 is not as strong as 1.55 for Items 5 and 16. One can use graphs (e.g., scatter, line, and bar), which help discern relative distances between DIF magnitudes. In addition, sorting items by direction and magnitude of DIF in tables, as we did here, aids interpretation.

Fourth, these effect sizes can be used to facilitate comparisons of DIF procedures. One could compare the MH, LR, SIBTEST, STD, and IRT procedures on the p metric (using P-DIF or STD-P-DIF for LR). Likewise, one could compare procedures on the odds ratio or delta metric, where STD-P-DIF and the latent-true-score-adjusted difference in proportions of SIBTEST are converted using a formula similar to Equation 22 in Dorans and Holland (1993).

Limitations

The present analyses employed large sample sizes. In smaller samples, the discrepancy between flagging items by statistical significance alone and flagging them by the combination of effect size and statistical significance should not be as great. The degree of discrepancy observed here between D-DIF, P-DIF, and STD-P-DIF may differ for other data sets.

Conclusions

These effect sizes and classification systems have received little attention in the DIF literature for binary LR: the adjusted odds ratio (^a_LR), D-DIF (^D_LR), P-DIF, the LR model-based standardization index (STD-P-DIF), and the ETS and p-metric classification systems.
The present examples demonstrate that these effect sizes are quite useful for preventing practically unimportant DIF from being flagged, especially in large samples. There are various pros and cons to choosing among these effect sizes. When steps are taken for their proper use, these effect sizes should be of great benefit to practitioners.

Notes

1. The original proposal was to use the two-df simultaneous test of uniform and nonuniform differential item functioning (DIF); however, when only uniform DIF is present, including the interaction term in the test may decrease power (Swaminathan & Rogers, 1990).

2. We considered the ML subscript (for maximum likelihood estimation); however, the LR subscript in Equation 3 reminds practitioners that the odds ratio was estimated by assuming a logistic regression (LR) model.

3. Jodoin and Gierl (2001) suggested that R^2-like indices are preferable to effect sizes based on ^b_2 because the latter would depend on the coding of the group variable [i.e., reference cell (0/1) vs. deviations-from-means method (-1/1)]. However, we agree with Hosmer and Lemeshow (2000) that the reference cell method is more useful for LR because the exponential of b_2 is interpreted as a ratio of odds for one group versus the other group. If one codes the focal group as 0 and the reference group as 1 and then models item endorsement, as we did here, ^a_LR and ^D_LR have the same interpretations as for the Mantel-Haenszel (MH) procedure. Reference cell coding is no more arbitrary than row and column specification in the MH procedure.

4. In this formula (i.e., Equation 16 in Roussos, Schnipke, & Pashley, 1999), item discrimination (a) for the two-parameter logistic (2PL) item response theory (IRT) model varies over items and the MH delta-DIF (MH-D-DIF) parameter is conditional on theta, whereas in Equation 13 in Donoghue, Holland, and Thayer (1993), a is constant across items because the MH-D-DIF parameter is conditional on observed total score, in which case the corresponding IRT model is the Rasch model.

References

Borsboom, D., Mellenbergh, G. J., & Heerden, J. v. (2002). Different kinds of DIF: A distinction between absolute and relative forms of measurement invariance and bias. Applied Psychological Measurement, 26, 433-450.
Clauser, B. E., & Mazor, K. M. (1998). Using statistical procedures to identify differentially functioning test items. Educational Measurement: Issues and Practice, 17(1), 31-44.
Clauser, B. E., Nungester, R. J., Mazor, K., & Ripkey, D. (1996). A comparison of alternative matching strategies for DIF detection in tests that are multidimensional. Journal of Educational Measurement, 33, 202-214.
Cochran, W. G. (1954). Some methods for strengthening the common χ2 tests. Biometrics, 10, 417-451.
Donoghue, J. R., Holland, P. W., & Thayer, D. T. (1993). A Monte Carlo study of factors that affect the Mantel-Haenszel and standardization measures of differential item functioning. In P. W. Holland & H. Wainer (Eds.), Differential item functioning (pp. 137-166). Hillsdale, NJ: Lawrence Erlbaum.
Dorans, N. J., & Holland, P. W. (1993). DIF detection and description: Mantel-Haenszel and standardization. In P. W. Holland & H. Wainer (Eds.), Differential item functioning (pp. 35-66). Hillsdale, NJ: Lawrence Erlbaum.
Dorans, N. J., & Kulick, E. (1986). Demonstrating the utility of the standardization approach to assessing unexpected differential item performance on the Scholastic Aptitude Test. Journal of Educational Measurement, 23, 355-368.
Groenvold, M., Bjorner, J. B., Klee, M. C., & Kreiner, S. (1995). Test for item bias in a quality of life questionnaire. Journal of Clinical Epidemiology, 48, 805-816.
Hess, B., Olejnik, S., & Huberty, C. J. (2001, April). The efficacy of two improvement-over-chance effect size measures for two-group univariate comparisons under variance heterogeneity and nonnormality. Paper presented at the annual meeting of the American Educational Research Association, Seattle, WA.

Holland, P. W., & Thayer, D. T. (1988). Differential item performance and the Mantel-Haenszel procedure. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 129-145). Hillsdale, NJ: Lawrence Erlbaum.
Holland, P. W., & Wainer, H. (Eds.). (1993). Differential item functioning. Hillsdale, NJ: Lawrence Erlbaum.
Hosmer, D. W., & Lemeshow, S. (2000). Applied logistic regression (2nd ed.). New York: John Wiley.
Huang, C.-Y., & Dunbar, S. B. (1998, April). Factors influencing the reliability of DIF detection methods. Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA.
Jodoin, M. G., & Gierl, M. J. (2001). Evaluating Type I error and power rates using an effect size measure with the logistic regression procedure for DIF detection. Applied Measurement in Education, 14, 329-349.
Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746-759.
Kwak, N., Davenport, E. C., Jr., & Davison, M. L. (1998, April). A comparative study of observed score approaches and purification procedures for detecting differential item functioning. Paper presented at the annual meeting of the National Council on Measurement in Education, San Diego, CA.
Marshall, S. C., Mungas, D., Weldon, M., Reed, B., & Haan, M. (1997). Differential item functioning in the Mini-Mental State Examination in English- and Spanish-speaking older adults. Psychology and Aging, 12, 718-725.
Mazor, K. M., Kanjee, A., & Clauser, B. E. (1995). Using logistic regression and the Mantel-Haenszel with multiple ability estimates to detect differential item functioning. Journal of Educational Measurement, 32, 131-144.
Millsap, R. E., & Everson, H. T. (1993). Methodological review: Statistical approaches for assessing measurement bias. Applied Psychological Measurement, 17, 297-334.
Monahan, P. O. (2004, April). Examining the assumption of linearity of the logit in the logistic regression procedure for detecting DIF. Paper presented at the annual meeting of the National Council on Measurement in Education, San Diego, CA.
Monahan, P. O., Stump, T. E., Finch, H., & Hambleton, R. K. (in press). Bias of exploratory and cross-validated DETECT index under unidimensionality. Applied Psychological Measurement.
Mosteller, F., & Tukey, J. W. (1977). Data analysis and regression: A second course in statistics. Reading, MA: Addison-Wesley.
Narayanan, P., & Swaminathan, H. (1996). Identification of items that show nonuniform DIF. Applied Psychological Measurement, 20, 257-274.
Radloff, L. S. (1977). The CES-D scale: A self-report depression scale for research in the general population. Applied Psychological Measurement, 1, 385-401.
Robin, F. (2001). STDIF: Standardization-DIF analysis program. Amherst: University of Massachusetts, School of Education.
Rogers, H. J., & Swaminathan, H. (1993). A comparison of logistic regression and Mantel-Haenszel procedures for detecting differential item functioning. Applied Psychological Measurement, 17, 105-116.
Roussos, L. A., & Ozbek, O. (2006). Formulation of the DETECT population parameter and evaluation of DETECT estimator bias. Journal of Educational Measurement, 43, 215-243.

Roussos, L. A., Schnipke, D. L., & Pashley, P. J. (1999). A generalized formula for the Mantel-Haenszel differential item functioning parameter. Journal of Educational and Behavioral Statistics, 24, 293-322.
Schmitt, A. P., Holland, P. W., & Dorans, N. J. (1993). Evaluating hypotheses about differential item functioning. In P. W. Holland & H. Wainer (Eds.), Differential item functioning (pp. 281-315). Hillsdale, NJ: Lawrence Erlbaum.
Swaminathan, H., & Rogers, H. J. (1990). Detecting differential item functioning using logistic regression procedures. Journal of Educational Measurement, 27, 361-370.
Swanson, D. B., Clauser, B. E., Case, S. M., Nungester, R. J., & Featherman, C. (2002). Analysis of differential item functioning (DIF) using hierarchical logistic regression models. Journal of Educational and Behavioral Statistics, 27, 53-75.
Taylor, J. O., Wallace, R. B., Ostfeld, A. M., & Blazer, D. G. (1998). Established populations for epidemiologic studies of the elderly, 1981-1993 (3rd ICPSR version) [Electronic version]. Ann Arbor, MI: Inter-university Consortium for Political and Social Research.
U.S. Department of Health and Human Services. (1997). Longitudinal study of aging, 1984-1990 (6th ICPSR version) [Electronic version]. Ann Arbor, MI: Inter-university Consortium for Political and Social Research.
Volk, R. J., Cantor, S. B., Steinbauer, J. R., & Cass, A. R. (1997). Item bias in the CAGE screening test for alcohol use disorders. Journal of General Internal Medicine, 12, 763-769.
Wainer, H. (1993). Model-based standardized measurement of an item's differential impact. In P. W. Holland & H. Wainer (Eds.), Differential item functioning (pp. 123-135). Hillsdale, NJ: Lawrence Erlbaum.
Whitmore, M. L., & Schumacker, R. E. (1999). A comparison of logistic regression and analysis of variance differential item functioning detection methods. Educational and Psychological Measurement, 59, 910-927.
Woodard, J. L., Auchus, A. P., Godsall, R. E., & Green, R. C. (1998). An analysis of test bias and differential item functioning due to race on the Mattis Dementia Rating Scale. Journals of Gerontology, Series B, Psychological Sciences and Social Sciences, 53, P370-P374.
Zenisky, A. L., Hambleton, R. K., & Robin, F. (2003). Detection of differential item functioning in large-scale state assessments: A study evaluating a two-stage approach. Educational and Psychological Measurement, 63, 51-64.
Zumbo, B. D. (1999). A handbook on the theory and methods of differential item functioning (DIF): Logistic regression modeling as a unitary framework for binary and Likert-type (ordinal) item scores. Ottawa, Canada: Directorate of Human Resources Research and Evaluation, Department of National Defense.

Authors

PATRICK O. MONAHAN is assistant professor, Division of Biostatistics, Department of Medicine, School of Medicine, Indiana University, 410 West 10th Street, Suite 3000, Indianapolis, IN 46202; pmonahan@iupui.edu. His area of interest is measurement and statistics applied to the behavioral and social sciences.

COLLEEN A. MCHORNEY, PhD, is director of outcomes research at Merck & Co., Inc., WP39-166, 770 Sumneytown Pike, West Point, PA 19486-0004. Her areas of expertise

relate to the measurement and evaluation of patient-reported outcomes, including health status, quality of life, patient satisfaction, and patient preferences.

TIMOTHY E. STUMP is statistician, Regenstrief Institute, Inc., and the Indiana University Center for Aging Research; tstump@regenstrief.org. His area of interest is measurement and statistics in the medical sciences.

ANTHONY J. PERKINS is a statistical consultant for the Regenstrief Institute, Inc., and the Indiana University Center for Aging Research; tperkins348@sbcglobal.net. His area of interest is item bias in quality of life instruments.

Manuscript received July 15, 2004
Accepted August 2, 2005