Revisiting Respondent Fatigue Bias in the National Crime Victimization Survey


Journal of Quantitative Criminology, Vol. 21, No. 3, September 2005. DOI: 10.1007/s10940-005-4275-4. © 2005 Springer Science+Business Media, Inc.

Revisiting Respondent Fatigue Bias in the National Crime Victimization Survey

Timothy C. Hart, [1,4] Callie Marie Rennison, [2] and Chris Gibson [3]

For more than three decades the National Crime Victimization Survey (NCVS) and its predecessor, the National Crime Survey (NCS), have been used to calculate estimates of nonfatal crime in the United States. Though the survey has contributed much to our understanding of criminal victimization, some aspects of the survey's methodology continue to be analyzed (e.g., repeat victimizations, proxy interviews, and bounding). Surprisingly, one important aspect of NCVS methodology has escaped this scrutiny: respondent fatigue. A potential source of nonsampling error, fatigue bias is thought to manifest as respondents become "test-wise" after repeated exposure to NCVS survey instruments. Using a special longitudinal NCVS data file, we revisit the presence and influence of respondent fatigue in the NCVS. Specifically, we test the theory that respondents exposed to longer interviews during their first interview are more likely to refuse to participate in the survey 6 months later. Contrary to expectations based on the literature, results show that prior reporting of victimization and exposure to a longer interview is not a significant predictor of a noninterview during the following time-in-sample once relevant individual characteristics are accounted for. Findings do demonstrate significant effects of survey mode and several respondent characteristics on subsequent survey nonparticipation.

KEY WORDS: National Crime Victimization Survey (NCVS); longitudinal surveys; fatigue bias; nonresponse; nonsampling error.

[1] Department of Criminology, University of South Florida, 4202 E. Fowler Ave., SOC 107, Tampa, FL, 33620, USA.
[2] Department of Criminology and Criminal Justice, University of Missouri-St. Louis, MO, USA.
[3] Department of Political Science, Georgia Southern University, GA, USA.
[4] To whom correspondence should be addressed. E-mail: thart2@mail.usf.edu

1. INTRODUCTION

The National Crime Victimization Survey (NCVS) has long offered researchers and policymakers estimates of the nature and extent of criminal

victimization in the United States. Since its inception in July 1972, the NCVS (formerly the National Crime Survey, or NCS) has benefited from careful attention from researchers and is often used to benchmark other crime estimates (Cantor and Lynch, 2000; Chilton and Jarvis, 1999; Kindermann et al., 1997; Rand and Rennison, 2002; Tourangeau and McNeeley, 2003).

Past examinations of the NCVS exposed various methodological concerns commonly associated with longitudinal surveys. For example, nonsampling error caused by nonresponse, panel attrition, telescoping, and the use of proxy interviews are limitations that were identified in the NCVS (Biderman and Cantor, 1984; Bushery, 1978; Lehnen and Reiss, 1978a, b; National Research Council, 1976; Sliwa, 1977; Taylor, 1989; Woltman, 1975; Woltman and Bushery, 1977a, b; Ybarra and Lohr, 2000). In part because of these issues, the survey underwent a massive redesign effort that resulted in substantial methodological changes when implemented in 1992. For example, cue questions used on the Basic Screen Questionnaire (NCVS-1) were changed to improve respondent recall, more descriptions of crime incidents were included, computer-assisted telephone interviews were introduced, and specific questions about rape and sexual assault were added.

Given the major changes implemented in the NCVS over time, it is surprising that findings from some very early methodological investigations of the NCS continue to be accepted as part-and-parcel of the contemporary NCVS. One example of this conventional wisdom is that multiple interviews generate fatigue and cause a decreased level of reporting of victimizations in response to survey items (Thornberry and Krohn, 2003). One very early publication suggested that a possible source of nonsampling error in the NCS is respondent fatigue, also known as fatigue bias (Biderman, 1967; Biderman et al., 1967). Biderman et al. (1967) first identified "motivational fatigue" during NCS pretests, suggesting that respondents could become "test-wise" to the survey instrument, and noted that the issue deserved more attention. In the 1970s, Biderman's initial claim that respondents could become test-wise was supported by research conducted to assess respondent fatigue associated with the NCS (Lehnen and Reiss, 1978a, b). Since that time, and despite the threat it poses to estimation, little empirical attention has been paid to respondent fatigue in the NCVS.

We argue that the issue of respondent fatigue needs to be revisited considering the major methodological and measurement improvements implemented in the NCVS since these early studies were published. Our objective is to explore the extent of respondent fatigue in the NCVS using longitudinal data collected between 1996 and 1999. First, we revisit the work of Lehnen and Reiss (1978a, b) using a more appropriate dependent variable to assess the differences, if any, in contemporary and historical

determinants of fatigue defined in terms of victimization. We then extend their work by modifying the operational measure of fatigue. That is, we examine the issue in terms of whether respondents who are exposed to longer interviews during their first interview (i.e., they were victims and provided information for an incident report) are more likely to refuse to participate in the subsequent interview (rather than reduce the level of victimizations they reveal). Our analytic strategy is to link interviews from first-time subjects to information about their second interview 6 months later. The level of respondents' refusal to participate (a Type-Z [5] noninterview) during the second interview is assessed for all respondents. We also examine instrument- and respondent-level characteristics to provide a current understanding of the correlates of respondent fatigue bias in the NCVS.

[5] A Type-Z noninterview (i.e., refusal or never available) occurs when an eligible respondent does not provide an interview and the respondent is not the household respondent. A household respondent is the household member selected by the interviewer to be the first household member interviewed. The expectation is that the household respondent will be able to provide information for all persons in the sample household.

2. PRIOR RESEARCH ON NCVS RESPONDENT FATIGUE

One of the first attempts to estimate the influence of respondent fatigue on victimization survey estimates compared rival techniques of survey administration (Biderman et al., 1967; Skogan, 1981). The first technique allowed a respondent to become test-wise to the survey instrument: the survey was administered in a way that permitted a respondent to link a positive response (i.e., reporting being victimized) with a lengthy respondent task (i.e., being asked more detailed questions about a victimization). The second method of survey administration circumvented this situation by asking all detailed victimization questions following all general incident screening questions. Biderman et al. (1967) found that the second interviewing procedure (i.e., the non-test-wise version) produced 2½ times as many reported victimizations as the test-wise version. These findings supported the idea that fatigue bias contributed to nonsampling error in the NCS. While these conclusions are important, note that they are based on a cross-sectional survey of only 183 respondents.

Lehnen and Reiss (1978a, b) authored two important works on fatigue bias as it related to the NCS. They argued that the "multiple exposure to stimuli" problem in the NCS, due to repeated exposure to the same questionnaire, would substantially decrease the number of victimizations reported by respondents. And in fact, Lehnen and Reiss (1978b) found that a

principal source of response error in the NCS was due to respondents' repeated exposure to the survey. They argued that an NCS respondent has several opportunities to learn what is desired and become sensitized to the objective of the survey (Lehnen and Reiss, 1978a, p. 112).

The importance of the work of Lehnen and Reiss (1978a, b) is clear. However, nearly a quarter century has passed and replications of their work have not been conducted. Given the significant changes in NCVS methodology implemented during this time, much remains unknown about the nature and extent of fatigue bias in the NCVS. Most fundamentally, the level of respondent fatigue in the contemporary NCVS is unknown. The availability of a contemporary, longitudinal NCVS data set makes it possible to extend this classic work. These longitudinal data offer many advantages. First, the longitudinal file provides a large sample size (N > 323,000). Earlier estimates of individual fatigue bias were based on small, non-representative, cross-sectional samples, raising the possibility that findings were not generalizable. Second, extant data allow the unit of analysis to shift from the sub-group to the individual. At the time of Lehnen and Reiss's (1978a, b) work, it was not possible to match individual respondents across enumerations, and conclusions about individual fatigue bias could not be made. With new data, it is possible to assess factors that may predict individual fatigue bias over time.

Another way the work of Lehnen and Reiss (1978a, b) can be built upon is by controlling for changes in household composition across interviews. Previous data did not allow one to control for instances when an individual within a household, or an entire household, moved into or out of a sample address. This lack of control resulted in variation in the number of exposures to the survey instrument across individuals in the same panel. Controlling for changes in mobility is especially important given that the number of prior interviews is related to subsequent reports of victimization. Indeed, Lehnen and Reiss (1978a, p. 121) acknowledged that "the decline in observed reporting with number of previous interviews may be at least partially the result of sample attrition and not response fatigue" (see also Addington, 2005).

A final, and perhaps the most significant, way earlier work can be enhanced is by using an alternative measure of fatigue. In earlier work, if a respondent reported a higher number of victimizations during the first interview compared to later interviews, this was considered an indication of fatigue bias. This measurement scheme did not account for instances when a respondent was simply victimized less often during the second reference period than during the first. Such a measure of respondent fatigue raises the possibility of misclassifying individuals as fatigued when they simply were not victims of crime as much over time. We suggest that extant

findings are not necessarily incorrect, but that by revisiting this topic with alternative measures, operational definitions, and the improved methodology reflected in post-redesign data, we can further our understanding of fatigue bias. Before presenting findings, the following section describes the data, measures, and analytic strategy used to address the issues presented above.

3. DATA

The NCVS is a stratified, multistage, cluster sample employing a rotating panel design. Eligible household members are those individuals age 12 or older residing in the home at the time of the survey. Interviews with respondents are gathered through both face-to-face and telephone survey modes. In total, eligible residents in a sample household are interviewed seven times, once every 6 months, over a three-year period. During the basic screening interview, demographic information such as age, gender, race and ethnicity, marital status, and educational status is collected for each eligible household member. Some of this information (i.e., age and marital status) is updated during subsequent interviews if necessary. When respondents report an incident during this process, detailed incident-based data are collected. For example, characteristics of the crime (e.g., month, time, location, and type of crime), the victim-offender relationship, offender characteristics, self-protective actions taken by the victim, consequences of victim behaviors, whether the crime was reported to the police, and the presence of any weapons represent some of the information collected on the incident form.

Annually, NCVS data are compiled and released for public use. The first of the seven interviews in the NCVS is unbounded [6] and excluded from public-use files. Since initial exposure to the NCVS is vital to assessing fatigue, and data from first interviews are excluded from typical public-use files, a special longitudinal hierarchical file was required for this study.
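The study's design hinges on linking each respondent's first interview to the outcome of the second interview 6 months later. A minimal sketch of that kind of person-level linkage and filtering is shown below with toy records; the column names and outcome codes are hypothetical stand-ins for illustration, not actual NCVS Longitudinal File variables.

```python
import pandas as pd

# Toy person-level records; columns and codes are hypothetical.
records = pd.DataFrame({
    "person_id": [1, 1, 2, 2, 3, 3],
    "tis":       [1, 2, 1, 2, 1, 2],  # time-in-sample
    "outcome":   ["interview", "interview",
                  "interview", "refusal",
                  "proxy",     "interview"],
    "moved_out": [False, False, False, False, False, False],
})

tis1 = records[records["tis"] == 1].set_index("person_id")
tis2 = records[records["tis"] == 2].set_index("person_id")

# Selection criteria paraphrased from the text: a completed,
# non-proxy interview at TIS1 and no move between waves.
eligible = tis1[(tis1["outcome"] == "interview") & (~tis1["moved_out"])]

linked = eligible.join(tis2, lsuffix="_t1", rsuffix="_t2")

# Dependent variable: 1 = Type-Z noninterview (refusal or never
# available) at TIS2, 0 = completed interview.
linked["fatigue"] = (linked["outcome_t2"] != "interview").astype(int)
print(linked["fatigue"].tolist())  # → [0, 1]
```

Person 3 drops out of the subsample because of the proxy interview at TIS1; person 2's refusal at TIS2 is the outcome the study models.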
Recently, the Census Bureau compiled NCVS records from 1996 to 1999 and created a public-use longitudinal data file (Bureau of Justice Statistics, 2002). The 1996-1999 NCVS Longitudinal File is a hierarchical file containing five types of records: (1) index address ID records, (2) address ID records, (3) household records, (4) personal records, and (5) incident records. The index address ID records are unique to the longitudinal file and allow linkage of individuals' records across all seven interviews. The address ID records contain household identifiers, as well as rotation and panel information. The household records contain information about the household as reported by the household respondent. Personal records contain information about each eligible household member as reported by that person. [7] Finally, incident records contain data for each incident reported by an individual respondent. The 1996-1999 NCVS Longitudinal File is a nested, hierarchical, incident-record-defined data file containing 323,265 personal records, [8] consisting of eighteen quarterly collection cycles.

[6] Bounding interviews is a quality assurance process used in the NCVS to minimize the effects of telescoping. Each incident reported during an interview is checked against incidents reported for the same respondent during previous interviews. Since no information is available as a reference for incidents reported during the first interview, incident data collected during the first enumeration are excluded altogether. For additional information on bounding, see Addington, this volume.
[7] In very rare instances, a proxy interview is obtained. A proxy interview is conducted when someone other than the intended household member answers the interview questions for the eligible household member.
[8] While BJS analyses of NCVS data involve using incident-level data, an examination of individual respondent fatigue requires the use of individual-level data. Therefore, for the purposes of this study, personal records are used in lieu of incident records.

Several selection criteria were applied to the longitudinal file to create the subsample of data used in these analyses. Only an individual's initial and subsequent exposures to the survey were included. [9] Because initial exposure to the survey must have resulted in a completed face-to-face or telephone interview, noninterviews (i.e., Type-Z noninterviews) at time-in-sample one were excluded. [10] Further, proxy interviews during either the first or second interview were excluded. Because the sampling unit in the NCVS is a household address, households were included only if the occupants did not move out of the sample address between the initial and subsequent exposure. Finally, only Type-Z noninterviews [11] in which the respondent refused to be interviewed, or in which the respondent was never available, were included in the analysis. Application of these selection criteria resulted in a subsample of 32,612 person-level records for analysis in the current study.

[9] Data collected between the third quarter of 1995 and the second quarter of 1997.
[10] A respondent's initial interview is referred to as time-in-sample one, or TIS1; the second interview as TIS2, or time-in-sample two.
[11] The Type-Z noninterview category consists of never available, refusal, temporarily absent, mentally or physically unavailable, other, or office use only.

4. MEASURES

4.1. Fatigue

Past research exploring respondent fatigue in the NCVS used the number of incidents reported by a respondent as a proxy measure for fatigue (Lehnen and Reiss, 1978a, b; Skogan, 1981). For example, if at Time-In-Sample 1 (TIS1) an individual reported 1 incident and at TIS2 the same person reported 0 incidents, previous research concluded that this reduction

in incidents was evidence of respondent fatigue. It is feasible that respondents may not report incidents in subsequent NCVS interviews in hopes of shortening the length of the subsequent interview. However, we argue that a more efficient way of limiting exposure to the survey is for a respondent to avoid participating in it altogether during the following enumeration. Following this logic, we use an alternative measure for the dependent variable: the presence or absence of an interview during the respondent's second time-in-sample, or wave two. Response fatigue is measured using Type-Z noninterviews, which include (1) refusing to be interviewed outright, or (2) avoiding the interviewer by never being available to participate in the interview; the variable is coded as 0 (interview) or 1 (noninterview). Most of the 32,612 respondents in the current investigation (96%) participated in an interview at TIS2 (see Table I).

4.2. Instrument- and Respondent-Level Variables

Instrument- and respondent-level independent variables were included in the current study. Instrument-level variables include the survey mode and the number of victimizations reported during a respondent's initial exposure to the survey. It is important to control for these variables because they have been shown to have an effect on survey participation in the survey nonresponse literature (Dillman et al., 2002; Finkelhor et al., 1995; Groves and Couper, 1992, 1993, 1998; Harris-Kojetin and Tucker, 1998; Johnson, 1988; Lepkowski and Couper, 2002; Madans et al., 1986). Survey mode was coded as 0 (telephone) or 1 (face-to-face) to reflect the type of interview individuals had at TIS1. The majority of TIS1 interviews (71%) were conducted face-to-face. Conversely, most interviews at TIS2 (87%) were conducted over the telephone. Victimizations, or the number of self-reported victimizations, [12] is a continuous variable ranging from 0 to 7. Higher scores indicate more reported victimizations. For respondents reporting victimizations, the mean number of victimizations reported at TIS1 was 1.25, with a standard deviation of 0.62. As mentioned, this study hypothesizes that individuals who reported more victimization at TIS1 will be less likely to participate in the study at TIS2, net of variables that have been shown to affect survey nonresponse. Such an effect is expected due to the additional time victims spend answering questions concerning specific aspects of each incident.

[12] Victimizations include both personal and property crime.

A final group of control variables comprises respondent-level demographic characteristics of interviewees. Demographic variables were important to include in our models for several reasons. First, past studies

have shown age, gender, race and ethnicity, marital status, and education predict victimization (e.g., see Catalano, 2004; Rennison and Rand, 2003). Therefore, it is important to know whether similar demographic characteristics contribute to survey nonresponse in the NCVS, given the implications this would have for victimization estimates. Second, research on nonsampling error has indicated that similar demographic characteristics influence survey participation. Excluding such variables risks model misspecification.

Table I. Descriptive Statistics

Variables                             Mean    SD      %      Min  Max
Dependent variable
  Respondent fatigue (TIS2)                                  0    1
    Interview                                         96.5
    Noninterview                                       3.5
Instrument-level characteristics
  Reported victimizations (TIS1)                             0    7
    No                                                89.9
    Yes                                               10.1
  Number of victimizations            1.25    0.62
  Survey mode (TIS1)                                         0    1
    Telephone                                         29.0
    Face-to-face                                      71.0
Respondent-level characteristics
  Age (in years)                      43.85   18.14          12   90
  Gender                                                     0    1
    Male                                              45.7
    Female                                            54.3
  Race/ethnicity                                             1    4
    White non-Hispanic                                77.6
    Black non-Hispanic                                10.2
    Other non-Hispanic                                 3.6
    Hispanic, any race                                 8.7
  Marital status                                             1    5
    Married                                           58.7
    Never married                                     24.1
    Widowed                                            6.5
    Divorced                                           8.7
    Separated                                          1.9
  Educational status (in years)       13.19   3.45           0    19

Note: Data file is the 1996-1999 longitudinally linked NCVS. Statistics reflect weighted data. Unweighted n = 32,612.

Demographic variables included are respondent's age, gender, race and Hispanic origin, as well as marital and educational status. Age is a continuous variable ranging from 12 to 90. Respondents averaged about 43 years of age. Gender is coded as 0 (male) or 1 (female). Most respondents were female (54%). Race and Hispanic origin are captured through a set of four dichotomous variables: white non-Hispanic (78%), black non-Hispanic

(10%), other non-Hispanic (4%), and Hispanic [13] (9%). For the multivariate models that follow, white non-Hispanic is the excluded category. Marital status is captured using a set of five dichotomous variables: married (59%), never married (24%), widowed (7%), divorced (9%), and separated (2%). Married served as the reference category. Finally, educational status is a continuous variable measuring the years of completed formal education, ranging from 0 (no formal education) to 19. Educational status averaged 13 years of formal education completed.

To address the issues outlined above, we begin by presenting our analytic strategy, followed by a series of tables depicting the relationships between instrument- and individual-level variables and respondent participation.

5. ANALYTIC STRATEGY

The research questions are examined using survey-weighted logistic regression analyses (STATA Statistical Software, 2003). This procedure allows the modeling to take into account the complex sample design and clustering factors associated with the NCVS. Use of other statistical software, most of which assumes a simple random sample, would lead to the underestimation of standard errors and erroneous conclusions. [14]

The objective of this research is to address the nature of response effects in the NCVS using an improved dependent variable. The initial survey-weighted logistic regression models explore the influence of instrument-level characteristics on individuals' participation during the second wave of interviews (i.e., TIS2). Specifically, these models consider the survey mode used (i.e., telephone or face-to-face) and the reporting of an incident during the screening process (i.e., whether an incident report was completed) during the first interview.
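The authors fit these survey-weighted logistic regressions in Stata with design-based variance estimation. As a rough numerical illustration of what the estimator does, the sketch below computes weighted-logit point estimates by iteratively reweighted least squares (IRLS) on invented counts. The data, weights, and variable names are hypothetical, and proper standard errors would additionally require the NCVS stratum and cluster information, which this sketch omits.

```python
import numpy as np

def weighted_logit(X, y, w, iters=25):
    """Point estimates for a weighted logistic regression via IRLS.
    Covers coefficients only; design-based standard errors (what a
    svy-style estimator adds) need the sample design as well."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        s = w * p * (1.0 - p)                     # working weights
        z = X @ beta + (y - p) / (p * (1.0 - p))  # working response
        beta = np.linalg.solve(X.T @ (s[:, None] * X), X.T @ (s * z))
    return beta

# Toy person-level data: 40 of 1,000 nonvictims and 16 of 200 TIS1
# victims become noninterviews at TIS2 (illustrative counts only).
victim = np.repeat([0, 1], [1000, 200])
y = np.concatenate([np.repeat([0.0, 1.0], [960, 40]),
                    np.repeat([0.0, 1.0], [184, 16])])
X = np.column_stack([np.ones_like(victim), victim])
w = np.ones(y.shape)  # a real run would use the NCVS person weights

beta = weighted_logit(X, y, w)
# With a single binary predictor the model is saturated, so the
# intercept equals the empirical log-odds among nonvictims,
# log(40/960), and the slope is the log odds ratio between groups.
print(beta)
```

Since the toy weights are all 1.0 the weighting is inert here; the point of the sketch is the shape of the computation, not the numbers.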
[13] The other non-Hispanic category includes individuals who describe themselves as Asian, Pacific Islander, American Indian, Aleut, or Eskimo. Hispanic is a measure of ethnicity and may include persons of any race.
[14] Currently, no methods are available in standard software packages for assessing the fit of logistic models derived from complex sample surveys. Following the recommendation of Hosmer and Lemeshow (2000), goodness-of-fit statistics reported in the tables are based on model-based analyses.

We next offer a model that includes only respondent demographics (i.e., age, gender, race and Hispanic origin, marital status, and educational status) to determine the role these variables play in respondent participation during TIS2. The next, fully specified, model explores the influence of all instrument- and respondent-level characteristics on individuals' participation during TIS2. The final two models offer a more detailed examination of nonresponse by assessing models for telephone and face-to-face interviews at TIS1

354 Hart, Rennison, and Gibson separately. By investigating survey modes separately we can determine the influence of instrument- and respondent-level characteristics on nonresponse at TIS2. 6. RESULTS Several regression models that evaluate the effect of response effects on nonresponse when controlling for individual characteristics are presented in Table II. Except for a difference in the dependent variable used, these models are similar to the work of Lehnen and Reiss (1978a, b). Panel A in Table II offers a basic model examining the effect of number of reported victimizations at TIS1 on a respondent s willingness to participate at TIS2. Findings show that the number of previously reported victimizations is a significant predictor of nonresponse at TIS2. This finding is in agreement with the findings of Lehnen and Reiss (1978a, b). Respondents who reported victimization were more likely than respondents who reported no victimizations at TIS1 to refuse to participate at TIS2. Panel B evaluates the effects of two response effects survey mode and prior victimizations on future nonresponse. Like the model in Panel A, this model demonstrates a significant positive effect of previously reported victimizations on future nonresponse (b=0.17). In addition, findings show a significant negative effect of survey mode on future nonresponse (b=)0.45). That is, respondents who reported victimizations during TIS1 were more likely than respondents who reported no victimizations to refuse to participate at TIS2, net of the effect of survey mode. In addition, persons who were interviewed over the phone were significantly more likely than those interviewed in person at TIS1 to refuse to participate during the following enumeration, controlling for whether a prior victimization had been reported. This panel demonstrates that the rapport established between the field representative and the respondent during an in-person interview matters significantly. 
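The quantities reported in these tables follow directly from each coefficient and its standard error: the Wald statistic is (b/SE)^2 and the odds ratio is Exp(b) = e^b. A minimal illustrative sketch (the helper function is ours, not part of the original analysis):

```python
import math

def logit_summary(b: float, se: float) -> tuple[float, float]:
    """Wald chi-square statistic and odds ratio for one logit coefficient."""
    return (b / se) ** 2, math.exp(b)

# Panel B of Table II: reported victimizations (b = 0.17, SE = 0.08)
# and face-to-face mode (b = -0.45, SE = 0.07).
wald_v, or_v = logit_summary(0.17, 0.08)
wald_m, or_m = logit_summary(-0.45, 0.07)
print(round(or_v, 2), round(or_m, 2))  # 1.19 0.64
```

Wald values computed from the rounded published coefficients (about 4.5 and 41.3 here) differ slightly from the printed 4.42 and 43.96, which were presumably computed from unrounded estimates.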
Panel C marks the point at which our analyses move beyond the work of Lehnen and Reiss (1978a, b). Panel C presents findings for a regression model evaluating the predictive value of respondent demographics for future nonresponse, and shows that nearly all respondent demographics exert a significant effect on the odds of nonresponse at TIS2. Age demonstrates a significant negative effect on nonresponse at TIS2 (b = -0.02). That is, younger persons are significantly more likely than older respondents to refuse to participate during TIS2. Gender exerts a significant negative effect on future nonresponse (b = -0.55), demonstrating that nonresponse at TIS2 is significantly more likely among male than female respondents. Net of other individual characteristics, black non-Hispanics (b = 0.62), other non-Hispanics (b = 0.47), and Hispanics (b = 0.48) were significantly more likely than white non-Hispanics to refuse to participate during TIS2. Findings in Panel C also demonstrate that married persons were significantly more likely to refuse to participate at TIS2 than widowed (b = -1.02), divorced (b = -0.52), or separated (b = -0.82) respondents. No difference was measured in the odds of nonresponse at TIS2 between married and never married respondents. Educational status is not a significant predictor of nonresponse during TIS2. Odds ratios in Panel C indicate substantial differences in nonresponse for several individual characteristics. For example, the odds ratio of 1.86 for black non-Hispanic respondents indicates a moderate change in the odds of nonresponse at TIS2 compared to white non-Hispanic respondents. A similar change in the odds of nonresponse at TIS2 is found for other non-Hispanics (OR = 1.59) and Hispanic respondents (OR = 1.61).

Table II. Survey-Weighted Logistic Regression Model Predicting Nonresponse(a) at TIS2

Panel A
  Variable                              b       SE     Wald       Exp(b)
  Reported victimizations (TIS1)        0.17    0.08   4.34*      1.19
  Constant                             -3.32    0.05   4540.16*   0.00
  -2 Log-likelihood: 9802.17   Nagelkerke R-squared: 0.00*

Panel B
  Reported victimizations (TIS1)        0.17    0.08   4.42*      1.19
  Survey mode (TIS1): Telephone (reference)
    Face-to-face                       -0.45    0.07   43.96*     0.64
  Constant                             -3.02    0.06   2330.48*   0.05
  -2 Log-likelihood: 9745.67   Nagelkerke R-squared: 0.01*

Panel C
  Age                                  -0.02    0.00   54.11*     0.98
  Gender: Male (reference)
    Female                             -0.55    0.07   63.72*     0.58
  Race: White non-Hispanic (reference)
    Black non-Hispanic                  0.62    0.12   28.20*     1.86
    Other non-Hispanic                  0.47    0.18   6.72*      1.59
    Hispanic, any race                  0.48    0.14   10.43*     1.61
  Marital status: Married (reference)
    Never married                       0.09    0.09   0.99       1.09
    Widowed                            -1.02    0.29   12.57*     0.36
    Divorced                           -0.52    0.15   12.00*     0.59
    Separated                          -0.82    0.31   6.95*      0.44
  Educational status                   -0.01    0.01   0.60       0.99
  Constant                             -2.37    0.19   149.25*    0.09
  -2 Log-likelihood: 9393.56   Nagelkerke R-squared: 0.05*

Note: Data file is the 1996-1999 longitudinally linked NCVS. (a) Nonresponse is coded (0, 1), where participating in an interview equals 0 and nonresponse equals 1. Unweighted n = 32,612. *p < 0.05.

Table III presents regression output from a fully specified model containing instrument- and respondent-level indicators. Findings show that once respondent demographics are accounted for, the number of victimizations reported during TIS1 is no longer a significant predictor of future survey nonresponse. These findings offer no support for the hypothesis that exposure to a longer survey instrument during an initial NCVS interview makes respondents less likely to participate in a subsequent interview. Controlling for individual- and instrument-level characteristics, survey mode continues to exert a significant negative effect on nonresponse at TIS2 (b = -0.32). Specifically, respondents interviewed by telephone at TIS1 were more likely than those interviewed in person to refuse to participate at TIS2. With few exceptions, the effects of demographic characteristics on future nonresponse did not change when controls for instrument characteristics were included in the model.
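Logit coefficients such as those in Table III can be translated into predicted probabilities via p = 1/(1 + e^-(b0 + sum of b*x)). The sketch below is illustrative only: it uses the published (rounded) estimates, includes only a subset of the covariates, and holds all unlisted covariates at their reference category or zero.

```python
import math

# Published (rounded) coefficients from the fully specified model (Table III).
COEF = {
    "constant": -2.19,
    "age": -0.02,
    "face_to_face": -0.32,
    "black_nonhispanic": 0.64,
}

def p_nonresponse(profile: dict[str, float]) -> float:
    """Predicted probability of nonresponse at TIS2 for a covariate profile."""
    eta = COEF["constant"] + sum(COEF[k] * v for k, v in profile.items())
    return 1.0 / (1.0 + math.exp(-eta))

# Illustrative profile: a 40-year-old black non-Hispanic respondent whose
# TIS1 interview was face-to-face (remaining covariates at reference/zero).
print(round(p_nonresponse({"age": 40, "face_to_face": 1,
                           "black_nonhispanic": 1}), 3))  # 0.065
```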
One change that emerged with the introduction of the response-effect variables is the significant positive effect of never married on nonresponse (b = 0.07): never married persons were more likely than married persons to refuse to participate at TIS2. A second change was measured for widowed persons. In Panel C of Table II, findings suggested that married persons were significantly more likely to refuse to participate at TIS2 than widowed respondents (b = -1.02). In Table III, the sign of the coefficient for widowed respondents flipped, suggesting that once survey characteristics are accounted for, widowed persons (b = 1.02) were more likely to refuse to participate at TIS2 than their married counterparts. Odds ratios in Table III indicate substantial differences in nonresponse for several characteristics. For example, the odds ratio of 1.89 for black non-Hispanic respondents indicates a nontrivial difference in the odds of nonresponse at TIS2 compared to those of a white non-Hispanic. Likewise, a large difference in the odds of nonresponse at TIS2 for widowed respondents is evidenced by an odds ratio of 2.76.

Table III. Survey-Weighted Logistic Regression Model Predicting Nonresponse(a) at TIS2

  Variable                              b       SE     Wald       Exp(b)
  Reported victimizations (TIS1)        0.082   0.09   0.87       1.09
  Survey mode (TIS1): Telephone (reference)
    Face-to-face                       -0.32    0.07   21.51*     0.73
  Age                                  -0.02    0.00   46.50*     0.98
  Gender: Male (reference)
    Female                             -0.53    0.07   60.12*     0.59
  Race: White non-Hispanic (reference)
    Black non-Hispanic                  0.64    0.12   29.78*     1.89
    Other non-Hispanic                  0.49    0.18   7.46*      1.63
    Hispanic, any race                  0.48    0.14   11.70*     1.61
  Marital status: Married (reference)
    Never married                       0.07    0.09   0.58*      1.07
    Widowed                             1.02    0.29   12.49*     2.76
    Divorced                           -0.51    0.15   11.44*     0.60
    Separated                          -0.82    0.31   6.89*      0.44
  Educational status                   -0.01    0.01   1.14       0.99
  Constant                             -2.19    0.20   122.01*    0.11
  -2 Log-likelihood: 9365.38   Nagelkerke R-squared: 0.05*

Note: Data file is the 1996-1999 longitudinally linked NCVS. (a) Nonresponse is coded (0, 1), where participating in an interview equals 0 and nonresponse equals 1. Unweighted n = 32,612. *p < 0.05.

The models presented so far demonstrate the significance of survey mode for future nonresponse. The regression models in Table IV evaluate whether the effects observed in the fully specified model shown in Table III differ by survey mode at TIS1. The first set of findings presented in Table IV is based on models only for respondents interviewed in person during TIS1; the second is for respondents interviewed over the phone during TIS1. The Table IV results demonstrate that once individual characteristics of respondents are accounted for, the number of reported victimizations measured at TIS1 was not significantly related to nonresponse at TIS2. This finding holds regardless of the mode of surveying during TIS1.
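The between-mode comparison reported in Table IV's final column uses the z-test of Brame et al. (1998) for the equality of two independent maximum-likelihood coefficients, z = (b1 - b2) / sqrt(SE1^2 + SE2^2). A minimal sketch using the published gender coefficients:

```python
import math

def coef_diff_z(b1: float, se1: float, b2: float, se2: float) -> float:
    """z-test for the equality of regression coefficients from two
    independent models (Brame et al., 1998)."""
    return (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)

# Gender (female) coefficients in Table IV: face-to-face model
# (b = -0.61, SE = 0.09) versus telephone model (b = -0.41, SE = 0.11).
z = coef_diff_z(-0.61, 0.09, -0.41, 0.11)
print(round(z, 2))  # -1.41
```

The value from these rounded inputs (-1.41) is close to the published -1.42, which was presumably computed from unrounded estimates; either way it falls short of the conventional 1.96 threshold.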
Consistent with earlier models, and regardless of survey mode, younger persons were significantly more likely than older respondents to refuse to participate during TIS2. And as in earlier models, females were significantly less likely than males to refuse to participate at TIS2, regardless of survey mode. Again regardless of survey mode, findings show that black non-Hispanics were more likely not to participate at TIS2 than were white non-Hispanics. Survey mode made a difference for Hispanics and other non-Hispanics. A significant positive effect was found for face-to-face surveys of Hispanic respondents: when interviewed in person at TIS1, Hispanic respondents were significantly more likely than white non-Hispanics to refuse to participate at TIS2. In contrast, when interviewed over the phone at TIS1, other non-Hispanics were more likely to refuse to participate at TIS2.

Differences by survey mode were also found for marital status. Among those interviewed in person during TIS1, married persons were significantly more likely to refuse to participate at TIS2 than were never married, widowed, divorced, or separated respondents. In contrast, marital status was not a significant predictor of future nonresponse when the surveys at TIS1 were conducted over the phone.

Table IV. Survey-Weighted Logistic Regression Models Predicting Nonresponse(a) at TIS2, by Survey Mode

                                       Face-to-Face Survey             Telephone Survey              Difference between
  Variable                             b      SE    Wald     Exp(b)    b      SE    Wald     Exp(b)  coefficients(b)
  Reported victimizations (TIS1)       0.03   0.10  0.11     1.04      0.15   0.16  0.93     1.16    -0.62
  Age                                 -0.02   0.00  43.45*   0.98     -0.01   0.00  6.31*    0.99    -1.82
  Gender: Male (reference)
    Female                            -0.61   0.09  44.21*   0.54     -0.41   0.11  14.94*   0.66    -1.42
  Race: White non-Hispanic (reference)
    Black non-Hispanic                 0.74   0.14  26.93*   2.09      0.46   0.19  5.94*    1.58     1.17
    Other non-Hispanic                 0.38   0.22  2.84     1.46      0.66   0.25  6.94*    1.93    -0.84
    Hispanic, any race                 0.59   0.15  16.33*   1.80      0.22   0.22  1.00     1.24     1.42
  Marital status: Married (reference)
    Never married                     -0.04   0.11  0.11     0.96      0.27   0.14  3.78     1.30    -1.70
    Widowed                           -0.97   0.33  8.65*    0.38     -1.11   0.60  3.38     0.33     0.20
    Divorced                          -0.56   0.19  9.04*    0.57     -0.42   0.29  2.06     0.66    -0.42
    Separated                         -0.87   0.39  5.53*    0.42     -0.72   0.60  1.46     0.48    -0.20
  Educational status                  -0.01   0.01  0.75     0.99     -0.01   0.01  0.63     0.99     0.05
  Constant                            -2.37   0.24  99.41*   0.09     -2.48   0.28  75.92*   0.08     0.30
  -2 Log-likelihood: 6005.35 (face-to-face); 3431.25 (telephone)
  Nagelkerke R-squared: 0.04* (face-to-face); 0.03* (telephone)

Note: Data file is the 1996-1999 longitudinally linked NCVS. (a) Nonresponse is coded (0, 1), where participating in an interview equals 0 and nonresponse equals 1. (b) See Brame et al. (1998). *p < 0.05.

The previous section outlined significant predictors of future nonresponse for respondents who were interviewed initially by telephone and for those interviewed initially in person. A useful question is whether the coefficients in the two survey-mode models differ significantly. The final column in Table IV presents findings from z-tests (Brame et al., 1998). Findings demonstrate that despite apparent differences between coefficients in the two models, none reached statistical significance.

7. CONCLUSION

Our objective was to revisit the issue of respondent fatigue in the NCVS. Using an improved dependent variable, our results failed to support the conventional wisdom that respondents participating in the NCVS become test wise after being exposed to a longer survey instrument.
Estimations failing to include theoretically significant predictors of future nonresponse suggested fatigue on the part of the respondent. Controlling for these important variables supports the conclusion that greater exposure to the NCVS does not lead to respondent fatigue and future nonresponse. The lack of support for the respondent fatigue argument is the key finding of our study.

However, other important findings were observed that have implications for victimization surveys. Like Lehnen and Reiss (1978a, b), our work shows that survey mode matters greatly. Respondents whose initial interview was conducted over the telephone were more likely not to participate at TIS2 than were respondents who were initially interviewed in person. This finding is robust across all models considered. The effect of survey mode on future nonresponse is important to consider in terms of exposure to the survey. A majority of TIS1 interviews are conducted in person (71%), while only 13% are conducted over the phone. In contrast, 87% of TIS2 surveys are conducted via the telephone. Given the increased percentage of surveys conducted over the phone between TIS1 and TIS2, it should come as no surprise that nonresponse increases over time.

Like victimization in general, demographic characteristics such as age, gender, and race are significantly related to noninterview at TIS2. If demographic characteristics are linked both to nonresponse and to victimization, this suggests that victimization may be underestimated. By identifying the effects of demographics on nonresponse, specific efforts can be made to retain these individuals in future data collection efforts.

Our study offers several advantages over prior studies assessing respondent fatigue. Past research has explored respondent fatigue in the NCVS in terms of a positive-response burden and the nonreporting or underreporting of individual victimization. Arguably, attempting to discern whether a respondent fails to report an incident because of fatigue or because no victimization occurred is problematic. In fact, past research that defined the dependent variable in terms of the number of incidents of victimization assumes that fatigue would not prevent a respondent from participating in the initial screening process, but manifests only during the incident-based questions. Presumably, a better indicator of respondent fatigue measures a respondent's willingness to participate in the survey. Hence we used the number of prior victimization incidents reported to gauge the length of interviews, and we suggest that Type-Z noninterviews (i.e., situations in which an individual respondent refuses to take part in the interview or avoids being interviewed altogether) provide a more accurate measure of respondent fatigue.
By avoiding exposure to the Basic Screen Questionnaire and the subsequent incident-based questions, respondents eliminate the chance of being burdened entirely. In addition to relying on an alternative measure of the dependent variable, the current study employed a different unit of analysis than past research on respondent fatigue. Unlike prior studies, the unit of analysis used in the current investigation was the individual respondent, not the panel. Originally, the notion of motivational fatigue referred to individuals (Biderman et al., 1967). However, the panel design of the NCVS prevented revisiting the idea of fatigue bias as a source of nonsampling error among individuals. The availability of the NCVS Longitudinal File removes this constraint. Specifically, by using data from the Longitudinal File, we were able to examine individuals over time. Furthermore, these data enabled us to account for changes in households, which past studies were not able to do. In short, using individual-level, longitudinal NCVS data has enabled us to offer a new perspective on the issue of respondent fatigue.

Finally, data for the current research were generated after the NCVS underwent significant design changes. To our knowledge, no research on fatigue bias has examined post-redesign NCVS data. The major redesign improved the survey's ability to measure all victimizations, especially difficult-to-measure crimes such as rape, sexual assault, and domestic violence. In addition, the redesign addressed some of the problems identified with the survey during the first two decades of its use. Given these major changes, it seems appropriate to reassess the extent and influence of fatigue bias in the NCVS.

Our study should not be viewed as the definitive work on NCVS respondent fatigue. Our results suggest the need for additional research on whether fatigue bias presents a significant methodological problem in the NCVS. The logical next step would be to extend our investigation over multiple waves of data, as our investigation used only two time points of data collection to assess respondent fatigue. Perhaps by incorporating three, four, or five waves of data we might observe a test wise effect such as those observed in past research (see Lehnen and Reiss, 1978a). That is, respondent fatigue could be a process that occurs over time and does not appear until after the second interview. Furthermore, future studies could consider the reporting of specific victimizations and how it might affect respondent fatigue over time. Perhaps reporting specific victimizations repeatedly over multiple waves could lead to a respondent becoming test wise, or perhaps there is something unique about reporting a particular type of victimization that results in different respondent reactions or more rigorous follow-up questions surrounding the victimization. Only through further empirical investigation can we better understand the nature and extent to which fatigue bias impacts national crime victimization estimates.
REFERENCES

Addington, L. A. (2005). Disentangling the effects of bounding and mobility on reports of criminal victimization. J. Quant. Criminol. 21: (this issue).
Biderman, A. D. (1967). Surveys of population samples for estimating crime incidence. Ann. Am. Acad. Polit. Soc. Sci. 374: 16-33.
Biderman, A. D., and Cantor, D. (1984). A longitudinal analysis of bounding, respondent conditioning and mobility as sources of panel bias in the National Crime Survey. Paper presented at the Proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Biderman, A. D., Johnson, L. A., McIntyre, J., and Weir, A. W. (1967). Field Survey I: Report on a Pilot Study in the District of Columbia on Victimization and Attitudes Toward Law Enforcement. President's Commission on Law Enforcement and Administration of Justice Field Survey I. US Government Printing Office, Washington, DC.

Brame, R., Paternoster, R., Mazerolle, P., and Piquero, A. (1998). Testing for the equality of maximum-likelihood regression coefficients between two independent equations. J. Quant. Criminol. 14(3): 245-261.
Bureau of Justice Statistics (2002). National Crime Victimization Survey Longitudinal File, 1996-1999 [Computer file]. Conducted by U.S. Department of Commerce, Bureau of the Census.
Bushery, J. M. (1978). NCS Noninterview Rates by Time-in-Sample. U.S. Department of Commerce, Bureau of the Census, Washington, DC, unpublished memorandum.
Cantor, D., and Lynch, J. P. (2000). Self-report surveys as measures of crime and criminal victimization. In Duffee, D. (ed.), Criminal Justice 2000: Measurement and Analysis of Crime and Justice, Vol. 4, Government Printing Office, Washington, DC, pp. 85-138.
Catalano, S. (2004). Criminal Victimization 2003. Bureau of Justice Statistics (NCJ 205455), US Government Printing Office, Washington, DC.
Chilton, R., and Jarvis, J. (1999). Victims and offenders in two cities' statistics programs: A comparison of the National Incident-Based Reporting System (NIBRS) and the National Crime Victimization Survey (NCVS). J. Quant. Criminol. 15(2): 193-205.
Dillman, D. A., Eltinge, J. L., Groves, R. M., and Little, R. J. A. (2002). Survey nonresponse in design, data collection, and analysis. In Groves, R. M., Dillman, D. A., Eltinge, J. L., and Little, R. J. A. (eds.), Survey Nonresponse, Wiley, New York, NY.
Finkelhor, D., Asdigian, N., and Dziuba-Leatherman, J. (1995). Victimization prevention programs for children: A follow-up. Am. J. Public Health 85: 1684-1689.
Groves, R. M., and Couper, M. P. (1992). Correlates of nonresponse in personal visit surveys. Paper presented at the Proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Groves, R. M., and Couper, M. P. (1993). Multivariate analysis of nonresponse in personal visit surveys. Paper presented at the Proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Groves, R. M., and Couper, M. P. (1998). Nonresponse in Household Interview Surveys, Wiley, New York, NY.
Harris-Kojetin, B. A., and Tucker, C. (1998). Longitudinal nonresponse in the Current Population Survey (CPS). ZUMA Nachrichten Spezial 4: 263-272.
Hosmer, D. W., and Lemeshow, S. (2000). Applied Logistic Regression (2nd ed.), Wiley, New York, NY.
Johnson, T. P. (1988). The Social Environment and Health. Unpublished doctoral dissertation, University of Kentucky, Lexington.
Kindermann, C., Lynch, J., and Cantor, D. (1997). Effects of the Redesign on Victimization Estimates. Bureau of Justice Statistics (NCJ 1643681), US Government Printing Office, Washington, DC.
Lehnen, R. G., and Reiss, A. J. (1978a). Response effects in the National Crime Survey. Victimology 3(1-2): 110-122.
Lehnen, R. G., and Reiss, A. J. (1978b). Some response effects of the National Crime Survey. Paper presented at the Proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.
Lepkowski, J. M., and Couper, M. P. (2002). Nonresponse in the second wave of longitudinal household surveys. In Groves, R. M., Dillman, D. A., Eltinge, J. L., and Little, R. J. A. (eds.), Survey Nonresponse, Wiley, New York, NY.
Madans, J. H., Kleinman, J. C., Cox, C. S., Barbano, H. E., Feldman, J. J., Cohen, B., Finucane, F. F., and Cornoni-Huntley, J. (1986). Ten years after NHANES I: Report of initial follow-up, 1982-1984. Public Health Reports 101: 465-473.

National Research Council (1976). Surveying crime. In Eidson-Penick, B. K., and Owens, M. E. B. (eds.), Panel for the Evaluation of Crime Surveys, National Academy of Sciences, Washington, DC.
Rand, M. R., and Rennison, C. M. (2002). True crime stories? Accounting for differences in our national crime indicators. Chance 15(1): 47-51.
Rennison, C. M., and Rand, M. R. (2003). Criminal Victimization, 2002. U.S. Bureau of Justice Statistics, Government Printing Office, Washington, DC.
Skogan, W. G. (1981). Issues in the Measurement of Victimization. U.S. Bureau of Justice Statistics (NCJ 74682), Government Printing Office, Washington, DC.
Sliwa, G. E. (1977). Analysis of Nonresponse Rates for Various Household Surveys. U.S. Department of Commerce, Bureau of the Census, Washington, DC, unpublished memorandum.
STATA Statistical Software (2003). Release 8.0, StataCorp, College Station, TX.
Taylor, B. M. (1989). New Directions for the National Crime Survey. U.S. Bureau of Justice Statistics (NCJ 115571), Government Printing Office, Washington, DC.
Thornberry, T. P., and Krohn, M. D. (2003). Comparison of self-report and official data for measuring crime. In National Academy of Sciences (ed.), Measurement Problems in Criminal Justice Research, National Academies Press, Washington, DC.
Tourangeau, R., and McNeeley, M. (2003). Measuring crime and crime victimization: Methodological issues. In National Research Council (ed.), Measurement Problems in Criminal Justice Research, National Academies Press, Washington, DC.
Woltman, H. (1975). Recall Bias and Telescoping in the NCS. U.S. Department of Commerce, Bureau of the Census, Washington, DC, unpublished memorandum.
Woltman, H., and Bushery, J. (1977a). Optimum Number of Times to Retain a Panel in Sample in NCS. U.S. Department of Commerce, Bureau of the Census, Washington, DC, unpublished memorandum.
Woltman, H., and Bushery, J. (1977b). Update of the NCS Panel Bias Study. U.S. Department of Commerce, Bureau of the Census, Washington, DC, unpublished memorandum.
Ybarra, L., and Lohr, S. L. (2000). Effects of attrition in the National Crime Victimization Survey. Paper presented at the Proceedings of the Survey Research Methods Section, American Statistical Association, Alexandria, VA.