International Journal of Public Opinion Research Vol. 22 No. 4
© The Author 2010. Published by Oxford University Press on behalf of The World Association for Public Opinion Research. All rights reserved.
doi:10.1093/ijpor/edq037
Advance Access publication 25 October 2010

RESEARCH NOTE

The Relation Between Unit Nonresponse and Item Nonresponse: A Response Continuum Perspective

Ting Yan¹ and Richard Curtin²
¹NORC at the University of Chicago, USA; ²Survey Research Center, University of Michigan, USA

Nonresponse is a significant problem for survey research. The phenomenon encompasses nonresponse at both the unit and the item level. Unit nonresponse refers to the complete absence of an interview from a sampled household, whereas item nonresponse refers to the absence of answers to specific questions after the sampled household agrees to participate in the survey. Traditionally, survey research has treated unit and item nonresponse as two separate problems with different impacts on data quality, different statistical treatments and adjustments, and different underlying causes (Beatty & Herrmann, 2002; Groves, 1989; Groves, Cialdini, & Couper, 1992; Groves, Presser, & Dipko, 2004; Groves, Singer, & Corning, 2000). Unit nonresponse is usually considered to pose a much greater threat to survey research than item nonresponse: it is usually much larger in magnitude, and a wide variety of statistical techniques have been developed to address it. Item nonresponse poses an additional threat to data quality because it reduces the sample size when only completed cases are used in an analysis. Imputation techniques can be employed to fill in missing data and avoid this shrinkage in sample size. However, item nonresponse remains problematic for surveys when it produces nonignorable missing data.
Nonignorable missing data occur when the missing-data pattern is correlated with the values of the variable of interest (Little & Rubin, 1987). Of course, unit nonresponse can also create nonignorable nonresponse if the missing pattern among sample persons is related to the values of the variable of interest. The survey literature has recorded a persistent and well-documented rise over the past few decades in unit nonresponse rates for household surveys (Atrostic, Bates, Burt, & Silberstein, 2001; Curtin, Presser, & Singer, 2005; de Heer, 1999; de Leeuw & de Heer, 2002; Djerf, 2004; Hox & de Leeuw, 1994). But the trend of item nonresponse over time is less well documented, and the literature on the interconnection between unit and item nonresponse is sparse.

All correspondence concerning this article should be addressed to Ting Yan, NORC at the University of Chicago, 55 East Monroe Street, Chicago, IL 60603, USA. E-mail: yan-ting@norc.org

The empirical evidence seems to point unanimously to a positive relation between respondents' propensity not to participate in a survey and the amount of item nonresponse. For instance, respondents who initially refused but were later successfully converted had higher item missing-data rates than initial cooperators (Mason, Lesser, & Traugott, 2002; Stinchcombe, Jones, & Sheatsley, 1981). Respondents who expressed disinterest at the survey introduction, and those coded by interviewers as reluctant to take part, were also more likely to produce missing data than those who showed no disinterest (Burton, Laurie, & Moon, 1999; Campanelli, Sturgis, & Moon, 1996; Couper, 1997). In addition, respondents who had more item missing data in an earlier wave tended to refuse to grant a subsequent interview (Loosveldt, Pickery, & Billiet, 2002). However, the theoretical account of the interconnection between unit and item nonresponse is less well developed. The traditional view is that the causes and the underlying processes differ for unit and item nonresponse; Groves and Couper (1998) believe that "the influences towards item nonresponse are quite different from those of the initial acceptance of the interview" (p. 22). This view, however, does not seem to be supported by the observed positive relation between unit and item nonresponse propensities. As a competing alternative, we posit a response continuum model linking unit and item nonresponse (see also a similar model linking item and unit nonresponse in Burton et al., 1999).
This model places respondents on a response continuum based on their propensity to respond to a survey request and to answer survey questions. Theoretically speaking, respondents with a zero propensity to take part in a survey occupy the left end, and those who are most cooperative, who will participate with certainty and answer all questions, occupy the other end. Moving from left to right along the continuum, respondents' propensity to participate in a survey increases, and so does their propensity to answer survey questions, as shown in Figure 1. Other than at the extremes, the figure is not meant to imply that the model assigns the same quantitative probability to unit and item nonresponse; it indicates only differences in relative probabilities. We also acknowledge that a sample person's propensity to respond to a survey request and to answer a survey question is conditional on survey conditions. Any change in survey design features, such as the mode of administration, interviewer assignment, or interviewer behavior, will change a sample person's propensity to take part in a survey and/or to answer a survey question, moving him or her to a different location on the response continuum.

Figure 1. The response continuum model: zero propensity to respond to interview & questions → low relative propensity → high relative propensity → certain propensity to respond to interview & questions.
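The model's logic can be illustrated with a small simulation in which a single latent "cooperativeness" score drives both propensities (the shared-antecedent idea). All numbers below are hypothetical, not estimated from the SCA data: at the individual level the two propensities come out positively correlated, and at the survey level recruiting deeper into the continuum (lowering the participation threshold, hence lowering unit nonresponse) raises item nonresponse among the interviewed pool.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# One latent cooperativeness score per sample person drives both propensities
# (hypothetical scale; values drawn from a standard normal).
coop = rng.normal(size=n)
p_participate = 1 / (1 + np.exp(-coop))                       # unit level
p_answer = 1 / (1 + np.exp(-(coop + rng.normal(0, 0.5, n))))  # item level

# Individual level: the two propensities are positively correlated.
r = np.corrcoef(p_participate, p_answer)[0, 1]
print(f"individual-level correlation: {r:.2f}")

# Survey level: a lower recruitment threshold (more effort, lower unit
# nonresponse) pulls in less cooperative people, so item nonresponse among
# the respondents rises.
results = {}
for thresh in (0.7, 0.5, 0.3):
    resp = p_participate > thresh
    unit_nr = 1 - resp.mean()
    item_nr = 1 - p_answer[resp].mean()
    results[thresh] = (unit_nr, item_nr)
    print(f"threshold {thresh}: unit NR {unit_nr:.2f}, item NR {item_nr:.2f}")
```

The simulation is only a sketch of the mechanism; the paper's empirical tests of both predictions follow below.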

What is unique about the response continuum model is its focus on the common antecedents shared by unit and item nonresponse. Survey practitioners have long recognized the connection between unit and item nonresponse in their judgments about whether a respondent answered enough questions to meet the minimum requirements for a completed or partial interview under American Association for Public Opinion Research guidelines. These judgments amount to choosing a point on the response continuum. At the individual level, the response continuum model predicts a positive relation between a sample member's propensity to respond to a survey request and his or her propensity to answer survey questions. This individual-level prediction manifests itself in two forms. On the one hand, according to the model, sample persons who have a low propensity to answer survey questions also have a low propensity to cooperate with a survey request; research on longitudinal surveys has found that sample persons who provide more missing data in an earlier wave are less likely to participate in the next wave (Loosveldt et al., 2002; Yan & Curtin, 2007). On the other hand, the model predicts that respondents who are less likely to participate in a survey are also more likely to provide missing data, a relationship often observed in cross-sectional surveys (Campanelli et al., 1996; Couper, 1997; Mason et al., 2002; Stinchcombe et al., 1981). Both manifestations of this positive relation between respondents' propensity to respond to surveys and their propensity to answer survey questions are important for survey researchers.
For instance, respondents with high levels of item nonresponse in one wave of a longitudinal survey can be given special treatments, such as higher incentives or special advance letters, to increase their chance of taking part in the next wave. Similarly, sample persons who require more effort to agree to the survey request could be assigned interviewers trained to reduce their propensity to provide missing data. The response continuum model can thus be used in responsive designs to help survey practitioners make informed survey design decisions.

In addition to the individual-level relationship, the response continuum model also generates predictions at the aggregate level (i.e., at the survey level). For a cross-sectional survey, the model predicts a trade-off between unit and item nonresponse rates: surveys with lower unit nonresponse rates, which presumably include more people with a low propensity to respond, will have higher item nonresponse rates. Thus, at the survey level, there exists a negative relation between unit and item nonresponse rates. This negative relation is of great practical interest. One concern that survey practitioners have about expending effort to reduce unit nonresponse is that people with a low propensity to participate may transfer their resistance into the question-answering process by expending less effort on their answers. It is conceivable, for example, that a sample person who is bothered by an interviewer's persistent callbacks decides to do the survey but to answer as few questions as possible. A common concern among survey practitioners is whether bringing these people into the respondent pool would reduce data quality. With limited resources, survey practitioners must at some point stop the effort to recruit difficult sample persons, and the decision of when to stop should take into consideration this trade-off between unit and item nonresponse. To the best of our knowledge, few studies have looked into the relation between unit and item nonresponse rates.

This article has two main goals. The first is to examine the trend of item nonresponse over 20 years; specifically, we investigate the trend of item nonresponse in the face of the increasing unit nonresponse experienced by most surveys in recent decades. Second, we examine the relation between unit and item nonresponse rates using the response continuum model. We first test the response continuum model with cross-sectional respondent-level data. We then expand the model in two ways: by testing the aggregate-level hypothesis of a negative relation between unit and item nonresponse rates, and by examining the trend of this relation over time.

The Data

Twenty years of data from the Surveys of Consumers (SCA), conducted by the University of Michigan, are used to examine the relationship between trends in unit and item nonresponse. Data were collected from 1986 to 2005, a period during which the wording of the core set of questions asked in the SCA remained unchanged. Utilizing this database has advantages and disadvantages. The main disadvantage is that the analysis focuses solely on the prevalence of missing data on questions about economic expectations; the findings may not generalize to other attitudinal or behavioral measures. The advantage is that the sample design and survey methodology have been constant. Survey design factors, such as the mode of the interview and the complexity of the questions, are an important source of differences in item nonresponse; holding these factors constant allows the analysis to isolate a net effect of unit nonresponse on item nonresponse.
The SCA started out as an area-probability in-home survey in the mid-1940s and was converted to a random-digit-dial telephone survey in the mid-1970s. The survey is conducted monthly and is based on a rotating-panel design. The analysis is restricted to the newly drawn representative samples, a restriction consistent with published work on the survey (e.g., Curtin, Presser, & Singer, 2000, 2005; Singer, Van Hoewyk, & Maher, 2000). New cases were selected using Mitofsky-Waksberg procedures until 1993 (Waksberg, 1978; for random-digit-dialing sampling methods, see Wolter, Chowdhury, & Kelly, 2009) and list-assisted procedures since then. From each household, one respondent has been randomly selected from among all household residents aged 18 years or older. About 300 new interviews are now conducted each month. Except for the constraint imposed by the month-long interviewing period, no limit is placed on the number of calls, and attempts are made to convert virtually all initial refusals. For the historical trend of unit response rates, see Curtin et al. (2000, 2005).

The core part of the SCA assesses respondents' current and future financial situation, current and future economic conditions, expectations for inflation, unemployment, and interest rates, and attitudes toward buying conditions. These questions are components of the Index of Consumer Sentiment and are asked at the beginning of each monthly survey; the wordings of these core questions (displayed in Appendix A) have remained unchanged for the past 20 years.

Prevalence of Item Nonresponse

We first present results on the prevalence of item nonresponse and its trend over the past 20 years before discussing the interconnection between unit and item nonresponse rates. We calculated an index of item nonresponse based on responses to 16 of the core questions asked each month over the 20-year period. The index represents the average number of times a respondent refused to answer or answered "don't know" to these 16 items. The index is constructed in the same way as in Singer et al. (2000), except that their index was based on 17 questions; we removed one question because it was not asked over the entire 20-year period. The only non-attitudinal component of our index is the income question, which has a reputation for incurring higher item nonresponse. The Pearson bivariate correlation of the item nonresponse index with and without the income question is .94, and excluding the income question does not change the conclusions reported below; we therefore kept the income question in the index to be consistent with the published literature (i.e., Singer et al., 2000). The item nonresponse percentage for each of the 16 items in the index is displayed in Appendix B. We also analyzed "don't know" and refused responses separately; the conclusions were unchanged whether the two categories were combined or separated, so we combined them in the results reported below. The index of item nonresponse measures the extent of missing values at the respondent level; that is, the percentage of times a respondent offered a missing value rather than a substantive one.
Over the 20-year period, from June 1986 to December 2005, the overall mean value of the index of item nonresponse is 0.044, which means that, on average, respondents provided missing values 4.4% of the time, or on 0.7 of the 16 survey questions (i.e., 16 × 0.044 = 0.704). Figure 2 shows the time trend in the yearly average index values of item nonresponse (the top line in the figure, corresponding to the left vertical axis), which range from 2.4 to 5.5%. (The trend presented in Figure 2 is based on unweighted data. Weighted item nonresponse rates were also examined to determine the effect of differential nonresponse rates across population subgroups over the 20-year period; there was no significant difference between the weighted and unweighted trends, so the remaining analyses are based on the unweighted data.) Perhaps the most remarkable aspect of the item nonresponse data is how much variation was recorded for an identical set of questions over the past few decades. For most of the period, the variations were unsystematic, with a trend line indicating insignificant variation until the past several years. Presumably the variations were due to changes in the motivations of respondents, as expressed in unit as well as item nonresponse rates.
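The construction of such an index, and the conversion from the mean index value to an expected number of unanswered questions, can be sketched as follows. The response codes and the toy data are hypothetical (they are not the SCA's actual coding), and only 4 items are used for brevity where the real index uses 16.

```python
import numpy as np

MISSING_CODES = (98, 99)  # hypothetical codes for "don't know" / refused

# Toy respondent-by-item response matrix: 3 respondents x 4 items.
responses = np.array([
    [1, 2, 98, 3],
    [2, 99, 98, 1],
    [1, 2, 3, 2],
])

is_missing = np.isin(responses, MISSING_CODES)
index = is_missing.mean(axis=1)  # per-respondent share of missing answers
print(index)                     # 0.25, 0.5, and 0.0 for the three respondents

# Mean index value -> expected number of unanswered questions, as in the text:
n_items = 16
print(round(n_items * 0.044, 3))  # 0.704
```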

Figure 2. Item nonresponse rates and unit nonresponse rates, 1986-2005 (item nonresponse on the left vertical axis, unit nonresponse on the right).

Another remarkable feature is the sharp decline in item nonresponse in recent years. Our examination of the causes of the decline started with the trend line for unit nonresponse rates, also shown in Figure 2 (the bottom line, corresponding to the right vertical axis). As shown in Curtin et al. (2005), SCA unit nonresponse rates showed little change from 1979 to 1996 and a sharper increase after 1996. To better understand the trend in item nonresponse in light of this rising unit nonresponse, we fitted separate trend lines for the two subperiods, 1986-1996 and 1997-2005 (see Figure 3). Indeed, the trend line fitted from 1986 to 1996 suggests no significant change in item nonresponse over those 10 years, when unit nonresponse rates showed no change either; the trend coefficient is not significantly different from zero (SE = 0.041). By contrast, the data from 1997 to 2005 indicate a significant decline (trend coefficient = -0.302, SE = 0.079), falling from 4.8% in 1997 to just 2.4% in 2005. The recent decline falls well outside the range recorded in the prior 20 years, with new lows in item nonresponse recorded in each of the past 3 years. The trend data suggest that item nonresponse rates moved with unit nonresponse rates: when there was no significant trend change in unit nonresponse rates, item nonresponse rates showed no significant trend change either, whereas item nonresponse rates dropped significantly during the same period in which unit nonresponse rates increased. We test this relationship between unit and item nonresponse rates formally in the next section.
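Fitting such a subperiod trend line is a one-line least-squares fit. In the sketch below, only the 1997 and 2005 endpoints (4.8% and 2.4%) come from the text; the interior yearly rates are illustrative stand-ins, not the actual SCA series.

```python
import numpy as np

years = np.arange(1997, 2006)
# Illustrative yearly item nonresponse rates (%); endpoints match the text,
# interior values are made up for the example.
rates = np.array([4.8, 4.6, 4.5, 4.1, 3.9, 3.5, 3.1, 2.7, 2.4])

slope, intercept = np.polyfit(years, rates, 1)
print(round(float(slope), 2))  # -0.31 percentage points per year for these toy data
```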

Figure 3. Item nonresponse rates, with separate fitted trend lines for 1986-1996 and 1997-2005.

Relation Between Unit and Item Nonresponse at the Individual Level

The response continuum model predicts a positive relation between a respondent's propensity to be a unit nonrespondent and his or her propensity to produce missing data. To test the hypothesis that respondents who had a lower propensity to respond provided more missing data, an ordinary least squares regression was run on the combined cross-sectional data with the index of item nonresponse as the dependent variable. The key independent variables are two proxy indicators of a low propensity to agree to the survey request: the number of call attempts a respondent took to complete the interview and an indicator of whether the respondent initially refused the survey request. Even though we cannot always infer noncontact from a higher number of calls, or refusal from an initial refusal, the number of calls and refusal status are measures of the difficulty of contacting and recruiting a sample person. Respondents requiring more call attempts and refusal conversion efforts sit toward the left end of the response continuum and are expected to have more missing data than those who, for instance, agreed to the survey request at the first call and needed no refusal conversion. We also entered demographic characteristics (age, gender, education, race, marital status, and income) into the regression model to control for any idiosyncratic effects. Table 1 shows the results of the respondent-level regression, drawing on 20 years of monthly surveys. Consistent with the response continuum model, respondents who needed more calls to complete the interview had significantly more item nonresponse, holding various demographic characteristics constant.
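A regression of this form can be sketched as follows. The data are simulated (numbers of calls, refusal shares, and effect sizes are all hypothetical, chosen only to mimic the direction of the reported pattern), and the design matrix is trimmed to the two key predictors plus an intercept for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

calls = rng.poisson(3.0, n)         # call attempts to complete the interview
refuser = rng.binomial(1, 0.10, n)  # 1 = initial refuser, later converted

# Simulated item nonresponse index (%), built so that more calls and an
# initial refusal raise missing data.
y = 3.0 + 0.05 * calls + 0.6 * refuser + rng.normal(0.0, 0.5, n)

# Ordinary least squares fit (np.linalg.lstsq solves min ||X b - y||).
X = np.column_stack([np.ones(n), refuser, calls])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("refuser effect:", round(beta[1], 2), "per-call effect:", round(beta[2], 2))
```

With simulated data the fit simply recovers the planted effects; in the paper the analogous coefficients are estimated from the SCA respondent records with the full set of demographic controls.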
Table 1. Predicting Item Nonresponse Rates: Individual-Level Regression. (Parameter estimates and standard errors for: initial refuser; number of call attempts; time index; age; male; married; White; income above median; high school or less education; intercept; R². Note: all estimates are significant at p < .05.)

On average, one additional call produces an increase of about one hundredth of a percentage point in the index of item nonresponse. Replicating the results of Mason and colleagues (Mason et al., 2002) and Stinchcombe and colleagues (Stinchcombe et al., 1981), respondents who initially refused the survey request also had higher item nonresponse: initial refusers had, on average, 0.6 percentage points more item nonresponse than cooperators. The regression further confirmed that older respondents, females, respondents who were not married, non-White respondents, less wealthy respondents, and those with a high school education or less had more missing data, consistent with prior work on item nonresponse (Bell, 1984; Craig & McCann, 1978; de Leeuw, 2001; Ferber, 1966; Riphahn & Serfling, 2005).

Since January 2000, the SCA has sent a $5 prepaid incentive to all potential respondents whose mailing address could be located (see Curtin et al., 2005). Given the sharp decline in item nonresponse in recent years, there was a concern that the decline was driven in part by the inclusion of respondents who received a prepaid incentive; in other words, receiving an incentive might affect the amount of item nonresponse a respondent provides. To examine this, we compared respondents who received a prepaid incentive with those who were not sent one in terms of item nonresponse. We were able to identify, for each person, whether an incentive was sent for only four monthly surveys (November 2003, December 2003, January 2004, and February 2004), and data from these four surveys were used to examine the effect of incentives on item nonresponse. Respondents who were not sent a prepaid incentive tended to have more missing data (M = 3.26) than those who received an incentive (M = 2.92), but the difference was not statistically significant, F(1, 1180) = 0.88, ns. In addition, we added incentive status to the regression model in Table 1; it was not significantly related to the item nonresponse index (regression coefficient = 0.355, SE = 0.426).
Furthermore, incentive status did not interact with the number of calls or with the initial-refuser indicator (results not shown). These analyses suggest that the use of incentives cannot account for the sharp decline in item nonresponse in recent years.

To test the individual-level hypothesis that respondents who had higher item nonresponse at wave one were more likely not to participate in the reinterview, we modeled the probability of not responding to the reinterview as a function of the item nonresponse index at the first interview. (Besides the item nonresponse index, the same independent variables as in the ordinary least squares regression in Table 1 were entered.)

Table 2. Predicting Response to Reinterview Using the Item Nonresponse Index. (Logistic regression parameter estimates in log scale, standard errors, and odds ratios for: item nonresponse index; initial refuser; number of call attempts; time index; age; male; married; White; income above median; high school or less education; intercept; max-rescaled R². Note: bolded estimates are significant at p < .05.)

As shown in Table 2, the logistic regression confirmed that, holding everything else constant, respondents who had more missing data (a higher value on the item nonresponse index) were more likely not to respond to the reinterview (regression coefficient in log scale = 0.020, SE = 0.001); in other words, when the item nonresponse index increases by one unit, the odds of not responding to the reinterview are multiplied by 1.02 (see also Yan & Curtin, 2007). Both individual-level hypotheses of the response continuum model are thus supported.

Relation Between Unit and Item Nonresponse at the Aggregate Level

In the preceding section, we showed that the item nonresponse index moved with the trend in unit nonresponse. The response continuum model hypothesizes that, at the aggregate level, surveys with lower unit nonresponse rates will have higher item nonresponse rates, presumably because surveys with lower unit nonresponse rates include more respondents with a low propensity to participate. To test this survey-level hypothesis, we used survey-level data; that is, each observation in the analysis is a monthly survey, and the variables are monthly survey-level attributes. Twenty years of SCA data comprise 231 monthly surveys, yielding 231 pairs of unit and item nonresponse rates (with the monthly average of the index of item nonresponse serving as the item nonresponse rate).
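The odds-ratio reading of the logistic coefficient is simply its exponential, which reproduces the 1.02 reported in the text:

```python
import math

coef = 0.020  # reported log-odds coefficient for the item nonresponse index
odds_ratio = math.exp(coef)
print(round(odds_ratio, 2))  # 1.02: each one-unit rise in the index multiplies
                             # the odds of reinterview nonresponse by about 1.02
```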
We first took a simple correlation of unit nonresponse rates for the monthly surveys with the monthly averages of the item nonresponse index. The data indicate that unit nonresponse rates have risen while item nonresponse has fallen; the correlation coefficient is negative and statistically significant (p < .0001), supporting the aggregate-level hypothesis predicted by the response continuum model. To obtain the net effect of unit nonresponse rates on item nonresponse rates, we regressed the monthly average of the item nonresponse index on monthly unit nonresponse rates, the percentage of initial refusers in the monthly sample, and the average number of call attempts. To control for possible effects of differential sample composition across monthly surveys, we also included as control variables the mean age, the percentage of males, the percentage of Whites, the percentage of married respondents, the percentage of respondents with income above the median, and the percentage of respondents with a high school education or less (the same set of variables used in weighting). An AR(1) autoregressive model was fit to correct for the autocorrelated errors that arise with time-series data.

Table 3. Predicting Item Nonresponse Rates: Aggregate-Level Regression. (Parameter estimates and standard errors for: unit nonresponse rates; percent of initial refusers; average number of calls; mean age; percent of male respondents; percent of married respondents; percent of White respondents; percent of respondents with income above median; percent of respondents with high school or less education; intercept; R². Note: bolded estimates are significant at p < .05.)

The regression results are displayed in Table 3. As shown there, after controlling for sample composition, unit nonresponse rates still have a significant negative effect on item nonresponse rates.
Reducing unit nonresponse by 1 percentage point (i.e., improving the unit response rate by 1 percentage point) increases item nonresponse by almost 0.06 percentage points. The net effect may be small for a one-unit change in unit nonresponse rates, but the cumulative effect is more substantial: unit nonresponse rates increased from about 29.3% in 1986 to 55.2% in 2005, which translates into a reduction in item nonresponse of 1.53 percentage points, roughly corresponding to the difference between item nonresponse in 1986 (4.3%) and in 2005 (2.4%). Both the significant negative regression coefficient from the aggregate-level model and the significant negative Pearson correlation provide one more piece of support for the response continuum model. We next examine how this negative relation between unit and item nonresponse changes over time.

Trend of the Aggregate-Level Relation Between Unit and Item Nonresponse

Because of the increase in SCA unit nonresponse rates after 1996, we calculated the bivariate correlation between unit and item nonresponse separately for 1986-1996 and 1997-2005. For the first 10 years, from 1986 to 1996, the correlation between unit and item nonresponse is essentially zero (r = .090, p = .317). By contrast, the correlation between the two aspects of nonresponse is -.742 (p < .001) from 1997 to 2005. This set of correlations confirms the trend observed in the earlier section: when unit nonresponse rates did not change, there was no change in item nonresponse rates, whereas the increase in unit nonresponse rates was accompanied by a fall in item nonresponse rates, supporting the response continuum model. We also ran the regression model from Table 3 twice, once on monthly surveys before 1997 and once on surveys from 1997 onward. Both models contain the same independent and control variables, and the dependent variable is the monthly average of the index of item nonresponse, as in Table 3. Table 4 displays the regression results. Holding sample composition constant, unit nonresponse rates are not significantly related to item nonresponse rates for monthly surveys conducted before 1997. However, unit nonresponse rates are negatively related to item nonresponse for surveys conducted in 1997 or later; the negative coefficient is significant at the .01 level. These results support the aggregate-level hypothesis derived from the response continuum model.

Discussion

This article provided some rare good news about nonresponse rates: item nonresponse rates have declined significantly in recent years. The bad news is that this is partly due to increases in unit nonresponse rates.
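The magnitude of that trade-off can be double-checked with the figures reported above. The coefficient of -0.059 below is an assumption consistent with the "almost 0.06" net effect, chosen because it reproduces the reported 1.53-point reduction; the actual estimate is in Table 3.

```python
coef = -0.059  # assumed net effect of unit NR on item NR (pp per pp)
unit_nr_1986, unit_nr_2005 = 29.3, 55.2  # unit nonresponse rates from the text
change_item_nr = coef * (unit_nr_2005 - unit_nr_1986)
print(round(change_item_nr, 2))  # -1.53 percentage points of item nonresponse
```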
The data confirmed a significant connection between unit and item nonresponse at both the individual and the aggregate level. Respondents with a lower propensity to participate in surveys provided more missing data, and surveys with lower unit nonresponse rates (that is, higher response rates) were associated with higher item nonresponse rates. These findings support the response continuum model. An examination of the response continuum model only at the aggregate level could involve an ecological fallacy: that concern would be valid if the aggregate-level relationship between unit and item nonresponse were used to make inferences about an individual's likelihood of participating in the survey and of answering its questions. In contrast, this analysis first estimated the response continuum model at the individual level and then aggregated the individual-level information to study the relationship between unit and item nonresponse at the survey level. This bottom-up analysis does not run the risk of an ecological fallacy. In addition, the fact that both the individual- and aggregate-level analyses reach the same conclusion demonstrates that the aggregate-level analyses are not an example of ecological fallacy.

Table 4
Predicting Item Nonresponse Rates at the Aggregate Level: Before 1996 (n = 127) versus 1996 or Later (n = 104)

Predictors (a parameter estimate and SE were reported for each period; the numeric values are not reproduced here):
- Unit nonresponse rates
- Percentage of initial refusers in the sample
- Average number of call attempts
- Mean age
- Percent of male respondents
- Percent of married respondents
- Percent of White respondents
- Percent of respondents with income above median
- Percent of respondents with high school or less education
- Intercept
- R-squared

Note: Bolded estimates are significant at p < .05.

This article showed a sharper decline in item nonresponse rates after 2001 than before 2001. Respondent characteristics, interviewer characteristics, question characteristics, and survey-design features all contribute to item nonresponse. The study protocol and the wording of the survey questions that contribute to the item nonresponse rate remained unchanged over the 20 years examined, and conversations with SCA project managers confirmed that the training protocol was consistent throughout the period presented in this article. The decline in item nonresponse rates is therefore more likely to result from changes in respondent-level characteristics, both measured (e.g., race, education, sex, income) and unmeasured (e.g., sensitivity thresholds, motivation to respond, general cooperativeness), that are related to item nonresponse.

The response continuum model points toward common factors that are partly responsible for the increase in unit nonresponse rates and the decrease in item nonresponse rates observed in the past 5 years. For instance, there is evidence that privacy concerns have grown in importance. Studies have shown that the percentage of the U.S. population owning an answering machine rose from 13% in 1985 to around 67% in 2000 (Tuckel & Feinberg, 1991; Tuckel & O'Neill, 1995, 2001). Roth, Montaquila, and Brick (2001) likewise reported that the national prevalence of call-screening devices (including answering machines and caller ID) was around 81%. These changes, together with rising refusal rates, reflect an increasing willingness of respondents to assert their privacy rights and decline to participate in surveys, and they also make respondents harder to contact. Such a change in social norms suggests that cooperating respondents have been increasingly drawn from the right side of the response continuum. As a result, the respondents retained in the interviewed pools are more cooperative in nature, producing lower item nonresponse rates at the cost of higher unit nonresponse rates.

To be sure, our findings may not generalize to other survey modes (such as Web surveys) or to other measures of attitudes or behaviors. But the relatively large decline in item nonresponse rates for measures of economic attitudes and expectations reflects a trend uncontaminated by changes in survey design, question wording, or interview methodology.

Our findings have other implications for survey research. The significant negative relationship between unit and item nonresponse rates (even after controlling for respondent characteristics) represents a classic trade-off. To properly evaluate the effect on data quality, the impact of shifts in unit and item nonresponse needs to be assessed jointly. While the impact of response rates on data quality has been addressed in the literature, the impact of item nonresponse rates has received much less attention, and the joint impact of unit and item nonresponse rates still less. Moreover, survey practitioners are still exploring the best ways to balance the two aspects of nonresponse. It is therefore important for future research to study the joint impact of unit and item nonresponse on data quality in order to gain a more complete understanding of potential nonresponse errors.
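The joint assessment called for here can be illustrated with simple arithmetic: for any single item, the share of the original sample with no recorded answer combines both forms of nonresponse. The sketch below is only an illustration; the rates are hypothetical, not taken from the Surveys of Consumers.

```python
def overall_missing_rate(unit_nr: float, item_nr: float) -> float:
    """Fraction of the sample providing no answer to a given item:
    unit nonrespondents plus item nonrespondents among participants."""
    return unit_nr + (1.0 - unit_nr) * item_nr

# Hypothetical rates: a design that lowers unit nonresponse but draws in
# less cooperative respondents (raising item nonresponse) can end up with
# nearly the same overall missingness for an item.
a = overall_missing_rate(0.40, 0.02)  # higher unit NR, low item NR
b = overall_missing_rate(0.35, 0.10)  # lower unit NR, higher item NR
print(round(a, 3), round(b, 3))  # 0.412 0.415
```

Comparing the two designs on either rate alone would declare a clear winner; on overall missingness they are nearly indistinguishable, which is the trade-off in miniature.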
We believe, however, that it is important for survey practitioners to keep this trade-off in mind and to avoid naive attempts to decrease one dimension of nonresponse at the price of increasing the other.

The confirmation of the response continuum model has implications for survey researchers and practitioners. The model can inform design decisions (such as when to stop data collection) and guide research on nonresponse adjustments. We showed that the response continuum model accurately captured the relationship between unit and item nonresponse at the survey level and the relationship between respondents' propensity to take part in a survey and their propensity to answer survey questions at the individual level. The model indicates that respondents with high missing-data rates are similar in some respects to those who refused to participate in the survey. Future research should take these findings into account and consider using the response continuum model in responsive design and for nonresponse adjustment and imputation.

Finally, the findings offer one more basic insight about nonresponse. Item nonresponse rates have varied considerably in the past and can be expected to vary in the future; any single survey may document an unusually high or an unusually low incidence of item missing data. These fluctuations suggest that more dynamic theories are needed to adequately understand changes in item nonresponse.
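As one sketch of how the continuum model could inform nonresponse adjustment, suppose participation propensities have already been estimated for the interviewed cases (for example, from a logistic model of response on covariates available for the full sample); reluctant respondents, whom the model predicts also skip more items, can then be upweighted. The propensity values and the normalization below are hypothetical choices, not the article's procedure.

```python
import numpy as np

# Hypothetical estimated participation propensities for four interviewed
# cases (in practice, fitted from a model of response on frame covariates
# known for both respondents and nonrespondents).
propensity = np.array([0.9, 0.7, 0.5, 0.3])

# Inverse-propensity weights: low-propensity (reluctant) respondents --
# who under the response continuum model also tend to leave more items
# blank -- stand in for similar nonrespondents and get more weight.
weights = 1.0 / propensity
weights *= len(weights) / weights.sum()  # normalize so weights sum to n

print(weights.round(2))
```

Because the same low-propensity cases carry more item missing data, such weights interact with imputation; that joint effect is what the discussion above argues should be evaluated.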

Appendix A
Question Wording of the 16 Core Questions of Surveys of Consumers Used in Analysis

1. (PAGO) We are interested in how people are getting along financially these days. Would you say that you (and your family living there) are better off or worse off financially than you were a year ago?
2. (PEXP) Now looking ahead, do you think that a year from now you (and your family living there) will be better off financially, or worse off, or just about the same as now?
3. (BUS12) Now turning to business conditions in the country as a whole, do you think that during the next 12 months we'll have good times financially, or bad times, or what?
4. (BAGO) Would you say that at the present time business conditions are better or worse than they were a year ago?
5. (NEWS1) During the last few months, have you heard of any favorable or unfavorable changes in business conditions?
6. (BEXP) And how about a year from now, do you expect that in the country as a whole business conditions will be better, or worse than they are at present, or just about the same?
7. (GOVT) As to the economic policy of the government, I mean steps taken to fight inflation or unemployment, would you say the government is doing a good job, only fair, or a poor job?
8. (UNEMP) How about people out of work during the coming 12 months, do you think that there will be more unemployment than now, about the same, or less?
9. (RATEX) No one can say for sure, but what do you think will happen to interest rates for borrowing money during the next 12 months: will they go up, stay the same, or go down?
10. (PX1Q1) During the next 12 months, do you think that prices in general will go up, or go down, or stay where they are now?
11. (RINC) During the next year or two, do you expect that your (family) income will go up more than prices will go up, about the same, or less than prices will go up?
12. (INEXP1) During the next 12 months, do you expect your (family) income to be higher or lower than during the past year?
13. (HOM) Generally speaking, do you think now is a good time or a bad time to buy a house?
14. (DUR) About the big things people buy for their homes, such as furniture, a refrigerator, stove, television, and things like that. Generally speaking, do you think now is a good or a bad time for people to buy major household items?
15. (CAR) Speaking now of the automobile market, do you think the next 12 months or so will be a good time or a bad time to buy a car?

16. (INCOME) To get a picture of people's financial situation we need to know the general range of income of all people we interview. Now, thinking about (your/your family's) total income from all sources (including your job), how much did (you/your family) receive in (filling in time)?

Appendix B
Item Nonresponse Rates for the 16 Component Questions

Mean item nonresponse percentage over 20 years for each of the 16 questions (per-item values not reproduced here): PAGO, PEXP, BUS12, BAGO, NEWS1, BEXP, GOVT, UNEMP, RATEX, PX1Q1, RINC, INEXP1, HOM, DUR, CAR, INCOME.
Item nonresponse index (excluding income): 3.27%
Item nonresponse index (including income): 4.40%

References

Atrostic, B. K., Bates, N., Burt, G., & Silberstein, A. (2001). Nonresponse in U.S. government household surveys: Consistent measures, recent trends, new insights. Journal of Official Statistics, 17.
Beatty, P., & Herrmann, D. (2002). To answer or not to answer: Decision processes related to survey item nonresponse. In R. Groves, D. Dillman, J. Eltinge & R. Little (Eds.), Survey nonresponse. New York, NY: John Wiley & Sons.
Burton, J., Laurie, H., & Moon, N. (1999). Don't ask me nothin' about nothin', I just might tell you the truth: The interaction between unit non-response and item non-response. Paper presented at the 1999 International Conference on Survey Nonresponse, Portland, OR.

Campanelli, P., Sturgis, P., & Moon, N. (1996). Exploring the impact of survey introduction. In American Statistical Association Proceedings: Section on Survey Research Methods. Alexandria, VA: American Statistical Association.
Couper, M. (1997). Survey introduction and data quality. Public Opinion Quarterly, 61.
Curtin, R., Presser, S., & Singer, E. (2000). The effects of response rate changes on the index of consumer sentiment. Public Opinion Quarterly, 64.
Curtin, R., Presser, S., & Singer, E. (2005). Changes in telephone survey nonresponse over the past quarter century. Public Opinion Quarterly, 69.
Djerf, K. (2004). Nonresponse in time: A time series analysis of the Finnish labor force survey. Journal of Official Statistics, 20.
de Heer, W. (1999). International response trends: Results of an international survey. Journal of Official Statistics, 15.
de Leeuw, E. (2001). Reducing missing data in surveys: An overview of methods. Quality and Quantity, 35.
de Leeuw, E., & de Heer, W. (2002). Trends in household survey nonresponse: A longitudinal and international comparison. In R. Groves, D. Dillman, J. Eltinge & R. Little (Eds.), Survey nonresponse. New York, NY: John Wiley & Sons.
Groves, R. (1989). Survey errors and survey costs. New York, NY: John Wiley & Sons.
Groves, R., Cialdini, R., & Couper, M. (1992). Understanding the decision to participate in a survey. Public Opinion Quarterly, 56.
Groves, R., & Couper, M. (1998). Nonresponse in household surveys. New York, NY: John Wiley & Sons.
Groves, R., Presser, S., & Dipko, S. (2004). The role of topic interest in survey participation decisions. Public Opinion Quarterly, 68.
Groves, R., Singer, E., & Corning, A. (2000). Leverage-saliency theory of survey participation: Description and illustration. Public Opinion Quarterly, 64.
Hox, J., & de Leeuw, E. (1994). A comparison of nonresponse in mail, telephone, and face-to-face surveys: Applying multilevel modeling to meta-analysis. Quality and Quantity, 28.
Little, R., & Rubin, D. (1987). Statistical analysis with missing data. New York, NY: John Wiley & Sons.
Loosveldt, G., Pickery, J., & Billiet, J. (2002). Item nonresponse as a predictor of unit nonresponse in a panel survey. Journal of Official Statistics, 18.
Mason, R., Lesser, V., & Traugott, M. (2002). Effect of item nonresponse on nonresponse error and inference. In R. Groves, D. Dillman, J. Eltinge & R. Little (Eds.), Survey nonresponse. New York, NY: John Wiley & Sons.
Roth, S., Montaquila, J., & Brick, J. (2001). Effects of telephone technologies and call screening devices on sampling, weighting and cooperation in a random digit dialing (RDD) survey. In American Statistical Association Proceedings: Section on Survey Research Methods. Alexandria, VA: American Statistical Association.
Singer, E., Van Hoewyk, J., & Maher, M. (2000). Experiments with incentives in telephone surveys. Public Opinion Quarterly, 64.
Stinchcombe, A., Jones, C., & Sheatsley, P. (1981). Nonresponse bias for attitude questions. Public Opinion Quarterly, 45.

Sudman, S., Bradburn, N., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass Publishers.
Tourangeau, R., Rips, L., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.
Tuckel, P. S., & Feinberg, B. M. (1991). The answering machine poses many questions for telephone survey researchers. Public Opinion Quarterly, 55.
Tuckel, P., & O'Neill, H. (1995). A profile of telephone answering machine owners and screeners. In American Statistical Association Proceedings: Section on Survey Research Methods. Alexandria, VA: American Statistical Association.
Tuckel, P., & O'Neill, H. (2001). The vanishing respondent in telephone surveys. In American Statistical Association Proceedings: Section on Survey Research Methods. Alexandria, VA: American Statistical Association.
Waksberg, J. (1978). Sampling methods for random digit dialing. Journal of the American Statistical Association, 73.
Wolter, K., Chowdhury, S., & Kelly, J. (2009). Design, conduct, and analysis of random-digit dialing surveys. In D. Pfeffermann & C. R. Rao (Eds.), Handbook of statistics: Sample surveys: Design, methods and applications, Volume 29A. The Netherlands: North-Holland.
Yan, T., & Curtin, R. (2007). Responsive designs in longitudinal surveys. Paper presented at the 2007 American Association for Public Opinion Research conference, Orange County, CA.

Biographical Notes

Dr Ting Yan received her Ph.D. in Survey Methodology from the Joint Program in Survey Methodology (JPSM), University of Maryland. After her Ph.D., she completed two years of post-doctoral training at the Institute for Social Research, University of Michigan. She is currently working at NORC at the University of Chicago as a Senior Survey Methodologist. Dr Yan has published articles on a wide variety of methodological issues. Her main research interests include measurement errors, nonresponse errors and the link between measurement and nonresponse errors, and web survey methodology.

Dr Richard Curtin is a Research Associate Professor and the Director of the University of Michigan's Surveys of Consumers and the Panel Study of Entrepreneurial Dynamics.


More information

Interviewer Characteristics, their Doorstep Behaviour, and Survey Co-operation

Interviewer Characteristics, their Doorstep Behaviour, and Survey Co-operation Interviewer Characteristics, their Doorstep Behaviour, and Survey Co-operation Jennifer Sinibaldi 1, Annette Jäckle 2, Sarah Tipping 1, Peter Lynn 2 1 National Centre for Social Research, 35 Northampton

More information

You must answer question 1.

You must answer question 1. Research Methods and Statistics Specialty Area Exam October 28, 2015 Part I: Statistics Committee: Richard Williams (Chair), Elizabeth McClintock, Sarah Mustillo You must answer question 1. 1. Suppose

More information

EXAMINING THE RELATIONSHIP BETWEEN NONRESPONSE PROPENSITY AND DATA QUALITY IN TWO NATIONAL HOUSEHOLD SURVEYS

EXAMINING THE RELATIONSHIP BETWEEN NONRESPONSE PROPENSITY AND DATA QUALITY IN TWO NATIONAL HOUSEHOLD SURVEYS Public Opinion Quarterly, Vol. 74, No. 5, 2010, pp. 934 955 EXAMINING THE RELATIONSHIP BETWEEN NONRESPONSE PROPENSITY AND DATA QUALITY IN TWO NATIONAL HOUSEHOLD SURVEYS SCOTT FRICKER* ROGER TOURANGEAU

More information

Methodological Issues for Longitudinal Studies. Graham Kalton

Methodological Issues for Longitudinal Studies. Graham Kalton Methodological Issues for Longitudinal Studies Graham Kalton grahamkalton@westat.com Varieties of Longitudinal Studies 2 A. Cohort studies of individuals NLSY, ECLS-B and K, PATH, HRS (financial units),

More information

Reduction of Nonresponse Bias through Case Prioritization. Paper presented at the 2009 annual AAPOR conference, Hollywood, FL

Reduction of Nonresponse Bias through Case Prioritization. Paper presented at the 2009 annual AAPOR conference, Hollywood, FL Reduction of Nonresponse Bias through Case Prioritization Andy Peytchev 1, Sarah Riley 2, Jeff Rosen 1, Joe Murphy 1, Mark Lindblad 2 Paper presented at the 2009 annual AAPOR conference, Hollywood, FL

More information

August 29, Introduction and Overview

August 29, Introduction and Overview August 29, 2018 Introduction and Overview Why are we here? Haavelmo(1944): to become master of the happenings of real life. Theoretical models are necessary tools in our attempts to understand and explain

More information

ATTITUDES, BELIEFS, AND TRANSPORTATION BEHAVIOR

ATTITUDES, BELIEFS, AND TRANSPORTATION BEHAVIOR CHAPTER 6 ATTITUDES, BELIEFS, AND TRANSPORTATION BEHAVIOR Several studies were done as part of the UTDFP that were based substantially on subjective data, reflecting travelers beliefs, attitudes, and intentions.

More information

Impact of Nonresponse on Survey Estimates of Physical Fitness & Sleep Quality. LinChiat Chang, Ph.D. ESRA 2015 Conference Reykjavik, Iceland

Impact of Nonresponse on Survey Estimates of Physical Fitness & Sleep Quality. LinChiat Chang, Ph.D. ESRA 2015 Conference Reykjavik, Iceland Impact of Nonresponse on Survey Estimates of Physical Fitness & Sleep Quality LinChiat Chang, Ph.D. ESRA 2015 Conference Reykjavik, Iceland Data Source CDC/NCHS, National Health Interview Survey (NHIS)

More information

Donna L. Coffman Joint Prevention Methodology Seminar

Donna L. Coffman Joint Prevention Methodology Seminar Donna L. Coffman Joint Prevention Methodology Seminar The purpose of this talk is to illustrate how to obtain propensity scores in multilevel data and use these to strengthen causal inferences about mediation.

More information

Measuring and Assessing Study Quality

Measuring and Assessing Study Quality Measuring and Assessing Study Quality Jeff Valentine, PhD Co-Chair, Campbell Collaboration Training Group & Associate Professor, College of Education and Human Development, University of Louisville Why

More information

The Accuracy and Utility of Using Paradata to Detect Interviewer Question-Reading Deviations

The Accuracy and Utility of Using Paradata to Detect Interviewer Question-Reading Deviations University of Nebraska - Lincoln DigitalCommons@University of Nebraska - Lincoln 2019 Workshop: Interviewers and Their Effects from a Total Survey Error Perspective Sociology, Department of 2-26-2019 The

More information

An Application of Propensity Modeling: Comparing Unweighted and Weighted Logistic Regression Models for Nonresponse Adjustments

An Application of Propensity Modeling: Comparing Unweighted and Weighted Logistic Regression Models for Nonresponse Adjustments An Application of Propensity Modeling: Comparing Unweighted and Weighted Logistic Regression Models for Nonresponse Adjustments Frank Potter, 1 Eric Grau, 1 Stephen Williams, 1 Nuria Diaz-Tena, 2 and Barbara

More information

Effects of Data Collection Methods On Participant Responses In Television Research Studies

Effects of Data Collection Methods On Participant Responses In Television Research Studies Cairo University Faculty of Mass Communication Radio and Television Department Effects of Data Collection Methods On Participant Responses In Television Research Studies A Comparative Study A Summary of

More information

Item-Nonresponse and the 10-Point Response Scale in Telephone Surveys

Item-Nonresponse and the 10-Point Response Scale in Telephone Surveys Vol. 5, Issue 4, 2012 Item-Nonresponse and the 10-Point Response Scale in Telephone Surveys Matthew Courser 1, Paul J. Lavrakas 2 Survey Practice 10.29115/SP-2012-0021 Dec 01, 2012 Tags: measurement error,

More information

BY Lee Rainie and Cary Funk

BY Lee Rainie and Cary Funk NUMBERS, FACTS AND TRENDS SHAPING THE WORLD JULY 8, BY Lee Rainie and Cary Funk FOR MEDIA OR OTHER INQUIRIES: Lee Rainie, Director Internet, Science and Technology Research Cary Funk, Associate Director,

More information

Looking Beyond Demographics: Panel Attrition in the ANES and GSS

Looking Beyond Demographics: Panel Attrition in the ANES and GSS Political Analysis Advance Access published October 21, 2013 Political Analysis (2013) pp. 1 18 doi:10.1093/pan/mpt020 Looking Beyond Demographics: Panel Attrition in the ANES and GSS Laura Lazarus Frankel

More information

Results & Statistics: Description and Correlation. I. Scales of Measurement A Review

Results & Statistics: Description and Correlation. I. Scales of Measurement A Review Results & Statistics: Description and Correlation The description and presentation of results involves a number of topics. These include scales of measurement, descriptive statistics used to summarize

More information

Examples of Responsive Design in the National Survey of Family Growth

Examples of Responsive Design in the National Survey of Family Growth Examples of Responsive Design in the National Survey of Family Growth James M. Lepkowski, Brady T. West, James Wagner, Nicole Kirgis, Shonda Kruger-Ndiaye, William Axinn, Robert M. Groves Institute for

More information

An Examination of Visual Design Effects in a Self- Administered Mail Survey

An Examination of Visual Design Effects in a Self- Administered Mail Survey An Examination of Visual Design Effects in a Self- Administered Mail Survey Sarah Hastedt 1, Douglas Williams 2, and Catherine Billington, Ph.D. 2 1 National Center for Education Statistics, 1990 K Street

More information

RESPONSE FORMATS HOUSEKEEPING 4/4/2016

RESPONSE FORMATS HOUSEKEEPING 4/4/2016 RESPONSE FORMATS Allyson L. Holbrook Associate Professor of Public Administration and Psychology & Associate Research Professor at the Survey Research Laboratory of the University of Illinois at Chicago

More information

Anticipatory Survey Design: Reduction of Nonresponse Bias through Bias Prediction Models

Anticipatory Survey Design: Reduction of Nonresponse Bias through Bias Prediction Models Anticipatory Survey Design: Reduction of Nonresponse Bias through Bias Prediction Models Andy Peytchev 1, Sarah Riley 2, Jeff Rosen 1, Joe Murphy 1, Mark Lindblad 2, Paul Biemer 1,2 1 RTI International

More information

Active Lifestyle, Health, and Perceived Well-being

Active Lifestyle, Health, and Perceived Well-being Active Lifestyle, Health, and Perceived Well-being Prior studies have documented that physical activity leads to improved health and well-being through two main pathways: 1) improved cardiovascular function

More information

MATH-134. Experimental Design

MATH-134. Experimental Design Experimental Design Controlled Experiment: Researchers assign treatment and control groups and examine any resulting changes in the response variable. (cause-and-effect conclusion) Observational Study:

More information

Methodological Considerations to Minimize Total Survey Error in the National Crime Victimization Survey

Methodological Considerations to Minimize Total Survey Error in the National Crime Victimization Survey Methodological Considerations to Minimize Total Survey Error in the National Crime Victimization Survey Andrew Moore, M.Stat., RTI International Marcus Berzofsky, Dr.P.H., RTI International Lynn Langton,

More information

An Introduction to the CBS Health Cognitive Assessment

An Introduction to the CBS Health Cognitive Assessment An Introduction to the CBS Health Cognitive Assessment CBS Health is an online brain health assessment service used by leading healthcare practitioners to quantify and objectively 1 of 9 assess, monitor,

More information

Paid work versus accessibility in surveys: Are we running the risk of nonresponse bias? The example of ESS 5 in Poland

Paid work versus accessibility in surveys: Are we running the risk of nonresponse bias? The example of ESS 5 in Poland Research & Methods ISSN 1234-9224 Vol. 23 (1, 2014): 79 101 The Ohio State University Columbus, Ohio, USA Institute of Philosophy and Sociology Polish Academy of Sciences, Warsaw, Poland www.askresearchandmethods.org

More information

Summary and conclusion Replication of a measurement instrument for social exclusion

Summary and conclusion Replication of a measurement instrument for social exclusion Summary and conclusion Replication of a measurement instrument for social exclusion Policymakers have been seeking to combat social exclusion since the beginning of the 1990s. Not only does social exclusion

More information

You can t fix by analysis what you bungled by design. Fancy analysis can t fix a poorly designed study.

You can t fix by analysis what you bungled by design. Fancy analysis can t fix a poorly designed study. You can t fix by analysis what you bungled by design. Light, Singer and Willett Or, not as catchy but perhaps more accurate: Fancy analysis can t fix a poorly designed study. Producing Data The Role of

More information

Reducing Non Response in Longitudinal Studies: What Can We Do Instead of Increasing Monetary Incentives?

Reducing Non Response in Longitudinal Studies: What Can We Do Instead of Increasing Monetary Incentives? Reducing Non Response in Longitudinal Studies: What Can We Do Instead of Increasing Monetary Incentives? Kelly Elver Project Director University of Wisconsin Survey Center Overview Background info on large

More information

Prepared for Otter Tail County Public Health in Minnesota

Prepared for Otter Tail County Public Health in Minnesota 2006 Secondhand Smoke Survey of Registered Voters in Otter Tail County, Minnesota Issued June 2006 Prepared for Otter Tail County Public Health in Minnesota Prepared by North Dakota State Data Center at

More information

Understanding and Applying Multilevel Models in Maternal and Child Health Epidemiology and Public Health

Understanding and Applying Multilevel Models in Maternal and Child Health Epidemiology and Public Health Understanding and Applying Multilevel Models in Maternal and Child Health Epidemiology and Public Health Adam C. Carle, M.A., Ph.D. adam.carle@cchmc.org Division of Health Policy and Clinical Effectiveness

More information

International Journal of Public Opinion Research Advance Access published April 1, 2005 RESEARCH NOTE

International Journal of Public Opinion Research Advance Access published April 1, 2005 RESEARCH NOTE International Journal of Public Opinion Research Advance Access published April 1, 2005 International Journal of Public Opinion Research The Author 2005. Published by Oxford University Press on behalf

More information

multilevel modeling for social and personality psychology

multilevel modeling for social and personality psychology 1 Introduction Once you know that hierarchies exist, you see them everywhere. I have used this quote by Kreft and de Leeuw (1998) frequently when writing about why, when, and how to use multilevel models

More information

Inference and Error in Surveys. Professor Ron Fricker Naval Postgraduate School Monterey, California

Inference and Error in Surveys. Professor Ron Fricker Naval Postgraduate School Monterey, California Inference and Error in Surveys Professor Ron Fricker Naval Postgraduate School Monterey, California 1 Goals for this Lecture Learn about how and why errors arise in surveys So we can avoid/mitigate them

More information

Survey Errors and Survey Costs

Survey Errors and Survey Costs Survey Errors and Survey Costs ROBERT M. GROVES The University of Michigan WILEY- INTERSCIENCE A JOHN WILEY & SONS, INC., PUBLICATION CONTENTS 1. An Introduction To Survey Errors 1 1.1 Diverse Perspectives

More information

MEASURING INTERVIEWER EFFECTS ON SELF-REPORTS FROM HOMELESS PERSONS

MEASURING INTERVIEWER EFFECTS ON SELF-REPORTS FROM HOMELESS PERSONS MEASURING INTERVIEWER EFFECTS ON SELF-REPORTS FROM HOMELESS PERSONS Timothy P. Johnson, Jennifer A. Parsons, Survey Research Laboratory, University of Illinois Timothy Johnson, 910 West VanBuren, Suite

More information

This is copyrighted material. Women and Men in U.S. Corporate Leadership. Same Workplace, Different Realities?

This is copyrighted material. Women and Men in U.S. Corporate Leadership. Same Workplace, Different Realities? Women and Men in U.S. Corporate Leadership Same Workplace, Different Realities? INTRODUCTION Although women occupy one-half (50.5 percent) of managerial and professional specialty positions in the United

More information

A COMPARISON OF IMPUTATION METHODS FOR MISSING DATA IN A MULTI-CENTER RANDOMIZED CLINICAL TRIAL: THE IMPACT STUDY

A COMPARISON OF IMPUTATION METHODS FOR MISSING DATA IN A MULTI-CENTER RANDOMIZED CLINICAL TRIAL: THE IMPACT STUDY A COMPARISON OF IMPUTATION METHODS FOR MISSING DATA IN A MULTI-CENTER RANDOMIZED CLINICAL TRIAL: THE IMPACT STUDY Lingqi Tang 1, Thomas R. Belin 2, and Juwon Song 2 1 Center for Health Services Research,

More information

The Impact of Cellphone Sample Representation on Variance Estimates in a Dual-Frame Telephone Survey

The Impact of Cellphone Sample Representation on Variance Estimates in a Dual-Frame Telephone Survey The Impact of Cellphone Sample Representation on Variance Estimates in a Dual-Frame Telephone Survey A. Elizabeth Ormson 1, Kennon R. Copeland 1, B. Stephen J. Blumberg 2, and N. Ganesh 1 1 NORC at the

More information

Oliver Lipps Introduction

Oliver Lipps Introduction Journal of Official Statistics, Vol. 25, No. 3, 2009, pp. 323 338 Cooperation in Centralised CATI Household Panel Surveys A Contact-based Multilevel Analysis to Examine Interviewer, Respondent, and Fieldwork

More information

PDRF About Propensity Weighting emma in Australia Adam Hodgson & Andrey Ponomarev Ipsos Connect Australia

PDRF About Propensity Weighting emma in Australia Adam Hodgson & Andrey Ponomarev Ipsos Connect Australia 1. Introduction It is not news for the research industry that over time, we have to face lower response rates from consumer surveys (Cook, 2000, Holbrook, 2008). It is not infrequent these days, especially

More information

Why do Psychologists Perform Research?

Why do Psychologists Perform Research? PSY 102 1 PSY 102 Understanding and Thinking Critically About Psychological Research Thinking critically about research means knowing the right questions to ask to assess the validity or accuracy of a

More information

Lecture Slides. Elementary Statistics Eleventh Edition. by Mario F. Triola. and the Triola Statistics Series 1.1-1

Lecture Slides. Elementary Statistics Eleventh Edition. by Mario F. Triola. and the Triola Statistics Series 1.1-1 Lecture Slides Elementary Statistics Eleventh Edition and the Triola Statistics Series by Mario F. Triola 1.1-1 Chapter 1 Introduction to Statistics 1-1 Review and Preview 1-2 Statistical Thinking 1-3

More information

Survey Methods in Relationship Research

Survey Methods in Relationship Research Purdue University Purdue e-pubs Department of Psychological Sciences Faculty Publications Department of Psychological Sciences 1-1-2009 Survey Methods in Relationship Research Christopher Agnew Purdue

More information

Chapter 1 Introduction to Educational Research

Chapter 1 Introduction to Educational Research Chapter 1 Introduction to Educational Research The purpose of Chapter One is to provide an overview of educational research and introduce you to some important terms and concepts. My discussion in this

More information

Time-Sharing Experiments for the Social Sciences. Impact of response scale direction on survey responses to factual/behavioral questions

Time-Sharing Experiments for the Social Sciences. Impact of response scale direction on survey responses to factual/behavioral questions Impact of response scale direction on survey responses to factual/behavioral questions Journal: Time-Sharing Experiments for the Social Sciences Manuscript ID: TESS-0.R Manuscript Type: Original Article

More information