Examining context effects in organization survey data using IRT


Rivers, D., Meade, A. W., & Fuller, W. L. (2007, April). Examining context effects in organization survey data using IRT. Paper presented at the 22nd Annual Meeting of the Society for Industrial and Organizational Psychology, New York.

Examining context effects in organization survey data using IRT

Drew Rivers and Adam W. Meade, North Carolina State University
W. Lou Fuller, Duke Energy

Organizational researchers often modify employee surveys over time. However, changes to the survey form can introduce measurement artifacts, such as context effects, leading to differences in observed responses that are not due to true organizational change. This paper illustrates the use of IRT to identify context effects in organizational surveys.

Surveys are perhaps the most frequently applied data collection technique used by organizations (Church & Waclawski, 2001; Kraut & Saari, 1999; Sackett & Larson, 1990; Stanton, Sinar, Balzer, & Smith, 2002). One reason for their widespread use is the enormous flexibility in the variety of purposes they can serve (Church, 2001). For example, organization surveys are commonly used to identify areas of concern, analyze long-term trends, evaluate the impact of organizational change, provide information for future decisions, analyze organizational behavior, serve as a communication channel, improve organizational functioning, and provide symbolic communication (Kraut, 1996). Additionally, surveys are relatively nonintrusive and inexpensive compared to other data collection procedures (Church, 2001). Evidence of the importance organizations place on surveys can be seen in the existence of consortia dedicated to providing benchmarks for survey data across organizations and industries. Further, Kraut (1996) estimates that more than half of US companies regularly survey their employees.
The frequent application of organizational survey data in the literature is a testament to its continued value to academics and practitioners alike. Despite the widespread use of organizational surveys in both research and practice, relatively little attention has been given to the potential presence of extraneous factors that can affect survey data. While common in other areas of research (e.g., public opinion surveys; Colasanto, Singer, & Rogers, 1992), context effects, as these artifacts are commonly termed, have rarely been discussed in organizational research. Context effects occur when responses to questionnaire or interview items vary as a function of other items on the instrument. Context effects can have a strong influence on survey responses, particularly for common attitudinal measures such as job satisfaction and organizational commitment, yet can be difficult to detect by examining item content or basic item and scale statistics alone. Undetected, context effects can strongly mislead persons relying on survey results as an accurate assessment of organizational phenomena (e.g., culture, satisfaction). In this study, we propose the use of item response theory (IRT) as a means to detect context effects in longitudinal organizational survey research.

Longitudinal Organizational Surveys and Context Effects

The analysis of organizational survey data across time offers benefits both to academics, in terms of developing and testing theories, and to practitioners, with regard to designing and evaluating business initiatives. However, organizational survey programs are under constant pressure to change survey content, item wording, and scale formats to meet organizational needs (Higgs & Ashworth, 1998).
While such changes are necessary to meet the evolving needs of the organization, these changes also introduce the potential for context effects, leaving the researcher unsure of whether changes in responses across time are attributable to real changes or to artifacts associated with the different survey versions. Context effects occur when responses to questionnaire (or interview) items differ based on the presence of other items on the survey. Underlying this phenomenon is the idea that attitude judgments are temporary and malleable constructions, rather than fixed or stable evaluations. Tourangeau and Rasinski (1988) describe attitudes as structures in long-term memory comprised of interrelated beliefs. These beliefs encompass experiences, feelings, general propositions, knowledge, past judgments, and

images regarding a particular attitude object or target stimulus. Importantly, while the attitude structure may remain stable in memory, an individual may report different attitudinal judgments depending on the context in which the attitude is elicited. While attitudinal judgments are formed based on information stored in memory, not all relevant information is retrieved when prompted for an attitude judgment. Individuals will generally truncate the retrieval process once enough information has been accessed to form a judgment (Schwarz, 1995). Further, the accessibility of information can be either chronic or temporary (Schwarz, 1995; Schwarz & Bless, 1992). Generally, chronically accessible information is used frequently, and is therefore readily accessible and independent of contextual cues. Conversely, temporarily accessible information is used infrequently, but may be primed into retrieval by contextual cues, for example, earlier items in a questionnaire. The assumption of information accessibility underlies two models of context effects: the belief-sampling model (Tourangeau, 1999; Tourangeau, Rips, & Rasinski, 2000) and the inclusion/exclusion model (Schwarz & Bless, 1992; Schwarz, 1995). According to the belief-sampling model, respondents utilize a sample of all relevant beliefs stored in memory when forming a judgment, and the judgment is based on an aggregation of these beliefs (Tourangeau, 1999). Since the context can influence which beliefs are sampled, the reliability of attitudinal judgments across situations depends on three factors: (1) the homogeneity of the underlying pool of relevant beliefs, (2) the consistency in the sampling of beliefs, and (3) the consistency in the aggregation of beliefs to form a judgment. The inclusion/exclusion model also rests on the assumption of information accessibility.
Schwarz and Bless (1992) suggest that in order for individuals to form a judgment about a target stimulus, a representation of both the target and a standard of comparison must be constructed. Both constructions are, in part, dependent on the context in which the judgment is elicited. Again, not all relevant information is retrieved to make the judgment. Information made temporarily accessible may be included in the representation of the target, or it may be excluded from the target and subsequently incorporated into the representation of the standard. The inclusion or exclusion of contextual information depends on its perceived appropriateness and relevance to the target stimulus (Schwarz, 1995). When information is included in the target representation, the effect is referred to as assimilation; when it is excluded from the target and included in the standard, it is referred to as contrast. In general, target questions that allow for the inclusion of a wide range of information are more likely to generate an assimilation effect. Conversely, specific target questions are more likely to elicit a contrast effect. A study by Colasanto et al. (1992) offers an example of how context can influence target item interpretation. The authors suspected a context effect to be partially responsible for a significant change in public opinion about contracting AIDS by donating blood: the percentage endorsing this belief jumped from 28.9% to 43.5% in a one-year period. In both opinion polls, respondents were asked about nine different methods through which a person might contract AIDS, including donating blood and receiving a blood transfusion. In the earlier opinion poll, the donating blood method always followed the blood transfusion method. In the later poll, the methods of transmission were randomly presented across respondents.
To test for a context effect, Colasanto and colleagues conducted a split-ballot experiment, with half the sample asked about blood transfusion first and the other half about donating blood first. The researchers found that those asked about blood transfusion first were less likely to believe AIDS could be acquired by donating blood. In this case, respondents may have excluded information about blood transfusions from their representation of the target item, blood donations. Studies that involve naturally occurring context effects tend to be an exception in the literature. As Tourangeau, Singer, and Presser (2003) note, much of the work in this area tends to involve controlled experiments designed to generate context effects. Organizational researchers, however, often have little control over survey design and administration. As a result, carefully controlled experimental design and sampling is not possible. Fortunately, item response theory (IRT) provides a method, superior in many ways to experimental mean comparisons, that can be used to evaluate the presence of context effects in organizational surveys.

IRT & Differential Item Functioning

IRT models the relationship between the probability of an observed response and an examinee's level of some latent trait or attitude. Perhaps the most commonly used IRT model in organizational research has been Samejima's (1969) graded response model (GRM). In the GRM, a series of boundary response functions (BRFs) are graphed that relate the probability of responding at or above a given response category to the respondent's underlying level of the latent trait or attitude (such as satisfaction). A set of parameters is estimated for each item in the GRM, which is then used to plot the

BRFs. These parameters can also be used to plot category response curves (CRCs). The CRCs represent the probability of responding in a given response category across the range of the latent trait. We refer the reader to Embretson and Reise (2000) for a much more thorough discussion of the GRM. When multiple samples (e.g., groups, time periods, response formats) respond to the same set of items, item parameters can be estimated separately for those samples. The extent to which an item functions differently across the groups is commonly referred to as differential item functioning (DIF), though the term item parameter drift is sometimes used in DIF studies of longitudinal data (e.g., Donoghue & Isham, 1998; Wells, Subkoviak, & Serlin, 2002). Conceptually, DIF in IRT is highly similar to measurement invariance research using confirmatory factor analysis (Maurer, Raju, & Collins, 1998; Meade & Lautenschlager, 2004; Raju, Laffitte, & Byrne, 2002). Raju and colleagues (Flowers, Oshima, & Raju, 1999; Raju, van der Linden, & Fleer, 1995) have developed the differential functioning of items and tests (DFIT) framework to identify practically significant DIF at the item and test level. A technical presentation of DFIT is available in the aforementioned articles, but note that DFIT has been used many times in organizational research (e.g., Facteau & Craig, 2001; Maurer et al., 1998).

Study Overview

This study examines organization survey data from a large US-based energy company. The Company has maintained a survey program for nearly two decades and has transitioned the survey through various modes of data collection and numerous organization changes. The survey program adheres to standards of survey research established by a national consortium of top US firms, of which the Company has been a member for more than a decade.
While the Company has attempted to minimize changes to the survey content, such changes are inevitable in a dynamic and evolving workplace. In 2004, the Company incorporated a series of changes to the survey, which was associated with a significant decline in favorable ratings to one item: How satisfied are you with the information you receive from management on what's going on at this company? (Hereafter referred to as the target item.) Historically, the target item received favorable ratings (defined as ratings of four and five on the five-point scale) in the range of 49% to 53% (see Figure 1). Following the 2004 administration, favorable ratings dropped to 33%, a 16 percentage point decrease from the previous year. No significant, negative organization events occurred in the interim that could explain the precipitous decline. However, the Company's survey administrator suspected a potential context effect resulting from the inclusion of a new item immediately preceding the target item. In order to investigate the possibility of a context effect, we employed IRT methods of DIF assessment. If changes to the survey form altered the interpretation of the target item, the DIF analysis should reveal a difference in item functioning from 2003 to 2004.

Method

Survey Instrument

This study focuses on a subset of items that address employee attitudes toward the Company's direction and general communications with its employees. These items were scattered throughout the original survey form, but factor analysis revealed that a single factor (Goals and Direction) underlay these items. The items comprising this factor appear in Table 1. Item 10 in the scale is the target item and was the last of these items to appear in the survey.
Participants

The participants include employees of the Company for the years 2003 and 2004 who: (1) were invited to participate in the survey, (2) responded to all Goals and Direction items, and (3) reported their primary work location within the USA (to control for potential DIF due to translation and cross-cultural differences; Ellis, Minsel, & Becker, 1989). The final dataset included N = 10,440 respondents in 2003 and N = 6,651 respondents in 2004. Table 2 depicts descriptive statistics for the Goals and Direction items. Note that with IRT, it is not necessary that the exact same respondents be included in the 2003 and 2004 samples, as the relationship between item response and latent trait does not vary even when mean differences in latent traits exist.

Procedure

Unidimensionality. An important assumption underlying IRT models is unidimensionality (Hambleton, Swaminathan, & Rogers, 1991). Exploratory factor analysis with principal axis factoring was conducted to assess the dimensionality of the 10-item scale using the combined 2003 and 2004 sample.

IRT Analyses. Item parameters were estimated separately in 2003 and 2004 using the GRM and MULTILOG 7.03 (Thissen, 1991). One requirement of IRT DIF procedures is that item parameter estimates be put onto the same metric. We used the EQUATE 2.1 program (Baker, 1995; cf. Raju et al., 1995), and then used these linked item

parameters as input into the DFITPS6 program (Raju, 1999). Since equating assumes the common anchor items are invariant between groups, we used an iterative linking process to identify a stable set of anchor items (see Lautenschlager, Flaherty, & Park, 1994, and Park & Lautenschlager, 1990, for a discussion of the advantages of iterative linking).

Results

The EFA scree plot indicated a unidimensional scale (see Figure 2), with the first factor explaining 51.7% of item variance. In addition, confirmatory factor analysis using Mplus 3.13 (Muthen & Muthen, 2004) was conducted to test whether a single-factor model could adequately explain the data. Again, the samples were combined for the analysis. Results suggest an adequate fit: CFI = .935, TLI = .916, SRMR = .042. Overall, the unidimensionality of the Goals and Direction scale was upheld. The iterative linking process revealed the presence of only one DIF item at both the first and second stage of linking. Table 3 shows DFIT results following the final stage of linking. An item was considered differentially functioning when the NCDIF index exceeded the .096 cut-off value recommended by Raju (1999). Only the target item, Item 10, showed DIF, while the overall scale did not show differential test functioning (DTF = .143 versus the recommended cutoff of .960). To further explore Item 10, BRFs and CRCs were generated for the 2003 and 2004 samples using the equated item parameters. Equated parameters for Item 10 are presented in Table 4. Figure 3 depicts the BRFs for Item 10, while the CRCs offer another view of the DIF between samples (see Figure 4). The interpretation of both the BRFs and CRCs is that a member of the 2003 sample has a higher probability of selecting a response of 4 or 5 on the scale than a member of the 2004 sample given the same level of satisfaction.
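The kind of BRF and CRC comparison described here can be sketched in a few lines of code. In Samejima's GRM, the boundary function P(X >= k | theta) is a logistic function of a discrimination parameter a and a threshold b_k, and each category probability is the difference between adjacent boundaries. The parameter values below are purely hypothetical illustrations (they are not the equated estimates from Table 4); they simply show how shifted thresholds lower the probability of a favorable response at a fixed level of the latent trait, the pattern observed for Item 10.

```python
import numpy as np

def grm_brfs(theta, a, b):
    """Boundary response functions for the graded response model:
    P(X >= k | theta) = 1 / (1 + exp(-a * (theta - b_k))) for each threshold b_k."""
    b = np.asarray(b, dtype=float)
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def grm_crcs(theta, a, b):
    """Category response curves: P(X = k | theta) is the difference between
    adjacent boundary functions, with P(X >= lowest category) = 1 by definition."""
    brf = grm_brfs(theta, a, b)
    bounds = np.concatenate(([1.0], brf, [0.0]))
    return bounds[:-1] - bounds[1:]

# Hypothetical parameters for a 5-category item in two administrations:
# same discrimination, thresholds shifted upward in the second year.
a_2003, b_2003 = 1.8, [-2.0, -0.8, 0.2, 1.3]
a_2004, b_2004 = 1.8, [-1.6, -0.4, 0.7, 1.8]

theta = 0.0  # identical latent satisfaction level in both years
p_high_2003 = grm_brfs(theta, a_2003, b_2003)[2]  # P(response >= 4) in "2003"
p_high_2004 = grm_brfs(theta, a_2004, b_2004)[2]  # P(response >= 4) in "2004"
print(p_high_2003 > p_high_2004)  # the earlier form yields more favorable responses
```

Under these illustrative parameters, a respondent at the same theta is noticeably more likely to answer 4 or 5 on the first form than on the second, which is exactly the signature of DIF that no observed-mean comparison can separate from a true attitude change.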
Put differently, members of the 2004 sample required higher levels of satisfaction to have the same probability of responding with satisfied or very satisfied. This finding is consistent with the decline in favorable ratings uncovered by the Company in 2004 and indicates that this decline is unlikely to be due to a true drop in satisfaction. Rather, the item functions differently on the 2003 and 2004 surveys; thus, no change in the overall sample latent satisfaction would be necessary to witness a lower observed mean on the 2004 survey for Item 10.

Discussion

Observed item score differences over time can emerge for one of two reasons: either the item accurately measures a trait or attitude on which respondents actually differ over time, or the item functions differently over time, leading to an inaccurate perception of change. Item 10 in the Goals and Direction scale appears to have functioned differently in 2003 and 2004, likely due to a survey context effect. This context effect then erroneously leads to an impression of lower satisfaction over time. These findings have several implications. For researchers, this study is the first to illustrate the use of IRT for identifying context effects with organizational survey data. The IRT methods illustrated here are more advantageous than traditional split-ballot designs for several reasons. First, unlike traditional techniques, the IRT method does not necessitate the administration of multiple survey forms, which is seldom feasible with large-scale organizational surveys. Second, traditional methods are prone to problems associated with the unreliability of single-item measures (the outcome measure in split-ballot designs), whereas the IRT model accounts for unreliability (see Embretson & Reise, 2000). Third, IRT accommodates differences in the latent trait or attitude over time, eliminating the need for carefully matched samples across time periods.
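The NCDIF index used to flag Item 10 can be sketched in simplified form. Conceptually, NCDIF is the average squared difference between an item's expected score computed under the focal-group parameters and under the reference-group parameters, taken over the focal group's latent-trait distribution. This is only a schematic version of the Raju et al. (1995) procedure as implemented in DFITPS6 (which uses estimated thetas and significance tests); all parameter values below are hypothetical.

```python
import numpy as np

def expected_score(theta, a, b):
    """Expected GRM item score: sum over categories k of k * P(X = k | theta),
    with categories scored 1..K for a K-category item."""
    b = np.asarray(b, dtype=float)
    brf = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b[None, :])))
    bounds = np.concatenate(
        [np.ones((len(theta), 1)), brf, np.zeros((len(theta), 1))], axis=1)
    crc = bounds[:, :-1] - bounds[:, 1:]          # category probabilities
    categories = np.arange(1, b.size + 2)         # 1..K
    return crc @ categories

def ncdif(theta_focal, a_f, b_f, a_r, b_r):
    """Noncompensatory DIF: mean squared difference between expected item
    scores under focal- vs. reference-group parameters, averaged over
    focal-group theta values."""
    diff = expected_score(theta_focal, a_f, b_f) - expected_score(theta_focal, a_r, b_r)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
theta_f = rng.standard_normal(5000)  # simulated focal-group latent trait values

# Hypothetical parameters: identical item vs. an item with shifted thresholds.
same = ncdif(theta_f, 1.8, [-2.0, -0.8, 0.2, 1.3], 1.8, [-2.0, -0.8, 0.2, 1.3])
shifted = ncdif(theta_f, 1.8, [-1.6, -0.4, 0.7, 1.8], 1.8, [-2.0, -0.8, 0.2, 1.3])
print(same, shifted)  # 0.0 for the invariant item, > 0 for the shifted one
```

An invariant item yields NCDIF of exactly zero, while threshold shifts like those suspected for the target item produce a positive value that can be compared against a cutoff such as the .096 criterion used in this study.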
For the Company, these results have implications for organizational decision makers. Item 10 is an invalid indicator of changes across time with regard to employee satisfaction with information from management. Actions based on this item alone have limited support in the data. The other scale items, and the scale itself, were shown to lack DIF. An assessment of the survey context surrounding the target item reveals several changes in the survey form that may have generated a context effect. Table 5 displays the context for the target item in 2003, and Table 6 does the same for 2004. In order to better understand the potential source of the context effect, we examined two significant changes to the survey between 2003 and 2004. The first was a change in the item immediately preceding the target item (see Context Item 2 in Tables 5 and 6). In the 2003 survey form, this context item asked employees to select their primary information source regarding general news about the company. The method of response required the selection of one information source among a list of six alternatives. In the 2004 survey form, this item was replaced by a Likert-scaled item asking employees to rate their agreement with a statement

regarding the credibility of company communications to employees. According to the inclusion/exclusion model (Schwarz & Bless, 1992), respondents construct a representation of the target item and of a standard of comparison prior to making a judgment. These constructions are influenced in part by the context in which the item is presented. In the present case, the change in the preceding item could have altered the representation of the target item. Specifically, in 2003 the preceding item could have influenced employees to interpret the target item as addressing satisfaction with the quantity or availability of information from management. In 2004, the preceding item could have swayed the interpretation of the target toward the quality or honesty of communication from management. In both cases, the effect was one of assimilation, whereby information from the context is integrated into the representation of the target. The belief-sampling model (Tourangeau et al., 2000) offers a similar explanation. According to the model, respondents will retrieve only a sample of relevant information to form a judgment about the target item: in this case, satisfaction with information from management. If the attitude judgment involves an aggregation of beliefs across various dimensions (for example, information availability, information quality, information honesty, information timeliness, and information quantity), then judgments are more likely to vary across different contexts. In this case, the preceding item in 2003 may have made beliefs about information availability or quantity more accessible and therefore more prominent when forming a judgment on the target item. Conversely, in 2004 the preceding item may have made beliefs about information quality or honesty more accessible.
The second significant change made on the 2004 survey form suggests a context effect may have occurred at the final stage of the response process: mapping the judgment onto the scale (Tourangeau & Rasinski, 1988). The 2003 target item utilized a Likert scale ranging from Very Satisfied to Very Dissatisfied. In 2004, the target item incorporated a different set of anchors, ranging from To a Very Great Extent down to To Very Little or No Extent. While the 2003 anchor descriptions ranged from positive to negative, the 2004 anchors effectively included a zero point (i.e., To Very Little or No Extent) at one end of the scale. A mapping of 2003 anchors onto the 2004 anchors would likely shift favorable judgments in 2003 (e.g., Satisfied) down to neutral judgments in 2004 (To Some Extent). Examination of the category response curves (Figure 4) offers some support for this notion. Ultimately, the context effect likely emerged from several different changes to the survey form, including changes to both the preceding item and the scale anchors. In any case, IRT proved a useful tool for identifying the presence of a context effect.

Limitations

Despite the apparent support for the context effect hypothesis, some limitations are worth noting. From a theoretical perspective, the relationship between context effects and DIF remains unclear. If context effects can occur at different phases of the response process (Tourangeau, 1999), at which phase will context effects produce DIF? For example, two groups may generate a similar interpretation of an item yet use a different standard of comparison when forming a judgment. Clearly, further research is needed on the role of IRT in detecting context effects in organizational survey research. Research designs like that of Colasanto et al. (1992) that incorporate DIF analysis may help further refine this application of IRT.

References

Baker, F. B. (1995). EQUATE 2.1.
A Fortran-based computer program for equating item parameters between two groups. University of Wisconsin.
Church, A. H. (2001). Is there a method to our madness? The impact of data collection methodology on organizational survey results. Personnel Psychology, 54,
Church, A. H., & Waclawski, J. (2001). Designing and using organizational surveys: A seven step process. San Francisco: Jossey-Bass.
Colasanto, D., Singer, E., & Rogers, T. (1992). Context effects on responses to questions about AIDS. Public Opinion Quarterly, 56,
Donoghue, J. R., & Isham, S. P. (1998). A comparison of procedures to detect item parameter drift. Applied Psychological Measurement, 22,
Ellis, B. B., Minsel, B., & Becker, P. (1989). Evaluation of attitude survey translations: An investigation using item response theory. International Journal of Psychology, 24,
Embretson, S. E., & Reise, S. P. (2000). Item response theory for psychologists. Mahwah, NJ: Lawrence Erlbaum Associates.
Facteau, J. D., & Craig, S. B. (2001). Are performance appraisal ratings from different rating sources comparable? Journal of Applied Psychology, 86,

Flowers, C. P., Oshima, T. C., & Raju, N. S. (1999). A description and demonstration of the polytomous-DFIT framework. Applied Psychological Measurement, 23,
Hambleton, R. K., Swaminathan, H., & Rogers, H. J. (1991). Fundamentals of item response theory. Newbury Park, CA: Sage Publications.
Higgs, A. C., & Ashworth, S. D. (1998). Organizational surveys: Tools for assessment and research. In A. I. Kraut (Ed.), Organizational surveys: Tools for assessment and change. San Francisco, CA: Jossey-Bass.
Kraut, A. I. (1996). An overview of organizational surveys. In A. I. Kraut (Ed.), Organizational surveys: Tools for assessment and research (pp. 1-14). San Francisco, CA: Jossey-Bass.
Kraut, A. I., & Saari, L. M. (1999). Organizational surveys: Coming of age for a new era. In A. I. Kraut & A. K. Korman (Eds.), Evolving practices in human resource management: Responses to a changing world of work (pp ). San Francisco: Jossey-Bass.
Lautenschlager, G. J., Flaherty, V. L., & Park, D. G. (1994). IRT differential item functioning: An examination of ability scale purifications. Educational and Psychological Measurement, 54,
Maurer, T. J., Raju, N. S., & Collins, W. C. (1998). Peer and subordinate performance appraisal measurement equivalence. Journal of Applied Psychology, 83,
Meade, A. W., & Lautenschlager, G. J. (2004). A comparison of item response theory and confirmatory factor analytic methodologies for establishing measurement equivalence/invariance. Organizational Research Methods, 7,
Muthen, B., & Muthen, L. (2004). Mplus: A computer program for structural modeling. Los Angeles, CA.
Park, D. G., & Lautenschlager, G. J. (1990). Improving IRT item bias detection with iterative linking and ability scale purification. Applied Psychological Measurement, 14,
Raju, N. (1999). DFITPS6: A Fortran-based computer program for calculating DIF/DTF [Computer program].
Raju, N. S., Laffitte, L. J., & Byrne, B. M. (2002).
Measurement equivalence: A comparison of methods based on confirmatory factor analysis and item response theory. Journal of Applied Psychology, 87,
Raju, N. S., van der Linden, W. J., & Fleer, P. F. (1995). IRT-based internal measures of differential functioning of items and tests. Applied Psychological Measurement, 19,
Sackett, P. R., & Larson, J. R., Jr. (1990). Research strategies and tactics in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (2nd ed., Vol. 1). Palo Alto, CA: Consulting Psychologists Press.
Samejima, F. (1969). Estimation of latent ability using a response pattern of graded scores (Psychometric Monograph No. 18). Iowa City, IA: Psychometric Society.
Schwarz, N. (1995). Social cognition: Information accessibility and use in social judgment. In E. E. Smith & D. N. Osherson (Eds.), Thinking: An invitation to cognitive science (2nd ed.). Cambridge, MA: MIT Press.
Schwarz, N., & Bless, H. (1992). Assimilation and contrast effects in attitude measurement: An inclusion/exclusion model. Advances in Consumer Research, 19,
Stanton, J. M., Sinar, E. F., Balzer, W. K., & Smith, P. C. (2002). Issues and strategies for reducing the length of self-report scales. Personnel Psychology, 55,
Thissen, D. (1991). MULTILOG user's guide: Multiple categorical item analysis and test scoring using item response theory [Computer program]. Chicago, IL: Scientific Software International.
Tourangeau, R. (1999). Context effects on answers to attitude questions. In M. G. Sirken, D. J. Herrmann, S. Schechter, N. Schwarz, J. M. Tanur, & R. Tourangeau (Eds.), Cognition and survey research. New York: John Wiley and Sons.
Tourangeau, R., & Rasinski, K. A. (1988). Cognitive processes underlying context effects in attitude measurement. Psychological Bulletin, 103,
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge University Press.
Tourangeau, R., Singer, E., & Presser, S. (2003). Context effects in attitude surveys: Effects on remote items and impact on predictive validity. Sociological Methods & Research, 31,
Wells, C. S., Subkoviak, M. J., & Serlin, R. C. (2002). The effect of item parameter drift on

examinee ability estimates. Applied Psychological Measurement, 26,

Author Contact Info:

Drew Rivers
Department of Psychology
North Carolina State University
Campus Box 7650
Raleigh, NC
dcrivers@ncsu.edu

Adam W. Meade
Department of Psychology
North Carolina State University
Campus Box 7650
Raleigh, NC
awmeade@ncsu.edu

Table 1
Items Forming the Goals and Direction Scale
1. I agree with the directions and plans for this company.
2. I understand the directions and plans for this company.
3. [This Company] is effective at integrating all of its various work locations into one unified company.
4. This company is making the changes necessary to compete effectively.
5. I believe this company appropriately balances the needs of its customers, employees and investors.
6. I am confident of the ability of senior management (e.g., the top 150 leaders within [The Company]) to make decisions necessary to ensure the future success of [The Company].
7. High ethical standards are practiced by this company's management.
8. This company encourages employee involvement in the communities in which it operates.
9. Communications I receive from management help me understand [The Company's] business strategies and values.
10. (Target Item) How satisfied are you with the information you receive from management on what's going on at this company?
Note: The first nine items utilized a 5-point, strongly disagree to strongly agree scale. Item ten utilized a 5-point very dissatisfied to very satisfied scale.

Table 2
Response Frequencies (Percentages), Means, and Standard Deviations for the 2003 Sample (N = 10,440) and 2004 Sample (N = 6,651)

Table 3
CDIF and NCDIF Results
Note: CDIF = compensatory differential item functioning; NCDIF = noncompensatory differential item functioning.

Table 4
Equated Parameter Estimates for Item 10 (a, b1, b2, b3, b4) by Time Period

Table 5
Target Item Context, 2003 Survey Form
Context Item 1: As an employee of [Company Name], you receive a variety of information in a variety of ways. In general, do you feel you receive: Too much information / About the right amount of information / Too little information
Context Item 2: Where do you usually hear general news about [Company Name] first? Click on the arrow below to select your first choice. Printed company publications / Location or departmental communications / My immediate supervisor/manager / Employee Portal/Intranet / Word of mouth/employee "grapevine" / External resources (e.g., Internet, newspaper, TV)
Target Item: How satisfied are you with information you receive from management on what's going on in the company? Very Satisfied / Satisfied / Neither Satisfied nor Dissatisfied / Dissatisfied / Very Dissatisfied / No Response

Table 6
Target Item Context, 2004 Survey Form

Context Item 1: As an employee of [Company Name], you receive a variety of information in a variety of ways. In general, do you feel you receive:
- Too much information
- About the right amount of information
- Too little information

Context Item 2: The company's communications to employees are straightforward and credible.
- Strongly Agree
- Agree
- Neither Agree nor Disagree
- Disagree
- Strongly Disagree
- No Response

Target Item: How satisfied are you with information you receive from management on what's going on in the company?
- To a Very Great Extent
- To a Great Extent
- To Some Extent
- To a Little Extent
- To Very Little or No Extent
- No Response

Figure 1. Trend in favorable ratings (percent favorable by year). [Plot not reproduced.]

Figure 2. Scree plot for the 10-item scale (eigenvalue by factor). [Plot not reproduced.]
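The eigenvalues plotted in a scree plot such as Figure 2 are those of the inter-item correlation matrix, sorted in descending order. A minimal sketch with simulated single-factor data (not the survey responses):

```python
import numpy as np

# Sketch of where scree-plot eigenvalues come from: the eigenvalues of
# the items' correlation matrix, sorted descending. The data here are
# simulated from one common factor, not the survey data.
rng = np.random.default_rng(1)
common = rng.normal(size=(1000, 1))            # one latent factor
items = common + rng.normal(size=(1000, 10))   # 10 noisy indicators
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
# One dominant eigenvalue followed by a sharp drop (the "elbow") is the
# pattern that supports treating a scale as unidimensional.
```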

Figure 3. Boundary response functions for the item (P(theta) as a function of theta). [Plot not reproduced.]

Figure 4. Category response curves for the item, by sample (2003 and 2004 panels; P(theta) as a function of theta). [Plots not reproduced.]
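Category response curves like those in Figure 4 follow from the boundary response functions by adjacent differences: P(X = k) = P(X >= k) - P(X >= k+1). A hypothetical sketch comparing two calibrations, with made-up parameters standing in for the equated 2003 and 2004 estimates:

```python
import numpy as np

def category_probs(theta, a, bs, D=1.7):
    """GRM category response curves P(X = k | theta), k = 1..m.

    Computed as adjacent differences of the logistic boundary functions.
    The parameter values used below are made up for illustration, not
    the paper's equated 2003/2004 estimates.
    """
    theta = np.asarray(theta, dtype=float)
    z = D * a * (theta[:, None] - np.asarray(bs, dtype=float)[None, :])
    p_star = 1.0 / (1.0 + np.exp(-z))     # P(X >= 2), ..., P(X >= 5)
    ones = np.ones((theta.size, 1))       # P(X >= 1) = 1
    zeros = np.zeros((theta.size, 1))     # P(X >= 6) = 0
    return np.hstack([ones, p_star]) - np.hstack([p_star, zeros])

theta = np.linspace(-3.0, 3.0, 61)
p_2003 = category_probs(theta, a=1.2, bs=[-1.5, -0.5, 0.4, 1.6])
p_2004 = category_probs(theta, a=0.9, bs=[-1.8, -0.6, 0.6, 1.9])
# Overlaying the five curves from each calibration shows the kind of
# displacement across forms that a context effect would produce.
```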


More information

INFLUENCING FLU VACCINATION BEHAVIOR: Identifying Drivers & Evaluating Campaigns for Future Promotion Planning

INFLUENCING FLU VACCINATION BEHAVIOR: Identifying Drivers & Evaluating Campaigns for Future Promotion Planning INFLUENCING FLU VACCINATION BEHAVIOR: Identifying Drivers & Evaluating Campaigns for Future Promotion Planning Cathy St. Pierre, MS ACHA 2011 Annual Conference June 1, 2011 H1N1 Flu Media Coverage Source:

More information

Psychometrics for Beginners. Lawrence J. Fabrey, PhD Applied Measurement Professionals

Psychometrics for Beginners. Lawrence J. Fabrey, PhD Applied Measurement Professionals Psychometrics for Beginners Lawrence J. Fabrey, PhD Applied Measurement Professionals Learning Objectives Identify key NCCA Accreditation requirements Identify two underlying models of measurement Describe

More information

Analyzing the Relationship between the Personnel s Achievement Motivation and their Performance at the Islamic Azad University, Shoushtar branch

Analyzing the Relationship between the Personnel s Achievement Motivation and their Performance at the Islamic Azad University, Shoushtar branch Analyzing the Relationship between the Personnel s Achievement Motivation and their Performance at the Islamic Azad University, Shoushtar branch Masoud Ahmadinejad *, Omme Kolsom Gholamhosseinzadeh **,

More information

Attitude Measurement

Attitude Measurement Business Research Methods 9e Zikmund Babin Carr Griffin Attitude Measurement 14 Chapter 14 Attitude Measurement 2013 Cengage Learning. All Rights Reserved. May not be scanned, copied or duplicated, or

More information

During the past century, mathematics

During the past century, mathematics An Evaluation of Mathematics Competitions Using Item Response Theory Jim Gleason During the past century, mathematics competitions have become part of the landscape in mathematics education. The first

More information

PDRF About Propensity Weighting emma in Australia Adam Hodgson & Andrey Ponomarev Ipsos Connect Australia

PDRF About Propensity Weighting emma in Australia Adam Hodgson & Andrey Ponomarev Ipsos Connect Australia 1. Introduction It is not news for the research industry that over time, we have to face lower response rates from consumer surveys (Cook, 2000, Holbrook, 2008). It is not infrequent these days, especially

More information

Psychology (PSYC) Psychology (PSYC) 1

Psychology (PSYC) Psychology (PSYC) 1 Psychology (PSYC) 1 Psychology (PSYC) PSYC 111. Introduction to Psychology. 3 Credits. Survey of the scientific study of behavior and mental processes. PSYC 189. Skills for Academic Success. 1 Credit.

More information

Journal of Educational and Psychological Studies - Sultan Qaboos University (Pages ) Vol.7 Issue

Journal of Educational and Psychological Studies - Sultan Qaboos University (Pages ) Vol.7 Issue Journal of Educational and Psychological Studies - Sultan Qaboos University (Pages 537-548) Vol.7 Issue 4 2013 Constructing a Scale of Attitudes toward School Science Using the General Graded Unfolding

More information

Internal Consistency and Reliability of the Networked Minds Social Presence Measure

Internal Consistency and Reliability of the Networked Minds Social Presence Measure Internal Consistency and Reliability of the Networked Minds Social Presence Measure Chad Harms, Frank Biocca Iowa State University, Michigan State University Harms@iastate.edu, Biocca@msu.edu Abstract

More information

The Multi Institutional Study of Leadership University of San Diego Overall Findings from the Study

The Multi Institutional Study of Leadership University of San Diego Overall Findings from the Study Background of the Study The Multi Institutional Study of Leadership University of San Diego Overall Findings from the Study Prepared by Paige Haber, Department of Leadership Studies Questions? Contact

More information

ATTITUDES, BELIEFS, AND TRANSPORTATION BEHAVIOR

ATTITUDES, BELIEFS, AND TRANSPORTATION BEHAVIOR CHAPTER 6 ATTITUDES, BELIEFS, AND TRANSPORTATION BEHAVIOR Several studies were done as part of the UTDFP that were based substantially on subjective data, reflecting travelers beliefs, attitudes, and intentions.

More information

Exploring a Method to Evaluate Survey Response Scales

Exploring a Method to Evaluate Survey Response Scales Exploring a Method to Evaluate Survey Response Scales René Bautista 1, Lisa Lee 1 1 NORC at the University of Chicago, 55 East Monroe, Suite 2000, Chicago, IL 60603 Abstract We present results from a qualitative

More information

ASSESSING THE UNIDIMENSIONALITY, RELIABILITY, VALIDITY AND FITNESS OF INFLUENTIAL FACTORS OF 8 TH GRADES STUDENT S MATHEMATICS ACHIEVEMENT IN MALAYSIA

ASSESSING THE UNIDIMENSIONALITY, RELIABILITY, VALIDITY AND FITNESS OF INFLUENTIAL FACTORS OF 8 TH GRADES STUDENT S MATHEMATICS ACHIEVEMENT IN MALAYSIA 1 International Journal of Advance Research, IJOAR.org Volume 1, Issue 2, MAY 2013, Online: ASSESSING THE UNIDIMENSIONALITY, RELIABILITY, VALIDITY AND FITNESS OF INFLUENTIAL FACTORS OF 8 TH GRADES STUDENT

More information

Research on Software Continuous Usage Based on Expectation-confirmation Theory

Research on Software Continuous Usage Based on Expectation-confirmation Theory Research on Software Continuous Usage Based on Expectation-confirmation Theory Daqing Zheng 1, Jincheng Wang 1, Jia Wang 2 (1. School of Information Management & Engineering, Shanghai University of Finance

More information

Survey Research Methodology

Survey Research Methodology Survey Research Methodology Prepared by: Praveen Sapkota IAAS, TU, Rampur Chitwan, Nepal Social research Meaning of social research A social research is a systematic method of exploring, analyzing and

More information

Design and Analysis of Surveys

Design and Analysis of Surveys Beth Redbird; redbird@northwestern.edus Soc 476 Office Location: 1810 Chicago Room 120 R 9:30 AM 12:20 PM Office Hours: R 3:30 4:30, Appointment Required Location: Harris Hall L06 Design and Analysis of

More information