Methods for the Statistical Analysis of Discrete-Choice Experiments: A Report of the ISPOR Conjoint Analysis Good Research Practices Task Force


A. Brett Hauber, PhD, Senior Economist and Vice President, Health Preference Assessment, RTI Health Solutions, Research Triangle Park, NC, USA
Juan Marcos Gonzalez, PhD, Senior Research Economist, Health Preference Assessment, RTI Health Solutions, Research Triangle Park, NC, USA
Catharina G.M. Groothuis-Oudshoorn, PhD, Assistant Professor, Health Technology and Services Research, University of Twente, Enschede, the Netherlands
Thomas Prior, PhD Candidate, Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
Deborah A. Marshall, PhD, Canada Research Chair, Health Services and Systems Research; Associate Professor, Department of Community Health Sciences, Faculty of Medicine, University of Calgary; Alberta Bone and Joint Health Institute, Calgary, AB, Canada
Charles Cunningham, PhD, Professor, Department of Psychiatry & Behavioural Neuroscience, Jack Laidlaw Chair in Patient-Centered Health Care, Michael G. DeGroote School of Medicine, McMaster University, Hamilton, Ontario, Canada
Maarten J. IJzerman, PhD, Professor of Clinical Epidemiology & Health Technology Assessment, Chair, Department of Health Technology & Services Research, University of Twente, Enschede, the Netherlands
John F.P. Bridges, PhD, Associate Professor, Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA

* Authors represent the ISPOR Conjoint Analysis Statistical Analysis Good Research Practices Task Force

Abstract

Conjoint analysis is a stated-preference survey method that can be used to elicit responses that reveal preferences, priorities, and the relative importance of individual features associated with health care interventions or services. Conjoint analysis methods, particularly discrete-choice experiments (DCEs), have been increasingly used to quantify the preferences of patients, caregivers, physicians, and other healthcare stakeholders. Recent consensus-based guidance on good research practices, including two recent ISPOR task force reports, Conjoint Analysis Applications in Health - a Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force (the Checklist) (Bridges et al., 2011) and Constructing Experimental Designs for Discrete-Choice Experiments: Report of the ISPOR Conjoint Analysis Experimental Design Task Force (Johnson et al., 2013), has aided in improving the quality of conjoint analyses and discrete-choice experiments in health economics and outcomes research. However, uncertainty regarding good research practices for the statistical analysis of data from DCEs persists. Multiple methods exist for analyzing DCE data. Each analysis method has different characteristics and can potentially yield outputs that differ quantitatively. Understanding the characteristics and appropriate use of different analysis methods is critical to conducting a well-designed DCE study. This report will assist researchers in evaluating and selecting from among alternative approaches to the statistical analysis of DCE data. In this report, we first present a very basic DCE example and a simple method for analyzing the resulting data. We then present an archetypal case of a DCE (a three-attribute, two-alternative, forced-choice experiment) and one of the most common approaches to analyzing data from such a question format, the multinomial logit model. We then describe some common alternative methods for analyzing these data.
This report briefly introduces the theoretical underpinnings of using DCEs to elicit preference information and the strengths and weaknesses of each alternative analysis method. This report does not endorse any specific method; however, it does provide a guide for selecting a method that is appropriate for a particular study and for understanding the implications of the selection of statistical analysis method on what conclusions can be supported by the results. Keywords: conjoint analysis, discrete-choice experiment, statistical analysis, stated preference methods, patient-centered outcomes research.

Background to the Task Force

The ISPOR Conjoint Analysis Statistical Analysis Good Research Practices Task Force is the third ISPOR Conjoint Analysis Task Force. It builds on two previous task force reports, Conjoint Analysis Applications in Health - a Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force (Bridges et al., 2011) and the report of the ISPOR Conjoint Analysis Experimental Design Task Force (Johnson et al., 2013). The Conjoint Analysis Checklist report developed a 10-point checklist for conjoint analysis. Checklist items included: 1) the research question, 2) the attributes and levels, 3) the format of the question, 4) the experimental design, 5) the preference elicitation, 6) the design of the instrument, 7) the data-collection plan, 8) the statistical analysis, 9) the results and conclusions, and 10) the study's presentation. This first task force determined that several items, including experimental design (Checklist Item #4) and methods for analyzing data from conjoint analysis studies (Checklist Item #8), deserved more detailed attention. Thus, the ISPOR Conjoint Analysis Experimental Design Task Force focused on experimental design to assist researchers in evaluating alternative approaches to this difficult and important element of a successful conjoint-analysis study. This third task force report focuses on the range of options available to researchers to analyze data generated from studies using a particular type of conjoint analysis, the discrete-choice experiment, and the types of results generated by each method. This report also describes the issues researchers should consider when evaluating each analysis method and factors to consider when choosing a method for statistical analysis.
The Conjoint Analysis Statistical Analysis Good Research Practices Task Force proposal was submitted to the ISPOR Health Science Policy Council for evaluation in December. The Council recommended the proposal to the ISPOR Board of Directors, and it was subsequently approved in January. Researchers experienced in stated preferences and discrete-choice experiments working in academia and research organizations in Canada, the Netherlands, and the United States were invited to join the task force's leadership group. The leadership group met via regular teleconference to identify and discuss current analytical techniques, develop the topics and outline, and prepare drafts of the manuscript. An international group of analytical experts was consulted during this process to discuss methods for analysis of conjoint analysis studies and to review the task force's draft reports. (This background to the task force section will include details of the review process and be completed before submission to Value in Health.)

INTRODUCTION

Over the past two decades there has been a rapid increase in the use of conjoint analysis to measure the preferences of patients and other stakeholders in health applications (Ryan and Gerard, 2003; Bridges et al., 2008; Marshall et al., 2010; de Bekker-Grob et al., 2012; Hauber et al., 2013a; Clark et al., 2014). While early applications highlighted the importance of process utility (Ryan, 1999; Longworth et al., 2001), applications now frequently focus on patient preferences for health status (Hauber et al., 2010; Mohamed et al., 2010), screening (Marshall et al., 2010), prevention (Poulos et al., 2011; van Gils et al., 2011), pharmaceutical treatment (Bridges et al., 2012; Hauber et al., 2013b), therapeutic devices (Ho et al., 2015; Sanders et al., 2010), diagnostic testing (Groothuis-Oudshoorn et al., 2014; Plumb et al., 2014), and end-of-life care (Fishman et al., 2009; Hall et al., 2013). In addition, conjoint analysis methods have been used to study decision making among stakeholders other than patients, including clinicians (Nathan et al., 2012; Faggioli et al., 2011; Arellano et al., 2015), caregivers (Morton et al., 2012; Faggioli et al., 2011), and the general public (Honda et al., 2015; Regier et al., 2015). Conjoint analysis is a broad term that can be used to describe a range of stated-preference methods that have respondents rate, rank, or choose from among a set of experimentally controlled profiles consisting of multiple attributes varying across levels. The most common type of conjoint analysis used in health economics and outcomes research is the discrete-choice experiment (DCE) (Clark et al., 2014; de Bekker-Grob et al., 2012). This task force report focuses on motivating and reviewing the most common statistical methods that are currently used to analyze data from a DCE. The primary objective when analyzing DCE data is to statistically decompose the determinants of the choice between profiles.
In doing so, one can infer the importance of attributes and levels included in the profiles. The premise of a DCE is that choice is motivated by the attributes of the alternatives considered by decision makers. By controlling these attributes experimentally and observing choices among populations of interest, a DCE makes it possible to reverse engineer choices to estimate the implied importance of the attributes and attribute levels behind those choices. We refer to these estimates of strength of preference, which are sometimes called part-worth utilities, as preference weights. Because preference weights in a DCE are estimated on a common scale, they can be used to calculate ratios describing the tradeoffs respondents are willing to make among the attributes. Examples of these tradeoffs include estimates of money equivalence (willingness to pay) (Gray et al., 2015; Johnson P et al., 2014), risk equivalence (maximum acceptable risk) (Ho et al., 2015; Kauf et al., 2015), or time equivalence (Whitty et al., 2015; Gonzalez et al., 2013) for various changes in attributes or attribute levels.

In contrast to choice-based conjoint analysis studies conducted in market research or marketing science, DCEs in health economics and outcomes research tend to focus more on quantifying preferences for changes in attributes and attribute levels and less on predicting choices. To that end, most statistical analyses of DCE data in health applications are geared toward estimating a model to test hypotheses regarding strength of preference, importance, and tradeoffs among attributes rather than estimating the model with the greatest predictive power. The application of DCEs to measuring preferences for health and healthcare has benefited from a growing literature on methods (Ryan and Farrar, 2000; Bridges, 2003; Carlsson and Martinsson, 2003; Viney et al., 2005; Lancsar and Louviere, 2008; Johnson and Mansfield, 2008), including the two previous task force reports from the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Conjoint Analysis Good Research Practices Task Forces. This report builds on the first of these reports, Conjoint Analysis Applications in Health - a Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force (Bridges et al., 2011). The Checklist outlines the steps to take for the development, analysis, and publication of conjoint analyses, including experimental design (Checklist Item #4) and methods for analyzing data from conjoint analysis studies (Checklist Item #8). While there are several other key methodological references that may be useful to more experienced researchers (Louviere et al., 2000; Hensher et al., 2005; Orme, 2010), this task force report is targeted to a more general audience of healthcare researchers. Understanding the characteristics and appropriate analysis of preference data generated by DCE surveys is critical to conducting a well-designed DCE.
This report will assist in understanding some aspects of the preference information obtained through DCEs that should be taken into account as researchers evaluate and select among alternative statistical methods for analyzing DCE data. As such, this task force report is a primer on statistical methods as much as a statement of good research practices. The task force agreed on this approach because, despite the growing use of conjoint analysis methods in health economics and outcomes research, there remains inconsistency in the statistical methods used to analyze data from DCEs (Bridges et al., 2011; Bridges et al., 2008; Marshall et al., 2010; Johnson and Mansfield, 2008). This report also highlights the fact that there is no single best way to analyze DCE data. Therefore, this report concludes with a checklist of principles to consider when designing a DCE and selecting an appropriate approach to statistical analysis. This report starts with the basic idea behind deriving preference weights from DCE data by describing two simple approaches to calculating preference weights from a DCE with a very limited number of attributes and levels. The purpose of providing these examples is to help readers understand some of the basic properties of choice data. We then present an archetypal case, a three-attribute, two-alternative, forced-choice DCE, analyzed using a multinomial logit model consistent with the random utility model of choice (McFadden, 1974; Louviere et al., 2000;

Bateman et al., 2002; Ryan and Farrar, 2000). Because most of the other commonly used methods for analyzing DCE data are, in effect, variations on the multinomial logit model, we then describe extensions to multinomial logit that can be used to analyze the same DCE data. These extensions include random-parameters (mixed) logit, hierarchical Bayes, and latent class analysis. To demonstrate the differences in the properties of each of these analysis methods, we present the results of each method as applied to a common simulated data set.

A SIMPLE EXAMPLE

To understand the basic concepts underlying the analysis of DCE data, consider the case of a hypothetical, over-the-counter analgesic. If we assume that the relevant analgesics can be described by three attributes with two possible levels each (time to onset of action can be 30 minutes or 5 minutes; duration of pain relief can be 4 hours or 8 hours; and the formulation can be either tablets or capsules), then these can be combined into 8 possible profiles. While these 8 profiles can be combined into a very large number of distinct pairs, we could use a main-effects orthogonal design (Johnson et al., 2013) to generate an experimental design consisting of four paired-profile tasks (Table 1).

Table 1
         Profile A                                   Profile B
Task 1   30-minute onset, 4-hour duration, tablet    5-minute onset, 8-hour duration, capsule
Task 2   30-minute onset, 8-hour duration, capsule   5-minute onset, 4-hour duration, tablet
Task 3   5-minute onset, 4-hour duration, capsule    30-minute onset, 8-hour duration, tablet
Task 4   5-minute onset, 8-hour duration, tablet     30-minute onset, 4-hour duration, capsule

If there were three respondents who completed all four choice tasks, we would have 12 observations with which to estimate a model. Assume respondents answered as follows: Respondent 1 (B, B, A, B), Respondent 2 (A, B, A, A), and Respondent 3 (A, B, B, A). The simplest form of analysis is to count how many times each level of each attribute was chosen.
In Table 2, we count the number of times each attribute level was chosen by each respondent, sum these totals across all respondents, and then divide this sum by the number of times each attribute level was presented across the three respondents to calculate a score for each attribute level. Although there appears to be some heterogeneity in preferences among the respondents (e.g., Respondent 1 appears to prefer capsules to tablets while Respondents 2 and 3 appear to prefer tablets to capsules), we can still infer sample-level preferences from these data. Across the sample: 5-minute onset was preferred to 30-minute onset, 4-hour duration was preferred to 8-hour duration, and tablets were preferred to capsules.
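The counting analysis above can be sketched in a few lines of code. The design and the three respondents' choices are taken directly from Table 1 and the text; the variable names and the level labels are our own shorthand.

```python
# Counting analysis for the simple analgesic example.
from collections import Counter

# Attribute levels shown in each task: (Profile A, Profile B), per Table 1.
design = {
    1: (("30 min", "4 hr", "tablet"), ("5 min", "8 hr", "capsule")),
    2: (("30 min", "8 hr", "capsule"), ("5 min", "4 hr", "tablet")),
    3: (("5 min", "4 hr", "capsule"), ("30 min", "8 hr", "tablet")),
    4: (("5 min", "8 hr", "tablet"), ("30 min", "4 hr", "capsule")),
}
choices = {1: "BBAB", 2: "ABAA", 3: "ABBA"}  # respondent -> chosen profile per task

counts = Counter()
for picks in choices.values():
    for task, pick in zip(design, picks):
        chosen_profile = design[task][0 if pick == "A" else 1]
        counts.update(chosen_profile)  # tally every level in the chosen profile

# Each level appears in one profile of every task, so it is presented
# 4 tasks x 3 respondents = 12 times; score = times chosen / times presented.
scores = {level: n / 12 for level, n in counts.items()}
print(scores)  # e.g. "5 min" scores 8/12 (0.67) and "30 min" scores 4/12 (0.33)
```

Running this reproduces the sample-level ordering described above: 5-minute onset, 4-hour duration, and tablets each score 0.67, while their counterparts score 0.33.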

Table 2
Level             Resp. 1   Resp. 2   Resp. 3   Sum   Score
30-minute onset   1         1         2         4     0.33
5-minute onset    3         3         2         8     0.67
4-hour duration   3         3         2         8     0.67
8-hour duration   1         1         2         4     0.33
Tablet            1         3         4         8     0.67
Capsule           3         1         0         4     0.33

We can also use regression analysis to linearly relate the probability of choosing one profile over another to the characteristics of the profiles. This model is known as a linear probability model and assumes that the probability of choosing one profile (we assume that is Profile A in our example) over another is a linear function of the attribute-level differences between the profiles in each choice task. Thus, in our example above, the model can be described as

Pr(choice = Profile A) = β0 + Σi βi (Xi,A - Xi,B),  [1]

where Xi is the level of attribute i, β0 is the intercept, and βi is the preference weight for attribute i. The linear probability model defines a relationship between choices and attribute levels that can be leveraged to estimate the preference weights through various linear regression models. One such estimator is ordinary least squares (OLS). With OLS, the values of the set of β in the equation above are those that minimize the sum of the squared residuals. To set up the data for regression analysis in this example, let Quick=1 if the onset of action is 30 minutes and Quick=0 if it is 5 minutes in Profile A, Duration=1 if the duration is 4 hours and Duration=0 if it is 8 hours in Profile A, and let Tablet=1 if the formulation is a tablet and Tablet=0 if the formulation is a capsule in Profile A. Finally, let Choice=1 if Profile A was chosen and 0 if Profile B was chosen. Table 3 presents the data from this experiment, using the above coding.

Table 3
Respondent   Choice Task   Choice   Quick   Duration   Tablet
1            1             0        1       1          1
1            2             0        1       0          0
1            3             1        0       1          0
1            4             0        0       0          1
2            1             1        1       1          1
2            2             0        1       0          0
2            3             1        0       1          0
2            4             1        0       0          1
3            1             1        1       1          1
3            2             0        1       0          0
3            3             0        0       1          0
3            4             1        0       0          1
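As a sketch, the OLS fit described in the text can be reproduced by applying a least-squares solver to the 12 rows of Table 3. The variable names are illustrative, not from the report.

```python
# OLS estimation of the linear probability model on the Table 3 data.
import numpy as np

# Coding of Profile A per task (Quick, Duration, Tablet), identical for all respondents.
X_task = {1: (1, 1, 1), 2: (1, 0, 0), 3: (0, 1, 0), 4: (0, 0, 1)}
choices = {1: "BBAB", 2: "ABAA", 3: "ABBA"}  # chosen profile per task, per respondent

rows, y = [], []
for picks in choices.values():
    for task, pick in zip(X_task, picks):
        rows.append((1,) + X_task[task])        # leading 1 is the intercept column
        y.append(1 if pick == "A" else 0)       # Choice = 1 when Profile A was chosen

beta, *_ = np.linalg.lstsq(np.array(rows, float), np.array(y, float), rcond=None)
print(np.round(beta, 2))  # intercept ~0.33, Quick ~-0.33, Duration ~0.33, Tablet ~0.33
```

The fitted coefficients match the counts analysis: each slope equals the difference between the two levels' sample scores.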

Using OLS on the binary choice variable Y (1 if Profile A and 0 if Profile B is chosen) to estimate preference weights for the attribute levels used to define the analgesics in this example, the model takes the form:

Y = α + β1 Quick + β2 Duration + β3 Tablet + ε,  [2]

and the conditional expectation of the binary variable Y equals the probability that Profile A is chosen. The estimated linear probability function is then:

Pr(choice = Profile A) = Pr(Y=1) = 0.33 - 0.33 Quick + 0.33 Duration + 0.33 Tablet.  [3]

The coefficients from this simple model can be interpreted as marginal probabilities: an analgesic with a 30-minute onset is 33% less likely to be chosen than an analgesic with a 5-minute onset, an analgesic with a 4-hour duration is 33% more likely to be chosen than an analgesic with an 8-hour duration, and an analgesic in the form of a tablet is 33% more likely to be chosen than an analgesic in the form of a capsule. From this we can infer that, on average, respondents in the sample prefer faster onset of action, shorter duration of effect, and tablets over capsules. Note that with results from a linear probability model, all the preference information we have is perfectly confounded with the probability of choice associated with changes in attribute levels. That is, the measure of how much more a respondent prefers a 5-minute onset versus a 30-minute onset (as with any other change in attribute levels) is the marginal change in the probability of choice for treatments that differ only in the time of onset of action. Comparing the results of this simple regression to the results of the counts analysis, we see that the estimated regression parameter on each attribute is simply the difference between the sample-level scores for the levels of that attribute. Specifically, the score for 30-minute onset

was 0.33 less than the score for 5-minute onset, the score for 4-hour duration was 0.33 higher than the score for 8-hour duration, and the score for tablet was 0.33 higher than the score for capsule. Although users can expect OLS estimates to have standard errors associated with each preference weight, these standard errors are problematic, and inferences about the significance of estimates are typically not made with OLS results. This is because the OLS estimator is subject to severe limitations when it is used to analyze DCE data. When using OLS, a researcher must assume that the errors with which choices are measured are independent and identically distributed with mean zero and constant variance (Cox, 1970). In reality, the variance in DCE data changes across choice tasks. In addition, even with estimators other than OLS, linear probability models can produce choice probabilities that are greater than one or less than zero for certain combinations of attribute levels. For these reasons, among others, linear probability methods are rarely used to analyze DCE data.

THE ARCHETYPAL CASE

To describe the more common alternatives for analyzing data from a DCE, we define an archetypal case. As above, we define a DCE in which each choice task presents a pair of alternatives. Respondents are asked to choose between the two profiles in each pair. Each profile is defined by three attributes, and each attribute has three possible levels. This case can thus be referred to as a three-attribute, two-alternative, forced-choice experiment. In the archetypal case, each profile is a medication alternative. The three attributes that define each medication alternative are efficacy, a side effect, and a mode of administration. Efficacy is the treatment outcome measured on a numeric scale between 0 and 10, where higher values represent better outcomes. The levels of the side effect are levels of severity (mild, moderate, severe).
The levels of the mode of administration are daily tablets, weekly subcutaneous injections, and monthly intravenous infusions. The attributes and levels used to create the profiles in the archetypal case are presented in Table 4, and an example of a choice task is presented in Figure 1.

Table 4. Attributes and Attribute Levels in the Archetypal Case
Attribute                    Levels
A1 Efficacy                  L1: 10 (best level)
                             L2: 5 (middle level)
                             L3: 3 (worst level)
A2 Side effect               L1: Mild

                             L2: Moderate
                             L3: Severe
A3 Mode of administration    L1: 1 tablet once a day
                             L2: Subcutaneous injection once a week
                             L3: Intravenous infusion once a month

Figure 1. An Example of a Choice Task for the Archetypal Case

Feature                      Medicine A                            Medicine B
Efficacy                     10 on a scale from 1 to 10,           5 on a scale from 1 to 10,
                             where 10 is the best                  where 10 is the best
Severity of side effects     Severe                                Mild
How you take the medicine    Subcutaneous injection once a week    Intravenous infusion once a month

Which medicine would you choose?

Variable Coding in the Archetypal Case

Defining the levels in each row of data for the archetypal case can be accomplished in multiple ways. Attributes with numeric levels (e.g., survival time, risk, cost) can be specified as continuous variables. Using this approach, the level value of the attribute in the profile appears in the appropriate place in the row. In the archetypal case, only efficacy can be logically coded as a continuous variable because the levels of the side effect and mode of administration are descriptive and thus categorical.

Two commonly used methods for categorical coding of attribute levels are dummy-variable coding and effects coding (Hensher, Rose, and Greene, 2005; Bech and Gyrd-Hansen, 2005). In each of these coding approaches, one level of each attribute must be omitted. In both dummy-variable coding and effects coding, each non-omitted attribute level is assigned a value of 1 when that level is present in the corresponding profile and 0 when another non-omitted level is present in the corresponding profile. The difference between the two coding methods is related to the coding of the non-omitted levels when the omitted level is presented in the profile. With dummy-variable coding, all non-omitted levels are coded as 0 when the omitted level is present. With effects coding, all non-omitted levels are coded as -1 when the omitted level is present. Table 5 presents dummy coding and effects coding for an attribute with three levels when L3 is the omitted level. The same coding can be applied to each attribute in the archetypal case.

Table 5. Dummy-Variable Coding and Effects Coding for the Archetypal Case
Attribute level            Dummy variables      Effects-coded variables
presented in the profile   L1       L2          L1       L2
L1                         1        0           1        0
L2                         0        1           0        1
L3                         0        0           -1       -1

With dummy-variable coding, all coefficients estimated by the model represent a measure of preference for each level of an attribute relative to the omitted level of that attribute. The coefficient on the omitted level of an effects-coded variable can be recovered as the negative sum of the coefficients on the non-omitted levels of that attribute, which yields a unique coefficient for each attribute level included in the study. However, each coefficient is estimated relative to the mean attribute effect, and statistical tests of significance for each coefficient are not direct tests of the statistical significance of differences between estimated coefficients on two different levels of the same attribute. Dummy and effects coding lead to the same estimates of differences in preference weights between attribute levels (Mark and Swait, 2004).
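As an illustration, the two coding schemes in Table 5 can be generated programmatically. The helper functions below are hypothetical conveniences, not part of any standard package, and assume the omitted level is the last one.

```python
# Dummy-variable vs. effects coding for a three-level attribute, L3 omitted.
def dummy_code(level, n_levels=3):
    """Return the dummy codes for a level in 1..n_levels (last level omitted)."""
    cols = [0] * (n_levels - 1)
    if level != n_levels:          # omitted (last) level stays all zeros
        cols[level - 1] = 1
    return cols

def effects_code(level, n_levels=3):
    """Return the effects codes; the omitted (last) level is all -1s."""
    if level == n_levels:
        return [-1] * (n_levels - 1)
    cols = [0] * (n_levels - 1)
    cols[level - 1] = 1
    return cols

for lvl in (1, 2, 3):
    print(lvl, dummy_code(lvl), effects_code(lvl))
# 1 [1, 0] [1, 0]
# 2 [0, 1] [0, 1]
# 3 [0, 0] [-1, -1]
```

The two schemes differ only in the row for the omitted level, exactly as in Table 5.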
For this reason, the decision to use dummy or effects coding of variables should be based on ease of interpretation of the estimates from the model, not on the expectation that one type of coding will provide more information than the other. It is important to note that with dummy coding, whenever a constant is added to a model to characterize relative preferences for any specific alternative in the choice questions, the value of this constant represents the preference for the alternative when the alternative assumes

all the omitted attribute levels in the model [1]. The same is true with effects-coded variables, except that with effects coding, the omitted levels correspond to the average effect of the study attributes. Thus, the constant can be interpreted as the preference for the alternative when all attributes are set at their average effect. In other words, the constant in such a case becomes a measure of the average preference for the alternative given the attribute levels in the study.

Data Generated by the Archetypal Case

One way to set up the data generated by this DCE is to construct two rows of data for each choice task for each respondent, one row per alternative. So, if each respondent is presented with 10 choice tasks generated by an appropriate experimental design, and there are 200 respondents, the total data set will have 4,000 rows. In our example with two alternatives, the first row of data for each choice task will include the attribute levels that appear in the first profile in the pair presented in that choice task. The second row of data for the same choice task will include the attribute levels for the second profile in that pair. In addition, each row of data will include a choice dummy variable equal to 1 if the chosen profile for the choice task corresponds to that row of data or 0 otherwise. In Table 6, we present an example of this type of data setup for the example choice task presented above, using effects coding for all attributes and assuming that level 3 is the omitted level for each attribute.

Table 6. Data Setup for One Response to the Example Choice Task Using Effects Coding
ID   Task   Choice   Eff (L1)   Eff (L2)   SE (L1)   SE (L2)   Mode (L1)   Mode (L2)
1    1      0        1          0          -1        -1        0           1
1    1      1        0          1          1         0         -1          -1

The ID number in the first column of Table 6 is the respondent number. Task is the number of the choice task in the series. Choice indicates that the first respondent chose Medicine B when presented with the choice task in Figure 1 above.
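The construction of the two data rows in Table 6 can be sketched as follows. The `effects` helper and the numeric profile encoding are our own shorthand; the attribute levels come from Figure 1 (Medicine A: efficacy L1, side effect L3, mode L2; Medicine B: efficacy L2, side effect L1, mode L3).

```python
# Build the two effects-coded rows for the example choice task (Table 6).
def effects(level):
    """Effects codes for a 3-level attribute with level 3 omitted."""
    return [-1, -1] if level == 3 else ([1, 0] if level == 1 else [0, 1])

profiles = {"A": (1, 3, 2), "B": (2, 1, 3)}   # (efficacy, side effect, mode) levels
chosen = "B"                                   # the respondent chose Medicine B

rows = []
for alt, (eff, se, mode) in profiles.items():
    # Columns: ID, Task, Choice, Eff(L1), Eff(L2), SE(L1), SE(L2), Mode(L1), Mode(L2)
    rows.append([1, 1, int(alt == chosen)] + effects(eff) + effects(se) + effects(mode))

for row in rows:
    print(row)
# [1, 1, 0, 1, 0, -1, -1, 0, 1]
# [1, 1, 1, 0, 1, 1, 0, -1, -1]
```

The printed rows match Table 6: the omitted side-effect level in Medicine A and the omitted mode level in Medicine B each produce a pair of -1s.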
The level of side effects in Medicine A in the example choice task is severe (the omitted level). Therefore, the non-omitted levels of this attribute are coded as -1 in the first line of data in Table 6.

[1] This may be desirable if the combination of all omitted levels represents a clinically meaningful profile. Then, all preference weights in the model describe deviations from this profile and statistical differences from the levels in the profile.

In addition, the level for the mode of administration in Medicine B is the omitted level for this attribute, and the non-omitted levels for this attribute are both coded as -1 in the two columns farthest to the right in the second line of data in Table 6.

MULTINOMIAL LOGIT

The most common approach to analyzing data generated by this type of experiment is the use of a limited-dependent-variable model that relates the probability of choices among two or more alternatives to the characteristics of the individuals making the choices and/or the elements describing those alternatives. This model is commonly known as a multinomial logit (MNL) model or a conditional logit (CL) model. Both models rely on the same statistical assumptions about the relationship between choice and the variables used to help explain choice. MNL is often used to describe the model that relates choices to the characteristics of the respondents choosing, while CL relates choices to the elements describing the alternatives. When this distinction is made in software packages, MNL does not produce estimates for variables that only change across alternatives, and CL does not produce estimates for variables that only change across respondents. Often, however, the distinction between the two models is not made and both are characterized as MNL. In a DCE, the elements describing the alternatives are the attribute levels used to define each profile in the choice task. The MNL model was shown by McFadden (1974) to be consistent with random utility theory (RUT) [2]. McFadden originally applied this framework to observed transportation choices. His work laid the foundation for what is now known as conjoint analysis (Louviere et al., 2000; Bateman et al., 2002) involving hypothetical or stated choices.
Using RUT, the utility associated with an alternative or profile is assumed to be a function of observed characteristics (attribute levels) and unobserved characteristics of the alternative. This theoretical framework also assumes that each individual, when faced with a choice between two or more alternatives, will choose the alternative that maximizes his or her utility. The utility function is specified as an indirect utility function defined by the attribute levels in the alternative plus a random error term reflecting the researcher's inability to perfectly measure utility:

U_i = V(β, X_i) + ε_i  [4]

[2] The novelty in McFadden's use of the multinomial logit model is that he applied this model to choice behavior that was consistent with economic theory and derived a regression model that relates choices to the characteristics of the alternatives available to decision makers. McFadden used the term conditional logit to describe this innovation. For simplicity, we will use multinomial logit to describe both the estimator and the application of the estimator to choice experiments.

where V is a function defined by the attribute levels for alternative i, ε_i is a random error term, X_i is a vector of attribute levels defining alternative i, and β is a vector of estimated coefficients. Each estimated coefficient is a preference weight and represents the relative contribution of the attribute level to the utility that respondents assign to an alternative. In the MNL model, ε_i is assumed to follow an independently and identically distributed type 1 extreme value distribution. The assumption of the extreme value distribution of ε_i results in a logit model:

Pr(choice = i) = e^V(β,X_i) / Σ_j e^V(β,X_j),  [5]

where V(β, X_i) is the observed portion of the utility function for alternative i, and i is one alternative among a set of j alternatives. Simply stated, the probability of choosing alternative i is a function of both the attribute levels of alternative i and the attribute levels of all other profiles presented in a choice task. In the case of the two-alternative, forced-choice DCE, there are two alternatives in each choice task, so j = 2. The probability of choosing one profile from the set of two alternatives is 1 minus the probability of choosing the other profile in that choice task. Therefore, neither alternative in the choice task has a choice probability less than 0% or greater than 100%. In addition, this specification implies that the closer the probability of choosing an alternative in a two-alternative choice task is to 50%, the more sensitive the probability of choosing that alternative is to changes in the attribute levels that define the alternative.

Results of the Multinomial Logit Model Using Effects Coding and Dummy-Variable Coding

Table 7 presents a set of example results from a MNL model regression with effects-coded variables, while Table 8 presents results from the same model where the variables were dummy coded.
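The MNL choice probability in equation [5] can be illustrated with a short sketch. The preference weights below are purely illustrative values chosen for the example, not estimates from the report's simulated data set; the effects-coded profiles correspond to the two medicines in Figure 1.

```python
# Choice probabilities under the MNL model: a softmax over deterministic utilities.
import math

def mnl_probabilities(utilities):
    """Apply equation [5] to a list of deterministic utilities V(beta, X)."""
    expV = [math.exp(v) for v in utilities]
    total = sum(expV)
    return [e / total for e in expV]

beta = [0.6, 0.1, 0.8, 0.3, -0.2, 0.4]   # illustrative preference weights only
x_A = [1, 0, -1, -1, 0, 1]               # Medicine A, effects-coded (Table 6, row 1)
x_B = [0, 1, 1, 0, -1, -1]               # Medicine B, effects-coded (Table 6, row 2)

V = [sum(b * x for b, x in zip(beta, xs)) for xs in (x_A, x_B)]
p_A, p_B = mnl_probabilities(V)
print(round(p_A + p_B, 10))  # the two probabilities sum to 1
```

With two alternatives, the expression reduces to a binary logit on the utility difference, so each probability stays strictly between 0 and 1, as the text notes.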
Details regarding the simulated data set used to demonstrate the models described in the remainder of this report are presented in an appendix. In an MNL model, a coefficient (or preference weight) and a corresponding standard error are estimated for all but one level of each attribute. A t-value (or sometimes a z-value) and a p-value, the probability that the estimated coefficient is zero, are often calculated for each estimated preference weight. In some statistical packages, 95% confidence intervals are also provided for each preference weight. Each table also includes AIC, BIC, and log likelihood values, all of which are measures of model fit discussed later in the report.

Footnote 3: With large sample sizes, parameters in the regression models are assumed to be approximately normally distributed. Under this assumption, statistical significance can be tested using a z-value that indicates how many standard deviations the estimate lies from the most common null hypothesis value, zero.

Table 7. Example Results from a Multinomial Logit Model Using Effects Coding
[The table reports a coefficient, standard error, t-value, and p-value for levels L1 and L2 of Efficacy, Side Effect, and Mode of Administration, along with the log likelihood of the model, the log likelihood of the model without predictors, AIC, and BIC; the numeric entries were not preserved in this transcription.]

Table 8. Example Results from a Multinomial Logit Model Using Dummy-Variable Coding
[Same layout as Table 7, with dummy-coded attribute levels; the numeric entries were not preserved in this transcription.]

With effects-coded variables, the preference weight on the omitted level is recovered by calculating the negative of the sum of the estimated preference weights for all non-omitted levels of the attribute. For example, in the results in Table 7, the estimated preference weight for the omitted level of efficacy would be equal to -0.28 (-0.28 = -[0.26 + 0.02]). If the attribute-level variables in the regression model are dummy coded, each estimated preference weight can be interpreted as the difference in preference weights between that level and the omitted level. Also, because the preference weight for the omitted attribute level is assumed to be zero, the t-value and p-value in the output are associated with the likelihood that each estimated preference weight is statistically different from the preference weight of the omitted level.

Interpreting the Results of the Multinomial Logit Model

The preference weights for the effects-coded and dummy-coded MNL models are presented in Figures 2 and 3, respectively. Both figures clearly indicate that higher levels of efficacy are preferred to (have higher preference weights than) lower levels of efficacy, less severe side effects are preferred to more severe side effects, and subcutaneous injections are preferred to 1 tablet once a day, which, in turn, is preferred to an intravenous infusion once a month.
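The two coding schemes and the omitted-level recovery just described can be sketched in a few lines. The helper functions below are hypothetical illustrations; the efficacy weights 0.26 and 0.02 are the Table 7 values quoted in the text.

```python
def effects_code(level, n_levels=3, omitted=3):
    """Effects coding: the omitted level is coded -1 on every column;
    every other level gets a 1 in its own column and 0 elsewhere."""
    if level == omitted:
        return [-1] * (n_levels - 1)
    return [1 if level == k else 0 for k in range(1, n_levels)]

def dummy_code(level, n_levels=3, omitted=3):
    """Dummy coding: the omitted level is coded 0 on every column."""
    return [1 if level == k else 0 for k in range(1, n_levels)]

# Effects-coded efficacy weights for L1 and L2 as reported in Table 7
w_l1, w_l2 = 0.26, 0.02
# The omitted level's weight is the negative of the sum of the others
w_l3 = -(w_l1 + w_l2)  # -0.28, matching the calculation in the text
```

Because the effects-coded columns sum to zero across levels, the three weights are centered on the attribute mean, which is why the omitted level falls out as the negative sum of the estimated ones.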

Figure 2. Preference Weights for the Archetypal Case Estimated Using the Multinomial Logit Model and Effects-Coded Attribute Levels

Figure 3. Preference Weights for the Archetypal Case Estimated Using the Multinomial Logit Model and Dummy-Coded Attribute Levels

Figures 2 and 3 demonstrate that the differences in the preference weights among levels within each attribute are the same in both models. The effect of increasing efficacy from L2 to L1 is 0.24 in the dummy-coded model and 0.24 (0.24 = 0.26 - 0.02) in the effects-coded model. The effect of reducing side effects from L2 to L1 is 0.30 in the dummy-coded model and 0.30 (0.30 =

0.32 - 0.02) in the effects-coded model. The similarities between the effects-coded and the dummy-coded models highlight an important aspect of the MNL model: the absolute values of preference weights alone have no meaningful interpretation. Preference weights measure relative preference, which means that only changes between attribute-level estimates and the relative size of those changes across attributes have meaningful interpretations. In the effects-coded and dummy-coded models used in these examples, directly comparing the parameter estimates for the efficacy attribute in each model would have erroneously suggested that the two models provided different results or different information on the relative impact that these attribute levels have on choice. As the figures demonstrate, differences between the two model specifications are merely due to differences in variable coding, and no additional information is obtained from either model. This implies that estimates from MNL models cannot be compared directly across models (see footnote 4). This interpretation of the model estimates, in fact, applies to all the models presented later in this report.

Knowing that the two models provide the same information, we will use only the results of the effects-coded model presented in Table 7 and Figure 2 to further interpret the results generated by the MNL model as applied to the archetypal case. First, the difference in preference weights between the best or most preferred level of an attribute and the worst or least preferred level of the same attribute provides an estimate of the relative importance of that attribute over the range of levels included in the experiment. For example, the relative importance of a change in efficacy from a value of 10 to a value of 3 (a 7-point change) is 0.54 (0.26 - [-0.28]). The value of a change in side-effect severity from severe to mild is 0.66 (0.32 - [-0.34]).
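The relative-importance calculation just described can be reproduced directly. The names below are hypothetical; the weights are the effects-coded Table 7 values quoted in the text.

```python
# Effects-coded preference weights for the best and worst level of each
# attribute, as used in the relative-importance calculation in the text
weights = {
    "efficacy": {"best": 0.26, "worst": -0.28},  # efficacy of 10 vs. 3
    "side_fx":  {"best": 0.32, "worst": -0.34},  # mild vs. severe
}

def importance(attr):
    """Relative importance of an attribute: the difference between the
    preference weights of its most and least preferred levels."""
    return weights[attr]["best"] - weights[attr]["worst"]

# 0.66 / 0.54: the side-effect improvement is worth about 1.2 times the
# 7-point efficacy gain, over the ranges of levels studied
ratio = importance("side_fx") / importance("efficacy")
```

Note that these importances are conditional on the level ranges included in the experiment; widening or narrowing an attribute's range would change its computed importance.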
Therefore, reducing side-effect severity from severe to mild yields approximately 1.2 (0.66 / 0.54) times as much utility as increasing efficacy by 7 points (from 3 to 10). Likewise, changing the mode of administration from an intravenous infusion once a month to a subcutaneous injection once a week yields only 0.72 (0.39 / 0.54) times as much utility as increasing efficacy by 7 points.

These results can also be used to estimate the rate at which respondents would be willing to trade off among the attributes in the experiment. For example, a reduction in side-effect severity from moderate to mild yields an increase in utility of 0.30. The reduction in efficacy (from the highest level of 10) that exactly offsets this increase in utility would be approximately 5.4 points. This change in efficacy is calculated as the 0.24 reduction in utility achieved by moving from an efficacy value of 10 to an efficacy value of 5, plus the change in efficacy between a value of 5 and a value of 3 that would yield a further reduction in utility of 0.06. This necessary additional change in efficacy is equal to one fifth (0.06 / 0.30) of the difference between an efficacy value of 5 and an efficacy value of 3. Interpolating between these 2 points and adding the value to

Footnote 4: Results from models that share the same specification can be compared following Swait and Louviere (1993) or Hensher, Rose, and Greene.

the change from an efficacy value of 10 to an efficacy value of 5 results in a total change in efficacy from 10 to 4.6 necessary to offset the increase in utility from reducing the severity of the side effect.

Preference Weight Estimates

Although the results presented in Table 7 (Figure 2) and Table 8 (Figure 3) appear to be different at first glance, this difference is solely due to the reference point for the estimated preference weights in each model. The relationship between the two sets of results can be confirmed by transforming the results from the dummy-coded model into deviations from the attribute mean effects. For example, based on the results in Table 8, the mean of the preference weights for the efficacy attribute is 0.29 (the sum of the three dummy-coded efficacy weights divided by 3). Subtracting this mean effect from each estimated preference weight for the efficacy attribute, we obtain estimates of the preference weights that are equal to the preference-weight estimates obtained using the effects-coded data, as shown below:

Efficacy L1 = 0.26
Efficacy L2 = 0.02    [6]
Efficacy L3 = -0.28

The relationship between attribute levels across models is the same for all three attributes. For example, in the case of the side-effect attribute, the preference weights of the dummy-coded results can be transformed as shown below:

Side effect L1 = 0.32
Side effect L2 = 0.02    [7]
Side effect L3 = -0.34

Goodness of Fit

The relationship between the dummy-variable coded and effects-coded models can also be confirmed by looking at a measure of goodness of fit for each model. Unlike OLS, MNL models do not allow for the calculation of an R-squared measure of goodness of fit. Instead, several measures that mimic an R-squared calculation have been developed using the log likelihood (LL) of these models. These measures are known as Pseudo R-Squared measures and provide a way to determine relative model fit, as opposed to the absolute fit that R-squared measures provide with OLS results.
Table 7 and Table 8 each present two LL values for the effects-coded and dummy-coded models, respectively. In

each table, there is one LL value for the model specification and another for a model without predictors (a model that includes only a constant for all but one of the alternatives available). Log likelihood is an indicator of the maximum explanatory power of a model. Higher (less negative) LL values are associated with a greater ability of a model to explain the pattern of choices in the data. Although LL values alone cannot be used as a measure of model fit because they are a function of sample size, they can be used to calculate goodness-of-fit measures such as the likelihood ratio chi-square test and McFadden's Pseudo R-Squared, which are commonly reported in software packages.

The likelihood ratio chi-square test determines significance based on a chi-squared distribution and provides a way to determine whether the fit of a model is significantly improved by the attribute-level variables relative to a model without any of these variables, in essence providing users with a test equivalent to an F-test in OLS regressions; that is, it indicates whether one or more of the preference weights is expected to be different from zero. The test statistic is calculated as follows:

Likelihood Ratio Χ² = 2(LL of model - LL of model without predictors)    [8]

The likelihood ratio statistic has a chi-squared distribution with degrees of freedom equal to the number of preference weights estimated in the model; a statistic that exceeds the critical value of that distribution indicates that the attribute-level variables significantly improve fit. Although the calculation above is what is typically provided in software packages, extensions of this test provide a way to test for improvements in fit between an unrestricted model (one that estimates all preference weights) and a restricted model (one that forces one or more preference weights to be zero).
In that case, the test statistic is calculated as follows:

Likelihood Ratio Χ² = 2(LL of unrestricted model - LL of restricted model)    [9]

with degrees of freedom equal to the number of restrictions (preference-weight estimates forced to be zero). For example, an unrestricted model may be one that estimates preference weights for all attributes in a DCE, whereas the restricted model forces the weights for one of the attributes to be zero (in essence eliminating the attribute from the analysis). In such a case, the likelihood ratio chi-square test informs the user whether estimating the weights for the eliminated attribute improves model fit in a statistically significant way.
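The likelihood ratio test in Equations [8] and [9] amounts to the following. The log likelihood values here are hypothetical, not the report's; 3.84 is the familiar 95th-percentile chi-square critical value for 1 degree of freedom.

```python
def likelihood_ratio_stat(ll_unrestricted, ll_restricted):
    """Likelihood ratio chi-square statistic from Equation [9]:
    2 * (LL of unrestricted model - LL of restricted model)."""
    return 2.0 * (ll_unrestricted - ll_restricted)

# Hypothetical log likelihoods: restricting a single preference weight
# to zero gives 1 degree of freedom for the chi-square comparison
lr = likelihood_ratio_stat(ll_unrestricted=-1200.0, ll_restricted=-1208.5)
CHI2_CRIT_95_DF1 = 3.84  # 95th-percentile chi-square critical value, 1 df
restriction_rejected = lr > CHI2_CRIT_95_DF1
```

When `restriction_rejected` is true, the restricted model fits significantly worse, so the eliminated preference weight carries information about choices.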

McFadden's Pseudo R-Squared is calculated using the following formula:

McFadden's Pseudo R² = 1 - (LL of model / LL of model without predictors)    [10]

McFadden's Pseudo R-Squared can be zero if all the preference weights are constrained to be zero, but the measure can never reach 1. McFadden's Pseudo R-Squared is identical for the effects-coded model in Table 7 and the dummy-coded model in Table 8; the two models have exactly the same goodness of fit, indicating that both explain choices equally well. Although McFadden's Pseudo R-Squared provides a measure of relative (rather than absolute) model fit, a value between 0.2 and 0.4 can be considered a good model fit (McFadden, 1978).

Other measures of model fit that are commonly provided by statistical packages are the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Both criteria are specified as -2LL + Kγ, where LL is the log likelihood of the full model, K is the number of parameter estimates, and γ is a penalty constant that differs between AIC (γ = 2) and BIC (γ = ln[sample size]). The premise of the calculations behind these measures is different from that of Pseudo R-Squared measures: AIC and BIC evaluate the plausibility of the models by focusing on minimizing information loss, rather than evaluating improvements in the adequacy of the model to explain the responses in the data, as Pseudo R-Squared measures do. Also, contrary to Pseudo R-Squared measures, models with lower AIC and BIC values are preferred over models with higher values. As with McFadden's Pseudo R-Squared, the AIC and BIC measures are identical across the models in Table 7 and Table 8.

The Multinomial Logit Model with a Continuous Variable

An alternative specification of the MNL model as applied to the archetypal case is presented in Table 9.
In this specification, all attribute levels are effects-coded except for the efficacy attribute, which was coded as a linear continuous variable taking the efficacy values shown (3, 5, and 10) in each alternative. Because only one efficacy variable was necessary to characterize treatment efficacy in the model, the output in Table 9 has only one preference-weight estimate for the efficacy attribute. This preference weight represents the relative marginal utility of a 1-unit change in the efficacy outcome. All changes in efficacy of more than one unit are assumed to be proportional to a 1-unit change. That is, the effect of a 1-unit change in efficacy on utility is assumed to be linear over the range of levels of this attribute included in the experiment. For example, the

utility change resulting from a 9-unit change in the efficacy measure is assumed to be 9 times the marginal utility of a 1-unit change in the efficacy measure. Two pieces of evidence suggest that this specification may be inappropriate given the data in this experiment. First, the effects-coded and dummy-variable coded models suggest that the relative marginal utility of a 1-unit change in the efficacy measure is greater when the change in efficacy is between 3 (Level 3) and 5 (Level 2) than when the change in efficacy is between 5 (Level 2) and 10 (Level 1). This is determined because the difference in the relative preference weights between levels 1 and 2 of the efficacy attribute is 0.24 (implying a 0.05 change in preference weight per 1-unit change in efficacy over this range of levels), whereas the difference in the relative preference weights between levels 2 and 3 of the efficacy attribute is 0.30 (implying a 0.15 change in preference weight per 1-unit change in efficacy over this range of levels). The second indication that the continuous specification of efficacy may be incorrect is the difference in goodness of fit between the model in which the efficacy attribute is modeled as a linear continuous variable and the models in which the efficacy attribute is modeled as a categorical variable: McFadden's Pseudo R-Squared is lower for the continuous specification. The reduction in model goodness of fit suggests that the proportionality assumption imposed by the marginal effect of the efficacy attribute does not fit the data as well as the categorical preference weights in the effects-coded and dummy-variable coded models.
In addition, the calculated AIC and BIC results suggest that the linear continuous coding of efficacy is less appropriate than the previous models: both measures are higher (i.e., less desirable) for the model with a continuous specification for efficacy than the equivalent measures calculated for the effects-coded and dummy-coded models.

Table 9. Example Results from a Multinomial Logit Model with Continuous Efficacy
[The table reports a coefficient, standard error, t-value, and p-value for the marginal effect of Efficacy (reported as 0.07, p < 0.01) and for levels L1 and L2 of Side Effect and Mode of Administration, along with the log likelihood of the model; the remaining numeric entries were not preserved in this transcription.]
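The fit measures discussed in this section can be computed directly from the log likelihood values a software package reports. The sketch below uses hypothetical log likelihoods (not the report's) to compare a categorical efficacy specification (six preference weights) with a linear one (five), mirroring the comparison made above.

```python
import math

def mcfadden_r2(ll_model, ll_null):
    """McFadden's Pseudo R-squared, Equation [10]."""
    return 1.0 - ll_model / ll_null

def aic(ll, k):
    """Akaike Information Criterion: -2LL + 2K."""
    return -2.0 * ll + 2.0 * k

def bic(ll, k, n):
    """Bayesian Information Criterion: -2LL + K * ln(n)."""
    return -2.0 * ll + k * math.log(n)

# Hypothetical log likelihoods: a categorical efficacy specification
# (k = 6 preference weights) vs. a linear one (k = 5)
ll_null, n_obs = -1386.3, 2000
ll_categorical, ll_linear = -1100.0, -1140.0

r2_cat = mcfadden_r2(ll_categorical, ll_null)
r2_lin = mcfadden_r2(ll_linear, ll_null)
# Lower AIC (and BIC) means better fit; in this illustration the extra
# parameter of the categorical model is worth its penalty
categorical_preferred = aic(ll_categorical, 6) < aic(ll_linear, 5)
```

Because AIC and BIC penalize parameters differently, the two criteria can disagree in borderline cases; BIC's ln(n) penalty favors more parsimonious specifications as the sample grows.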


More information

THE APPLICATION OF ORDINAL LOGISTIC HEIRARCHICAL LINEAR MODELING IN ITEM RESPONSE THEORY FOR THE PURPOSES OF DIFFERENTIAL ITEM FUNCTIONING DETECTION

THE APPLICATION OF ORDINAL LOGISTIC HEIRARCHICAL LINEAR MODELING IN ITEM RESPONSE THEORY FOR THE PURPOSES OF DIFFERENTIAL ITEM FUNCTIONING DETECTION THE APPLICATION OF ORDINAL LOGISTIC HEIRARCHICAL LINEAR MODELING IN ITEM RESPONSE THEORY FOR THE PURPOSES OF DIFFERENTIAL ITEM FUNCTIONING DETECTION Timothy Olsen HLM II Dr. Gagne ABSTRACT Recent advances

More information

Investigating the robustness of the nonparametric Levene test with more than two groups

Investigating the robustness of the nonparametric Levene test with more than two groups Psicológica (2014), 35, 361-383. Investigating the robustness of the nonparametric Levene test with more than two groups David W. Nordstokke * and S. Mitchell Colp University of Calgary, Canada Testing

More information

An Introduction to Bayesian Statistics

An Introduction to Bayesian Statistics An Introduction to Bayesian Statistics Robert Weiss Department of Biostatistics UCLA Fielding School of Public Health robweiss@ucla.edu Sept 2015 Robert Weiss (UCLA) An Introduction to Bayesian Statistics

More information

Regression CHAPTER SIXTEEN NOTE TO INSTRUCTORS OUTLINE OF RESOURCES

Regression CHAPTER SIXTEEN NOTE TO INSTRUCTORS OUTLINE OF RESOURCES CHAPTER SIXTEEN Regression NOTE TO INSTRUCTORS This chapter includes a number of complex concepts that may seem intimidating to students. Encourage students to focus on the big picture through some of

More information

MODEL SELECTION STRATEGIES. Tony Panzarella

MODEL SELECTION STRATEGIES. Tony Panzarella MODEL SELECTION STRATEGIES Tony Panzarella Lab Course March 20, 2014 2 Preamble Although focus will be on time-to-event data the same principles apply to other outcome data Lab Course March 20, 2014 3

More information

Causal Mediation Analysis with the CAUSALMED Procedure

Causal Mediation Analysis with the CAUSALMED Procedure Paper SAS1991-2018 Causal Mediation Analysis with the CAUSALMED Procedure Yiu-Fai Yung, Michael Lamm, and Wei Zhang, SAS Institute Inc. Abstract Important policy and health care decisions often depend

More information

A Critique of Two Methods for Assessing the Nutrient Adequacy of Diets

A Critique of Two Methods for Assessing the Nutrient Adequacy of Diets CARD Working Papers CARD Reports and Working Papers 6-1991 A Critique of Two Methods for Assessing the Nutrient Adequacy of Diets Helen H. Jensen Iowa State University, hhjensen@iastate.edu Sarah M. Nusser

More information

Today: Binomial response variable with an explanatory variable on an ordinal (rank) scale.

Today: Binomial response variable with an explanatory variable on an ordinal (rank) scale. Model Based Statistics in Biology. Part V. The Generalized Linear Model. Single Explanatory Variable on an Ordinal Scale ReCap. Part I (Chapters 1,2,3,4), Part II (Ch 5, 6, 7) ReCap Part III (Ch 9, 10,

More information

DRAFT (Final) Concept Paper On choosing appropriate estimands and defining sensitivity analyses in confirmatory clinical trials

DRAFT (Final) Concept Paper On choosing appropriate estimands and defining sensitivity analyses in confirmatory clinical trials DRAFT (Final) Concept Paper On choosing appropriate estimands and defining sensitivity analyses in confirmatory clinical trials EFSPI Comments Page General Priority (H/M/L) Comment The concept to develop

More information

SUMMER 2011 RE-EXAM PSYF11STAT - STATISTIK

SUMMER 2011 RE-EXAM PSYF11STAT - STATISTIK SUMMER 011 RE-EXAM PSYF11STAT - STATISTIK Full Name: Årskortnummer: Date: This exam is made up of three parts: Part 1 includes 30 multiple choice questions; Part includes 10 matching questions; and Part

More information

Adjusting for mode of administration effect in surveys using mailed questionnaire and telephone interview data

Adjusting for mode of administration effect in surveys using mailed questionnaire and telephone interview data Adjusting for mode of administration effect in surveys using mailed questionnaire and telephone interview data Karl Bang Christensen National Institute of Occupational Health, Denmark Helene Feveille National

More information

Addressing elimination and selection by aspects decision. rules in discrete choice experiments: does it matter?

Addressing elimination and selection by aspects decision. rules in discrete choice experiments: does it matter? Addressing elimination and selection by aspects decision rules in discrete choice experiments: does it matter? Seda Erdem Economics Division, Stirling Management School, University of Stirling, UK. E-mail:seda.erdem@stir.ac.uk

More information

Use of the logit scaling approach to test for rank-order and fatigue effects in stated preference data

Use of the logit scaling approach to test for rank-order and fatigue effects in stated preference data Transportation 21: 167-184, 1994 9 1994 KluwerAcademic Publishers. Printed in the Netherlands. Use of the logit scaling approach to test for rank-order and fatigue effects in stated preference data MARK

More information

Confidence Intervals On Subsets May Be Misleading

Confidence Intervals On Subsets May Be Misleading Journal of Modern Applied Statistical Methods Volume 3 Issue 2 Article 2 11-1-2004 Confidence Intervals On Subsets May Be Misleading Juliet Popper Shaffer University of California, Berkeley, shaffer@stat.berkeley.edu

More information

What is a Special Interest Group (SIG)?

What is a Special Interest Group (SIG)? PATIENT REPORTED OUTCOMES RESEARCH SPECIAL INTEREST GROUP Open Meeting 20 th Annual European Congress November 6, 2017 Glasgow, Scotland What is a Special Interest Group (SIG)? Member-driven group interested

More information

Political Science 15, Winter 2014 Final Review

Political Science 15, Winter 2014 Final Review Political Science 15, Winter 2014 Final Review The major topics covered in class are listed below. You should also take a look at the readings listed on the class website. Studying Politics Scientifically

More information

Selection and Combination of Markers for Prediction

Selection and Combination of Markers for Prediction Selection and Combination of Markers for Prediction NACC Data and Methods Meeting September, 2010 Baojiang Chen, PhD Sarah Monsell, MS Xiao-Hua Andrew Zhou, PhD Overview 1. Research motivation 2. Describe

More information

11/24/2017. Do not imply a cause-and-effect relationship

11/24/2017. Do not imply a cause-and-effect relationship Correlational research is used to describe the relationship between two or more naturally occurring variables. Is age related to political conservativism? Are highly extraverted people less afraid of rejection

More information

Chapter 11: Advanced Remedial Measures. Weighted Least Squares (WLS)

Chapter 11: Advanced Remedial Measures. Weighted Least Squares (WLS) Chapter : Advanced Remedial Measures Weighted Least Squares (WLS) When the error variance appears nonconstant, a transformation (of Y and/or X) is a quick remedy. But it may not solve the problem, or it

More information

Psychology, 2010, 1: doi: /psych Published Online August 2010 (

Psychology, 2010, 1: doi: /psych Published Online August 2010 ( Psychology, 2010, 1: 194-198 doi:10.4236/psych.2010.13026 Published Online August 2010 (http://www.scirp.org/journal/psych) Using Generalizability Theory to Evaluate the Applicability of a Serial Bayes

More information

P E R S P E C T I V E S

P E R S P E C T I V E S PHOENIX CENTER FOR ADVANCED LEGAL & ECONOMIC PUBLIC POLICY STUDIES Revisiting Internet Use and Depression Among the Elderly George S. Ford, PhD June 7, 2013 Introduction Four years ago in a paper entitled

More information

Study Guide #2: MULTIPLE REGRESSION in education

Study Guide #2: MULTIPLE REGRESSION in education Study Guide #2: MULTIPLE REGRESSION in education What is Multiple Regression? When using Multiple Regression in education, researchers use the term independent variables to identify those variables that

More information

MEA DISCUSSION PAPERS

MEA DISCUSSION PAPERS Inference Problems under a Special Form of Heteroskedasticity Helmut Farbmacher, Heinrich Kögel 03-2015 MEA DISCUSSION PAPERS mea Amalienstr. 33_D-80799 Munich_Phone+49 89 38602-355_Fax +49 89 38602-390_www.mea.mpisoc.mpg.de

More information

6. Unusual and Influential Data

6. Unusual and Influential Data Sociology 740 John ox Lecture Notes 6. Unusual and Influential Data Copyright 2014 by John ox Unusual and Influential Data 1 1. Introduction I Linear statistical models make strong assumptions about the

More information

THE ROLE OF LABELLING IN CONSUMERS FUNCTIONAL FOOD CHOICES. Ning-Ning (Helen) Zou. Jill E. Hobbs

THE ROLE OF LABELLING IN CONSUMERS FUNCTIONAL FOOD CHOICES. Ning-Ning (Helen) Zou. Jill E. Hobbs THE ROLE OF LABELLING IN CONSUMERS FUNCTIONAL FOOD CHOICES Ning-Ning (Helen) Zou Jill E. Hobbs Department of Bioresource Policy, Business & Economics University of Saskatchewan, Canada Corresponding author:

More information

Doing Quantitative Research 26E02900, 6 ECTS Lecture 6: Structural Equations Modeling. Olli-Pekka Kauppila Daria Kautto

Doing Quantitative Research 26E02900, 6 ECTS Lecture 6: Structural Equations Modeling. Olli-Pekka Kauppila Daria Kautto Doing Quantitative Research 26E02900, 6 ECTS Lecture 6: Structural Equations Modeling Olli-Pekka Kauppila Daria Kautto Session VI, September 20 2017 Learning objectives 1. Get familiar with the basic idea

More information

The COMPASs Study: Community Preferences for Prostate cancer Screening. Protocol for a quantitative preference study

The COMPASs Study: Community Preferences for Prostate cancer Screening. Protocol for a quantitative preference study Open Access To cite: Howard K, Salkeld GP, Mann GJ, et al. The COMPASs Study: Community Preferences for Prostate Cancer Screening. Protocol for a quantitative preference study. BMJ Open 2012;2: e000587.

More information

A critical look at the use of SEM in international business research

A critical look at the use of SEM in international business research sdss A critical look at the use of SEM in international business research Nicole F. Richter University of Southern Denmark Rudolf R. Sinkovics The University of Manchester Christian M. Ringle Hamburg University

More information

Analysis of Environmental Data Conceptual Foundations: En viro n m e n tal Data

Analysis of Environmental Data Conceptual Foundations: En viro n m e n tal Data Analysis of Environmental Data Conceptual Foundations: En viro n m e n tal Data 1. Purpose of data collection...................................................... 2 2. Samples and populations.......................................................

More information

Comparing multiple proportions

Comparing multiple proportions Comparing multiple proportions February 24, 2017 psych10.stanford.edu Announcements / Action Items Practice and assessment problem sets will be posted today, might be after 5 PM Reminder of OH switch today

More information

How many speakers? How many tokens?:

How many speakers? How many tokens?: 1 NWAV 38- Ottawa, Canada 23/10/09 How many speakers? How many tokens?: A methodological contribution to the study of variation. Jorge Aguilar-Sánchez University of Wisconsin-La Crosse 2 Sample size in

More information

A review of statistical methods in the analysis of data arising from observer reliability studies (Part 11) *

A review of statistical methods in the analysis of data arising from observer reliability studies (Part 11) * A review of statistical methods in the analysis of data arising from observer reliability studies (Part 11) * by J. RICHARD LANDIS** and GARY G. KOCH** 4 Methods proposed for nominal and ordinal data Many

More information

ANOVA in SPSS (Practical)

ANOVA in SPSS (Practical) ANOVA in SPSS (Practical) Analysis of Variance practical In this practical we will investigate how we model the influence of a categorical predictor on a continuous response. Centre for Multilevel Modelling

More information

Instrumental Variables Estimation: An Introduction

Instrumental Variables Estimation: An Introduction Instrumental Variables Estimation: An Introduction Susan L. Ettner, Ph.D. Professor Division of General Internal Medicine and Health Services Research, UCLA The Problem The Problem Suppose you wish to

More information

Logistic regression: Why we often can do what we think we can do 1.

Logistic regression: Why we often can do what we think we can do 1. Logistic regression: Why we often can do what we think we can do 1. Augst 8 th 2015 Maarten L. Buis, University of Konstanz, Department of History and Sociology maarten.buis@uni.konstanz.de All propositions

More information

Sawtooth Software. The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? RESEARCH PAPER SERIES

Sawtooth Software. The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? RESEARCH PAPER SERIES Sawtooth Software RESEARCH PAPER SERIES The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? Dick Wittink, Yale University Joel Huber, Duke University Peter Zandan,

More information

Measuring Goodness of Fit for the

Measuring Goodness of Fit for the Measuring Goodness of Fit for the Double-Bounded Logit Model Barbara J. Kanninen and M. Sami Khawaja The traditional approaches of measuring goodness of fit are shown to be inappropriate in the case of

More information

WELCOME! Lecture 11 Thommy Perlinger

WELCOME! Lecture 11 Thommy Perlinger Quantitative Methods II WELCOME! Lecture 11 Thommy Perlinger Regression based on violated assumptions If any of the assumptions are violated, potential inaccuracies may be present in the estimated regression

More information

Estimating drug effects in the presence of placebo response: Causal inference using growth mixture modeling

Estimating drug effects in the presence of placebo response: Causal inference using growth mixture modeling STATISTICS IN MEDICINE Statist. Med. 2009; 28:3363 3385 Published online 3 September 2009 in Wiley InterScience (www.interscience.wiley.com).3721 Estimating drug effects in the presence of placebo response:

More information

Title: The Theory of Planned Behavior (TPB) and Texting While Driving Behavior in College Students MS # Manuscript ID GCPI

Title: The Theory of Planned Behavior (TPB) and Texting While Driving Behavior in College Students MS # Manuscript ID GCPI Title: The Theory of Planned Behavior (TPB) and Texting While Driving Behavior in College Students MS # Manuscript ID GCPI-2015-02298 Appendix 1 Role of TPB in changing other behaviors TPB has been applied

More information

In this chapter, we discuss the statistical methods used to test the viability

In this chapter, we discuss the statistical methods used to test the viability 5 Strategy for Measuring Constructs and Testing Relationships In this chapter, we discuss the statistical methods used to test the viability of our conceptual models as well as the methods used to test

More information

STATISTICS INFORMED DECISIONS USING DATA

STATISTICS INFORMED DECISIONS USING DATA STATISTICS INFORMED DECISIONS USING DATA Fifth Edition Chapter 4 Describing the Relation between Two Variables 4.1 Scatter Diagrams and Correlation Learning Objectives 1. Draw and interpret scatter diagrams

More information

Scale Building with Confirmatory Factor Analysis

Scale Building with Confirmatory Factor Analysis Scale Building with Confirmatory Factor Analysis Latent Trait Measurement and Structural Equation Models Lecture #7 February 27, 2013 PSYC 948: Lecture #7 Today s Class Scale building with confirmatory

More information

Understanding Uncertainty in School League Tables*

Understanding Uncertainty in School League Tables* FISCAL STUDIES, vol. 32, no. 2, pp. 207 224 (2011) 0143-5671 Understanding Uncertainty in School League Tables* GEORGE LECKIE and HARVEY GOLDSTEIN Centre for Multilevel Modelling, University of Bristol

More information

Objective: To describe a new approach to neighborhood effects studies based on residential mobility and demonstrate this approach in the context of

Objective: To describe a new approach to neighborhood effects studies based on residential mobility and demonstrate this approach in the context of Objective: To describe a new approach to neighborhood effects studies based on residential mobility and demonstrate this approach in the context of neighborhood deprivation and preterm birth. Key Points:

More information

Chapter 1. Introduction

Chapter 1. Introduction Chapter 1 Introduction 1.1 Motivation and Goals The increasing availability and decreasing cost of high-throughput (HT) technologies coupled with the availability of computational tools and data form a

More information

Stepwise method Modern Model Selection Methods Quantile-Quantile plot and tests for normality

Stepwise method Modern Model Selection Methods Quantile-Quantile plot and tests for normality Week 9 Hour 3 Stepwise method Modern Model Selection Methods Quantile-Quantile plot and tests for normality Stat 302 Notes. Week 9, Hour 3, Page 1 / 39 Stepwise Now that we've introduced interactions,

More information

12/30/2017. PSY 5102: Advanced Statistics for Psychological and Behavioral Research 2

12/30/2017. PSY 5102: Advanced Statistics for Psychological and Behavioral Research 2 PSY 5102: Advanced Statistics for Psychological and Behavioral Research 2 Selecting a statistical test Relationships among major statistical methods General Linear Model and multiple regression Special

More information

Correlational Research. Correlational Research. Stephen E. Brock, Ph.D., NCSP EDS 250. Descriptive Research 1. Correlational Research: Scatter Plots

Correlational Research. Correlational Research. Stephen E. Brock, Ph.D., NCSP EDS 250. Descriptive Research 1. Correlational Research: Scatter Plots Correlational Research Stephen E. Brock, Ph.D., NCSP California State University, Sacramento 1 Correlational Research A quantitative methodology used to determine whether, and to what degree, a relationship

More information

MMI 409 Spring 2009 Final Examination Gordon Bleil. 1. Is there a difference in depression as a function of group and drug?

MMI 409 Spring 2009 Final Examination Gordon Bleil. 1. Is there a difference in depression as a function of group and drug? MMI 409 Spring 2009 Final Examination Gordon Bleil Table of Contents Research Scenario and General Assumptions Questions for Dataset (Questions are hyperlinked to detailed answers) 1. Is there a difference

More information

Using the Rasch Modeling for psychometrics examination of food security and acculturation surveys

Using the Rasch Modeling for psychometrics examination of food security and acculturation surveys Using the Rasch Modeling for psychometrics examination of food security and acculturation surveys Jill F. Kilanowski, PhD, APRN,CPNP Associate Professor Alpha Zeta & Mu Chi Acknowledgements Dr. Li Lin,

More information

Correlation and regression

Correlation and regression PG Dip in High Intensity Psychological Interventions Correlation and regression Martin Bland Professor of Health Statistics University of York http://martinbland.co.uk/ Correlation Example: Muscle strength

More information

A Discrete Choice Experiment Investigating Preferences for Funding Drugs Used to Treat Orphan Diseases

A Discrete Choice Experiment Investigating Preferences for Funding Drugs Used to Treat Orphan Diseases CHEPA WORKING PAPER SERIES Paper 10-01 A Discrete Choice Experiment Investigating Preferences for Funding Drugs Used to Treat Orphan Diseases January 22, 2010 Emmanouil Mentzakis a,, Patricia Stefanowska

More information