COMPARING PLS TO REGRESSION AND LISREL: A RESPONSE TO MARCOULIDES, CHIN, AND SAUNDERS 1


ISSUES AND OPINIONS

Dale L. Goodhue
Terry College of Business, MIS Department, University of Georgia, Athens, GA U.S.A. {dgoodhue@terry.uga.edu}

William Lewis
{william.w.lewis@gmail.com}

Ron Thompson
Schools of Business, Wake Forest University, Winston-Salem, NC U.S.A. {thompsrl@wfu.edu}

In the Foreword to an MIS Quarterly Special Issue on PLS, the senior editors for the special issue noted that they rejected a number of papers because the authors attempted comparisons between results from PLS, multiple regression, and structural equation modeling (Marcoulides et al. 2009). They raised several issues they argued had to be taken into account to have legitimate comparison studies, supporting their position primarily by citing three authors: Dijkstra (1983), McDonald (1996), and Schneeweiss (1993). As researchers interested in conducting comparison studies, we read the Foreword carefully, but found it did not provide clear guidance on how to conduct legitimate comparisons. Nor did our reading of Dijkstra, McDonald, and Schneeweiss raise any red flags about dangers in this kind of comparison research. We were concerned that instead of helping researchers to successfully engage in comparison research, the Foreword might end up discouraging that type of work, and might even be used incorrectly to reject legitimate comparison studies. This Issues and Opinions piece addresses the question of why one might conduct comparison studies, and gives an overview of the process of comparison research with a focus on what is required to make those comparisons legitimate. In addition, we explicitly address the issues raised by Marcoulides et al., to explore where they might (or might not) come into play when conducting or evaluating this type of study.

Keywords: Comparing statistical techniques, partial least squares, structural equation modeling, regression, Monte Carlo simulation

1 Ron Cenfetelli was the accepting senior editor for this paper. Geneviève Bassellier served as the associate editor. The appendices for this paper are located in the Online Supplements section of the MIS Quarterly's website.

Introduction

In Information Systems research, partial least squares (PLS) is a frequently used statistical technique2 for testing research models involving constructs measured with multiple indicators. PLS's popularity is due at least in part to the fact that it is believed to be an easy-to-use approach that has advantages over other statistical techniques such as regression and covariance-based structural equation modeling (CB-SEM) under some frequently encountered conditions (e.g., small sample sizes, non-normally distributed data, and formative measurement). Of 54 articles using PLS in three top IS journals3 during the period of 2006 to 2008 (inclusive), over 50 percent noted that PLS had special abilities with regard to small sample sizes and/or non-normal data distributions. Four of these 54 papers stated that PLS had these special abilities without including any supporting citations, suggesting that this belief is widespread enough that it is seen as no longer needing support from the literature.

2 One short note about terminology is in order. The terms "statistical technique," "statistical model," "statistical method," and "statistical approach" are variously used by different authors at different times to refer to PLS, regression, or CB-SEM, etc. The term "model" is also used to refer to a display of constructs, causal paths between constructs, and measurement indicators of those constructs. Except for quotes from one author, we will use "statistical technique" to refer to PLS, regression, and CB-SEM, and we will use "model" to refer to a representation of constructs and causal paths.

3 Information Systems Research, Journal of Management Information Systems, and MIS Quarterly.

Recently, however, there have been studies that questioned PLS's special abilities (Goodhue et al. 2006; Hwang et al. 2010). These studies used Monte Carlo simulations to compare the efficacy of PLS with that of regression and/or CB-SEM. More such articles were submitted to the 2009 MIS Quarterly Special Issue on PLS. In their Foreword to that Special Issue, the three senior editors (Marcoulides, Chin, and Saunders; hereinafter MCS) noted that of the 20 articles that were considered for the MIS Quarterly special issue, two were accepted and one was conditionally accepted.4 They went on to state that the sticking point for the third paper (that was subsequently withdrawn) "and for that matter a good number of other rejected papers, related to attempts at comparing the results of PLS, SEM and multiple regression" (p. 171).

4 In the interest of full disclosure, we are the authors of the paper that was conditionally accepted. We later withdrew our paper rather than comply with the request by the senior editors that we remove the comparisons of results across statistical techniques.

In a section of their Foreword titled "Comparison Across Methods," MCS discussed several issues that they stated must be kept in mind when attempting such comparisons, with the most prominent of these being the criticality of "correct parameterization." The implication in the Foreword was that the rejected papers did not successfully attend to at least some of these issues, and thus their results were invalid or potentially misleading. As researchers who are very interested in this general domain, we read the Comparison Across Methods section of the Foreword carefully. In particular, we sought to understand what MCS meant by the term correct parameterization.
Ultimately, we were unable to determine what specifically MCS were suggesting should be done to have a legitimate comparison between PLS and other statistical techniques such as CB-SEM. It seemed to us that this Foreword, with its strong statement about the difficulty of comparing PLS with other statistical techniques, coupled with the rejection of papers that attempted comparisons but no specifics about exactly how to do a correct comparison, would have a chilling effect on anyone contemplating such comparison research. This could mean that claims of PLS's special abilities, which we believe have been incompletely tested for so long, might continue to go untested. In our opinion, this would be a very undesirable situation for the IS field, which has been such a prominent champion of PLS. In addition, there are many other legitimate reasons for conducting studies that compare the efficacy of different statistical techniques under a variety of research conditions.

After a careful reading of both the Foreword and three of the articles upon which MCS lean most heavily in presenting their arguments, we have come to the conclusion that the Comparison Across Methods section of the Foreword is not very helpful, and may in fact add to confusion rather than reduce it. Although on the surface it seems to provide strong reasons for questioning the validity of comparison research, a careful reading shows that it offers little in either guidance to researchers or clarity to reviewers seeking to evaluate comparison research. Nonetheless, the Foreword has already been used by at least one reviewer as justification for recommending rejection of a comparison paper submitted to a top-tier IS journal.

In this paper, we first present a broader perspective on the goals researchers might have in such comparisons, and why the Monte Carlo simulation approach can be useful in this context. We then look more closely at the specific issues that are raised when one compares PLS, regression, and CB-SEM.

Because of its prominence in the MCS Foreword, we consider three possible meanings of the term parameterization in this context, and how those might relate to the legitimacy of comparisons. As a way to further illustrate these issues, we look at an actual example (comparing these three statistical techniques to determine their efficacy in handling smaller sample sizes) and discuss within that context the issues and choices researchers might have to deal with as they seek to make legitimate comparisons. We then turn to the specific arguments in the MCS Foreword. We briefly summarize why we feel the Foreword is not very helpful to researchers or reviewers, but defer a more in-depth discussion of each issue raised by MCS to Appendix A. In Appendices B, C, and D, we support the points we make by including sections of papers by three of the authors that MCS cite to support their position, along with excerpts from our e-mail exchanges with those authors.

Our goal with this Issues and Opinions piece is to clarify what researchers doing this type of comparison research are seeking to do, and what is necessary for the comparisons to be legitimate. In short, we wish to provide guidance to researchers wishing to engage in comparison research, and clarity to reviewers evaluating comparison papers.

What Is Required for Effective, Legitimate Comparisons Between Statistical Techniques?

A Broader Perspective: What Is a Legitimate Comparison?

We first look at the comparison issue from a broader perspective. Consider the theory-testing process in which our statistical techniques are often used in behavioral research, as shown in Figure 1. Behavioral researchers study some underlying reality, and seek to understand it. To put boundaries on our work, we are specifically interested in situations where researchers develop constructs from that underlying reality, and hypothesize causal relationships between those constructs. We will call this set of constructs and relationships the structural model. In addition, we are specifically interested in situations where researchers use multiple indicators to measure each of the constructs. We will refer to the set of measures for all constructs, and the proposed relationships between indicators and constructs, as the measurement model. We will also refer to the combination of the structural and measurement models as the research model.

Given a research model, the researcher next obtains data with which to test it. We will focus on situations where the data are obtained through responses from survey participants. For this type of data, as is typical for much IS research, there is the presumption that indicators contain measurement error, and therefore constructs are measured with error. As shown in Figure 1, to this point (before the choice of a statistical technique to enable the testing of hypotheses) the research process is essentially the same regardless of which statistical technique (PLS, regression, CB-SEM, etc.) will ultimately be used in the analysis. There is the same underlying reality, the same research model, and the same data collected.

At this point researchers have (at least) three possible choices as options for a statistical technique, as noted by McDonald (1996): latent variable analysis (a CB-SEM technique such as LISREL), analysis with equally weighted composites (such as regression), or analysis with composites using optimized weights (such as PLS). Note that statistical techniques using composites and those using latent variables are quite distinct in the way in which they treat the relationship between indicator variables and the underlying constructs.5
5 A composite is an exact linear combination of indicator scores used to represent an underlying construct. The indicator scores might be equally weighted, or the weights might be chosen by some other means. In a latent variable model, the indicators are presumed to reflect the value of the underlying construct, but with error.

However, both composites and latent variables are intended to represent the same thing: theoretical constructs that are not directly observable. The estimated values of any statistical technique are intended to reflect aspects of the underlying reality (e.g., regression paths might represent causal relationships). As McDonald suggests, "It is reasonable to regard a path model with weighted composites as approximating the path model with latent variables" (p. 264).

Regardless of the choice of statistical technique, the primary purpose of the analysis step is the same for all researchers: (1) to ensure that the measurement model is adequate (in terms of reliability and validity), (2) to generate estimates of the strengths of the paths in the structural model, and (3) to determine the statistical significance of those path estimates. Below the three-way choice in Figure 1, the research process is again essentially the same regardless of which statistical estimation technique was used: assuming that measurement validity is confirmed, the estimates for path strength and statistical significance are evaluated, and conclusions are drawn about support (or lack thereof) for the hypothesized relationships.
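To make the distinction in footnote 5 concrete, the two measurement assumptions can be written side by side (the notation here is ours, not McDonald's). In a latent variable model each indicator reflects an unobserved construct with error, while a composite is built directly from the indicator scores:

    x_{ij} = \lambda_j \xi_i + \epsilon_{ij}    (latent variable model: construct \xi_i is never directly observed)
    \hat{\xi}_i = \sum_j w_j x_{ij}             (composite: weights w_j equal for regression, data-dependent for PLS)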

[Figure 1. The Choice of Statistical Techniques in the Context of Most Behavioral Research. The figure shows the research process as a vertical sequence of boxes: Underlying Reality; Research Model: Constructs, Hypothesized Paths Between Constructs; and Data: Multiple Indicator Measures with Error (these are the same regardless of the statistical technique chosen). Researchers then have a choice of statistical technique: (a) Latent Variables (CB-SEM), (b) Composites with Equal Weights (Regression), or (c) Composites with Data-Dependent Weights (PLS). The final boxes, Evaluate Resulting Path Estimates, Statistical Significance and Conclusions About Path Values, Support for Hypotheses, are again handled the same regardless of the technique chosen. In Monte Carlo simulation, comparison of the top box and the bottom two boxes tells us the efficacy of a statistical technique.]

Figure 1 makes clear that the choice of statistical technique is made in the context of a process that starts with the same inputs (research model and data), seeks the same outputs (path strengths and statistical significances), and interprets those outputs in essentially the same way regardless of the statistical technique used. That is, paths with statistically significant values are presumed to represent real causal relationships; the strengths of significant paths are assumed to reflect their relative impact.

If there were differences in the efficacy of different statistical techniques under certain circumstances, it would be very valuable for researchers to have guidance about which statistical technique would be most efficacious for their particular research conditions (their research model, their data, etc.). For example, if PLS were more efficacious at small sample size, researchers would most likely wish to take advantage of this fact. A major goal of research on comparisons between statistical techniques is to provide greater guidance along these lines. Below we will consider two general approaches for such comparisons that have been used in published research, and consider under what circumstances the resulting comparisons are likely to provide legitimate and useful guidance to researchers.

Approach I: Using Actual Field Data

The results from different statistical techniques are sometimes compared using one or more actual datasets drawn from the field (e.g., Barclay et al. 1995; Gefen et al. 2000). Such datasets are analyzed with two or more different statistical techniques, and the path values and statistical significances from the different techniques are compared. Of course each of the statistical techniques compared must be used correctly for the research situation, but these requirements for correct use are essentially the same as for any published research using any statistical technique.6 Typically, there will be differences in the results from the different statistical techniques because the different techniques use different means to develop their estimates: some path values will be higher in one of the techniques; some statistical significances will be stronger. Conclusions about the relative efficacy of the compared statistical techniques might be drawn from those differences.

6 Beyond this, it is certainly important that researchers explain the choices made for any options that might be available for a given method (e.g., for CB-SEM analysis, was maximum likelihood, generalized least squares, or some other criterion used for optimization?).

There are two important limitations to drawing conclusions from this field data comparison approach. The first is that there is always the concern that differences in results could be due to random peculiarities of a particular dataset. Might some other dataset collected from the same population reverse the conclusions? This problem can be partially addressed by using multiple datasets, but using multiple datasets presents additional research difficulties, and doesn't completely eliminate the concern unless a very large number of datasets is used.

A second, more intractable problem can be understood by looking at Figure 1. With field or experimental data, we don't know what the underlying reality actually is (e.g., what the true path values of the structural model are), and so we can't compare our statistical results with the correct results. All we can do is note which statistical technique gave the highest or lowest path estimate for a particular path, or which path estimate had the highest or lowest statistical significance. Even if one statistical technique found a higher path estimate than another technique, it still isn't clear that the higher estimate is closer to the true, underlying value. If one statistical technique detected a significant path where the other did not, does the relationship really exist or not?

Our assessment is that such comparisons using field data may be interesting and can be used to demonstrate various aspects of a statistical technique (e.g., to show how a CB-SEM technique may fail to produce a valid solution if it is underidentified), but they cannot give reliable information about the true efficacy of the different techniques. In short, we do not see this field data comparison approach as providing dependable, useful comparison information to guide researchers in their choice of statistical techniques.

Approach II: Using Monte Carlo Simulation

Monte Carlo simulation has been used for decades to compare statistical techniques under varying conditions (Areskoug 1982; Chin et al. 2003; Chin and Newsted 1999; Goodhue et al. 2007; Hwang et al. 2010; Reinartz et al. 2009). Here we describe how Monte Carlo simulation can be used in this context, and consider the extent to which such comparisons are legitimate.
To understand Monte Carlo simulation in the context of Figure 1, it is helpful to first recognize the different purposes of field studies for hypothesis testing on the one hand, and Monte Carlo simulations for evaluation of statistical techniques on the other. In a field study, we hope to improve our understanding of what the underlying reality is. We assume our statistical technique is efficacious. We gather data from the unknown underlying reality. We analyze the data with our statistical technique. The results give us clues about the underlying reality.

In the type of Monte Carlo simulation discussed here, we hope to improve our understanding of how efficacious a particular statistical technique is. We specify an interesting research question and define one or more underlying reality models appropriate to studying that question. We generate data from that underlying reality. We analyze the data with the selected statistical technique (or techniques), and compare the results with the known correct values. This gives us clues about the efficacy of the statistical technique when used with this type of underlying reality, under the specific conditions that are being examined.

The Monte Carlo Simulation Process in this Context

To be more specific, Monte Carlo simulation involves, first, the specification of a given underlying reality (i.e., the top box in Figure 1). The underlying reality can be designed with any desired characteristics (strong or weak path strengths, reliable or non-reliable indicators, etc.). We then specify a research model that matches the underlying reality exactly: a path diagram with exactly specified paths between constructs, indicator path values, and error terms.

In other words, with Monte Carlo simulation, we start by defining the underlying reality and the research model (the top two boxes of Figure 1). We know the actual path values, etc., at the outset, and the interest is in seeing how well the statistical technique will do in arriving at the correct answers.

In the next phase, numerous datasets are generated (using random number generators) that are based on the specified underlying research model, and that represent simulated responses from survey questionnaires. A typical number might be 500 datasets for each specific condition that is being tested, with an appropriate number of surveys (cases) in each dataset (e.g., n = 100). The next step involves using each of the target statistical techniques (e.g., regression, PLS, or CB-SEM, etc.) to analyze each of the datasets. The resulting parameter estimates (for example, the 500 sets of parameter estimates for each statistical technique to be compared) are then captured and made available for comparison.

As shown on the right side of Figure 1, comparison of the overall results against the known truth of the specified underlying path diagram allows us to determine performance metrics for that particular statistical technique, for example, the frequency of false positives (Type I errors), the frequency of false negatives (Type II errors), the average bias (accuracy) of the path estimates, etc. Because they are based on a large number of datasets, the performance metrics tell us something important about how efficacious each particular statistical technique is in reflecting the true underlying situation. Of course, each statistical technique must be used correctly for the research situation, just as in the context of using actual field data. But this requirement for correct use is essentially the same as for any published research using any statistical technique.

Note that Monte Carlo simulation addresses both weaknesses previously identified for comparisons using field data. By using hundreds of datasets, concern about drawing conclusions from an idiosyncratic dataset is removed. Knowing the true underlying values, meaningful performance metrics can be generated, rather than just statements about which statistical technique found the highest or most significant path.
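To make the simulation loop just described concrete, here is a minimal sketch in Python. It is not our actual simulation code: the model (one true path of .40, one zero path for checking false positives, three reflective indicators per construct with loadings of .8) and the choice to analyze with equally weighted composites plus OLS are illustrative assumptions only; in a full study each of the compared techniques would be run on the same 500 datasets.

    # Minimal Monte Carlo sketch (illustrative assumptions throughout).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    TRUE_BETA, N_DATASETS, N_CASES, N_IND, LOADING = 0.40, 500, 100, 3, 0.8
    ERR_SD = np.sqrt(1 - LOADING**2)   # keeps each simulated indicator at unit variance

    def standardize(v):
        return (v - v.mean()) / v.std()

    def simulate_dataset():
        xi1 = rng.standard_normal(N_CASES)   # exogenous construct with a real effect
        xi2 = rng.standard_normal(N_CASES)   # exogenous construct with a zero path
        eta = TRUE_BETA * xi1 + np.sqrt(1 - TRUE_BETA**2) * rng.standard_normal(N_CASES)
        def indicators(c):                   # three reflective indicators per construct
            return LOADING * c[:, None] + ERR_SD * rng.standard_normal((N_CASES, N_IND))
        return indicators(xi1), indicators(xi2), indicators(eta)

    def composite_ols(x1, x2, y):
        """One candidate technique: equally weighted composites analyzed with OLS."""
        X = np.column_stack([np.ones(N_CASES),
                             standardize(x1.mean(axis=1)),
                             standardize(x2.mean(axis=1))])
        yc = standardize(y.mean(axis=1))
        beta = np.linalg.lstsq(X, yc, rcond=None)[0]
        resid = yc - X @ beta
        df = N_CASES - X.shape[1]
        se = np.sqrt(resid @ resid / df * np.diag(np.linalg.inv(X.T @ X)))
        p = 2 * stats.t.sf(np.abs(beta / se), df)
        return beta[1:], p[1:]               # drop the intercept

    results = [composite_ols(*simulate_dataset()) for _ in range(N_DATASETS)]
    ests = np.array([r[0] for r in results])
    ps = np.array([r[1] for r in results])

    bias = (ests[:, 0].mean() - TRUE_BETA) / TRUE_BETA   # accuracy on the true path
    power = (ps[:, 0] < 0.05).mean()                     # share of 500 datasets detecting it
    type1 = (ps[:, 1] < 0.05).mean()                     # false positives on the zero path
    print(f"bias={bias:+.3f}  power={power:.3f}  Type I={type1:.3f}")

Because equally weighted composites ignore measurement error, the bias reported by this sketch will be negative (attenuation), which is exactly the kind of performance difference such a simulation is designed to surface.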
Design Choices and the Legitimacy of Comparisons Between PLS, Regression, and CB-SEM

Different statistical techniques have different requirements and different options for specifying the input data and setting up the hypothesized research model. In the context of comparing the efficacy of different statistical techniques, we have to be concerned about the degree to which the choices made for each of the different statistical techniques are sufficiently compatible that they are appropriate for the research question being investigated. A key concern in behavioral research is the relative efficacy of the different statistical techniques in terms of their path estimate accuracy, their statistical power, and the extent to which they are subject to false positives.

Correct Parameterization

Along these lines, the term "correct parameterization" appears prominently in the MCS Foreword, with the suggestion that some submitted papers had illegitimate comparisons because of incorrect parameterization. However, the Foreword provides little guidance on what is meant by the term, and the examples that are provided do not clarify it adequately.

In this section of the paper, we consider three possible definitions of the term parameterization and how incorrect parameterization in each of these senses might relate to the legitimacy of comparisons. We then go on to discuss other aspects of the setup for the different statistical techniques, and what other factors might make comparisons appropriate or inappropriate for the research question being asked.

Definition A

One possible meaning of parameterization comes from the idea that statistical techniques assume that elements of a research model are related in certain quantitative ways. For example, it is assumed that each presumed causal path between constructs has some strength that can be estimated, and that there is also a standard deviation about that path estimate that reflects how certain we are of its value. These might both be called parameters. More generally, with Definition A, parameters are those quantitative characteristics of a statistical technique's assumed research model that are either to be estimated or to be prespecified before the estimation technique is carried out.

Using Definition A, correct parameterization might mean that statistical techniques should not be compared unless they have the same parameters (whether estimated or prespecified). Table 1 shows several key parameters used in the three statistical techniques, and whether they are prespecified, estimated, or not used. We see that, although PLS, regression, and CB-SEM share some important parameters (such as estimated path values and path variances between constructs in the first and second rows), they also have some important differences (such as the indicator loadings and indicator weightings in the third and fourth rows).

Table 1. Sample Parameters Used in Statistical Analysis Techniques

| Parameter | PLS (PLS-Graph) | Regression (SAS) | CB-SEM (LISREL) |
| Path values | estimated | estimated | estimated |
| Path variances | estimated | estimated | estimated |
| Indicator weights | estimated | prespecified to be equal | not applicable |
| Indicator loadings | estimated | not applicable | estimated |
| Indicator error variances | often standardized to unity | often standardized to unity | often standardized to unity |
| Indicator error covariances | assumed to be zero | assumed to be zero | can be estimated or prespecified |
| Exogenous construct correlations | estimated | estimated | can be estimated or prespecified |

Underlying these differences is an important fundamental difference between the statistical techniques: the research models assumed by PLS, regression, and CB-SEM are based on very different understandings of the relationship between constructs and their indicators. More specifically, CB-SEM assumes that constructs are latent constructs whose value cannot be known, but whose value is reflected in their indicators, each of which is measured with error. Regression assumes that each construct has a knowable value that is a composite of its equally weighted indicator scores. PLS assumes that constructs have a knowable value that is a weighted composite of selected indicators. Both the PLS algorithm and regression techniques require estimating construct scores. CB-SEM does not, and in fact assumes that construct scores cannot be estimated with the data available. So it is clear at the outset that PLS, regression, and CB-SEM do not all have the same parameters and, in the sense of Definition A, are inherently not identically parameterized.

Definition B

A second, slightly narrower definition of parameterization would be those model characteristics that are actually estimated by the statistical technique, that is, those that are not specified in advance. Therefore, using Definition B, correct parameterization might mean that statistical techniques should not be compared unless they estimate the same parameters. If we used this definition, we might be able to prespecify enough of the quantitative characteristics of the different statistical techniques such that we are left to estimate only parameters that are common across the three techniques.

Returning to Table 1 and the example of the relationship between indicators and constructs, we see some problems this approach would create. Regression has fixed, equal weights from indicators to constructs.7 The only way to ensure weights are equal in PLS is to calculate construct scores in advance, and to use those instead of indicator scores in the PLS analysis. The same could be done in CB-SEM, if we calculated a composite score to use as a single indicator for each construct, and specified no error. Under these circumstances, all three statistical techniques would have the same parameters (à la Definition B). But by assuming equal indicator weights and no measurement error, we would have emasculated both PLS and CB-SEM, and in fact we would get identical results from all three. By using the lowest common denominator, we would have eliminated any special abilities of PLS and CB-SEM, and brought both down to the efficacy level of regression.8

7 There are certainly other ways to come up with specifications for regression weights, but whatever technique is used, to make PLS and CB-SEM equivalent in the sense of Definition B, it would require calculating construct scores for PLS and CB-SEM and ignoring measurement error.

8 Since PLS uses bootstrapping for standard errors, and regression and CB-SEM rely on the theoretical distributions of the parameter estimates, the standard errors might not be identical. We note that all this assumes that the tested path model is recursive.

Definition C

Dijkstra (1983) provides another possible definition of parameterization, and since MCS cite Dijkstra when they mention correct parameterization, it is well to consider that definition.
Dijkstra's definition of correct parameterization is somewhat technical, but in brief it suggests that if the entire population (rather than a sample) were analyzed, a correctly parameterized statistical technique would arrive at the correct population values for the path strengths and other attributes. Dijkstra shows that, within this context, CB-SEM is correctly parameterized, and that PLS is incorrectly parameterized.
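The property Dijkstra invokes here is Fisher consistency, which can be stated compactly (our paraphrase, not Dijkstra's notation). Writing \hat{\theta}(F) for the estimate a technique returns when applied to an entire distribution F, and F_\theta for the population generated by true parameter vector \theta, correct parameterization in this sense requires

    \hat{\theta}(F_\theta) = \theta \quad \text{for every admissible } \theta,

that is, with sampling error removed, the technique recovers the truth exactly. Dijkstra's result is that CB-SEM has this property for latent variable populations while PLS does not.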

Using Definition C (Dijkstra's) and MCS's requirement for correct parameterization, one could never legitimately compare PLS and CB-SEM, since one is correctly parameterized and the other is not. We will address this definition of correct parameterization and its implications in more depth in the next section of the paper, and in Appendix A, Issue 1.

After studying different possible meanings of the term parameterization, we do not see an obvious way to interpret the phrase correct parameterization that would seem to be consistent with the cautions offered by MCS. If by correct parameterization MCS meant identical parameterization, it appears to us that it would be impossible to have any useful comparison across these three statistical techniques. Instead, we believe we should recognize and accept that the different statistical techniques go about generating estimates of path values and path standard deviations differently. Each has a slightly different set of parameters (whether from Definition A, B, or C), and a slightly different way of generating those key results that we need (the estimates of the path values and their statistical significances). It is precisely because (1) they go about it differently and (2) researchers use them under identical circumstances, that comparisons of the efficacy of different statistical techniques under varying situations are of interest to us as researchers.

Appropriate Comparisons

It is certainly possible that a researcher could inadvertently set up two statistical methods such that unintended differences in the setup led to differences in performance metrics, and that these differences were unrelated to the research question being addressed. In this sense, a study with an inappropriate comparison might lead to incorrect conclusions. Whether a particular set of design choices is appropriate depends upon what question the researcher is seeking to answer, and therefore upon the comparison the researcher wishes to make.

An Example of Design Choices for Comparisons Between PLS, Regression, and CB-SEM

To describe the challenges of designing appropriate comparisons of different statistical techniques in more detail, and to illustrate the types of issues that arise when conducting such a study, we present an example of a study that we completed. For readers interested in a tutorial on Monte Carlo simulation of statistical techniques in general, we recommend Paxton et al. (2001). Paxton et al. organize their paper around nine steps in planning and executing a Monte Carlo simulation: (1) developing a theoretically derived research question, (2) creating a valid model, (3) designing specific experimental conditions, (4) choosing values of population parameters, (5) choosing an appropriate software package, (6) executing the simulations, (7) file storage, (8) troubleshooting and verification, and (9) summarizing results. We would particularly emphasize the importance of Paxton et al.'s early design steps 1 through 4. The amount of work required to complete such a comparison study is sufficiently large that it would be quite unappealing to have to revisit steps 1 to 4, and then redo the study, because of nonoptimal initial decisions.

However, the Paxton et al. paper discusses challenges in using Monte Carlo simulation to study a single statistical technique under different conditions. Because our focus was on comparing across different statistical techniques, we faced some slightly different issues and have some different insights to share.
In particular, we will address the concern for what might be called inappropriate or nonequivalent comparisons across different statistical techniques.

The Study Context

In our example study (adapted from Goodhue et al. 2006), we wished to compare the efficacy of three different statistical techniques (regression, PLS, and CB-SEM) under conditions of varying sample size. Given that PLS was used frequently in IS research and was reputed to have advantages at small sample size, we wanted to compare the three as they were typically used by IS researchers. More specifically, for the 500 datasets in each sample size/effect size condition, we wanted to compare the statistical techniques on three aspects: the average accuracy of the path estimates, the overall statistical power of the 500 path estimates, and the overall prevalence of Type I errors (false positives). The research model we used and the general design choices (those that pertain to the top three boxes in Figure 1) are available in Goodhue et al. (2006). Below we will focus on issues of setting up the three statistical techniques to achieve an appropriate comparison, and then on how specifically to compare the results across the three statistical techniques.

Specific Design Choices

Table 2 shows the primary options that a researcher has to consider in terms of setting up the data and the three statistical techniques.9 The options we chose in our sample comparison study are noted in the table and explained in the discussion below.

9 Note that different software packages for PLS (e.g., PLS-Graph, SmartPLS, WarpPLS, XLSTAT-PLSPM) and CB-SEM (e.g., LISREL, AMOS, EQS, Mplus) may have slightly different options.

Table 2. Setup Options for Statistical Analysis Techniques

1. Form of input data. PLS (PLS-Graph): raw indicator data. Regression (SAS): requires construct scores or a construct correlation matrix; for construct scores, indicator weights must be prespecified (often set to equal), or factor weights can be used. CB-SEM (LISREL): raw indicator data, or an indicator correlation or covariance matrix.
2. Metric for standardizing input data. PLS: unit variance with no location, original scale plus locations, etc. Regression: raw construct scores. CB-SEM: analysis based on correlations or covariances.
3. Indicator weights. PLS: estimated by the technique (options: path, centroid, factor weights). Regression: see Row 1 above. CB-SEM: not applicable.
4. Formative versus reflective measurement. PLS: can choose formative or reflective. Regression: no options. CB-SEM: formative** or reflective, but with restrictions on formative.
5. Indicator error covariances.* PLS: assumed to be zero. Regression: assumed to be zero. CB-SEM: option of specifying or estimating (we specified them as zero).
6. Exogenous construct correlations.* PLS: estimated. Regression: estimated. CB-SEM: option of specifying or estimating.
7. Estimation method. PLS: no options (PLS algorithm for weights, OLS for path estimates). Regression: OLS, etc. CB-SEM: ML, OLS, etc.
8. Determining standard deviations. PLS: bootstrapping (jackknifing an option). Regression: normal theory from analysis of the data. CB-SEM: normal theory from analysis of the data.

*Note that indicator error covariances and exogenous construct correlations appear in both Table 1 and Table 2 because they can be estimated or specified in CB-SEM.
**Bollen (2011) distinguishes between causal and formative indicators, but for our purposes the distinction is not necessary.

On the surface, it might appear that with so many different options, attempts to conduct meaningful comparisons across the statistical techniques could be very challenging. After all, the output that is obtained from a specific statistical technique may certainly be different depending on the options that are selected. While this issue is certainly important, we believe that the difficulty of achieving appropriate comparisons can be overstated. For example, we note that we (and many other Monte Carlo simulation researchers, e.g., Chin et al. 2003; Goodhue et al. 2007) generated our Monte Carlo data to have zero mean and unit variance. Therefore in our study, the choices in Row 2 are somewhat moot: the input data is essentially already standardized.10

10 Strictly speaking, the covariance matrix produced by data with mean zero and variance one will not be exactly equivalent to a correlation matrix. For example, the diagonal values will be close to but not exactly one.
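Footnote 10's caveat is easy to check numerically. A small illustration (ours, not from the study):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((100, 4))      # simulated indicators drawn with mean 0, variance 1
    cov = np.cov(x, rowvar=False)          # sample covariance matrix
    corr = np.corrcoef(x, rowvar=False)    # sample correlation matrix
    print(np.abs(cov - corr).max())        # nonzero: sample variances are not exactly 1

The two matrices converge as the sample size grows, which is why the Row 2 choice is essentially moot for data generated this way.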

However, when the data to be analyzed are not standardized in this way, researchers comparing PLS and CB-SEM need to be aware of the possible mismatch of parameterization: a CB-SEM analysis based on the correlation matrix is the choice consistent with PLS's "unit variance with no location" metric. Failure to parameterize PLS and CB-SEM equivalently on this score could certainly result in uncertainty about whether any difference in results was due to CB-SEM versus PLS, or due to covariance versus correlation matrices being used as the starting point for LISREL.11

11 The size of the impact of such a mismatch would certainly depend on the characteristics of the data. Monte Carlo simulation would be an excellent way to study this particular question.

Continuing down the rows of Table 2, we see in Row 3 that only PLS estimates indicator weights, as those are typically set equal for regression, and not used in LISREL. For Row 4 we sought to use reflective measurement, recognizing that this is the most commonly used form in MIS (and related behavioral) research. For PLS and LISREL, we selected the appropriate model specifications for reflective measurement. For regression, one must calculate construct scores. To do that we chose the most common approach used in practice: equal indicator weights in Row 1. For Rows 5 and 6, we chose to match the choices in LISREL12 to the non-alterable choices in PLS and regression. For Row 7, the non-alterable option of PLS is the unique PLS algorithm. For regression, we chose the most common approach and used OLS (ordinary least squares). For LISREL, we again chose the most common approach and used ML (maximum likelihood). In Row 8, we chose bootstrapping for PLS (the currently recommended approach). For regression and LISREL, we used the default, normal theory testing. As an aside, we did test the impact of using bootstrapping on all three and found it had no substantive impact on the results.

12 In CB-SEM it is possible to specify a variety of patterns of indicator measurement error correlations. If one permitted correlated errors in CB-SEM but not in PLS or regression, this might be considered an inappropriate parameterization of the statistical techniques. It would be possible to design a comparison study to look specifically at the impact of that difference in parameterization, to see how much the efficacy of CB-SEM is changed when correlated errors are allowed versus when they are not. Unless that was a specific question to be answered, however, comparing CB-SEM with correlated errors against PLS and regression (which do not provide for correlated errors) would seem to invite misinterpretation, since it would be unclear whether any differences in results were driven by the choice of statistical technique versus by the choice of allowing correlated errors with CB-SEM.

In short, although all three statistical techniques assume different relationships between indicators and constructs, we believe that (given our research questions) the set of choices shown in Table 2 (1) reflects the way each statistical technique is typically used in practice, (2) follows best practice for each of the techniques, and (3) is the most appropriate comparison possible for our research questions. Clearly, though, the design of appropriate comparisons depends on the statistical techniques compared and the questions researchers wish to answer.

Making Sense of the Results

Our goal in this example study was to compare the statistical techniques in terms of accuracy, statistical power, and Type I errors. Recall that, as shown in Figure 1, with Monte Carlo simulations we know the correct values from the underlying model. Given this knowledge, we can create relevant performance metrics for the different statistical techniques. For accuracy, we can compute the average bias by subtracting the true path value from the average path estimate and dividing by the true path value. The ideal would be a bias of zero.
If one technique consistently has a larger or smaller bias than other techniques, this would be a notable finding. Secondly, the statistical power for a given path can be computed by counting the number of statistically significant path estimates and dividing by 500 (the number of datasets in each sample size/effect size condition). Since this results in a proportion, we can use a simple equation to calculate the standard deviation around a proportion value, and determine whether differences in statistical power are statistically significant. Researchers would typically like to see statistical power values of .80 or better. Finally, by including a zero path in our underlying model (a construct which has no impact on the dependent variable), we can see how often each technique indicates that the zero path is statistically significant. All such paths are false positives, and with an overall statistical significance level of .05, we would like to see no more than 5 percent false positives. Again, because the number of false positives is a proportion, we can determine whether any statistical technique has a statistically significant excess number of false positives. This complete process of developing performance metrics will give comparison researchers very clear indications of the relative efficacy of different statistical techniques.
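The "simple equation" referred to above is presumably the usual binomial standard error of a proportion, sqrt(p(1 - p)/n). A sketch of how two techniques' power estimates might be compared with it (the power values here are made up for illustration):

    import numpy as np

    def prop_se(p, n):
        """Standard error of a proportion estimated from n Monte Carlo datasets."""
        return np.sqrt(p * (1 - p) / n)

    n = 500                      # datasets per sample size/effect size condition
    p_pls, p_reg = 0.84, 0.78    # hypothetical power estimates for two techniques
    z = (p_pls - p_reg) / np.sqrt(prop_se(p_pls, n)**2 + prop_se(p_reg, n)**2)
    print(f"z = {z:.2f}")        # |z| > 1.96 suggests a significant difference at .05

The same calculation applies to testing whether a technique's false positive rate significantly exceeds the nominal 5 percent.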

We now turn to the specific issues raised by the MCS Foreword as potentially threatening the legitimacy of comparisons in research such as the example described above.

Response to Specific Issues Raised by Marcoulides, Chin, and Saunders

In MCS's Comparison Across Methods section of their Foreword (2009), we were able to identify five different issues that they state need to be considered before comparison studies can be considered legitimate. We will briefly address each of those issues here; a more detailed treatment is offered in the Appendices. Rather than deal with them in the order in which they appear in the Foreword, we will start with the issue that is most clearly identified with the rejection decisions: the claim that studies comparing PLS and CB-SEM or regression need to be correctly parameterized. We will then deal with the other four issues in the order they are presented in the Foreword.

Correct Parameterization

The clearest charge the Foreword made against the comparison articles is that those authors did not use correct parameterization when doing the comparison analysis. In discussing this issue, MCS prominently cite Dijkstra (1983): "The instances for which researchers have reported the two modeling approaches as supposedly showing divergence of results generally have more to do with an incorrect comparison of selected mathematical functions and/or model parameterizations (Dijkstra 1983; Marcoulides 2003)" (Marcoulides et al. 2009, p. 173). And further: "Any observed differences are merely a function of the differentially parameterized models being analyzed. We note that the original term used by Dijkstra (1983, p. 71) was 'correct parameterization'" (Marcoulides et al. 2009, p. 173).

After reading the MCS Foreword and Dijkstra's paper carefully, and after personal correspondence with Professor Dijkstra (see Appendix B), it is clear that Dijkstra uses two important terms in the above MCS quotes quite differently than the reader might expect. First, Dijkstra uses the term model to mean what we have been referring to as a statistical technique (e.g., PLS, regression, etc.). More importantly, Dijkstra uses the term correct parameterization to mean something quite different from what we have called appropriate comparison: the way a user has specified data, constraints, and options for compared statistical techniques. For Dijkstra, correct parameterization refers to an inherent characteristic of a statistical technique (PLS, regression, etc.).

For Dijkstra, correct parameterization means that a statistical technique is Fisher consistent. Specifically, if the sample analyzed were the total population rather than a subset of the total population, a Fisher consistent statistical technique would return the true underlying values of the path estimates, etc. Using his definition, Dijkstra shows that CB-SEM is correctly parameterized, and PLS is not.

We note that Dijkstra assumes that the underlying reality researchers are studying is best modeled by a latent variable model. Latent variable models presume that theoretical constructs are not directly observable, but that relationships between those constructs can be estimated using indicators that reflect the construct values, but also include measurement error. This is generally the assumption in behavioral research. Other models of the underlying reality are possible; for example, one could assume that the underlying reality behavioral researchers study is best modeled as theoretical constructs that are composite variables (Bagozzi 2011). But of course this requires assuming there is no measurement error in indicators. If one did assume no measurement error, then PLS, regression, and CB-SEM would all be correctly parameterized according to Dijkstra's definition. If there is measurement error in the underlying reality, then PLS and regression are not correctly parameterized and CB-SEM is correctly parameterized. We note that assuming away indicator measurement error in behavioral research is hard to defend.
See Appendix A, Issue 1 for more detail on this issue.

If we were to interpret MCS's statement that "the models must exhibit a so-called correct parameterization before there can be any type of comparison" (p. 174) using Dijkstra's definitions of model and of correct parameterization, then it would imply that PLS can never legitimately be compared with CB-SEM. However, Dijkstra clearly believes that comparisons between PLS and CB-SEM are useful, and that differences in their results are to be expected, precisely because PLS is inherently incorrectly parameterized. It does not seem reasonable to assume MCS intended to use Dijkstra's definition and to say that PLS can never be legitimately compared to CB-SEM. This leaves us uncertain about what definition of correct parameterization is intended in the Foreword.

However, MCS do point out that certain nonequivalent design or setup choices made by a researcher could result in different values for the techniques' estimates. The Foreword presents us with two examples. We have already discussed that CB-SEM has the option of using either the correlation matrix or the covariance matrix as the basis for the analysis, and in a comparison with PLS, this choice for CB-SEM should be consistent with the choice of metric used in PLS-Graph. Since covariance and correlation matrices do not contain the same amount of information, there is the possibility of a comparison mismatch if, for example, results from LISREL (or another CB-SEM technique) using the covariance matrix as the basis for the analysis are compared with results from PLS-Graph using unit variance with no location as the metric of analyzed data.

However, as we stated earlier, many Monte Carlo simulations (including our described example, Chin et al. 2003, and Goodhue et al. 2007) generate indicator data from a model that assumes means of zero and variances of one. Under these circumstances, the correlation and covariance matrices are essentially identical. As a second example of how setup choices can affect results, MCS note that using PLS Mode A (reflective) versus PLS Mode B (formative) also can lead to different estimates. We agree that these and other setup choices can affect analysis results. However, we don't see either of these possible mismatches as difficult to avoid, as long as comparison researchers are explicit about their research objectives, and as long as the design and setup choices are appropriate for the comparisons they wish to make. In short, we don't see the appropriate comparison issue as a particularly difficult challenge for knowledgeable researchers.

Other Issues Raised by the MCS Foreword

The second issue raised by the Foreword is that when the different statistical techniques are provided with single indicators for each construct as input, the results will be the same (MCS 2009, p. 172). We agree with this statement. However, we note that when single indicators are used, there is either no measurement error, or no information available to the statistical technique about how much measurement error is present.13 McDonald (1996) and Dijkstra (1983, p. 81) both suggest that with multiple indicators for each construct, the more measurement error there is, the more PLS and regression will suffer in terms of accuracy in comparison with CB-SEM. Since we were interested in situations with multiple indicators per construct, the single indicator example does not seem applicable to whether comparisons involving multiple indicators are valid.

13 If you had prior knowledge of measurement error, you could specify it in CB-SEM, which would result in different output for CB-SEM.

The third issue MCS discuss is to note that they concur with McDonald that variables in PLS are not true latent variables, but are composites (MCS 2009). That is, PLS variables are estimated by exact linear combinations of their indicators, whereas the same is not true for CB-SEM techniques. We agree, but again this does not mean that comparisons between PLS results and CB-SEM results are invalid or not legitimate. It simply means that we should be careful in our terminology to ensure that we don't inadvertently begin to believe that PLS and CB-SEM statistical techniques are the same.

The fourth issue raised by MCS is that PLS will provide similar results to CB-SEM techniques under certain circumstances (MCS 2009, p. 173). One example is described by Schneeweiss (1993), relating to the ratio of the largest eigenvalue of the error covariance matrix to the sum of the squared loadings. Schneeweiss contends that when this ratio is small, the results from PLS and CB-SEM will be similar, and the user may wish to use PLS rather than CB-SEM because it is less computationally demanding. We agree with this statement, but once again do not see it as calling into question the validity of comparisons across results obtained from PLS and CB-SEM, even if the above ratio is large.
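In symbols (our rendering of the quantity just described, not Schneeweiss's exact notation), for a construct with loading vector \lambda and indicator error covariance matrix \Theta, the condition concerns the ratio

    f = \frac{\mu_{\max}(\Theta)}{\sum_j \lambda_j^2},

where \mu_{\max}(\Theta) is the largest eigenvalue of \Theta; PLS and CB-SEM results are expected to be close when f is small.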
The final issue MCS discuss is similar to the fourth issue above, in that it relates to conditions under which one might expect PLS results and CB-SEM results to be similar. MCS cite McDonald, who states that with a sufficiently large number of indicators, the results from different statistical techniques might be relatively similar (MCS 2009). But a careful reading of McDonald shows that he does not expect PLS and CB-SEM results to be very close in most behavioral research. McDonald demonstrates a situation where the use of 12 indicators brought the results to within 10 percent of each other. McDonald's example suggests that for typical IS research with perhaps three to six indicators per construct, we should not expect PLS and CB-SEM results to be very close. Again, we have no disagreement with the assertion that with increasing numbers of indicators, PLS and CB-SEM estimates will converge. But we do not see how this fact would invalidate comparisons across different statistical analysis techniques with any specific number of indicators, as long as the number of indicators and the data are identical for all techniques being compared.

In the Appendices, we go through the issues raised in the MCS Foreword in more detail. We draw on quotes from the three papers they cite most heavily (Dijkstra 1983; McDonald 1996; Schneeweiss 1993), as well as quotes from personal correspondence with these three authors. Our position, however, is that since researchers use a variety of statistical techniques within the same context (i.e., to test hypothesized paths within causal models involving constructs measured with multiple indicators), then (providing appropriate parameterizations are used) it is perfectly legitimate to compare results across the different statistical techniques.

Conclusion

MCS primarily cite three authors (Dijkstra 1983; McDonald 1996; Schneeweiss 1993) in support of their stance that comparison of PLS to other methods cannot and should not


More information

Political Science 15, Winter 2014 Final Review

Political Science 15, Winter 2014 Final Review Political Science 15, Winter 2014 Final Review The major topics covered in class are listed below. You should also take a look at the readings listed on the class website. Studying Politics Scientifically

More information

Technical Specifications

Technical Specifications Technical Specifications In order to provide summary information across a set of exercises, all tests must employ some form of scoring models. The most familiar of these scoring models is the one typically

More information

Reliability, validity, and all that jazz

Reliability, validity, and all that jazz Reliability, validity, and all that jazz Dylan Wiliam King s College London Introduction No measuring instrument is perfect. The most obvious problems relate to reliability. If we use a thermometer to

More information

MS&E 226: Small Data

MS&E 226: Small Data MS&E 226: Small Data Lecture 10: Introduction to inference (v2) Ramesh Johari ramesh.johari@stanford.edu 1 / 17 What is inference? 2 / 17 Where did our data come from? Recall our sample is: Y, the vector

More information

ISA 540, Auditing Accounting Estimates, Including Fair Value Accounting Estimates, and Related Disclosures Issues and Task Force Recommendations

ISA 540, Auditing Accounting Estimates, Including Fair Value Accounting Estimates, and Related Disclosures Issues and Task Force Recommendations Agenda Item 1-A ISA 540, Auditing Accounting Estimates, Including Fair Value Accounting Estimates, and Related Disclosures Issues and Task Force Recommendations Introduction 1. Since the September 2016

More information

Panel: Using Structural Equation Modeling (SEM) Using Partial Least Squares (SmartPLS)

Panel: Using Structural Equation Modeling (SEM) Using Partial Least Squares (SmartPLS) Panel: Using Structural Equation Modeling (SEM) Using Partial Least Squares (SmartPLS) Presenters: Dr. Faizan Ali, Assistant Professor Dr. Cihan Cobanoglu, McKibbon Endowed Chair Professor University of

More information

Data and Statistics 101: Key Concepts in the Collection, Analysis, and Application of Child Welfare Data

Data and Statistics 101: Key Concepts in the Collection, Analysis, and Application of Child Welfare Data TECHNICAL REPORT Data and Statistics 101: Key Concepts in the Collection, Analysis, and Application of Child Welfare Data CONTENTS Executive Summary...1 Introduction...2 Overview of Data Analysis Concepts...2

More information

Modeling Sentiment with Ridge Regression

Modeling Sentiment with Ridge Regression Modeling Sentiment with Ridge Regression Luke Segars 2/20/2012 The goal of this project was to generate a linear sentiment model for classifying Amazon book reviews according to their star rank. More generally,

More information

Choose an approach for your research problem

Choose an approach for your research problem Choose an approach for your research problem This course is about doing empirical research with experiments, so your general approach to research has already been chosen by your professor. It s important

More information

Citation for published version (APA): Ebbes, P. (2004). Latent instrumental variables: a new approach to solve for endogeneity s.n.

Citation for published version (APA): Ebbes, P. (2004). Latent instrumental variables: a new approach to solve for endogeneity s.n. University of Groningen Latent instrumental variables Ebbes, P. IMPORTANT NOTE: You are advised to consult the publisher's version (publisher's PDF) if you wish to cite from it. Please check the document

More information

VERDIN MANUSCRIPT REVIEW HISTORY REVISION NOTES FROM AUTHORS (ROUND 2)

VERDIN MANUSCRIPT REVIEW HISTORY REVISION NOTES FROM AUTHORS (ROUND 2) 1 VERDIN MANUSCRIPT REVIEW HISTORY REVISION NOTES FROM AUTHORS (ROUND 2) Thank you for providing us with the opportunity to revise our paper. We have revised the manuscript according to the editors and

More information

MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES OBJECTIVES

MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES OBJECTIVES 24 MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES In the previous chapter, simple linear regression was used when you have one independent variable and one dependent variable. This chapter

More information

Recognizing Ambiguity

Recognizing Ambiguity Recognizing Ambiguity How Lack of Information Scares Us Mark Clements Columbia University I. Abstract In this paper, I will examine two different approaches to an experimental decision problem posed by

More information

An Empirical Study on Causal Relationships between Perceived Enjoyment and Perceived Ease of Use

An Empirical Study on Causal Relationships between Perceived Enjoyment and Perceived Ease of Use An Empirical Study on Causal Relationships between Perceived Enjoyment and Perceived Ease of Use Heshan Sun Syracuse University hesun@syr.edu Ping Zhang Syracuse University pzhang@syr.edu ABSTRACT Causality

More information

Ambiguous Data Result in Ambiguous Conclusions: A Reply to Charles T. Tart

Ambiguous Data Result in Ambiguous Conclusions: A Reply to Charles T. Tart Other Methodology Articles Ambiguous Data Result in Ambiguous Conclusions: A Reply to Charles T. Tart J. E. KENNEDY 1 (Original publication and copyright: Journal of the American Society for Psychical

More information

Understanding Uncertainty in School League Tables*

Understanding Uncertainty in School League Tables* FISCAL STUDIES, vol. 32, no. 2, pp. 207 224 (2011) 0143-5671 Understanding Uncertainty in School League Tables* GEORGE LECKIE and HARVEY GOLDSTEIN Centre for Multilevel Modelling, University of Bristol

More information

A Brief Introduction to Bayesian Statistics

A Brief Introduction to Bayesian Statistics A Brief Introduction to Statistics David Kaplan Department of Educational Psychology Methods for Social Policy Research and, Washington, DC 2017 1 / 37 The Reverend Thomas Bayes, 1701 1761 2 / 37 Pierre-Simon

More information

CHAPTER 3 DATA ANALYSIS: DESCRIBING DATA

CHAPTER 3 DATA ANALYSIS: DESCRIBING DATA Data Analysis: Describing Data CHAPTER 3 DATA ANALYSIS: DESCRIBING DATA In the analysis process, the researcher tries to evaluate the data collected both from written documents and from other sources such

More information

Agents with Attitude: Exploring Coombs Unfolding Technique with Agent-Based Models

Agents with Attitude: Exploring Coombs Unfolding Technique with Agent-Based Models Int J Comput Math Learning (2009) 14:51 60 DOI 10.1007/s10758-008-9142-6 COMPUTER MATH SNAPHSHOTS - COLUMN EDITOR: URI WILENSKY* Agents with Attitude: Exploring Coombs Unfolding Technique with Agent-Based

More information

Confidence Intervals On Subsets May Be Misleading

Confidence Intervals On Subsets May Be Misleading Journal of Modern Applied Statistical Methods Volume 3 Issue 2 Article 2 11-1-2004 Confidence Intervals On Subsets May Be Misleading Juliet Popper Shaffer University of California, Berkeley, shaffer@stat.berkeley.edu

More information

INVESTIGATING FIT WITH THE RASCH MODEL. Benjamin Wright and Ronald Mead (1979?) Most disturbances in the measurement process can be considered a form

INVESTIGATING FIT WITH THE RASCH MODEL. Benjamin Wright and Ronald Mead (1979?) Most disturbances in the measurement process can be considered a form INVESTIGATING FIT WITH THE RASCH MODEL Benjamin Wright and Ronald Mead (1979?) Most disturbances in the measurement process can be considered a form of multidimensionality. The settings in which measurement

More information

ISC- GRADE XI HUMANITIES ( ) PSYCHOLOGY. Chapter 2- Methods of Psychology

ISC- GRADE XI HUMANITIES ( ) PSYCHOLOGY. Chapter 2- Methods of Psychology ISC- GRADE XI HUMANITIES (2018-19) PSYCHOLOGY Chapter 2- Methods of Psychology OUTLINE OF THE CHAPTER (i) Scientific Methods in Psychology -observation, case study, surveys, psychological tests, experimentation

More information

Chapter 7: Descriptive Statistics

Chapter 7: Descriptive Statistics Chapter Overview Chapter 7 provides an introduction to basic strategies for describing groups statistically. Statistical concepts around normal distributions are discussed. The statistical procedures of

More information

11/24/2017. Do not imply a cause-and-effect relationship

11/24/2017. Do not imply a cause-and-effect relationship Correlational research is used to describe the relationship between two or more naturally occurring variables. Is age related to political conservativism? Are highly extraverted people less afraid of rejection

More information

Expert System Profile

Expert System Profile Expert System Profile GENERAL Domain: Medical Main General Function: Diagnosis System Name: INTERNIST-I/ CADUCEUS (or INTERNIST-II) Dates: 1970 s 1980 s Researchers: Ph.D. Harry Pople, M.D. Jack D. Myers

More information

August 29, Introduction and Overview

August 29, Introduction and Overview August 29, 2018 Introduction and Overview Why are we here? Haavelmo(1944): to become master of the happenings of real life. Theoretical models are necessary tools in our attempts to understand and explain

More information

FATIGUE. A brief guide to the PROMIS Fatigue instruments:

FATIGUE. A brief guide to the PROMIS Fatigue instruments: FATIGUE A brief guide to the PROMIS Fatigue instruments: ADULT ADULT CANCER PEDIATRIC PARENT PROXY PROMIS Ca Bank v1.0 Fatigue PROMIS Pediatric Bank v2.0 Fatigue PROMIS Pediatric Bank v1.0 Fatigue* PROMIS

More information

Chapter 02 Developing and Evaluating Theories of Behavior

Chapter 02 Developing and Evaluating Theories of Behavior Chapter 02 Developing and Evaluating Theories of Behavior Multiple Choice Questions 1. A theory is a(n): A. plausible or scientifically acceptable, well-substantiated explanation of some aspect of the

More information

How to interpret results of metaanalysis

How to interpret results of metaanalysis How to interpret results of metaanalysis Tony Hak, Henk van Rhee, & Robert Suurmond Version 1.0, March 2016 Version 1.3, Updated June 2018 Meta-analysis is a systematic method for synthesizing quantitative

More information

To: The Public Guardian 4 September 2017.

To: The Public Guardian  4 September 2017. To: The Public Guardian Alan.Eccles@publicguardian.gsi.gov.uk customerservices@publicguardian.gsi.gov.uk From: Mike Stone mhsatstokelib@yahoo.co.uk 4 September 2017 Dear Mr Eccles, I am writing to you

More information

Evaluation Models STUDIES OF DIAGNOSTIC EFFICIENCY

Evaluation Models STUDIES OF DIAGNOSTIC EFFICIENCY 2. Evaluation Model 2 Evaluation Models To understand the strengths and weaknesses of evaluation, one must keep in mind its fundamental purpose: to inform those who make decisions. The inferences drawn

More information

Section 6: Analysing Relationships Between Variables

Section 6: Analysing Relationships Between Variables 6. 1 Analysing Relationships Between Variables Section 6: Analysing Relationships Between Variables Choosing a Technique The Crosstabs Procedure The Chi Square Test The Means Procedure The Correlations

More information

Small Sample Bayesian Factor Analysis. PhUSE 2014 Paper SP03 Dirk Heerwegh

Small Sample Bayesian Factor Analysis. PhUSE 2014 Paper SP03 Dirk Heerwegh Small Sample Bayesian Factor Analysis PhUSE 2014 Paper SP03 Dirk Heerwegh Overview Factor analysis Maximum likelihood Bayes Simulation Studies Design Results Conclusions Factor Analysis (FA) Explain correlation

More information

PLS 506 Mark T. Imperial, Ph.D. Lecture Notes: Reliability & Validity

PLS 506 Mark T. Imperial, Ph.D. Lecture Notes: Reliability & Validity PLS 506 Mark T. Imperial, Ph.D. Lecture Notes: Reliability & Validity Measurement & Variables - Initial step is to conceptualize and clarify the concepts embedded in a hypothesis or research question with

More information

multilevel modeling for social and personality psychology

multilevel modeling for social and personality psychology 1 Introduction Once you know that hierarchies exist, you see them everywhere. I have used this quote by Kreft and de Leeuw (1998) frequently when writing about why, when, and how to use multilevel models

More information

Chapter 11. Experimental Design: One-Way Independent Samples Design

Chapter 11. Experimental Design: One-Way Independent Samples Design 11-1 Chapter 11. Experimental Design: One-Way Independent Samples Design Advantages and Limitations Comparing Two Groups Comparing t Test to ANOVA Independent Samples t Test Independent Samples ANOVA Comparing

More information

WHEN IMPRECISE STATISTICAL STATEMENTS BECOME PROBLEMATIC: A RESPONSE TO GOODHUE, LEWIS, AND THOMPSON 1

WHEN IMPRECISE STATISTICAL STATEMENTS BECOME PROBLEMATIC: A RESPONSE TO GOODHUE, LEWIS, AND THOMPSON 1 ISSUES AND OPINIONS WHEN IMPRECISE STATISTICAL STATEMENTS BECOME PROBLEMATIC: A RESPONSE TO GOODHUE, LEWIS, AND THOMPSON 1 George A. Marcoulides Graduate School of Education, University of California,

More information

Analysis of Environmental Data Conceptual Foundations: En viro n m e n tal Data

Analysis of Environmental Data Conceptual Foundations: En viro n m e n tal Data Analysis of Environmental Data Conceptual Foundations: En viro n m e n tal Data 1. Purpose of data collection...................................................... 2 2. Samples and populations.......................................................

More information

Supplement 2. Use of Directed Acyclic Graphs (DAGs)

Supplement 2. Use of Directed Acyclic Graphs (DAGs) Supplement 2. Use of Directed Acyclic Graphs (DAGs) Abstract This supplement describes how counterfactual theory is used to define causal effects and the conditions in which observed data can be used to

More information

Alternative Methods for Assessing the Fit of Structural Equation Models in Developmental Research

Alternative Methods for Assessing the Fit of Structural Equation Models in Developmental Research Alternative Methods for Assessing the Fit of Structural Equation Models in Developmental Research Michael T. Willoughby, B.S. & Patrick J. Curran, Ph.D. Duke University Abstract Structural Equation Modeling

More information

The Research Roadmap Checklist

The Research Roadmap Checklist 1/5 The Research Roadmap Checklist Version: December 1, 2007 All enquires to bwhitworth@acm.org This checklist is at http://brianwhitworth.com/researchchecklist.pdf The element details are explained at

More information

Chapter 23. Inference About Means. Copyright 2010 Pearson Education, Inc.

Chapter 23. Inference About Means. Copyright 2010 Pearson Education, Inc. Chapter 23 Inference About Means Copyright 2010 Pearson Education, Inc. Getting Started Now that we know how to create confidence intervals and test hypotheses about proportions, it d be nice to be able

More information

Systems Thinking Rubrics

Systems Thinking Rubrics Systems Thinking Rubrics Systems Thinking Beginner (designed for instructor use), pages 1-2 These rubrics were designed for use with primary students and/or students who are just beginning to learn systems

More information

Running head: INDIVIDUAL DIFFERENCES 1. Why to treat subjects as fixed effects. James S. Adelman. University of Warwick.

Running head: INDIVIDUAL DIFFERENCES 1. Why to treat subjects as fixed effects. James S. Adelman. University of Warwick. Running head: INDIVIDUAL DIFFERENCES 1 Why to treat subjects as fixed effects James S. Adelman University of Warwick Zachary Estes Bocconi University Corresponding Author: James S. Adelman Department of

More information

Business Statistics Probability

Business Statistics Probability Business Statistics The following was provided by Dr. Suzanne Delaney, and is a comprehensive review of Business Statistics. The workshop instructor will provide relevant examples during the Skills Assessment

More information

IAASB Main Agenda (February 2007) Page Agenda Item PROPOSED INTERNATIONAL STANDARD ON AUDITING 530 (REDRAFTED)

IAASB Main Agenda (February 2007) Page Agenda Item PROPOSED INTERNATIONAL STANDARD ON AUDITING 530 (REDRAFTED) IAASB Main Agenda (February 2007) Page 2007 423 Agenda Item 6-A PROPOSED INTERNATIONAL STANDARD ON AUDITING 530 (REDRAFTED) AUDIT SAMPLING AND OTHER MEANS OF TESTING CONTENTS Paragraph Introduction Scope

More information

12/31/2016. PSY 512: Advanced Statistics for Psychological and Behavioral Research 2

12/31/2016. PSY 512: Advanced Statistics for Psychological and Behavioral Research 2 PSY 512: Advanced Statistics for Psychological and Behavioral Research 2 Introduce moderated multiple regression Continuous predictor continuous predictor Continuous predictor categorical predictor Understand

More information

Estimation. Preliminary: the Normal distribution

Estimation. Preliminary: the Normal distribution Estimation Preliminary: the Normal distribution Many statistical methods are only valid if we can assume that our data follow a distribution of a particular type, called the Normal distribution. Many naturally

More information

Review Statistics review 2: Samples and populations Elise Whitley* and Jonathan Ball

Review Statistics review 2: Samples and populations Elise Whitley* and Jonathan Ball Available online http://ccforum.com/content/6/2/143 Review Statistics review 2: Samples and populations Elise Whitley* and Jonathan Ball *Lecturer in Medical Statistics, University of Bristol, UK Lecturer

More information

DRAFT (Final) Concept Paper On choosing appropriate estimands and defining sensitivity analyses in confirmatory clinical trials

DRAFT (Final) Concept Paper On choosing appropriate estimands and defining sensitivity analyses in confirmatory clinical trials DRAFT (Final) Concept Paper On choosing appropriate estimands and defining sensitivity analyses in confirmatory clinical trials EFSPI Comments Page General Priority (H/M/L) Comment The concept to develop

More information

You must answer question 1.

You must answer question 1. Research Methods and Statistics Specialty Area Exam October 28, 2015 Part I: Statistics Committee: Richard Williams (Chair), Elizabeth McClintock, Sarah Mustillo You must answer question 1. 1. Suppose

More information

Title: Intention-to-treat and transparency of related practices in randomized, controlled trials of anti-infectives

Title: Intention-to-treat and transparency of related practices in randomized, controlled trials of anti-infectives Author s response to reviews Title: Intention-to-treat and transparency of related practices in randomized, controlled trials of anti-infectives Authors: Robert Beckett (rdbeckett@manchester.edu) Kathryn

More information

The Thesis Writing Process and Literature Review

The Thesis Writing Process and Literature Review The Thesis Writing Process and Literature Review From Splattered Ink Notes to Refined Arguments Christy Ley Senior Thesis Tutorial October 10, 2013 Overview: Thesis Structure! Introduction! Literature

More information

DON M. PALLAIS, CPA 14 Dahlgren Road Richmond, Virginia Telephone: (804) Fax: (804)

DON M. PALLAIS, CPA 14 Dahlgren Road Richmond, Virginia Telephone: (804) Fax: (804) DON M. PALLAIS, CPA 14 Dahlgren Road Richmond, Virginia 23233 Telephone: (804) 784-0884 Fax: (804) 784-0885 Office of the Secretary PCAOB 1666 K Street, NW Washington, DC 20006-2083 Gentlemen: November

More information

SLEEP DISTURBANCE ABOUT SLEEP DISTURBANCE INTRODUCTION TO ASSESSMENT OPTIONS. 6/27/2018 PROMIS Sleep Disturbance Page 1

SLEEP DISTURBANCE ABOUT SLEEP DISTURBANCE INTRODUCTION TO ASSESSMENT OPTIONS. 6/27/2018 PROMIS Sleep Disturbance Page 1 SLEEP DISTURBANCE A brief guide to the PROMIS Sleep Disturbance instruments: ADULT PROMIS Item Bank v1.0 Sleep Disturbance PROMIS Short Form v1.0 Sleep Disturbance 4a PROMIS Short Form v1.0 Sleep Disturbance

More information

Assurance Engagements Other than Audits or Review of Historical Financial Statements

Assurance Engagements Other than Audits or Review of Historical Financial Statements Issued December 2007 International Standard on Assurance Engagements Assurance Engagements Other than Audits or Review of Historical Financial Statements The Malaysian Institute Of Certified Public Accountants

More information

Psychological Visibility as a Source of Value in Friendship

Psychological Visibility as a Source of Value in Friendship Undergraduate Review Volume 10 Issue 1 Article 7 1997 Psychological Visibility as a Source of Value in Friendship Shailushi Baxi '98 Illinois Wesleyan University Recommended Citation Baxi '98, Shailushi

More information

Exemplar for Internal Assessment Resource Physics Level 1

Exemplar for Internal Assessment Resource Physics Level 1 Exemplar for internal assessment resource 1.1B Physics for Achievement Standard 90935 Exemplar for Internal Assessment Resource Physics Level 1 This exemplar supports assessment against: Achievement Standard

More information

Comments on David Rosenthal s Consciousness, Content, and Metacognitive Judgments

Comments on David Rosenthal s Consciousness, Content, and Metacognitive Judgments Consciousness and Cognition 9, 215 219 (2000) doi:10.1006/ccog.2000.0438, available online at http://www.idealibrary.com on Comments on David Rosenthal s Consciousness, Content, and Metacognitive Judgments

More information

Author's response to reviews

Author's response to reviews Author's response to reviews Title: Diabetes duration and health-related quality of life in individuals with onset of diabetes in the age group 15-34 years - a Swedish population-based study using EQ-5D

More information

Speaker Notes: Qualitative Comparative Analysis (QCA) in Implementation Studies

Speaker Notes: Qualitative Comparative Analysis (QCA) in Implementation Studies Speaker Notes: Qualitative Comparative Analysis (QCA) in Implementation Studies PART 1: OVERVIEW Slide 1: Overview Welcome to Qualitative Comparative Analysis in Implementation Studies. This narrated powerpoint

More information

support support support STAND BY ENCOURAGE AFFIRM STRENGTHEN PROMOTE JOIN IN SOLIDARITY Phase 3 ASSIST of the SASA! Community Mobilization Approach

support support support STAND BY ENCOURAGE AFFIRM STRENGTHEN PROMOTE JOIN IN SOLIDARITY Phase 3 ASSIST of the SASA! Community Mobilization Approach support support support Phase 3 of the SASA! Community Mobilization Approach STAND BY STRENGTHEN ENCOURAGE PROMOTE ASSIST AFFIRM JOIN IN SOLIDARITY support_ts.indd 1 11/6/08 6:55:34 PM support Phase 3

More information

Title:Continuity of GP care is associated with lower use of complementary and alternative medical providers A population-based cross-sectional survey

Title:Continuity of GP care is associated with lower use of complementary and alternative medical providers A population-based cross-sectional survey Author's response to reviews Title:Continuity of GP care is associated with lower use of complementary and alternative medical providers A population-based cross-sectional survey Authors: Anne Helen Hansen

More information

INTERNATIONAL STANDARD ON ASSURANCE ENGAGEMENTS 3000 ASSURANCE ENGAGEMENTS OTHER THAN AUDITS OR REVIEWS OF HISTORICAL FINANCIAL INFORMATION CONTENTS

INTERNATIONAL STANDARD ON ASSURANCE ENGAGEMENTS 3000 ASSURANCE ENGAGEMENTS OTHER THAN AUDITS OR REVIEWS OF HISTORICAL FINANCIAL INFORMATION CONTENTS INTERNATIONAL STANDARD ON ASSURANCE ENGAGEMENTS 3000 ASSURANCE ENGAGEMENTS OTHER THAN AUDITS OR REVIEWS OF HISTORICAL FINANCIAL INFORMATION (Effective for assurance reports dated on or after January 1,

More information

Psychology 205, Revelle, Fall 2014 Research Methods in Psychology Mid-Term. Name:

Psychology 205, Revelle, Fall 2014 Research Methods in Psychology Mid-Term. Name: Name: 1. (2 points) What is the primary advantage of using the median instead of the mean as a measure of central tendency? It is less affected by outliers. 2. (2 points) Why is counterbalancing important

More information

BOOTSTRAPPING CONFIDENCE LEVELS FOR HYPOTHESES ABOUT QUADRATIC (U-SHAPED) REGRESSION MODELS

BOOTSTRAPPING CONFIDENCE LEVELS FOR HYPOTHESES ABOUT QUADRATIC (U-SHAPED) REGRESSION MODELS BOOTSTRAPPING CONFIDENCE LEVELS FOR HYPOTHESES ABOUT QUADRATIC (U-SHAPED) REGRESSION MODELS 12 June 2012 Michael Wood University of Portsmouth Business School SBS Department, Richmond Building Portland

More information

Tips For Writing Referee Reports. Lance Cooper

Tips For Writing Referee Reports. Lance Cooper Tips For Writing Referee Reports Lance Cooper Why Referees are Needed in Science An enormous number of scientific articles are submitted daily Most journals rely on impartial, external reviewers to help

More information

Clever Hans the horse could do simple math and spell out the answers to simple questions. He wasn t always correct, but he was most of the time.

Clever Hans the horse could do simple math and spell out the answers to simple questions. He wasn t always correct, but he was most of the time. Clever Hans the horse could do simple math and spell out the answers to simple questions. He wasn t always correct, but he was most of the time. While a team of scientists, veterinarians, zoologists and

More information

Doing High Quality Field Research. Kim Elsbach University of California, Davis

Doing High Quality Field Research. Kim Elsbach University of California, Davis Doing High Quality Field Research Kim Elsbach University of California, Davis 1 1. What Does it Mean to do High Quality (Qualitative) Field Research? a) It plays to the strengths of the method for theory

More information

Incorporating Experimental Research Designs in Business Communication Research

Incorporating Experimental Research Designs in Business Communication Research Incorporating Experimental Research Designs in Business Communication Research Chris Lam, Matt Bauer Illinois Institute of Technology The authors would like to acknowledge Dr. Frank Parker for his help

More information

Analysis of Confidence Rating Pilot Data: Executive Summary for the UKCAT Board

Analysis of Confidence Rating Pilot Data: Executive Summary for the UKCAT Board Analysis of Confidence Rating Pilot Data: Executive Summary for the UKCAT Board Paul Tiffin & Lewis Paton University of York Background Self-confidence may be the best non-cognitive predictor of future

More information

Risk Aversion in Games of Chance

Risk Aversion in Games of Chance Risk Aversion in Games of Chance Imagine the following scenario: Someone asks you to play a game and you are given $5,000 to begin. A ball is drawn from a bin containing 39 balls each numbered 1-39 and

More information

In Support of a No-exceptions Truth-telling Policy in Medicine

In Support of a No-exceptions Truth-telling Policy in Medicine In Support of a No-exceptions Truth-telling Policy in Medicine An odd standard has developed regarding doctors responsibility to tell the truth to their patients. Lying, or the act of deliberate deception,

More information

Statistical reports Regression, 2010

Statistical reports Regression, 2010 Statistical reports Regression, 2010 Niels Richard Hansen June 10, 2010 This document gives some guidelines on how to write a report on a statistical analysis. The document is organized into sections that

More information

SRI LANKA AUDITING STANDARD 530 AUDIT SAMPLING CONTENTS

SRI LANKA AUDITING STANDARD 530 AUDIT SAMPLING CONTENTS SRI LANKA AUDITING STANDARD 530 AUDIT SAMPLING (Effective for audits of financial statements for periods beginning on or after 01 January 2014) CONTENTS Paragraph Introduction Scope of this SLAuS... 1

More information

Bayes Theorem Application: Estimating Outcomes in Terms of Probability

Bayes Theorem Application: Estimating Outcomes in Terms of Probability Bayes Theorem Application: Estimating Outcomes in Terms of Probability The better the estimates, the better the outcomes. It s true in engineering and in just about everything else. Decisions and judgments

More information

Psychology, 2010, 1: doi: /psych Published Online August 2010 (

Psychology, 2010, 1: doi: /psych Published Online August 2010 ( Psychology, 2010, 1: 194-198 doi:10.4236/psych.2010.13026 Published Online August 2010 (http://www.scirp.org/journal/psych) Using Generalizability Theory to Evaluate the Applicability of a Serial Bayes

More information

Is Leisure Theory Needed For Leisure Studies?

Is Leisure Theory Needed For Leisure Studies? Journal of Leisure Research Copyright 2000 2000, Vol. 32, No. 1, pp. 138-142 National Recreation and Park Association Is Leisure Theory Needed For Leisure Studies? KEYWORDS: Mark S. Searle College of Human

More information

How Does Analysis of Competing Hypotheses (ACH) Improve Intelligence Analysis?

How Does Analysis of Competing Hypotheses (ACH) Improve Intelligence Analysis? How Does Analysis of Competing Hypotheses (ACH) Improve Intelligence Analysis? Richards J. Heuer, Jr. Version 1.2, October 16, 2005 This document is from a collection of works by Richards J. Heuer, Jr.

More information

The National Deliberative Poll in Japan, August 4-5, 2012 on Energy and Environmental Policy Options

The National Deliberative Poll in Japan, August 4-5, 2012 on Energy and Environmental Policy Options Executive Summary: The National Deliberative Poll in Japan, August 4-5, 2012 on Energy and Environmental Policy Options Overview This is the first Deliberative Poll (DP) anywhere in the world that was

More information

SEMINAR ON SERVICE MARKETING

SEMINAR ON SERVICE MARKETING SEMINAR ON SERVICE MARKETING Tracy Mary - Nancy LOGO John O. Summers Indiana University Guidelines for Conducting Research and Publishing in Marketing: From Conceptualization through the Review Process

More information

Conditional spectrum-based ground motion selection. Part II: Intensity-based assessments and evaluation of alternative target spectra

Conditional spectrum-based ground motion selection. Part II: Intensity-based assessments and evaluation of alternative target spectra EARTHQUAKE ENGINEERING & STRUCTURAL DYNAMICS Published online 9 May 203 in Wiley Online Library (wileyonlinelibrary.com)..2303 Conditional spectrum-based ground motion selection. Part II: Intensity-based

More information

Neuroscience and Generalized Empirical Method Go Three Rounds

Neuroscience and Generalized Empirical Method Go Three Rounds Bruce Anderson, Neuroscience and Generalized Empirical Method Go Three Rounds: Review of Robert Henman s Global Collaboration: Neuroscience as Paradigmatic Journal of Macrodynamic Analysis 9 (2016): 74-78.

More information