INTERNATIONALES MARKENMANAGEMENT


Chair for Marketing and Retailing
INTERNATIONALES MARKENMANAGEMENT
5th tutorial (Übung) in the Master's program, winter semester 2015/2016
Specialization: Retail and International Marketing Management
Module: Retail Management and International Branding

Required Literature
Steenkamp, J.-B. E. M., Batra, R., & Alden, D. L. (2003), How perceived brand globalness creates brand value, Journal of International Business Studies, 34(1): 53-65.

Additional Readings
Backhaus et al. (2011), Multivariate Analysemethoden, 13th ed., Springer.
Hair et al. (2010), Multivariate Data Analysis, 7th ed., Pearson.
Field, A. (2011), Discovering Statistics Using SPSS, 4th ed., Sage.
Slide 2

Chair for Marketing and Retailing
5 Quantitative Research Technique I
5.1 Linear Regression
5.2 Logistic Regression
5.3 Moderated Regression Analysis

Branding effects: Perceived brand globalness → Brand purchase likelihood
How to assess linear relationships?
Slide 4

Corporate Associations and Consumer Product Responses
Source: Steenkamp/Batra/Alden (2003), p. 54.
Slide 5

Objectives
- Determine when linear regression analysis is the appropriate statistical tool for analyzing a problem.
- Be aware of the assumptions underlying regression analysis and how to assess them.
- Select an estimation technique and explain the difference between stepwise and simultaneous regression.
- Interpret the results of a regression.
- Understand how to run a multiple regression in SPSS.
Slide 6

Multiple regression
Multiple regression analysis = a statistical technique that can be used to analyze the relationship between a single dependent (criterion) variable and several independent (predictor) variables.

Y = b0 + b1x1 + b2x2 + ... + bnxn + e

Y = dependent variable
b0 = intercept (constant): the value of Y when all xn = 0; this is the point at which the regression line crosses the Y-axis
b1 = regression coefficient 1
b2 = regression coefficient 2
x1 = independent variable 1
x2 = independent variable 2
e = prediction error (residual)
Slide 7

Linear (simple) regression
A way of predicting the value of one variable from another. It is a hypothetical model of the relationship between two variables. The model used is a linear one; therefore, we describe the relationship using the equation of a straight line:

Y = b0 + b1X1 + e
Slide 8

Multiple regression decision process
Stage 1: Research design of multiple regression
Stage 2: Assumptions in multiple regression analysis
Stage 3: Estimating the regression model and assessing overall fit
Stage 4: Interpreting the regression coefficients
Slide 9

Stage 1: Research design of multiple regression
In selecting suitable applications of multiple regression, the researcher must consider three primary issues: the appropriateness of the research problem, the specification of a statistical relationship, and the selection of the dependent and independent variables.
The researcher should always consider three issues that can affect any decision about variables:
- the theory that supports using the variables,
- measurement and specification error, especially in the dependent variable,
- sample size: a minimum sample of 50 and preferably 100 observations for most research situations; the minimum ratio of observations to variables is 5 to 1, but the preferred ratio is 15 or 20 to 1.
Slide 10

Stage 2: Assumptions in multiple regression analysis
Straightforward assumptions:
- Variable type: the outcome must be continuous; predictors can be continuous or dichotomous.
- Non-zero variance: predictors must not have zero variance.
- Linearity: the relationship we model is, in reality, linear.
- Independence: all values of the outcome should come from a different person.
Slide 11

Stage 2: Assumptions in multiple regression analysis (cont.)
Tricky assumptions:
- No multicollinearity: predictors must not be highly correlated.
- Homoscedasticity: for each value of the predictors, the variance of the error term should be constant.
- Independent errors: for any pair of observations, the error terms should be uncorrelated.
- Normally distributed errors.
Slide 12

Stage 2: Multicollinearity
Multicollinearity exists if predictors are highly correlated. This assumption can be checked with collinearity diagnostics:
- Tolerance should be more than 0.2 (Menard, 1995).
- VIF should be less than 10 (Myers, 1990; Kim/Timm 2006).
Drop variables that do not fulfill these criteria!
Slide 13

Stage 2: Homoscedasticity, independent errors, normally distributed errors
Homoscedasticity/independence of errors: plot ZRESID against ZPRED. The scatterplot of residuals compares the standardized predicted values of the dependent variable against the standardized residuals from the regression equation. If the plot exhibits a random pattern, this indicates no identifiable violations of the assumptions underlying regression analysis.
Independence of errors: Durbin-Watson value d:
- d = 0: perfect positive correlation
- d = 2: no correlation
- d = 4: perfect negative correlation
Normality of errors: the normal probability plot enables you to determine whether the errors are normally distributed; it compares the observed (sample) standardized residuals against the expected standardized residuals from a normal distribution. A histogram of the standardized residuals also enables you to determine whether the errors are normally distributed.
Slide 14
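One relationship worth keeping in mind (not stated on the slide, but it follows directly from the definitions): VIF = 1 / Tolerance. A tolerance of 0.2 therefore corresponds to a VIF of 5, while a VIF of 10 corresponds to a tolerance of 0.1, so the tolerance rule of thumb is the stricter of the two cut-offs.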

Stage 3: Estimating the regression model and assessing overall model fit
Basic steps:
1. Select a method for specifying the regression model to be estimated,
2. assess the statistical significance of the overall model in predicting the dependent variable, and
3. determine whether any of the observations exert an undue influence on the results.
Methods of regression:
- Hierarchical: the experimenter decides the order in which variables are entered into the model.
- Forced entry (confirmatory, simultaneous): all predictors are entered simultaneously.
- Stepwise: predictors are selected using their semi-partial correlation with the outcome.
Slide 15

Stage 3: Methods
Hierarchical regression:
- Known predictors (based on past research) are entered into the regression model first; new predictors are then entered in a separate step/block.
- The experimenter makes the decisions.
- It is the best method: it is based on theory testing, and you can see the unique predictive influence of a new variable on the outcome because known predictors are held constant in the model.
- Bad point: it relies on the experimenter knowing what they are doing!
Forced entry regression:
- All variables are entered into the model simultaneously.
- The results obtained depend on the variables entered into the model, so it is important to have good theoretical reasons for including a particular variable.
Slide 16

Stage 3: Testing the model
R²:
- The proportion of variance accounted for by the regression model.
- R² ranges from 0 to 1.0 and represents the amount of variation in the dependent variable explained by the independent variables combined.
- Coefficient of determination (measure of goodness of fit).
The F-statistic (ANOVA):
- Looks at whether the variance explained by the model is significantly greater than the error within the model.
- It tells us whether using the regression model is significantly better at predicting values of the outcome than using the mean.
- It is used to determine whether the overall regression model is statistically significant.
- If the F-statistic is significant, it is unlikely that your sample would produce a large R² when the population R² is actually zero.
Slide 17

Stage 4: Interpreting the regression coefficients
- Beta values: the change in the outcome associated with a unit change in the predictor.
- Standardized beta values: tell us the same, but expressed in standard deviations; they are necessary to compare the relative importance of predictors.
Slide 18
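As an illustration of how the two kinds of coefficients relate (not on the original slide): the standardized beta equals the unstandardized b multiplied by the ratio of the standard deviations, beta_std = b · (s_x / s_y). With made-up values b = 0.40, s_x = 1.5 and s_y = 2.0, this gives beta_std = 0.40 · 1.5 / 2.0 = 0.30, i.e. a one-standard-deviation increase in the predictor goes along with a 0.30-standard-deviation increase in the outcome.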

Reporting Example
Single and multiple linear regressions showing the influence of different dimensions of corporate reputation on loyalty towards the corporate brand Henkel

Independent variables      Model 1             Model 2              Model 3              Model 4
Product/Service quality    .674*** (.545***)                        .525*** (.425***)    .354*** (.287***)
Social responsibility                          .492*** (.455***)    .262*** (.242***)    .113** (.105**)
Customer orientation                                                                     .320*** (.274***)
Good employer                                                                            .105 ns (.078 ns)
Financial strength                                                                       .039 ns (.029 ns)
Gender (dummy)             .019 ns (.007 ns)   -.041 ns (-.016 ns)  -.017 ns (-.007 ns)  .001 ns (.000 ns)
Age                        .013** (.129**)     .012** (.120**)      .011** (.111**)      .011 ns (.112 ns)
R²                         .324                .233                 .368                 .415

Note: standardized coefficients in brackets; *** p<.001; ** p<.005; ns: not significant.
Slide 19

Inclusion of Control Variables
Often, you want to control for other variables (covariates):
- Simply add centered/z-standardized continuous covariates as predictors to the regression equation.
- In the case of categorical control variables, dummy coding is recommended.
- Typical control variables: age, gender, nationality, income, etc.
Example (Steenkamp/Batra/Alden 2003, p. 57): "Although this study focuses on the pathways through which PBG influences purchase likelihood, other exogenous influences are likely. Three sets of covariates are included in our analyses. First, brand familiarity is included because previous research suggests that it may have an important impact on perceived brand quality, brand prestige, and purchase likelihood, whether or not a brand is perceived as global (e.g. Keller, 1998). Second, country-of-origin (CO) image is included to control for the possibility that a certain global brand may attain higher prestige, quality, and/or purchase likelihood because it comes from a particular foreign country, rather than because it is global. (…) Finally, we add brand dummies to the analyses to control for unobserved, brand-specific effects (such as objective quality, distribution coverage, and market share)."
Slide 20
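For illustration, a minimal SPSS syntax sketch of both recommendations. The variable names (age, gender, trust, satisfaction, loyalty) and the 1/2 coding of gender are assumptions made for this sketch, not the actual variables of the Henkel data set.

* z-standardize a continuous covariate; /SAVE adds it to the data set as Zage.
DESCRIPTIVES VARIABLES=age /SAVE.
* Dummy-code a categorical covariate (assuming gender is coded 1 and 2).
RECODE gender (1=0) (2=1) INTO gender_d.
EXECUTE.
* Enter the covariates alongside the focal predictors.
REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT loyalty
  /METHOD=ENTER trust satisfaction Zage gender_d.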

Regression example - Applied in SPSS
Sample of 500 consumers in Germany.
What influence do trust (in the product brand) and satisfaction (with the product) have on customer loyalty?
Hypotheses:
H1: Product loyalty will be positively influenced by the trust of consumers in the product brand.
H2: Product loyalty will be positively influenced by the satisfaction of consumers with the product brand.
Slide 21

Step 1: Regression Model
Slide 22
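The model from Step 1 can equivalently be specified in syntax. A rough sketch of what the pasted syntax could look like, including the assumption checks from Slides 13-14; the variable names trust, satisfaction and loyalty are placeholders, not the actual item names in the data set:

* Forced-entry regression of loyalty on trust and satisfaction (H1, H2).
* COLLIN TOL requests Tolerance/VIF, DURBIN the Durbin-Watson statistic.
* SCATTERPLOT and RESIDUALS request the plots described on Slide 14.
REGRESSION
  /MISSING LISTWISE
  /STATISTICS COEFF OUTS R ANOVA COLLIN TOL
  /CRITERIA=PIN(.05) POUT(.10)
  /NOORIGIN
  /DEPENDENT loyalty
  /METHOD=ENTER trust satisfaction
  /SCATTERPLOT=(*ZRESID, *ZPRED)
  /RESIDUALS DURBIN HISTOGRAM(ZRESID) NORMPROB(ZRESID).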

Step 2: Assumptions
Slide 23

Step 3: Output - Assumptions
Slide 24

Step 4: Output - Model
Slide 25

Step 4: Output - Regression coefficients
Slide 26

Exercise
Perform a multiple linear regression analysis in SPSS:
Inv_P_2, Ver_P_2, Zuf_P_2 → Loy_P_1
- How would you phrase the corresponding hypotheses?
- What do you find?
- Which item is more important for explaining the dependent variable?
- How good is your explanatory power?
Slide 27

Chair for Marketing and Retailing
5 Quantitative Research Technique I
5.1 Linear Regression
5.2 Logistic Regression
5.3 Moderated Regression Analysis

Branding Effects: Perceived brand globalness → Brand purchase (0/1)
How to assess non-linear relationships?
Slide 29

Objectives
- State the circumstances under which logistic regression should be used instead of multiple regression.
- Identify the types of dependent and independent variables used in the application of logistic regression.
- Describe the method used to transform binary measures into the likelihood and probability measures used in logistic regression.
- Interpret the results of a logistic regression analysis and assess predictive accuracy, with comparisons to both multiple regression and discriminant analysis.
- Understand the strengths and weaknesses of logistic regression compared to discriminant analysis and multiple regression.
Slide 30

Logistic regression
Logistic regression = a specialized form of regression that is designed to predict and explain a binary (two-group dummy variable 0/1) categorical variable rather than a metric dependent measure.

P(Y) = 1 / (1 + e^-(b0 + b1x1 + b2x2 + ... + bjxj))

P(Y) = probability of the dependent variable occurring (Y = 1)
Slide 31

Logistic regression decision process
Stage 1: Research design of logistic regression
Stage 2: Assumptions of logistic regression
Stage 3: Estimation of the logistic regression model and assessing overall fit
Stage 4: Interpretation of the results
Slide 32
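A worked illustration of the function (not part of the original slide): when the linear predictor b0 + b1x1 + ... + bjxj equals 0, P(Y) = 1 / (1 + e^0) = 0.5; at +2 it is 1 / (1 + e^-2) ≈ 0.88, and at -2 it is ≈ 0.12. The predicted probability therefore always stays between 0 and 1, which is exactly what a linear model cannot guarantee for a 0/1 outcome.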

Stage 1: Research design of logistic regression
Logistic regression is best suited to address two research objectives:
- identifying the independent variables that impact group membership in the dependent variable,
- establishing a classification system based on the logistic model for determining group membership.
The binary nature of the dependent variable (0/1) means the error term has a binomial distribution instead of a normal distribution, which invalidates all testing based on the assumption of normality. The variance of the dichotomous variable is not constant, creating instances of heteroscedasticity as well. Neither of these violations can be remedied through transformations of the dependent or independent variables. Logistic regression was developed specifically to deal with these issues.
Slide 33

Stage 2: Assumptions of logistic regression
The advantages of logistic regression are primarily the result of the general lack of assumptions:
- Logistic regression does not require any specific distributional form for the independent variables.
- Homoscedasticity is not required.
- Linear relationships between the dependent and independent variables are not required.
But: you should test for multicollinearity and independence of errors, and check for outliers.
Slide 34

Stage 3: Testing the model
There are two ways to assess the accuracy of the model in the sample:
Residual statistics - standardized residuals:
- In an average sample, 95% of standardized residuals should lie between ±2, and 99% should lie between ±2.5.
- Outliers: any case for which the absolute value of the standardized residual is 3 or more is likely to be an outlier.
Influential cases - Cook's distance:
- Measures the influence of a single case on the model as a whole.
- Absolute values greater than 1 may be cause for concern (Weisberg 1982).
Slide 35

Stage 3: Estimating the model
Model estimation uses a maximum likelihood approach, not least squares as in multiple regression. The estimation process maximizes the likelihood that an event will occur, the event being that a respondent is assigned to one group versus the other.
Issues to consider:
- sample size (at least 100 cases),
- group sizes should be more or less equal/similar,
- formulation of hypotheses (phrased in terms of probability),
- method of regression.
Slide 36

Stage 3: Assessing the model
Criteria and acceptable values:
Goodness of fit:
- -2 log-likelihood statistic (-2LL): -2LL close to 0 (significance close to 1; not significant)
Model goodness:
- Likelihood-ratio test (LR test): high chi-square value (significance close to 0; significant)
- McFadden's R²: more than 0.2; good: 0.4
- Cox & Snell R²: more than 0.2; good: 0.4
- Nagelkerke's R²: more than 0.2; good: 0.4; very good: 0.5
Evaluation of classification results:
- Proportion of correctly classified objects: should exceed both PCC and MCC
  - Proportional Chance Criterion (PCC) = α² + (1-α)², with α = relative size of the larger group (for different group sizes)
  - Maximum Chance Criterion (MCC) = relative size of the larger group (for similar group sizes)
- Hosmer-Lemeshow test: small chi-square value (significance close to 1; not significant)
Source: According to Krafft (2000), p. 244; Backhaus et al. (2008), p. 270.
Slide 37

Stage 4: Interpretation of the results
A positive relationship means an increase in the independent variable is associated with an increase in the predicted probability, and vice versa. The magnitude cannot be assessed with the given coefficients (the model is NOT LINEAR!).
Interpretation:
- positive b0: the function moves to the left
- negative b0: the function moves to the right
- bj > 1: fast rise of the probability
- 0 < bj < 1: slow rise
- bj = 0: horizontal, P(y=1) = 0.5
- bj < 0: analogous, but fall of the probability

Stage 4: Assessing predictors
The Wald statistic:
- Similar to the t-statistic in regression.
- Tests the null hypothesis that b = 0.
- Is biased when b is large.
The odds ratio, Exp(B):
- Indicates the change in odds resulting from a unit change in the predictor.
- Exp(B) > 1: as the predictor increases, the probability of the outcome occurring increases.
- Exp(B) < 1: as the predictor increases, the probability of the outcome occurring decreases.
Odds = P(y=1) / (1 - P(y=1))
Slide 39

Reporting Example
Binary logistic regression analysis with a dichotomized dependent variable

                              Model 1: Satisfaction with Product (0/1)   Model 2: Satisfaction with Product (0/1)
Variable                      β             Exp(b)                       β             Exp(b)
Independent variables
  Product quality             2.223***      9.233                        2.162***      8.687
  Product price                                                          .164 ns       1.178
Control variables
  Age                         -.006 ns      .994                         -.005 ns      .995
Constant                      -9.025***     .000                         -9.491***     .000
No. of observations           498                                        498
n group 1 / n group 2         107/391                                    107/391
Model chi-square (df)         209.529*** (3 df)                          210.889*** (3 df)
Correctly classified (%)      87.1                                       86.5
Maximal chance (MCC)          78.5                                       78.5
Proportional chance (PCC)     66.24                                      66.24
Nagelkerke's R²               .531                                       .534
Slide 40
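Two checks on how the figures in this table fit together (added for illustration; the numbers are taken from the table itself): Exp(b) is simply e raised to the coefficient, e.g. e^2.223 ≈ 9.23 and e^0.164 ≈ 1.18, matching the Exp(b) column. The chance criteria follow from the group sizes: α = 391/498 ≈ 0.785, so MCC ≈ 78.5% and PCC = 0.785² + 0.215² ≈ 0.66, i.e. roughly the 66.24% reported; the hit rate of 87.1% exceeds both, so the classification result is acceptable.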

Multinomial logistic regression
Logistic regression to predict membership of more than two categories. It (basically) works in the same way as binary logistic regression: the analysis breaks the outcome variable down into a series of comparisons between two categories.
E.g., if you have three outcome categories (A, B and C), then the analysis will consist of two comparisons that you choose:
- compare everything against your first category (e.g. A vs. B and A vs. C),
- or your last category (e.g. A vs. C and B vs. C),
- or a custom category (e.g. B vs. A and B vs. C).
The important parts of the analysis and output are much the same as we have just seen for binary logistic regression.
Slide 41

Logistic regression example - Applied in SPSS
Sample of 500 consumers in Germany.
How do trust and satisfaction influence the likelihood that customers are loyal towards the product?
Hypotheses:
H1: Higher trust of consumers in the product brand will increase the likelihood of product loyalty.
H2: Higher satisfaction of consumers with the product brand will increase the likelihood of product loyalty.
Transformation of dependent variable
Slide 42
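A rough syntax sketch for this example, including the transformation of the dependent variable. The variable names (loyalty, trust, satisfaction, loyal_01) and the cut-off used to dichotomize the loyalty item are assumptions for illustration only.

* Transform the dependent variable into 0/1 (cut-off chosen only for illustration).
RECODE loyalty (LO THRU 3=0) (4 THRU HI=1) INTO loyal_01.
EXECUTE.
* Binary logistic regression of the dichotomized loyalty on trust and satisfaction.
* GOODFIT adds the Hosmer-Lemeshow test to the output.
* The classification table and the Cox & Snell and Nagelkerke R² values are reported by default.
LOGISTIC REGRESSION VARIABLES loyal_01
  /METHOD=ENTER trust satisfaction
  /PRINT=GOODFIT CI(95)
  /CRITERIA=PIN(.05) POUT(.10) ITERATE(20) CUT(.5).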

Step 1: The regression model
Slide 43

Step 2: Assumptions
Slide 44

Step 2 - Procedure: Cook's distance & standardized residuals
- Within the binary logistic regression dialog, under "Save": tick "Cook's" and "Standardized" (residuals).
- After the logistic regression has been run, two new variables appear at the end of the data set: COO_1 (Cook's distance) and ZRE_1 (standardized residuals of the dependent variable).
- Only cases that are not outliers should be included in the estimation: Data > Select Cases > select "If condition is satisfied" > If > enter the condition abs(ZRE_1) < 3.0 and COO_1 < 1.0 > Continue (see the syntax sketch below).
- Cases that do not meet this condition are excluded (a new variable filter_$ appears at the end of the data set, with 1 = selected, 0 = not selected).
- Re-run the binary logistic regression, but this time without saving Cook's distance and the standardized residuals again (untick the boxes).
- Only continue working with the outliers excluded if the classification accuracy improves by more than 2 percentage points.
Slide 45

Step 3: Output - Maximum chance
Slide 46
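The Select Cases step described above corresponds roughly to the following pasted syntax (a sketch; it assumes the saved variables are named COO_1 and ZRE_1, as described above):

USE ALL.
* Flag cases that meet the outlier criteria (1 = keep, 0 = exclude).
COMPUTE filter_$=(ABS(ZRE_1) < 3.0 AND COO_1 < 1.0).
FILTER BY filter_$.
EXECUTE.
* Re-run the logistic regression on the filtered cases; FILTER OFF restores all cases.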

Step 3: Output - Model goodness
Slide 47

Step 3: Output - Regression coefficients
Slide 48

Exercise
Perform a logistic regression analysis in SPSS:
Inv_P_2, Ver_P_2, Zuf_P_2 → Loy_P_1
Slide 49

Chair for Marketing and Retailing
5 Quantitative Research Technique I
5.1 Linear Regression
5.2 Logistic Regression
5.3 Moderated Regression Analysis

Moderation Model
[Path diagrams: perceived brand globalness (PBG) and consumer ethnocentrism (CE) predicting brand purchase likelihood, with the interaction PBG*CE marked "?"; path coefficients shown: .34***, .34***, .07 n.s.]
Slide 51

Objectives
- Understand the meaning of an interaction effect and how it differs from direct effects.
- State the circumstances under which moderation should be used.
- Learn to differentiate between different types of moderators.
- Carry out a moderated regression analysis in SPSS.
Slide 52

Moderated regression analysis
"A moderator variable has been defined as one which systematically modifies either the form and/or strength of the relationship between a predictor and a criterion variable." (Sharma/Durand/Gur-Arie 1981)
- The effect of a predictor variable (x) on a criterion (y) depends on a third variable (z), the moderator → interaction effect.
- Each moderator effect has to be grounded in theory.
- Predictor and moderator should be mean centered before MRA.
[Diagram: z moderating the path from x to y]
Slide 53

Moderators
(1) y = b0 + b1x
(2) y = b0 + b1x + b2z
(3) y = b0 + b1x + b2z + b3xz
[Diagrams: (1) x → y; (2) x and z → y; (3) x, z and x*z → y]
Slide 54
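A small algebraic step (not on the original slide) makes the role of b3 explicit: rearranging equation (3) as y = (b0 + b2z) + (b1 + b3z)x shows that the slope of x is b1 + b3z. The effect of x on y therefore changes by b3 for every one-unit change in the moderator z, and b1 is the effect of x when z = 0, which is why mean centering (making z = 0 the average moderator level) matters for interpretation.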

Quick facts
y = b0 + b1x + b2z + b3xz
- The interaction is carried by the xz term, the product of x and z.
- The b3 coefficient reflects the interaction between x and z only if the lower-order terms b1x and b2z are included in the equation! Leaving out these terms confounds the additive and multiplicative effects, producing misleading results.
- The coefficient b3 indicates the unit change in the effect of x as z changes.
- The coefficients b1 and b2 represent the effects of x and z, respectively, when the other variable is zero (conditional effect; meaningful zero levels through mean centering are therefore important).
- Each individual has a score on x and z. To form the xz term, multiply together the individual's scores on x and z.
Slide 55

Suggested MRA steps
Suggested procedure (a syntax sketch follows below):
1. Test assumptions
2. Mean centering (i.e. Mittelwertzentrieren)
3. Create the interaction term (multiply x and z)
4. Calculate the regression with one dependent variable and three independent variables (x, z, x*z)
5. Evaluate the model
6. Interpret the coefficients
Slide 56
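A minimal SPSS syntax sketch of steps 2-4, using z-standardization (which mean-centers and additionally divides by the standard deviation, in line with the centered/z-standardized covariates mentioned on Slide 20). The names trust, satisfaction and loyalty are again placeholders:

* Step 2: DESCRIPTIVES /SAVE adds the standardized variables Ztrust and Zsatisfaction.
DESCRIPTIVES VARIABLES=trust satisfaction /SAVE.
* Step 3: build the interaction term from the centered variables.
COMPUTE int_ts = Ztrust * Zsatisfaction.
EXECUTE.
* Step 4: regress the outcome on both main effects and the interaction term.
REGRESSION
  /STATISTICS COEFF R ANOVA
  /DEPENDENT loyalty
  /METHOD=ENTER Ztrust Zsatisfaction int_ts.

A significant coefficient for int_ts indicates moderation. If pure mean centering (without rescaling) is preferred, subtract the sample means with COMPUTE instead of using /SAVE.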

Reporting Example
Source: Steenkamp/Batra/Alden 2003, p. 60
Slide 57

Moderated regression example - Applied in SPSS
Sample of 500 consumers in Germany.
Do trust and satisfaction interact in explaining loyalty? Or: is the effect of satisfaction on loyalty dependent on trust?
Hypotheses:
H1a: The influence of satisfaction on loyalty will be moderated by trust.
or:
H1a: The influence of satisfaction on loyalty will be increased (positively moderated) by trust. (The more trust the consumer has, the stronger the effect of satisfaction on loyalty.)
Slide 58

Step 1: Mean-centered values!
Slide 59

Step 2: Model & Output
Slide 60

Exercise
Test the following moderation model:
[Moderation model involving Inv_P_2, Ver_P_2 and Loy_P_1]
Slide 61

Exercise for next week
Show what you learned today!
- Get to know the Henkel data.
- Derive one or several meaningful research questions.
- Test hypotheses (that fit the research questions) using multiple (linear) regression analysis, moderated regression analysis, and (binary) logistic regression analysis.
- Prepare a short presentation (max. 20 min., 15 slides); please use the presentation template on our homepage and don't forget to insert your group number, names and matriculation numbers!
- Hand over the printed slides (i.e. the handout) to the lecturer before class.
- Be prepared to present. Up to 3 points on top of the exam!
Slide 62