Kyle B. Murray and Gerald Häubl


RESEARCH ARTICLE

FREEDOM OF CHOICE, EASE OF USE, AND THE FORMATION OF INTERFACE PREFERENCES

Kyle B. Murray and Gerald Häubl
School of Business, University of Alberta, Edmonton, Alberta T6G 2R6 CANADA
{kyle.murray@ualberta.ca} {gerald.haeubl@ualberta.ca}

Appendix A

Screen Shots

Figure A1. Interface A

MIS Quarterly Vol. 35 No. 4 Appendices/December 2011

Figure A2. Interface B

Figure A3. Interface C

Appendix B

Post-Experiment Rating-Scale Measures

1. I liked the interface that I chose to use for the last trial. (1 = "Strongly Disagree" to 10 = "Strongly Agree")
2. I found Interface A easy to use. (1 = "Strongly Disagree" to 10 = "Strongly Agree")
3. I found Interface B easy to use. (1 = "Strongly Disagree" to 10 = "Strongly Agree")
4. I found Interface C easy to use. (1 = "Strongly Disagree" to 10 = "Strongly Agree")
5. How much experience do you have with the Internet? (1 = "No Experience" to 10 = "A Great Deal of Experience")
6. How old are you?
7. What is your gender?

Appendix C

Single-Item Measures

Although the use of a single-item measure raises the threat of a mono-operational bias (e.g., Cook and Campbell 1979), we chose to use single-item measures in this research for two primary reasons. First, it has been demonstrated that the use of multiple items to measure a single construct can actually aggravate respondents and undermine the reliability of measurement (Bergkvist and Rossiter 2007; Drolet and Morrison 2001; Rossiter 2002). Given that our experiment already required a substantial amount of effort on the part of participants, who had completed 10 consecutive information search tasks, we wanted to minimize the effort required to respond to our follow-up questions. In this way, our work reflects a similar approach taken in other research that has also used a single-item measure of perceived ease of use (e.g., Karahanna and Straub 1999; Murray and Häubl 2007). Second, because most prior research on perceived ease of use did employ multi-item scales, the measurement properties (e.g., validity) of these sets of items are well established. The typical finding in prior work that has used multi-item scales to measure perceived ease of use is that the items are very highly correlated and, thus, close to redundant.
For instance, Davis (1989), Gefen and Straub (2000), and Venkatesh and Davis (1996) all report very high reliability for the multi-item scales they used (values of Cronbach's α well above .9). Based on these results, we selected the shortest and most intuitive item from the set used in previous studies.

Appendix D

Strength of Preference Data Analysis

Given that our key dependent measure requires participants to choose one interface or the other, it is possible that the choice shares do not represent strong user preferences; that is, people may be choosing because they are required to select one of the two interfaces, but they may not have any real preference between the two. To examine this possibility, and to further test H2 and H4, we looked at the strength of users' preferences for the interface that they had selected. The mean extent-of-preference rating (scale: 1 = "weakly prefer" to 10 = "strongly prefer") was 7.7, suggesting that, on average, participants had a strong preference for the interface that they selected. In addition, we found no differences in strength of preference between our experimental conditions (M_FREE = 7.56; M_CONSTRAINED = 7.81; F = .428; p = .515; η² = 0.006). We also did not find any difference based on whether the users chose the incumbent or the competitor (M_INCUMBENT = 7.90; M_COMPETITOR = 7.33; F = 2.103; p = .151; η² = 0.027).
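The condition comparisons above use one-way between-subjects ANOVAs with η² as the effect size, i.e., the between-group sum of squares over the total sum of squares. The following is a minimal sketch of that computation, not the authors' code; the ratings below are hypothetical values, since the raw data are not reproduced in the appendix.

```python
def one_way_anova(*groups):
    """Return (F, eta_squared) for a one-way between-subjects ANOVA."""
    means = [sum(g) / len(g) for g in groups]
    all_x = [x for g in groups for x in g]
    grand_mean = sum(all_x) / len(all_x)
    # Between-group sum of squares: group sizes times squared mean deviations
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group sum of squares: squared deviations from each group's mean
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_squared = ss_between / (ss_between + ss_within)
    return f_stat, eta_squared

# Hypothetical extent-of-preference ratings (1-10 scale), one list per condition
free = [8, 7, 9, 8, 8, 7, 9, 8]
constrained = [8, 9, 7, 8, 6, 9, 8, 7]
f_stat, eta_squared = one_way_anova(free, constrained)
```

With real data, a small F and a near-zero η², as reported above, would indicate no meaningful difference in preference strength between conditions.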

Appendix E

Possible Alternative Explanations

Usage Errors

One candidate alternative explanation is that there was a difference in the number of usage errors made between the two conditions. Previous research has demonstrated that errors made while learning to use an interface can have a strong negative effect on users' ultimate interface preferences (e.g., Murray and Häubl 2007). However, such errors were not a factor in this research. First, we pretested the interfaces (as reported above) and found no differences between them in terms of the a priori probability of errors occurring. Second, the data from the main experiment are inconsistent with this explanation. The mean number of errors made across all of the incumbent trials was 1.0 (median = 0; mode = 0; minimum = 0; maximum = 12). Given that participants completed a total of 27 navigation steps with the incumbent interface (i.e., 9 trials with 3 steps each), the fact that on average only one misstep (or error) was made suggests that learning proceeded in a relatively error-free manner. As expected, we find that the number of errors made had no effect on perceived ease of use (p = .731) or on users' interface choice (p = .274). Similarly, whether or not participants made an error at all had no effect on perceived ease of use (p = .229) or on users' eventual interface choice (p = .851).

Chronic Individual Differences

In addition, because participants were randomly assigned to the two experimental conditions, we can rule out the possibility that chronic individual differences might have driven the choice results. Moreover, given that we collected data on age, gender, and Internet experience, we can directly test the effect of these variables on preference. We find that there was no difference between the two experimental conditions on any of these participant characteristics (all p values > 0.05), and none of these three variables had any effect on eventual preference for the incumbent interface (all p values > 0.30).
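The usage-error summary above (mean = 1.0; median = 0; mode = 0; maximum = 12) is the kind of per-participant descriptive profile that Python's standard library computes directly. The counts below are hypothetical, constructed only to match those summary statistics; they are not the study's raw data.

```python
import statistics

# Hypothetical per-participant error counts across the nine incumbent trials,
# constructed to match the reported summary (mean 1.0, median 0, mode 0, max 12)
errors = [0] * 14 + [12, 3, 2, 1, 1, 1]

summary = {
    "mean": statistics.mean(errors),      # average errors per participant
    "median": statistics.median(errors),  # middle value of the sorted counts
    "mode": statistics.mode(errors),      # most frequent count
    "min": min(errors),
    "max": max(errors),
}
```

A mean of 1.0 against 27 navigation steps, with a median and mode of 0, is what supports the claim that learning was largely error-free.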
Ease of Switching

Another potential alternative explanation for the effect of freedom of choice during the incumbent trials on ultimate interface preference is that those participants who were not able to choose among interfaces during the initial trials somehow found it easier to switch from the incumbent to the competitor. This possibility is based on prior research demonstrating that, as the ease of transfer from an incumbent interface to a competitor interface increases, user preference for the competitor tends to increase (Johnson et al. 2003; Murray and Häubl 2002).¹ Following Murray and Häubl (2007), we measure the ease of transfer as the relative task completion time (RTCT), which captures the extent of skill transfer from an incumbent to a competitor at the level of the individual user. Specifically, RTCT is calculated by subtracting the time it took a person to perform a task using the competitor interface for the first time (T_c) from the time it took him or her to perform the same task when he or she last used the incumbent interface (T_li); that is, RTCT = T_li - T_c. Therefore, a negative RTCT indicates that the user was slower on the first trial with the competitor than on the last trial with the incumbent. In the present experiment, there was no significant difference on this measure between participants in the free condition (mean RTCT = -5.9) and those in the constrained condition (mean RTCT = -13.6; ANOVA: F(1,77) = 2.297, p = 0.134, η² = 0.029). Nevertheless, users who had freedom of choice during the initial trials developed a much stronger preference for the incumbent than those who did not.
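The RTCT measure is a simple per-participant difference score, RTCT = T_li - T_c. A minimal sketch follows; the function name and the timings are ours, for illustration only.

```python
def relative_task_completion_time(last_incumbent_time, first_competitor_time):
    """RTCT = T_li - T_c, both in seconds for the same task.

    Negative values mean the participant was slower on the first competitor
    trial than on the last incumbent trial (i.e., less skill transferred).
    """
    return last_incumbent_time - first_competitor_time

# Hypothetical participant: 10 s on the last incumbent trial,
# 18 s on the first trial with the competitor
rtct = relative_task_completion_time(10.0, 18.0)  # -8.0 seconds
```

Averaging this score within each condition yields the mean RTCT values compared in the ANOVA above.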
Initial Subjective Preference

Yet another possible alternative explanation for the effect of freedom of choice on subsequent interface preference is that participants in the free condition may simply have selected the subjectively preferred of the two initially available interfaces (i.e., the one that was better for them personally) and, therefore, use of their chosen incumbent may have resulted in increased task performance compared to participants in the constrained condition, who were restricted to using a single incumbent. If this had been the case, participants in the free condition would, on average, have been comparing a subjectively better incumbent to the competitor than those in the constrained condition, and this, in turn, could have resulted in the greater eventual preference for the incumbent interface. However, the data clearly do not support this account. The mean task completion times on the ninth trial for the two experimental conditions are statistically indistinguishable (ANOVA: F(1,77) = 2.629, p = 0.109, η² = 0.033), although participants who were free to choose one of the two initially available interfaces as their incumbent actually took slightly longer (mean = 11.4 seconds) than did those in the constrained condition (mean = 9.8 seconds), which is directionally inconsistent with a possible selection-based explanation of the effect of freedom of choice on interface preference.

¹ The ease of transfer to a new interface has been shown to be a function of prior experience, such that the ease of transfer decreases as the amount of experience (and consequently also performance) with an incumbent interface increases.

Cognitive Dissonance

A fifth alternative explanation is that those users who, after trying the competitor, chose to stay with the incumbent interface may have reported higher perceived ease of use because they were experiencing cognitive dissonance: a form of discomfort or tension caused by simultaneously held contradictory ideas, which people are motivated to reduce by modifying their beliefs (Brehm 1956; Festinger 1957). Given that the number of people who preferred the incumbent was significantly higher among those who were initially free to choose, these individuals could also have reported a higher perceived ease of use of the incumbent as a result of cognitive dissonance (rather than the decrease in psychological reactance that we have hypothesized). Our results, however, are completely inconsistent with this account.
The effects we report are driven by the difference between free and constrained participants among those who chose the competitor (see Figure 6). Specifically, although the difference between the free and constrained conditions among those who chose the incumbent is not significant (F(1,47) = 1.219, p = 0.275, η² = 0.026), the difference between these two conditions among those who chose the competitor is significant (F(1,29) = 4.996, p = 0.034, η² = 0.151). Clearly, this does not support the cognitive dissonance explanation, which predicts that, in the constrained condition, perceived ease of use of the incumbent will be higher among those who chose it. Our results are, however, fully consistent with the proposed account based on the theory of psychological reactance: constrained users chose the competitor because of the lower perceived ease of use of the incumbent. The critical effect here is the significant difference in perceived ease of use of the incumbent interface (reported above) between those who were initially free to choose and those who were constrained (p = .003).

Opportunity to Compare Alternatives

The results of the test of H7 (illustrated in Figure 6) also rule out the possibility that our findings are driven simply by users' opportunity to compare the incumbent interface to alternative interfaces. Although it is true that the opportunity to compare Interfaces A and B exists only in the free condition, if this were driving the difference in perceived ease of use, then that difference should be significant regardless of which interface users ultimately chose. That is, if the mere opportunity to compare caused a difference in ratings of perceived ease of use, that difference should be observed between all users in the free versus constrained conditions. Clearly, the results of our experiment are inconsistent with this explanation.
A Priori Familiarity

Yet another possibility arises because Interface C, which users navigate via hyperlinks, could be seen as the a priori more familiar interface: one could argue that when users chose A or B over C, this preference might have been driven by the relative novelty of the navigation features of A (pull-down menus) and B (radio buttons) as compared to C (hyperlinks). This novelty explanation would predict a significant difference in perceived ease of use between those who chose the incumbent and those who chose the competitor in both the free-choice and the constrained-choice conditions. However, as reported above, there is no difference in perceived ease of use between conditions when participants choose the incumbent; the difference is significant only for participants who choose the competitor. Therefore, our results do not support the novelty explanation, but they are entirely consistent with psychological reactance (see Figure 6). That is, as predicted by the theory of psychological reactance (H7), when users choose the incumbent, the difference in perceived ease of use is small regardless of whether their initial experiences with the incumbent were free or constrained. However, those users who choose the competitor perceive the incumbent to be substantially less easy to use when they were constrained (as compared to free to choose) during the incumbent trials.

Ordering of Information

Finally, the results reported in this paper cannot be accounted for by the fact that our design introduced differences in the order in which some information (i.e., navigation categories and subcategories) was presented. The results of the pretest indicated that Interfaces A and B, which had different navigation features and information orders, were equivalent to each other and inferior to C. Yet we find substantial differences in the formation of interface preferences based on our experimental manipulations. These results demonstrate that, when initial learning is constrained, users are substantially more likely to prefer the incumbent interface over a subsequently introduced competitor.

References

Bergkvist, L., and Rossiter, J. R. 2007. "The Predictive Validity of Multiple-Item Versus Single-Item Measures of the Same Constructs," Journal of Marketing Research (44:2), pp. 175-184.

Brehm, J. W. 1956. "Postdecision Changes in the Desirability of Alternatives," The Journal of Abnormal and Social Psychology (52:3), pp. 384-389.

Cook, T. D., and Campbell, D. T. 1979. Quasi-Experimentation: Design and Analysis Issues for Field Settings, Boston: Houghton Mifflin.

Davis, F. D. 1989. "Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology," MIS Quarterly (13:3), pp. 319-340.

Drolet, A. L., and Morrison, D. G. 2001. "Do We Really Need Multiple-Item Measures in Service Research?," Journal of Service Research (3:3), pp. 196-204.

Festinger, L. 1962. A Theory of Cognitive Dissonance, Stanford, CA: Stanford University Press.

Gefen, D., and Straub, D. W. 2000. "The Relative Importance of Perceived Ease of Use in IS Adoption: A Study of E-Commerce Adoption," Journal of the Association for Information Systems (1:8).

Johnson, E. J., Bellman, S., and Lohse, G. L. 2003. "Cognitive Lock-In and the Power Law of Practice," Journal of Marketing (67:2), pp. 62-75.

Karahanna, E., and Straub, D. W. 1999. "The Psychological Origins of Perceived Usefulness and Ease-of-Use," Information & Management (35), pp. 237-250.

Murray, K. B., and Häubl, G. 2007. "Explaining Cognitive Lock-In: The Role of Skill-Based Habits of Use in Consumer Choice," Journal of Consumer Research (34:1), pp. 77-88.

Rossiter, J. R. 2002. "The C-OAR-SE Procedure for Scale Development in Marketing," International Journal of Research in Marketing (19), pp. 305-335.

Venkatesh, V., and Davis, F. D. 1996. "A Model of the Antecedents of Perceived Ease of Use: Development and Test," Decision Sciences (27:3), pp. 451-481.