The Way to Choose: How Does Perceived Knowledge Flow


Iansã Melo Ferreira
February 1, 2013

1 Third Year Seminars

This is a new version of the work I presented last quarter. In response to some of the comments I received, the literature review was excluded almost completely, and the introduction was rewritten to present the idea in a more direct and clean way. The experiment itself has been changed in several respects, and all the procedures for the upcoming lab sessions have been pinned down. I hope you'll like it and, as always, all comments are greatly appreciated. Iansã

2 Introduction

The choices we make are not isolated events. In general, when making a choice, agents draw comparisons between the options at hand and their own past experiences or previously acquired knowledge (Moore, 1999; Fox and Tversky, 1995). When faced with a string of similar decisions (employee hiring, food or beverage tasting, speed-dating decisions, etc.), individuals tend to carry over to the next decision a baseline judgment generated from the qualifications of previous ones (Bhargava and Fisman, 2012). This type of comparative judgment also occurs when agents evaluate policies, programs, portfolios, and other choice objects for which the decision maker's understanding of procedures, pros, and cons is likely to affect his choices. Once an individual is presented with a highly desirable alternative, he is likely to update his impressions and judge subsequent options more harshly. Similarly, when presented with a highly undesirable option, individuals are more likely to recalibrate and accept not-so-good alternatives. Indications of this comparison-driven behavior have been found by several authors in different situations and experiments. According to Moore (1999), people do not have pre-established global preferences; instead, they have pre-established mental procedures that allow them to generate preference orderings when called for. Therefore, the parameters one

would consider when making a choice wouldn't necessarily be fixed. In fact, the effect of a comparison parameter on one's evaluations should decrease as decisions scatter away from it. This pattern has also been observed in several experiments regarding perceived knowledge. According to the Comparative Ignorance Hypothesis (Fox and Tversky, 1995), people do not classify their knowledge in absolute terms; instead, they tend to draw comparisons based on previous experiences or on the knowledge of others. In that context, Fox and Weber (2002) showed that when previously presented with a highly familiar event, people are likely to underestimate their own knowledge regarding an average-familiarity event, and when previously presented with an unfamiliar event, they tend to overestimate their knowledge about the average-familiarity one. Expanding on that, Ferreira and Resende (2011) presented evidence that when previously shown two events, one of high and one of low familiarity, individuals tend to put a greater weight on the first one presented, yielding a similar, albeit smaller, bias in the judgment of the last event than the one observed by Fox and Weber. This result raises the question of what reduces the comparative bias: is it the increase in the distance between the first and last events, or the opposing bias introduced by the intermediate question? In this paper I attempt to better understand how the perception of one's own knowledge can be affected by framing and contrast effects. Building on the results of Ferreira and Resende (2011), this paper presents an experiment that attempts to shed some light on the aforementioned question by testing specifically for the effects of distance and opposing biases.
3 Experimental Design and Hypotheses

In order to evaluate how the comparative knowledge flow is affected by the distance between events and by the direction of the events' bias, two questionnaire types were formulated, each consisting of the ordered presentation/evaluation of two or four events, according to one of three possible treatments. Each treatment consists of a different questionnaire subtype imposing a specific ordering on the same pool of events, as shown below:

T1: Low-familiarity; Average-familiarity1
T2: Low-familiarity; Average-familiarity2; Average-familiarity3; Average-familiarity1
T3: Low-familiarity; Average-familiarity2; High-familiarity; Average-familiarity1

Each participant will receive one unique treatment for each questionnaire. The events in each questionnaire type were chosen to maximize the match between the desired familiarity level (intended by the experimenter) and

the participants' perceived familiarity level regarding each event. Recognizing that this perception is unlikely to yield a perfect match, possibly jeopardizing the results, all questionnaires are followed by a knowledge-sorting sheet, on which participants will be asked to state how knowledgeable they believe themselves to be about each of the events they faced, on a scale from 1 (no knowledge) to 7 (perfect knowledge) (de Lara Resende and Wu, 2010; Ferreira and Resende, 2011). This scale allows us to recognize and exclude individuals whose knowledge-familiarity ordering conflicts with our assumption. Also, recognizing the role of risk preferences in the choices made by individuals throughout the experiment, and the importance of controlling for these preferences, at the beginning of each session participants will be presented with a risk-preference-eliciting question, worded exactly as follows:

"Suppose you were endowed with 100 lab tokens and asked what portion of it (between 0 and 100, inclusive) you wish to invest in a risky investment. If the investment succeeds it will pay 2.5 times as many tokens as invested; otherwise you'll lose your investment. Each outcome happens with a 50% probability, and what you don't invest is yours to keep. How many tokens would you invest?"

This method, known as the Gneezy and Potters method (Gneezy and Potters, 1997; Charness et al., 2012), was chosen for its simplicity in terms of participant understanding and lab implementation. Although it does not allow for a distinction between risk-neutral and risk-seeking agents (see note 1), it provides a scale of risk aversion, which can then be controlled for in our data analysis.

3.1 Questionnaire 1: Weather Lotteries

In this questionnaire, based on Fox and Tversky (1995) (see also Fox and Weber, 2002; and Ferreira and Resende, 2011), each participant is exposed to one event at a time, according to the treatment received.
The events are (see note 2):

Low-familiarity event: mid-March temperature in the city of Tarhuna, Libya;
Average-familiarity event 1: mid-March temperature in the city of Douglas, Arizona;
Average-familiarity event 2: mid-March temperature in the city of Fort Worth, Texas;
Average-familiarity event 3: mid-March temperature in the city of Cleveland, Mississippi;
High-familiarity event: mid-March temperature in the city of Sacramento, California.

Note 1: In this task, any risk-neutral or risk-seeking agent should opt to invest all the tokens, since the expected gains always exceed the initial endowment.
Note 2: The events may be modified if the experiment happens too close to mid-March.

For each event presented, the participant will be asked for the maximum he/she would be willing to pay for a lottery ticket paying US$100 if the highest afternoon temperature in the corresponding city was greater than or equal to [alternatively, lower than] 55°F on the following March 15th. The exact wording is below:

"Suppose you were offered a lottery ticket which would pay US$100 if the highest afternoon temperature in the city of [CITY, LOCATION] was at least 55F, on the next March 15th. How much would you be willing to pay for that lottery ticket? I'd be willing to pay US$____"

"Suppose you were offered a lottery ticket which would pay US$100 if the highest afternoon temperature in the city of [CITY, LOCATION] was less than 55F, on the next March 15th. How much would you be willing to pay for that lottery ticket? I'd be willing to pay US$____"

Participants will be informed that there are no wrong answers, that they should think carefully about their true reservation prices for each lottery ticket, and that they should price complementary lotteries independently, not as if they could acquire both to ensure winning. They will also be warned not to look back or ahead in order to answer a question, but to take the questions one at a time, as presented in each questionnaire. In order to avoid distortions regarding ordering and beliefs, some balancing efforts were made: for each event, half of the questionnaires first offer the lottery paying if the temperature is at least 55°F and then the one paying if it is less than 55°F, while the other half face the opposite ordering. All the analysis is to be performed on the sum of the prices reported for each event on the two complementary lotteries (see note 3). By eliciting agents' willingness to pay for a given set of lotteries, we will try to understand how the flow of perceived knowledge influences choices.
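The per-event totals just described (the sum of the two complementary lottery prices) are straightforward to tabulate; below is a minimal sketch in Python with pandas, using made-up responses and hypothetical column names, not actual data:

```python
import pandas as pd

# Made-up responses: one row per participant x event x lottery direction.
prices = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "event": ["Tarhuna", "Tarhuna", "Tarhuna", "Tarhuna"],
    "direction": ["at_least_55F", "less_than_55F", "at_least_55F", "less_than_55F"],
    "wtp": [20.0, 35.0, 10.0, 15.0],
})

# Unit of analysis: total willingness to pay per participant and event,
# i.e., the sum of the prices for the two complementary lotteries.
totals = (prices.groupby(["participant", "event"])["wtp"]
          .sum()
          .reset_index(name="total_wtp"))
print(totals)
```

A lower total for an event would then be read as weaker perceived knowledge about it.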
According to previous literature, the less confident an individual feels about an event, the less he/she will be willing to bet on it; thus the total any one individual is willing to pay for a given event (the sum of the complementary lotteries) should shrink as his/her perceived knowledge regarding that event grows fainter. Observe that all treatments end on the same question (Average-familiarity1), so we expect the elicited price for this event to indicate how confident agents feel about it by the time they reach it.

Note 3: This procedure is common in the previous literature (see, for example, Fox and Tversky, 1995; Fox and Weber, 2002; Ferreira and Resende, 2011), and aims to avoid distortions generated by common beliefs regarding any of the cities' temperatures used in the questionnaires.

If distance matters in determining how the first event affects the perception of the last one, we should expect agents to price the last lottery (Average-familiarity1) higher in treatment 1 than in treatment 2, since the addition of the two intermediate events should decrease the upward bias promoted by the low-familiarity event. Also, if the direction of the bias generated by the intermediate events matters, we should expect a higher price for the last lottery in treatment 2 than in treatment 3, where a high-familiarity event takes place and should diminish the initial bias.

How to test the hypotheses

There are several possible procedures for analyzing the data and investigating the validity of our hypotheses. First, we should construct a table of mean and median prices, reporting the corresponding standard deviations, and analyze the differences across treatments. The usual mean/standard-deviation table provides us with a t-test, which needs a reasonable number of observations in order to attain internal validity [NEED TO COMMENT ON EXTERNAL VALIDITY ONCE RESULTS COME UP]. The elicited medians should provide interesting support for the findings. Another possible procedure is the Wilcoxon-Mann-Whitney rank-sum test, which evaluates whether the data gathered for a given event under different treatments come from the same distribution.
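The rank-sum comparison just mentioned is a one-liner in most statistics packages; a minimal sketch in Python with scipy, using made-up price vectors rather than real data:

```python
from scipy.stats import mannwhitneyu

# Made-up total WTP for the last event (Average-familiarity1) in two treatments.
treatment1 = [55, 60, 48, 70, 65, 52, 58]
treatment2 = [40, 45, 38, 50, 42, 47, 44]

# Two-sided Wilcoxon-Mann-Whitney rank-sum test: do the two samples
# come from the same distribution?
stat, pvalue = mannwhitneyu(treatment1, treatment2, alternative="two-sided")
print(f"U = {stat}, p = {pvalue:.4f}")
```

A small p-value would indicate that the treatment shifted the price distribution for that event.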
It is important to observe, however, that neither of these tests allows for a clear control of risk preferences, so a heteroskedasticity-robust OLS regression should also be performed for each of our hypotheses, using the following models:

H1: Distance matters
PRICE_A1 = β0 + β1 PRICE_Lot1 + β2 DIST + β3 RA + ε

H2: Intermediate events matter
PRICE_A1 = β0 + β1 PRICE_Lot1 + β2 D_High + β3 RA + ε

where PRICE_A1 is the price elicited for the last lottery (Average-familiarity1); PRICE_Lot1 is the price elicited for the first lottery; DIST is a dummy for having two questions between the first and the last (treatment 2); D_High is a dummy for also having a high-familiarity event between the first and last lotteries (treatment 3); and RA is the risk-aversion measure given by the amount (from 0 to 100) that the participant chose to invest in the Gneezy and Potters risk task.

For both models, β2 < 0 should be interpreted as corroboration of the corresponding hypothesis.

3.2 Questionnaire 2: Capitals Lottery

Using the same pool of subjects and the same experimental setting (see note 4), this second questionnaire uses the following events:

Low-familiarity event: the capital of Estonia;
Average-familiarity event 1: the capital of Greece;
Average-familiarity event 2: the capital of Egypt;
Average-familiarity event 3: the capital of Brazil;
High-familiarity event: the capital of England.

For each of the events in this questionnaire, participants are asked to state what city they believe to be the capital of the corresponding country, and then whether they would prefer to be paid US$10 if their answer is correct, or to have a chance at the same US$10 contingent on a coin flip. The exact wording is below:

"Q1 - What city do you believe to be the capital of [COUNTRY]? I believe it is ____"

"Q2 - Would you prefer to receive US$10 if your answer is correct, or to receive the same US$10 upon a coin flip? (Circle one) [MY ANSWER] [COIN FLIP]"

For this second questionnaire, should distance be influential, the proportion of participants willing to bet on their own answers for the last lottery should be higher in the first treatment than in the second. And if the intermediate events matter, fewer people should be willing to bet on their own answers about the last event in the third treatment than in the second. In order to incentivize reliable answers, participants will be paid according to their answers: a prize of $10 or $0 will be awarded according to their answer at a randomly chosen event, i.e.,

Note 4: Randomly assigned treatments, one per participant, using the same pool of events and differing only in the order of presentation; a knowledge-sorting sheet is also included at the end of each questionnaire for matching purposes.

according to their guessed capital or the coin flip, depending on which they chose to bet on. A random draw from four sets of four cards (Ace, 2, 3, and 4) will determine which event determines payment. For treatment 1, an odd card (Ace or 3) will stand for the first question and an even card (2 or 4) for the second one.

How to test the hypotheses

To analyze the data for this questionnaire, we should construct a table comparing confidence levels across treatments, as measured by the proportion of agents who prefer to bet on their own answers in each treatment. Again, such a procedure calls for statistical corroboration; to that end, we should estimate binary probit regressions over the following models:

H1: Distance matters
OwnAnsw_A1 = β0 + β1 OwnAnsw_Lot1 + β2 DIST + β3 RA + ε

H2: Intermediate events matter
OwnAnsw_A1 = β0 + β1 OwnAnsw_Lot1 + β2 D_High + β3 RA + ε

where OwnAnsw_A1 is a dummy for choosing one's own answer over the random draw for the last lottery (Average-familiarity1); OwnAnsw_Lot1 is a dummy for choosing one's own answer over the random draw for the first lottery (Low-familiarity); DIST is a dummy for having two questions between the first and the last (treatment 2); D_High is a dummy for having a high-familiarity event between the first and the last ones (treatment 3); and RA is the risk-aversion measure given by the amount (from 0 to 100) that the participant chose to invest in the Gneezy and Potters risk task.

4 Experimental Procedures

Running the Experiment

The experimental sessions are to be performed in 3 distinct phases, in which all participants respond to the risk-elicitation task, the weather-lotteries questionnaire, and the capitals-lotteries questionnaire. Once all participants have arrived and settled, each will be given an experimental ID number, which they will be

asked to fill in at each step of the experiment, allowing the experimenter to match their elicited risk preferences with their choice behavior on each questionnaire.

Phase 1: Risk-Preference Elicitation

Procedures: the payment procedures and values for this task will be explained to participants before they perform it. Then the questionnaires will be distributed, and all participants reminded to fill in their experimental ID. Once all participants have answered the risk question and the questionnaires have been collected, a coin will be flipped to decide the investment's outcome (H = good; T = bad).

Payments: ten percent of the participants will be randomly chosen to be paid for this task, by drawing their ID numbers from a bag. Prizes will be paid in cash at the end of each session, together with the show-up fee and other earnings when applicable. Each token is worth US$0.05, so payments for this task range from $0 to $12.50. Considering that the average investment should be about 60 tokens, the expected earnings from this task for any paid participant are E[earnings] = (40 + 60 × 2.5 × 1/2) × 0.05 = $5.75; since only 10% of participants will be paid, the average payment should be approximately $0.60 per participant.

Phase 2: Weather Lotteries

Procedures: the questionnaires are to be distributed randomly with respect to treatment, and the instructions read and clarified. Participants should be reminded to write their ID number at the top of the questionnaire. When no questions regarding procedures or protocol remain, participants will be told to start and, once finished, to put the questionnaire aside and wait for further instructions. It is important to make clear to all participants that questions should concern procedures and protocols, NOT the research idea or the expectations (theirs or the experimenter's) about results.
Once the questionnaire has been answered by all participants, the questionnaires will be collected and the third phase of the experiment will take place.

Payments: there are no specific payments for this questionnaire.

Phase 3: Capitals Lotteries

Procedures: again, questionnaires are to be distributed randomly with respect to treatment, the instructions read and clarified (including the payment rules), and participants reminded to write their ID number at the top of the questionnaire. Once all procedures have been clarified, participants will be told to start and, once finished, to put the questionnaire aside and wait for further instructions. Once everyone has finished, the questionnaires will be collected.

Payments: all participants will be paid according to their answers to this questionnaire. Once the questionnaires have been collected, four sets of four cards (Ace, 2, 3, and 4) will be shown to the participants and one card will be randomly drawn to determine which question determines payments. If the Ace is drawn, participants are paid according to the first question; if the 2 is drawn, the second question determines payoffs, and so on. For participants holding a treatment 1 questionnaire, an odd card (Ace or 3) stands for the first question, while an even card (2 or 4) stands for the second one. With the events to be paid determined, a coin will be flipped (H = $10, T = $0) and the capitals prize will be added, according to the questionnaire answers, to the (opaque) envelopes identified by the experimental IDs. The expected earnings from this task are given by

E[earnings] = 10 × [1/4 Prob(win|Q1) + 1/4 Prob(win|Q2) + 1/4 Prob(win|Q3) + 1/4 Prob(win|Q4)],

or, for treatment 1,

E[earnings] = 10 × [1/2 Prob(win|Q1) + 1/2 Prob(win|Q2)].

The probability of winning on any given question is

P(winning|Ti, Qj) = 1/2 P(choose CoinFlip) + P(choose OwnAnswer) × P(being right|chose OwnAnswer).

Considering that, on average, people who choose to bet on their own answers will be right 2/3 of the time, and that the probability of choosing to bet on one's own answer varies with question and treatment, we estimate the total probability of winning at 0.57, placing the average earnings for this task at around US$6.00 per participant. It is also important to point out that the ordering of phases 2 and 3 (the weather and capitals questionnaires) will be balanced across sessions. Once all phases of a given session are completed, students will be privately paid and dismissed. The overall expected payment is given by the show-up fee of $5.00, plus the expected earnings from the risk-elicitation task ($0.60), plus the capitals-lotteries earnings ($6.00), adding up to a total of US$11.60 per participant. With a total of 160 participants (10 in the pilot and 150 in the actual experiment), the budget for this experiment foresees a total cost of US$1,856.00.
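The payment arithmetic above can be double-checked in a few lines; the sketch below (Python) simply re-derives the per-participant figures, taking the paper's estimated 0.57 winning probability and its rounding choices as given:

```python
# Risk task (Gneezy-Potters): average 60-token investment at $0.05 per token.
# Expected payout for a paid participant: kept tokens plus a 50% chance of 2.5x.
risk_expected_usd = ((100 - 60) + 60 * 2.5 * 0.5) * 0.05  # $5.75
per_participant_risk = 0.60  # only 10% are paid; 0.10 * 5.75, rounded to $0.60

# Capitals lottery: $10 prize times the estimated 0.57 winning probability,
# rounded to $6.00 in the text.
capitals_expected = 6.00

# Per-participant total and overall budget for 160 participants.
show_up_fee = 5.00
total_per_participant = show_up_fee + per_participant_risk + capitals_expected
budget = 160 * total_per_participant
print(risk_expected_usd, total_per_participant, budget)
```

This reproduces the $11.60 per-participant expectation and the US$1,856.00 budget.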
Pilot

Before the actual experiment takes place, a pilot will be run with a group of 10 students recruited through the department's random recruiting system. According to the results of the pilot, changes in the procedures, particularly in the instructions and the chosen events, can be made to improve understanding and enhance the chances of obtaining useful results.

Experiment Recruiting and Payments

One hundred and fifty participants will be recruited for the experiment, which will take place over the course of 2 days, in a maximum of 4 sessions. Upon arrival, each participant is immediately entitled to a $5 show-up fee, which he/she will receive at the end, together with any extra earnings from the capitals lottery and the risk-elicitation task, when applicable. Payments will be made by handing out opaque envelopes identified only by the experimental IDs and

containing the amount of cash corresponding to the participant's show-up fee ($5), plus his/her earnings from the capitals lottery (either $10 or zero) and from the risk-elicitation task (up to $12.50), when applicable.

References

[1] Bhargava, S. & Fisman, R. (2012). Contrast effects in sequential decisions: Evidence from speed dating. Forthcoming.
[2] Charness, G., Gneezy, U. & Imas, A. (2012). Experimental methods: Eliciting risk preferences. Journal of Economic Behavior and Organization. http://dx.doi.org/10.1016/j.jebo.2012.12.023
[3] Ferreira, I. M. & Resende, J. G. L. (2011). Escolhas e ambiguidades: Um estudo sobre o conhecimento comparativo [Choices and ambiguities: A study on comparative knowledge]. Revista Brasileira de Economia, 65(3): 253-266.
[4] Fox, C. R. & Tversky, A. (1995). Ambiguity aversion and comparative ignorance. Quarterly Journal of Economics, 110: 585-603.
[5] Fox, C. R. & Weber, M. (2002). Ambiguity aversion, comparative ignorance and decision context. Organizational Behavior and Human Decision Processes, 88: 476-498.
[6] Gneezy, U. & Potters, J. (1997). An experiment on risk taking and evaluation periods. Quarterly Journal of Economics, 112(2): 631-645.
[7] Moore, D. A. (1999). Order effects in preference judgments: Evidence for context dependence in the generation of preferences. Organizational Behavior and Human Decision Processes, 78(2): 146-165.
[8] de Lara Resende, J. & Wu, G. (2010). Competence effects for choices involving gains and losses. Journal of Risk and Uncertainty, 40: 109-132.