Your Loss Is My Gain: A Recruitment Experiment with Framed Incentives


Jonathan de Quidt

First version: November 2013. This version: September 2014.

JOB MARKET PAPER

Latest version available here

Abstract

Empirically, contracts that penalize failure elicit greater effort than equivalent contracts framed as rewarding success, consistent with loss aversion. Loss aversion also predicts that workers will demand higher wages to accept penalty contracts, a plausible explanation for why such contracts are rare. I recruited data entry workers under framed incentive contracts to test this prediction. Penalty framing increased performance by 0.2 standard deviations, consistent with the existing literature, and surprisingly I find no evidence of selection effects. More surprisingly, penalty framing increased the contract acceptance rate by 25 percent. Follow-up experiments rule out a number of explanations, and support a salience mechanism whereby workers' valuations are influenced by the high base pay in the penalty contract. The results have implications for firms and economic theory, but the question of why penalties are rare remains a puzzle, particularly as the effect seems quite persistent.

Keywords: loss aversion; reference points; framing; selection; salience; Mechanical Turk
JEL Classification: D03, J41, D86

I thank STICERD for financial support, and many people for helpful discussions, particularly Philippe Aghion, Oriana Bandiera, Tim Besley, Gharad Bryan, Tom Cunningham, Ernesto Dal Bó, Erik Eyster, Greg Fischer, Maitreesh Ghatak, Dean Karlan, Matthew Levy, George Loewenstein, Rocco Macchiavello and Torsten Persson, and seminar participants at the LSE and WZB Berlin, IEA World Congress 2014, FUR 2014 and EEA/ESEM. I thank Gabriele Paolacci, Puja Singhal and Kelly Zhang for help in setting up the experiment. An earlier draft was circulated under the title "Recruiting Workers Under Framed Incentives: An Online Labor Market Experiment".

LSE and IIES, Stockholm University. jonathan.dequidt@iies.su.se.

Consider two job contracts, the first of which pays a base wage of $100, plus a bonus of $100 if a performance target is reached, while the second pays a base wage of $200, minus a penalty of $100 if the target is not reached. Rational agents will behave identically under either one. However, a large body of empirical evidence suggests that behavior does respond to framing manipulations. In particular, multiple lab and field studies find that workers exert higher effort under penalty contracts than under equivalent bonus contracts. The leading explanation for these findings is loss aversion around a reference point (Kahneman and Tversky, 1979), where the reference point is influenced by framing. Losses loom larger than gains, so people work harder to avoid a penalty than to achieve a bonus, just as they demand more to give up an endowment than they will pay to gain it.

When would workers be willing to accept such contracts? This is an important question for three reasons. First, firms need to understand the participation constraint in order to be able to use penalty incentives effectively. Second, while we know a lot about how people make choices given a reference point, such as between a safe or risky loss, we know little about preferences between reference points, such as whether people prefer bonus or penalty contracts. Third, it speaks to an old empirical puzzle: why are penalty contracts rare (Baker et al., 1988; Lazear, 1991)?

Theory predicts that workers will dislike penalty contracts. The high implied reference point sets them up for disappointment, which is unattractive for the same reason that one should not count chickens before they are hatched. Under a bonus contract the worker feels great when successful and fine when unsuccessful; under a penalty contract she feels fine when successful and terrible when unsuccessful. As a result, firms must pay more to recruit workers under penalty contracts. It turns out this can provide a simple explanation for why firms seem unwilling to use penalty contracts in practice: bonus contracts can elicit the same effort at lower cost.

I present evidence from three online experiments with 1,848 workers in total, designed to test this prediction. I recruited data entry workers under framed incentive contracts using a two-stage, between-subjects design that enables me to estimate the effect of penalty framing on workers' willingness to accept the contract, to test for differential selection of types into contracts, and to analyze the effect of penalty framing on effort provision.

I replicate the finding in the literature that penalties elicit higher effort. In my case, accuracy was 6 percent higher (0.2 s.d.) under the penalty contract. This estimate is robust to controlling for selection because, surprisingly, I see no evidence of selection effects: bonus and penalty contract acceptors are extremely similar in terms of observables. More surprisingly, and contrary to the theoretical predictions, workers offered a penalty framed contract were 25 percent more likely to accept than those offered an equivalent bonus contract.

Evidence from all three experiments enables me to rule out a number of possible confounds. The results are not explained by inattention, misunderstanding, or changes in beliefs induced by the framing treatment. More interestingly, they are not explained by a commitment mechanism whereby workers want to force themselves to work harder and increase their earnings, and so are attracted by the motivational power of the penalty contract. I argue that the workers formed a more positive subjective view of the penalty contract induced by its salient high base pay, supported by survey evidence suggesting that workers subjectively perceived the penalty contract as better paid despite understanding the objective terms of the contract. This is similar to recent evidence of underweighting of non-salient sales taxes (Chetty et al., 2009) and eBay shipping costs (Hossain and Morgan, 2006), and overweighting of salient features of a good's price distribution (Mazar et al., 2013) or transparently irrelevant anchors (Ariely et al., 2003). Paraphrasing Mazar et al. (2013), workers appear to focus on something other than the total benefit that the [contract] confers to them.

The results suggest that penalty contracts are a win-win proposition for firms, offering improved performance at lower cost. Why, then, are they rare? It could be that over time workers' reference points adjust, eroding the performance gains and leading some (those who were induced to accept by the framing) to quit, which is costly. The third experiment explored this, but found that the penalty contract was again more popular among workers invited back for an additional task. Evidently more research is needed, and I conclude the paper with possible directions.

The rest of the paper is organized as follows. Section 1 outlines the theoretical predictions of the effects of penalty framing in a loss aversion model. Section 2 outlines the design of the experiments and section 3 the results.

Section 4 analyzes possible mechanisms. Section 5 discusses the results, external validity and related literature. Section 6 concludes. Three Web Appendices contain additional theory, results and experimental details.

1 A simple model

Consider an agent (A) deciding whether to accept a contract to perform a task, the success of which depends upon her effort. A chooses an effort level e ∈ [0, 1] which equals the probability that the task is successful. If unsuccessful, the contract pays an amount w; if successful it pays w + b. In addition to these pecuniary incentives, A's utility is reference-dependent and loss-averse, and her reference point is influenced by how the contract is framed, represented by F ∈ [0, 1]. F = 0 corresponds to a pure bonus frame where w is the base pay and b is a bonus for success. F = 1 is a pure penalty frame where w + b is the base pay and b is the penalty for failure. F ∈ (0, 1) is a mixed frame with base pay w + Fb, bonus (1 - F)b for success and penalty Fb for failure.

Similar to Kőszegi and Rabin (2006, 2007) (henceforth, KR), I assume that A's utility function is the sum of a standard component, equal to expected income less a convex cost of effort c(e), and a gain-loss component that evaluates monetary payoffs against a reference point. Specifically, for monetary outcome x and reference point r, gain-loss utility is equal to µ(x - r) for x ≥ r (a gain) and -λµ(r - x) for x < r (a loss), where µ(0) = 0, µ' > 0, µ'' ≤ 0 and λ ≥ 0. λ is A's coefficient of loss aversion. If λ > 1 she is loss averse: the disutility of a loss exceeds the utility of an equal-sized gain. If µ'' < 0, A exhibits diminishing sensitivity in the gain and loss domains. I follow KR in assuming no probability weighting.

For now, I assume that A's reference point r is non-stochastic and equal to the base pay specified in the contract: r = w + Fb. With probability e, A earns w + b, a gain of (1 - F)b. With probability 1 - e she earns w and thus experiences a loss equal to Fb. In Web Appendix A I allow the reference point to depend on A's expected effort (using a simple extension of KR) and discuss reference dependence in the cost of effort.

A's utility function is:

    U(e, w, b, F) = w + eb - c(e) + e·µ((1 - F)b) - (1 - e)·λ·µ(Fb).    (1)

A's optimal effort e* solves the first order condition:[1]

    b + µ((1 - F)b) + λ·µ(Fb) - c'(e*(b, F)) = 0.    (2)

A accepts a contract (w, b, F) if her participation constraint is satisfied:

    U*(w, b, F) - ū ≥ 0,    (3)

where U*(w, b, F) = U(e*(b, F), w, b, F) and ū is the utility of her outside option. This simple model yields three key testable predictions:

Prediction 1. If A is loss averse (λ > 1), her effort is strictly higher under a pure penalty contract than under a pure bonus contract: e*(b, 1) > e*(b, 0).[2]

Prediction 2. Penalties have a larger effect on effort for more loss-averse agents: ∂²e*/∂F∂λ > 0.[3]

Prediction 3. Penalty framing reduces A's willingness to accept the contract: ∂U*(w, b, F)/∂F < 0.[4]

Predictions 1 and 2 match the findings in the existing literature on the effects of penalty framed incentives, and are also explored in the empirical analysis in this paper. The main focus of the paper is on Prediction 3, and the experiment is designed to test whether workers indeed prefer bonus contracts. Furthermore, agents may be heterogeneous, for example differing in loss aversion or the cost of effort. The empirical part of the paper studies whether types differentially select into penalty contracts. Interestingly, it turns out that the negative effect of penalty framing on agents' willingness to accept the contract is sufficiently strong that it is more costly to elicit a given effort level using penalties than using bonuses, despite the positive incentive effects of penalties.

[1] For simplicity, I focus on b, F such that the solution e* is smaller than one.
[2] If µ(.) is linear, effort is strictly increasing in F. With diminishing sensitivity it may not be (see Armantier and Boly (2012) for evidence). It is everywhere increasing if and only if λ > µ'(0)/µ'(b) ≥ 1. Proof: ∂e*(b, F)/∂F = [λµ'(Fb) - µ'((1 - F)b)] / c''(e*(b, F)). The numerator is decreasing in F, so if it is positive for F = 1, it is positive for all F. Intuitively, diminishing sensitivity implies that outcomes far from the reference point are weighted less strongly than outcomes close to the reference point, so the incentives may be sharper with intermediate than with extreme reference points.
[3] Strictly, penalties have a more positive effect, since ∂e*(b, F)/∂F can be negative.
[4] This holds for all λ ≥ 0. It relies only on reference dependence, not loss aversion. Also note that incorporating probability weighting would not change the result, as an increase in F decreases gain-loss utility however its components are weighted.
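For completeness, here is a one-line sketch of the argument behind Prediction 3, derived by me from equation (1) via the envelope theorem (the paper's own formal proofs are in Web Appendix A):

\[
\frac{\partial U^*(w,b,F)}{\partial F}
  = \left.\frac{\partial U(e,w,b,F)}{\partial F}\right|_{e=e^*}
  = -\, e^* b\, \mu'\!\big((1-F)b\big) \;-\; (1-e^*)\,\lambda b\, \mu'(Fb) \;<\; 0 .
\]

Since µ' > 0 and b > 0, both terms are negative for interior effort: raising F shrinks the gain attached to success and enlarges the loss attached to failure, so the worker values the contract less for any λ ≥ 0, exactly as footnote [4] states.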

Thus if Prediction 3 is correct, it may help to explain why firms are reluctant to use penalty contracts. This point is illustrated by the following Proposition (the proof is given in Web Appendix A):

Proposition 1. Consider a contract (w, b, F), where F > 0, that elicits effort level e and gives A utility u. Then there exists an alternative contract (w', b', F'), where F' < F, that elicits e, gives A at least u, and under which A's expected compensation is strictly lower, i.e. w' + eb' < w + eb. Therefore, the lowest-cost contract that elicits e is a pure bonus contract with F = 0.
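To make the predictions concrete, the following minimal numerical sketch (my own illustration, not code from the paper) grid-searches equation (1) under assumed functional forms: linear gain-loss utility µ(x) = x, quadratic effort cost c(e) = 2e², base pay w = 1, incentive b = 1 and loss aversion λ = 2.

# Numerical illustration of Predictions 1 and 3 under assumed functional forms
# (linear mu, quadratic cost, lambda = 2). My own sketch, not the paper's code.

def utility(e, w=1.0, b=1.0, F=0.0, lam=2.0):
    """Equation (1): income minus effort cost, plus gain-loss utility around r = w + F*b."""
    mu = lambda x: x                      # linear gain-loss value function (assumption)
    cost = 2.0 * e ** 2                   # convex effort cost c(e) (assumption)
    return w + e * b - cost + e * mu((1 - F) * b) - (1 - e) * lam * mu(F * b)

def optimal_effort(**contract):
    """Grid search for the effort level in [0, 1] that maximizes utility."""
    grid = [i / 1000 for i in range(1001)]
    return max(grid, key=lambda e: utility(e, **contract))

for F, label in [(0.0, "pure bonus frame   (F = 0)"), (1.0, "pure penalty frame (F = 1)")]:
    e_star = optimal_effort(F=F)
    print(f"{label}: e* = {e_star:.2f}, U* = {utility(e_star, F=F):.3f}")

Under these assumed parameters the penalty frame raises optimal effort from 0.50 to 0.75 (Prediction 1) but lowers the worker's maximized utility from 1.500 to 0.125 (Prediction 3), so the firm would have to add roughly 1.4 to the base pay to make the penalty contract acceptable; that trade-off is the logic behind Proposition 1.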

2 Experimental design

I ran three experiments with online workers on Amazon Mechanical Turk (MTurk, for short), each with the same basic design. Each consisted of a first stage where workers were recruited on MTurk for a real-effort task and survey, and paid a flat wage. Then, a week later, workers from the first stage were sent a surprise job offer to perform the task again (stage 2), this time under framed performance pay. Experiment 1 was the largest, involving 1,146 workers, a data entry task incentivized on accuracy, and three different financial incentives (each framed as either a bonus or a penalty contract). Experiment 2 involved 304 workers, a single financial incentive structure and the same data entry task. Experiment 2 changed the phrasing of the job offer to test whether inattention might be driving the results of experiment 1. It also added a follow-up survey to try to distinguish mechanisms. Experiment 3 used 398 workers and the same phrasing as Experiment 2, but this time changed the task to a coin toss guessing task, to test how the results changed when performance did not depend on effort. Experiment 3 also added a surprise stage 3, one week after stage 2, to see whether framing effects persisted.

2.1 Platform: Amazon Mechanical Turk (MTurk)

MTurk is an online labor market for micro outsourcing. For example, a requester that needs data entered, audio recordings transcribed, images categorized, proofreading, etc. can post a job on MTurk and recruit workers to carry it out. Pay is set by the requester. A key advantage of MTurk for this experiment is that it enables testing for selection effects in a natural environment, where the worker's outside option (should she reject the job offer) is the other tasks she can perform on MTurk. In contrast, in the lab one must typically create an outside option, either by giving the worker the option to choose another task (as in e.g. Dohmen and Falk (2011)) or a sum of money. Second, most work on MTurk is performed for low wages (my workers reported a mean reservation wage of $4.97 per hour, and mean typical hourly earnings of $4.70), enabling me to recruit a large sample to increase power.

MTurk is becoming increasingly commonly used for research by economists. To cite a couple of examples, Bordalo et al. (2012) test their theory of salience using MTurk surveys. Barankay (2011) uses MTurk to study the effect of telling workers their rank in an initial task on their willingness to undertake further work. Horton et al. (2011) and Amir et al. (2012) replicate some classic experimental results with MTurk workers.

2.2 Real-effort tasks

The task in experiments 1 and 2 was transcribing 50 text strings, increasing in length from 10 to 55 characters. The strings were generated using random combinations of letters, numbers and punctuation and distorted to give the appearance of having been scanned or photocopied. Workers typically took around 40 minutes to complete the task. The task was chosen to be implementable online and sufficiently difficult to avoid ceiling effects, without putting the workers under time pressure (Amazon encourages requesters to give workers time to complete tasks at their own pace).[5] In each stage workers were randomly assigned to one of 10 possible sets of strings. An example screen is reproduced in Web Appendix Figure C2.

The task in experiment 3 was guessing 50 coin tosses, chosen to make it clear that performance did not depend on effort while mirroring the structure of experiments 1 and 2. It took around 10 minutes.

[5] The task resembles CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) puzzles, used in web forms to prevent bots and spammers from accessing sites. This has led to some spammers recruiting MTurk workers to solve CAPTCHAs for them. See e.g. New York Times blog, March 13, 2008: bits.blogs.nytimes.com/2008/03/13/breaking-google-captchas-for-3-a-day/.

2.3 Experimental design

I use a two-stage design similar to Dohmen and Falk (2011). The design is summarized in a timeline in Web Appendix Figure C1. In the first stage, workers were recruited on MTurk for a typing task and survey (or guessing task and survey), for which they were paid a fixed amount: $3 for the typing task and $1 for the shorter guessing task. Flat pay was used to avoid exposing workers to more than one form of incentive pay during the experiment. The next day, they were sent an email informing them of their accuracy on the task (text is given in Appendix C.3). Then, one week later, workers were sent a surprise invitation to perform the same task again, only this time for randomized performance pay. Workers were allowed four days to complete the task, and were free to ignore the invitation if not interested. Each contract had three components: a fixed pay component that did not depend on performance, a variable pay component that did depend on performance, and a frame that is either bonus or penalty.

Table 1: Treatments

Experiment     Task         Fixed pay   Variable pay   Frame
Experiment 1   Data entry   $0.50       $1.50          Bonus
Experiment 1   Data entry   $0.50       $1.50          Penalty
Experiment 1   Data entry   $0.50       $3.00          Bonus
Experiment 1   Data entry   $0.50       $3.00          Penalty
Experiment 1   Data entry   $2.00       $1.50          Bonus
Experiment 1   Data entry   $2.00       $1.50          Penalty
Experiment 2   Data entry   $0.50       $3.00          Bonus
Experiment 2   Data entry   $0.50       $3.00          Penalty
Experiment 3   Coin toss    $0.30       $1.00          Bonus
Experiment 3   Coin toss    $0.30       $1.00          Penalty

Performance pay was calculated as follows. Workers were told that after completion of the task I would select, using a random number generator, one of the 50 strings or coin tosses that they had been assigned to type or guess. They would receive the bonus (avoid the penalty) conditional on that item being entered correctly. This structure means that workers' probability of receiving the bonus was equal to their accuracy rate, the statistic reported to them in stage 1.
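The two frames are payoff-equivalent by construction, which a quick check makes explicit. The sketch below is my own illustration (not the paper's code) and plugs in the $0.50 fixed / $3.00 variable treatment:

# Expected pay as a function of accuracy a (the probability that the randomly
# checked item is correct), for the $0.50/$3.00 treatment. The two frames
# differ only in the stated base pay, i.e. in the reference point.
def expected_pay_bonus(a, fixed=0.50, variable=3.00):
    return fixed + a * variable                        # base $0.50, plus $3.00 if correct

def expected_pay_penalty(a, fixed=0.50, variable=3.00):
    return (fixed + variable) - (1 - a) * variable     # base $3.50, minus $3.00 if incorrect

for a in (0.0, 0.59, 1.0):          # 0.59 is roughly the mean stage 2 accuracy reported below
    assert abs(expected_pay_bonus(a) - expected_pay_penalty(a)) < 1e-9
    print(f"accuracy {a:.2f}: expected pay ${expected_pay_bonus(a):.2f} under either frame")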

Experiment 1 used three financial incentives, as follows (fixed pay, variable pay): ($0.50, $1.50), ($0.50, $3), and ($2, $1.50). Experiment 2 used ($0.50, $3). Experiment 3 used ($0.30, $1.30), reflecting the shorter length of the task. Pay rates were chosen to be comparable to typical rates on MTurk (otherwise it would be difficult to generate selection effects) but relatively high powered to maximize statistical power. The treatments are detailed in Table 1.

Only the pay portion of the invitation differed between treatments. The key phrasing is given in Table 2 and the full text in Web Appendix C.4. I deliberately avoided emotive words like "bonus" and "penalty".

Table 2: Framing text

Experiment 1, Bonus: "... The basic pay for the task is $0.50. We will then randomly select one of the 50 items for checking. If you entered it correctly, the pay will be increased by $3 ..."

Experiments 2 & 3, Bonus: "... The pay for this task depends on your typing accuracy. We will randomly select one item for checking, and if it was entered correctly, the pay will be increased above the base pay. The base pay is $0.50 which will be increased by $3 if the checked item is correct ..."

Experiment 1, Penalty: "... The basic pay for the task is $3.50. We will then randomly select one of the 50 items for checking. If you entered it incorrectly, the pay will be reduced by $3 ..."

Experiments 2 & 3, Penalty: "... The pay for this task depends on your typing accuracy. We will randomly select one item for checking, and if it was entered incorrectly, the pay will be reduced below the base pay. The base pay is $3.50 which will be reduced by $3 if the checked item is incorrect ..."

Note: Experiment 3 referred to guesses and coin tosses instead of accuracy and items.

Finally, in experiment 2, workers were invited to a paid follow-up survey eight days after stage 2. In experiment 3, workers were invited to complete a stage 3 one week after stage 2, under the same terms as stage 2.

The two-stage design serves three main purposes. First, it enables me to measure types prior to treatment, and to ensure that the treatment randomization is balanced across types, both of which enable me to test for selection by comparing the distribution of types that accept each contract.[6] Second, it ensures that workers know the task and their ability. This is important because workers might make inferences about the nature of the task from the incentive contract they are offered. For example, a contract that penalizes failure might be seen as easy (failure is unlikely), while a contract that rewards success is seen as hard.[7] Task experience should mitigate this effect. Third, it ensures that workers have interacted with the principal (me) before. This is important because penalty contract offers might be perceived as more or less trustworthy.[8]

2.4 Data

This section describes the data collected in the main survey and effort tasks. Summary statistics are presented in Web Appendix Tables B1 and B2.

The measure of loss aversion I use is an unincentivized variant of that of Abeler et al. (2011). Workers indicated whether they would play each of 12 lotteries of the form "50% chance of winning $10, 50% chance of losing $X", where X varies from $0 to $11. I proxy for loss aversion with the number of rejected lotteries. 7 percent of workers made inconsistent choices, accepting a lottery that is dominated by one they rejected. A screenshot of the lottery questions is given in Web Appendix C.6.

Two other key variables that I attempt to measure are workers' reservation wages and their perceptions of what constitutes a fair wage. To elicit reservation wages I ask workers what is the minimum hourly wage at which they are willing to work on MTurk.[9] I also ask workers the minimum fair wage that requesters should pay on MTurk, and use this measure to proxy for fairness concerns.

[6] I did not use the approach of Karlan and Zinman (2009) because it would expose workers to both frames and therefore make transparent the equivalence of the two.
[7] Bénabou and Tirole (2003) analyze an asymmetric information context whereby, if the principal offers higher pay for a task, it signals that the task is undesirable. This result relies on the pay acting as a costly signal, while altering the frame is costless.
[8] Workers agreed to an informed consent form that states that their work is part of a research project from the LSE (note that they were not told that it was an incentives study) and gives my name and contact details. They were paid promptly after completing the first stage, and received a personalized performance report after stage 1.
[9] Fehr and Gächter (2002) find in a buyer-seller experiment that penalty-framed performance incentives led to more shirking among sellers than equivalent bonus-framed offers, and argue that this is because the penalty contracts are perceived as less fair.
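As a rough guide to what the rejected-lotteries count captures, note that a worker with linear gain-loss utility and loss aversion λ values the lottery "50% win $10, 50% lose $X" at 0.5·10 - 0.5·λ·X, so she accepts it only while X ≤ 10/λ. The sketch below is my own illustration (assumed λ values, no probability weighting), not part of the paper:

# Back-of-the-envelope mapping from the loss aversion coefficient (lambda) to
# the predicted number of rejected lotteries out of the 12 offered (X = $0..$11).
def rejected_lotteries(lam, win=10, stakes=range(12)):
    """A '50% win $10, 50% lose $X' lottery is accepted iff 0.5*win - 0.5*lam*X >= 0."""
    return sum(1 for x in stakes if 0.5 * win - 0.5 * lam * x < 0)

for lam in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"lambda = {lam:.1f} -> rejects {rejected_lotteries(lam)} of 12 lotteries")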

Reservation wages are smaller than or equal to fair wages for 92 percent of workers.

The main performance measure is Accuracy Task X, the fraction of strings entered correctly or tosses guessed correctly in stage X. In the typing task I also compute Scaled Distance Task X, which can be thought of as the error rate per character typed.[10] Third, I try to measure how much time workers spent on their responses. There are large outliers, since I cannot observe how long workers were actually working on a given page of responses, only how long the page was open for, so I take the time the worker spent on the median page, multiplied by 10, to estimate the total time. Finally, at the beginning of stage 2 workers were asked to estimate the mean accuracy rate from stage 1, a variable I label Predicted Accuracy.

In total 1,465 workers were recruited for experiments 1 and 2, of which 693 returned for stage 2. Fifteen workers are dropped from all of the analysis: six because I have strong reasons to suspect that the same person used two MTurk accounts to participate twice,[11] and nine because they scored zero percent accuracy in the stage 1 typing task, suggesting that they did not take the task seriously (of the six of these who returned for stage 2, five scored zero percent again). Results are robust to including these workers. 398 workers were recruited for experiment 3, of which 267 completed stage 2 and 245 completed stage 3.

[10] For each text string I compute the Levenshtein distance (the minimum number of single-character insertions, deletions, or swaps needed to convert string A into string B) between the worker's response and the correct answer, divide by the length of the correct answer, and then average over all answers. This roughly corresponds to the probability of error per character. In the regressions I use the natural log of this measure since it is heavily skewed.
[11] I received two pairs of near-identical emails, each pair within a couple of minutes, strongly suggesting that one person was operating two accounts simultaneously. The third pair was revealed by the fact that they typed identical nonsense in the second stage typing task.
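For concreteness, the Scaled Distance measure described in footnote [10] can be computed as follows; this is my own implementation for illustration, not the paper's code:

# Scaled Distance: Levenshtein distance between response and correct answer,
# divided by the answer's length, averaged over the items.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions or substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution (0 if characters match)
        prev = curr
    return prev[-1]

def scaled_distance(responses, answers):
    """Average per-character error rate across items."""
    return sum(levenshtein(r, t) / len(t) for r, t in zip(responses, answers)) / len(answers)

print(scaled_distance(["he1lo wor1d"], ["hello world"]))   # 2 errors / 11 characters ~ 0.18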

2.5 Randomization

I stratified the randomization on the key variables on which I anticipated selection: stage 1 performance, rejected lotteries and reservation wage. In case some workers might know one another (for example, a couple who both work on MTurk), the treatments were randomized, and standard errors clustered, at the zipcode-experiment level.[12] In robustness checks that drop workers who share a zipcode this is equivalent to using robust standard errors, since each cluster is then of size one.

As a graphical check of balance, Web Appendix B.2 plots the CDFs by treatment and the associated Mann-Whitney U-test p-values for key observables, confirming good balance on these variables. Web Appendix Table B3 presents the results of the statistical balance tests. There is good mean balance on all characteristics, with the exception of the minimum fair wage, where the difference comes from differences between experiments 1 and 2, and the number of MTurk HITs completed, where the difference is driven by a small number of outliers.

3 Results

This section discusses the effect of the penalty frame on workers' willingness to accept the contract, on the types of workers who select into the contract, and on performance on the job. For this analysis I pool the data from experiments 1 and 2 to increase power. I then discuss the follow-up survey, the coin toss experiment 3, and effect persistence.

3.1 Acceptance

Figure 1 graphs the rates of acceptance of the stage 2 job offer by treatment. The striking pattern is that penalty framed contracts were much more likely to be accepted than equivalent bonus framed contracts. This result is particularly notable because it directly contradicts model Prediction 3. In addition, acceptance is substantially higher under higher fixed pay, while the relationship between variable pay and acceptance appears weak at best.

The basic regression specification is a linear probability model with dependent variable Accept_i ∈ {0, 1}, individuals indexed by i:

    Accept_i = β0 + β1·Penalty_i + β2·HighFixed_i + β3·HighVariable_i + X_i'β4 + ε_i

Penalty is a dummy equal to 1 if the contract is penalty framed and zero if bonus framed. HighFixed is a dummy indicating fixed pay equal to $2 (alternative: $0.50). HighVariable is a dummy indicating variable pay of $3 (alternative: $1.50). Since I do not have a group with both high fixed and high variable pay, the comparison group in each case is the group with low fixed and low variable pay. X_i is a vector of variables measured in stage 1. In particular, I include accuracy and time spent on the stage 1 effort task, to jointly proxy for ability and intrinsic motivation, and dummies for the set of items assigned to be typed by that worker (10 possible sets per stage). Note that the main specifications estimate the average effect of the penalty frame across all incentive pairs to increase power.

[12] In experiment 1, 179 individuals, in experiment 2, 8, and in experiment 3, 30 individuals reported the same zipcode as another worker.
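A minimal sketch of how this specification could be estimated with clustered standard errors, using statsmodels; the file name and variable names (stage2_offers.csv, accept, penalty, and so on) are illustrative placeholders, not the paper's actual data or code:

# Linear probability model for acceptance with standard errors clustered at the
# zipcode-experiment level. Column names are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("stage2_offers.csv")      # hypothetical file, one row per invited worker

model = smf.ols(
    "accept ~ penalty + high_fixed + high_variable + accuracy_task1 + time_task1",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["zipcode_experiment"]})
print(result.summary())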

Table 3 presents the main results. I find that switching from bonus to penalty framing increases the acceptance rate by approximately 10 percentage points. This implies a 25 percent higher acceptance rate under the penalty frame than under the bonus frame (the acceptance rate under the bonus frame was 42 percent), a large effect for a simple framing manipulation. High fixed pay increases acceptance by around 36 percent relative to the comparison group with low fixed and variable pay (whose acceptance rate was 42 percent). Surprisingly, the effect of high variable pay is positive but much smaller, at 3 percentage points greater take-up, and not statistically significant. The results are robust to dropping workers who made inconsistent choices in stage 1, outliers on time on the first task or on reservation or fair wages, and those from zipcodes with more than one respondent. Near-identical average marginal effects are obtained using logistic instead of linear regression.

Column (5) of Table 3 interacts the penalty treatment with the high fixed and high variable pay treatments, to estimate the differential effect of penalties under these regimes. The point estimates suggest that the effect of the penalty frame on acceptance was smaller under high fixed pay and larger under high variable pay; however, neither estimate is statistically significant. In addition, the point estimate on high variable pay is essentially zero for workers under the bonus frame, implying that the potential for a $3 bonus as opposed to a $1.50 bonus did not make the job offer significantly more attractive. It is important to note that the smaller difference in acceptance rates between bonus and penalty under high fixed pay does not mean that the framing effect necessarily shrinks with the level of pay. It is entirely consistent with the fact that as the level of compensation increases we move into the right tail of the reservation wage distribution, which leads acceptance rates to converge. For example, if the fixed pay was $1,000 one would expect all workers to accept.

Additionally, penalty contract recipients were 9 percentage points more likely to click on the link in the job offer (p=0.001; 52 percent clicked under the bonus treatment). Clicking took them to a page that replicated the text of the offer email. Workers are coded as accepting the offer if they then proceeded to the task. They were 6 percentage points more likely to proceed conditional on clicking (p=0.014; 80 percent proceeded under the bonus treatment).

Web Appendix Figure B6 compares the distributions of key observables between those who did and did not accept the job offer. Workers who performed better in stage 1 were significantly more likely to accept the stage 2 job offer, as is clear from Web Appendix Table B1. This is consistent with the common finding that performance pay differentially selects more able or motivated workers, which I discuss further in Web Appendix B.12. Workers with a higher reservation wage were significantly less likely to accept the offer. The coefficient on minimum fair wage is small and not statistically significant, suggesting that fairness concerns (as measured by this variable) were not of primary importance for willingness to accept the contract. The number of rejected lotteries is not predictive of acceptance, whether or not I drop workers who made inconsistent choices in the lottery questions. This is surprising, as the stage 2 contract is risky, so one would expect more risk- or loss-averse workers to be less willing to accept.

3.2 Selection

Now I turn to the effect of the penalty frame on the types of workers that select into the contract. Figure 2 plots CDFs of stage 1 task performance, time spent on the stage 1 task, rejected lotteries, reservation wage and fair wage, comparing those who accepted the bonus frame with those who accepted the penalty frame. Surprisingly, the distributions are barely distinguishable for all variables except reservation wages, consistent with no selection on these variables. I do observe suggestive evidence that the penalty contract attracted workers with higher reservation wages on average, as would be expected from the higher acceptance rate. However, the correlation between reservation wages and other characteristics is small.
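Distributional comparisons of this kind (and the balance checks above) rely on standard two-sample tests; here is a minimal sketch with scipy, where the data frame and column names are assumed for illustration and are not the paper's code:

# Compare the distribution of a stage 1 observable between bonus and penalty
# acceptors with a Mann-Whitney U test; similar distributions give large p-values.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("stage2_offers.csv")              # hypothetical data set
acceptors = df[df["accept"] == 1]

bonus = acceptors.loc[acceptors["penalty"] == 0, "accuracy_task1"]
penalty = acceptors.loc[acceptors["penalty"] == 1, "accuracy_task1"]

stat, p = mannwhitneyu(bonus, penalty, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3f}")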

Table 5 tests for selection effects of penalty framing by regressing the key observables on contract terms, conditional on acceptance. The coefficient on the penalty frame is interpreted as the difference in the conditional mean of the outcome in question between penalty and bonus workers.[13] The results confirm what we saw in the graphs: the differences between bonus and penalty workers are small and not statistically significant. Focusing on task 1 accuracy (the strongest predictor of task 2 performance), the point estimate implies 0.2 percentage points higher task 1 accuracy among penalty contract acceptors. Multiplied by the estimated coefficient on task 1 accuracy in the main performance regressions (0.72, see Table 4 column (2)), this implies less than 0.2 percentage points higher performance under the penalty contract explained by selection on task 1 performance, a small fraction of the estimated treatment effect.[14]

It is surprising that there seems to be no selection effect of penalty framing. One possibility is that selection is hard to detect in this context. It is reassuring therefore that we do observe a standard selection effect: workers who scored higher accuracy on stage 1 were more likely to accept the stage 2 job offer, as discussed above and in Web Appendix B.12.[15]

[13] In Web Appendix Table B.3 I present an alternative exercise where I regress acceptance on characteristics interacted with a penalty dummy, estimating to what extent each characteristic differentially predicts acceptance under the penalty contract. The results are very similar. A joint test fails to reject the null that all interaction coefficients are equal to zero (p=0.90).
[14] To put an extreme upper bound on the effect of selection on task 1 performance, the 95 percent confidence interval for the difference in this variable rules out differences greater than 2.9 percentage points in either direction. Multiplied by the corresponding upper bound on the estimated coefficient on task 1 accuracy in the main performance regressions (0.79), a 2.9 percentage point difference translates into 2.3 percentage points higher accuracy, around 65 percent of the estimated penalty treatment effect.
[15] As for the other covariates of note, penalty acceptors were 6 percentage points more likely to be male than bonus acceptors (p=0.13), and were 5 percentage points more likely to mainly work on MTurk to earn money (p<0.01; note that 93 percent of workers gave this response). They had also previously completed somewhat more HITs in the past, depending on the specification (this variable is heavily skewed with large outliers; the difference is not significant, and months of experience is not different between frames). None of these results is consequential for performance, as illustrated by the lack of selection on performance measures and the evidence presented below.
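The selection bound quoted in the text and in footnote [14] is simple arithmetic; a quick check of my own using the point estimates above:

# How much of the ~3.6 pp penalty effect on accuracy could selection on task 1
# accuracy explain? Plugging in the estimates quoted in the text (my own check).
point_diff, ci_bound = 0.2, 2.9      # pp difference in task 1 accuracy: estimate, 95% CI bound
coef, coef_upper = 0.72, 0.79        # coefficient of task 2 accuracy on task 1 accuracy
treatment_effect = 3.6               # pp penalty effect on task 2 accuracy

best_guess = point_diff * coef       # about 0.14 pp, a few percent of the treatment effect
upper_bound = ci_bound * coef_upper  # about 2.3 pp, roughly 65 percent of the treatment effect
print(f"best guess: {best_guess:.2f} pp ({100 * best_guess / treatment_effect:.0f}% of effect)")
print(f"extreme upper bound: {upper_bound:.1f} pp ({100 * upper_bound / treatment_effect:.0f}% of effect)")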

3.3 Performance

The primary focus of the paper is on the effect of the penalty contract on job offer acceptance and selection, and the ideal experimental design for estimating incentive effects eliminates selection by removing the option of rejecting the contract, as in previous studies.[16] It is nevertheless instructive to compare performance under the different framing treatments. First, I can check for higher performance under penalty contracts to replicate the existing literature. Second, it allows a further check for selection into penalty contracts. Let Y_i be a measure of effort or performance. The basic regression equation is:

    Y_i = δ0 + δ1·Penalty_i + δ2·HighFixed_i + δ3·HighVariable_i + X_i'δ4 + ε_i.

In general the estimates of δ1, δ2 and δ3 will be biased by selection: if the workers that accept one contract are different from those that accept another, then performance differences may simply reflect different types rather than different effort responses to incentives. However, as already documented, I do not observe the differential selection on observables between frames that would bias the estimate of the key coefficient of interest, δ1. Moreover, since I have stage 1 measures of type, I can control for selection on observables by including these.

Figure 3 presents mean performance on the stage 2 task by treatment group. I find that at each incentive level, performance is higher under the penalty than under the bonus frame (although not always statistically significantly so), consistent with the existing experimental studies. Pooling the framing treatments, in Figure 4 I plot CDFs of the accuracy measure, the log distance measure (recall that this is interpreted as the log of the per-character error rate) and time spent, and find that performance and effort are higher under the penalty frame right across the distribution.[17]

[16] This would be harder on MTurk than in the lab, as workers can attrit more easily.
[17] Web Appendix Table B5 presents regressions with time spent or the distance measure of accuracy (log errors per character typed) as dependent variables. The penalty frame led to workers committing 20 percent fewer errors per character (from a mean of 0.066), and spending around two to three minutes longer on the task (mean 41 minutes), although the latter is not significant when controls are included. The point estimates on fixed and variable pay mirror their counterparts in the main regressions.

Table 4 presents the main results. Accuracy under the penalty frame was 3.6 percentage points (around 0.18 standard deviations, or 6 percent of the mean accuracy of 0.59) higher than under the bonus frame, statistically significant at 5 percent without and 1 percent with controls. The coefficient estimate is robust to dropping workers who made inconsistent lottery choices, workers from zipcodes with multiple respondents, and outliers on the reservation and fair wage questions. Crucially, the point estimate does not change with the inclusion or exclusion of controls, consistent with the contract frame not inducing outcome-relevant selection on observables. For selection to explain the results, there would have to be a substantial unobserved driver of performance that is differentially selected under the penalty frame and orthogonal to the set of controls included in the regressions.[18]

High fixed pay increased accuracy by around 2-4 percentage points, significant at 5 percent when including controls. The point estimate doubles when controls are included, indicating adverse selection induced by the higher fixed pay. If anything, the fact that a selection effect is observed here gives further comfort that the lack of observed selection between bonus and penalty reflects a true lack of selection in the data. High variable pay has a small positive effect on accuracy, although it is never significant at conventional levels.

Column (5) interacts the penalty dummy with high fixed and high variable pay to estimate the differential effect of penalties under each financial incentive. Increasing the fixed pay seems to have the same positive effect under both bonus and penalty contracts (the interaction effect is an imprecisely estimated zero). Increasing the size of the variable pay leads to higher performance under both frames, with a smaller effect under the penalty frame. However, neither estimate is significant.[19]

[18] Parametric sample selection analysis requires an exclusion restriction (an instrument that predicts acceptance but not performance), for which there are no obvious candidates. However, I can apply the approach of Lee (2009). The idea is to bound the estimate of the effect of penalty framing on accuracy by dropping observations from either the upper or lower tail of outcomes until the samples are balanced. Unfortunately, due to the large difference in acceptance rates the estimated bounds are wide and somewhat uninformative: a lower bound of -2.8 percentage points (s.e.=0.021) and an upper bound of 11.0 percentage points (s.e.=0.020).
[19] Workers were not forced to complete every item of the data entry task (blank responses were simply counted as incorrect). 93 percent of bonus workers and 95 percent of penalty workers completed the task (difference p=0.2). However, high fixed pay workers were 7 percentage points more likely to complete than low fixed pay workers (p=0.01). If I restrict the sample to those who completed the task, the point estimate of the penalty effect falls slightly to 3.2 percentage points, retaining significance at the 1 percent level. The coefficient on high fixed pay drops to 1.3 percentage points and loses significance. Most of the effect of high fixed pay is explained by the higher completion rate.
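The Lee (2009) bounding exercise mentioned in footnote [18] works by trimming the group with the higher acceptance rate until the two groups are comparable; the sketch below is my own illustration with made-up data, not the paper's code:

# Lee (2009) bounds: because more workers accept under the penalty frame, trim
# the best (or worst) outcomes of the penalty accepters until the two groups'
# acceptance shares match, then compare means. The arrays here are simulated.
import numpy as np

def lee_bounds(y_bonus, y_penalty, accept_bonus, accept_penalty):
    """Return (lower, upper) bounds on the penalty effect among accepters."""
    trim_share = 1 - accept_bonus / accept_penalty       # share of penalty accepters to trim
    k = int(round(trim_share * len(y_penalty)))
    y_sorted = np.sort(y_penalty)
    lower = y_sorted[:len(y_penalty) - k].mean() - y_bonus.mean()   # drop the top k outcomes
    upper = y_sorted[k:].mean() - y_bonus.mean()                    # drop the bottom k outcomes
    return lower, upper

rng = np.random.default_rng(0)
y_b = rng.normal(0.59, 0.20, size=300).clip(0, 1)   # accuracy of bonus accepters (simulated)
y_p = rng.normal(0.62, 0.20, size=375).clip(0, 1)   # accuracy of penalty accepters (simulated)
print(lee_bounds(y_b, y_p, accept_bonus=0.42, accept_penalty=0.52))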

As for the other key variables, performance in the first stage very strongly predicts performance in the second stage, while the coefficient on time spent on the previous task is negative, small in magnitude and not significant. A higher reservation wage is associated with poorer performance, while the fair wage has no effect on performance. In this stage the number of rejected lotteries is negatively associated with performance, and significantly so: a one standard deviation increase in the number of rejected lotteries is associated with around 1 percentage point worse performance.

Table 6 reports estimates of heterogeneous effects of the penalty treatment by the main variables. In each case the individually estimated interaction effect is not statistically significant: there is little evidence of strong heterogeneous effects. Focusing on the coefficient on rejected lotteries, I note that neither the main effect (interpreted as the effect under the bonus frame) nor the interaction coefficient is statistically significant. The implied total effect under the penalty frame is, however, negative and significant at the 5 percent level. Model Prediction 2 implies that the effect should be more positive under the penalty frame.[20] I lack the power to dig into this relationship in depth. I do, however, perform one simple exercise. Appendix Figure B2 non-parametrically plots accuracy against rejected lotteries separately under the bonus and penalty frames, after partialling out the other variables and dropping workers with inconsistent choices and those who rejected or accepted all lotteries. Over much of the range of rejected lotteries the slopes are approximately the same. However, there is a strongly negative relationship between performance and rejected lotteries for the workers who rejected most of the lotteries. I discuss below why the rejected lotteries variable seems to do such a poor job of predicting acceptance and performance.

[20] A negative overall relationship between loss aversion and effort is possible in the extended model outlined in Appendix A.1.

3.4 Follow-up survey

All workers in experiment 2 were invited to complete a survey for a fixed payment of $2. 83 percent did so (128 of 153 bonus workers and 124 of 151 penalty workers).[21] Note that the questions were unincentivized and were asked after the completion and payment of stage 2. Workers were first reminded of the job offer they received in stage 2, then asked a series of questions about it. Results are presented in Table 7.

Workers were asked to indicate agreement on a 1-7 scale with statements about whether their job offer or task was fun, easy, well paid, fair, a good motivator, whether earning $3.50 was achievable, whether the offer was understandable, and whether the principal could be trusted. Results are presented in Panel A. They were then asked to what extent they agreed that the offer was attractive because of good pay, because they would be elated to receive $3.50, and because it encouraged effort, and to what extent it was unattractive because it was risky, because they would be disappointed to receive $0.50, and because it was difficult (Panel B). Third, they were asked to guess the acceptance and accuracy rates of workers who received the same job offer as they did (Panel C).

For most questions I find no significant differences between frames. However, the penalty offer was rated significantly higher for good pay and was considered more attractive due to good pay. If anything, the penalty was perceived as a less good motivator than the bonus contract, and workers agreed less with the statement that the offer was attractive because it encourages effort under the penalty frame than under the bonus frame, although neither coefficient is statistically significant. The second finding is that estimated acceptance rates and performance were not significantly different between bonus and penalty frames.

Workers were also asked how willing they would be to accept their contract again, and for an amount of money that would make them indifferent between the contract and the money (WTA). I find no significant difference in willingness to re-accept between the bonus and penalty contracts (WTA values were very noisy). Finally, they were presented with the alternative contract (i.e. the one they did not receive) and asked to rate it on similar scales to their own contract.

[21] Workers who accepted in stage 2 were more likely to complete the survey (96 percent vs 73 percent, p-value < 0.001), probably reflecting that some non-participation in stage 2 is driven by workers who did not see my emails.

I find no significant differences in ratings between bonus and penalty recipients.[22] These results are presented in the Web Appendix.

3.5 Experiment 3 and effect persistence

As previously described, experiment 3 followed the same design as experiments 1 and 2, but the task was changed to guessing 50 coin tosses rather than typing 50 strings. This design therefore ensured that performance did not depend on effort.[23] It also added a stage 3 to test for effect persistence.

In stage 2 the penalty contract was significantly more likely to be accepted than the bonus contract: 62 percent of bonus workers and 72 percent of penalty workers completed the task (difference p=0.043).[24] These results are presented in Table 8, Panel A and Web Appendix Figure B4, Panel A. None of the main observables, including rejected lotteries, predicts acceptance. Notable and important for interpretation is that task 1 accuracy does not predict acceptance, unlike in experiments 1 and 2 (workers understood that effort or skill cannot influence performance on this task), and workers did not spend more time on the task under the penalty contract.[25] I again find no evidence of selection on observables into the penalty contract; see Web Appendix Table B8.

Stage 3 tested whether the popularity of the penalty contract wore off. Consider the workers who accepted in stage 2. If the effect wore off between stages 2 and 3, we would expect a lower acceptance rate under the penalty frame in stage 3. The findings are presented in Table 8, Panel B and Web Appendix Figure B4, Panel B. Once again, the penalty contract was significantly more popular overall.

[22] The precise interpretation of this result is subtle: when asked to consider the alternative contract, penalty recipients do not rate the bonus contract more or less favorably than bonus recipients rate the penalty contract. The strong modal response was that the other contract was the same on each scale. However, interestingly, both penalty and bonus workers rated the other contract on average slightly less attractive, fair, generous, trustworthy and achievable, and slightly more motivating, than their own. One could interpret this as an endowment-type effect.
[23] To ensure this, workers were told they must guess all 50 tosses to be paid. Recall that previously workers were considered acceptors if they partially completed the task.
[24] Under the previous definition of acceptance (including those who partially completed the task) the figures were 67 and 75 percent respectively.
[25] Mann-Whitney U p-values 0.35 and 0.80 for stages 2 and 3 respectively.

It was more popular among those who did and those who did not accept the offer in stage 2. Among those who completed the task in stage 2, it was more popular both among those who were lucky (guessed correctly and received the bonus) and among those who were unlucky. Note that although the point estimates are not statistically significant in the latter four cases, the key question was whether any would be significantly negative (e.g. because people learned that the penalty contract was unattractive), and I see no evidence of this.

Finally, I also examine within-task persistence by looking at item-by-item performance in experiment 1, to see if the performance difference between bonus and penalty frames disappears over the course of the task. It does not. Regressing a dummy for whether a given item was entered correctly on the item number, the penalty dummy and their interaction (plus controls), I find that the interaction term is a precisely estimated zero, whereas convergence would imply a negative interaction. See Web Appendix Figure B5 and Table B11. Hossain and List (2012) also present evidence that their framing treatment did not wear off over the course of several weeks.

4 Mechanisms

While loss aversion can explain the higher effort provision under the penalty contract, it cannot explain the higher acceptance rate. Loewenstein and Adler (1995), Van Boven et al. (2000) and Van Boven et al. (2003) find that people underestimate the endowment effect: for example, they predict a lower willingness to accept to give up a mug when asked to imagine being endowed with it than when actually endowed with it (see also Loewenstein et al. (2003)). However, at best this predicts equal acceptance rates. This section discusses possible explanations for the main result, drawing on evidence from all three experiments.

4.1 Inattention and misunderstanding

Experiment 1 was motivated by the prediction that the penalty contract would be unpopular. A first reaction to the surprising opposite finding was that perhaps workers were inattentive when reading the job offers. Since these prominently mentioned the base pay, followed by details of potential


More information

Correlation Neglect in Belief Formation

Correlation Neglect in Belief Formation Correlation Neglect in Belief Formation Benjamin Enke Florian Zimmermann Bonn Graduate School of Economics University of Zurich NYU Bounded Rationality in Choice Conference May 31, 2015 Benjamin Enke (Bonn)

More information

What do Americans know about inequality? It depends on how you ask them

What do Americans know about inequality? It depends on how you ask them Judgment and Decision Making, Vol. 7, No. 6, November 2012, pp. 741 745 What do Americans know about inequality? It depends on how you ask them Kimmo Eriksson Brent Simpson Abstract A recent survey of

More information

Volume 36, Issue 3. David M McEvoy Appalachian State University

Volume 36, Issue 3. David M McEvoy Appalachian State University Volume 36, Issue 3 Loss Aversion and Student Achievement David M McEvoy Appalachian State University Abstract We conduct a field experiment to test if loss aversion behavior can be exploited to improve

More information

The effects of payout and probability magnitude on the Allais paradox

The effects of payout and probability magnitude on the Allais paradox Memory & Cognition 2008, 36 (5), 1013-1023 doi: 10.3758/MC.36.5.1013 The effects of payout and probability magnitude on the Allais paradox BETHANY Y J. WEBER Rutgers University, New Brunswick, New Jersey

More information

Political Science 15, Winter 2014 Final Review

Political Science 15, Winter 2014 Final Review Political Science 15, Winter 2014 Final Review The major topics covered in class are listed below. You should also take a look at the readings listed on the class website. Studying Politics Scientifically

More information

Area Conferences 2012

Area Conferences 2012 A joint initiative of Ludwig-Maximilians University s Center for Economic Studies and the Ifo Institute CESifo Conference Centre, Munich Area Conferences 2012 CESifo Area Conference on Behavioural Economics

More information

Sawtooth Software. The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? RESEARCH PAPER SERIES

Sawtooth Software. The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? RESEARCH PAPER SERIES Sawtooth Software RESEARCH PAPER SERIES The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? Dick Wittink, Yale University Joel Huber, Duke University Peter Zandan,

More information

Gender specific attitudes towards risk and ambiguity an experimental investigation

Gender specific attitudes towards risk and ambiguity an experimental investigation Research Collection Working Paper Gender specific attitudes towards risk and ambiguity an experimental investigation Author(s): Schubert, Renate; Gysler, Matthias; Brown, Martin; Brachinger, Hans Wolfgang

More information

The Impact of Relative Standards on the Propensity to Disclose. Alessandro Acquisti, Leslie K. John, George Loewenstein WEB APPENDIX

The Impact of Relative Standards on the Propensity to Disclose. Alessandro Acquisti, Leslie K. John, George Loewenstein WEB APPENDIX The Impact of Relative Standards on the Propensity to Disclose Alessandro Acquisti, Leslie K. John, George Loewenstein WEB APPENDIX 2 Web Appendix A: Panel data estimation approach As noted in the main

More information

Gender Effects in Private Value Auctions. John C. Ham Department of Economics, University of Southern California and IZA. and

Gender Effects in Private Value Auctions. John C. Ham Department of Economics, University of Southern California and IZA. and Gender Effects in Private Value Auctions 2/1/05 Revised 3/3/06 John C. Ham Department of Economics, University of Southern California and IZA and John H. Kagel** Department of Economics, The Ohio State

More information

Section 6: Analysing Relationships Between Variables

Section 6: Analysing Relationships Between Variables 6. 1 Analysing Relationships Between Variables Section 6: Analysing Relationships Between Variables Choosing a Technique The Crosstabs Procedure The Chi Square Test The Means Procedure The Correlations

More information

Goal-setting for a healthier self: evidence from a weight loss challenge

Goal-setting for a healthier self: evidence from a weight loss challenge Goal-setting for a healthier self: evidence from a weight loss challenge Séverine Toussaert (NYU) November 12, 2015 Goals as self-disciplining devices (1) 1. Goals are a key instrument of self-regulation.

More information

Guilt and Pro-Social Behavior amongst Workers in an Online Labor Market

Guilt and Pro-Social Behavior amongst Workers in an Online Labor Market Guilt and Pro-Social Behavior amongst Workers in an Online Labor Market Dr. Moran Blueshtein Naveen Jindal School of Management University of Texas at Dallas USA Abstract Do workers in online labor markets

More information

The role of sampling assumptions in generalization with multiple categories

The role of sampling assumptions in generalization with multiple categories The role of sampling assumptions in generalization with multiple categories Wai Keen Vong (waikeen.vong@adelaide.edu.au) Andrew T. Hendrickson (drew.hendrickson@adelaide.edu.au) Amy Perfors (amy.perfors@adelaide.edu.au)

More information

Background. Created at 2005 One of many Population. Uses

Background. Created at 2005 One of many Population. Uses Outline Background Terms Demographics Workers perspective (incentives) Price and time Filtering Worker-Requester relationship Testing Familiar problems Special constellations Pros and Cons Tips Future

More information

NBER WORKING PAPER SERIES IS THE ENDOWMENT EFFECT A REFERENCE EFFECT? Ori Heffetz John A. List. Working Paper

NBER WORKING PAPER SERIES IS THE ENDOWMENT EFFECT A REFERENCE EFFECT? Ori Heffetz John A. List. Working Paper NBER WORKING PAPER SERIES IS THE ENDOWMENT EFFECT A REFERENCE EFFECT? Ori Heffetz John A. List Working Paper 16715 http://www.nber.org/papers/w16715 NATIONAL BUREAU OF ECONOMIC RESEARCH 1050 Massachusetts

More information

Chapter 11. Experimental Design: One-Way Independent Samples Design

Chapter 11. Experimental Design: One-Way Independent Samples Design 11-1 Chapter 11. Experimental Design: One-Way Independent Samples Design Advantages and Limitations Comparing Two Groups Comparing t Test to ANOVA Independent Samples t Test Independent Samples ANOVA Comparing

More information

Department of Economics Working Paper Series

Department of Economics Working Paper Series Department of Economics Working Paper Series The Common Ratio Effect in Choice, Pricing, and Happiness Tasks by Mark Schneider Chapman University Mikhael Shor University of Connecticut Working Paper 2016-29

More information

Liar Liar: Experimental evidence of the effect of confirmation-reports on dishonesty.

Liar Liar: Experimental evidence of the effect of confirmation-reports on dishonesty. Liar Liar: Experimental evidence of the effect of confirmation-reports on dishonesty. Denvil Duncan Danyang Li September 30, 2016 Abstract We identify the effect of confirmation-reports on dishonesty using

More information

SUPPLEMENTARY INFORMATION

SUPPLEMENTARY INFORMATION SUPPLEMENTARY INFORMATION 1. Online recruitment procedure using Amazon Mechanical Turk... 2 2. Log-transforming decision times... 3 3. Study 1: Correlational decision time experiment on AMT... 4 4. Studies

More information

Checking the counterarguments confirms that publication bias contaminated studies relating social class and unethical behavior

Checking the counterarguments confirms that publication bias contaminated studies relating social class and unethical behavior 1 Checking the counterarguments confirms that publication bias contaminated studies relating social class and unethical behavior Gregory Francis Department of Psychological Sciences Purdue University gfrancis@purdue.edu

More information

EXPERIMENTAL ECONOMICS INTRODUCTION. Ernesto Reuben

EXPERIMENTAL ECONOMICS INTRODUCTION. Ernesto Reuben EXPERIMENTAL ECONOMICS INTRODUCTION Ernesto Reuben WHAT IS EXPERIMENTAL ECONOMICS? 2 WHAT IS AN ECONOMICS EXPERIMENT? A method of collecting data in controlled environments with the purpose of furthering

More information

WDHS Curriculum Map Probability and Statistics. What is Statistics and how does it relate to you?

WDHS Curriculum Map Probability and Statistics. What is Statistics and how does it relate to you? WDHS Curriculum Map Probability and Statistics Time Interval/ Unit 1: Introduction to Statistics 1.1-1.3 2 weeks S-IC-1: Understand statistics as a process for making inferences about population parameters

More information

ECON Microeconomics III

ECON Microeconomics III ECON 7130 - Microeconomics III Spring 2016 Notes for Lecture #5 Today: Difference-in-Differences (DD) Estimators Difference-in-Difference-in-Differences (DDD) Estimators (Triple Difference) Difference-in-Difference

More information

10 Intraclass Correlations under the Mixed Factorial Design

10 Intraclass Correlations under the Mixed Factorial Design CHAPTER 1 Intraclass Correlations under the Mixed Factorial Design OBJECTIVE This chapter aims at presenting methods for analyzing intraclass correlation coefficients for reliability studies based on a

More information

Empirical Tools of Public Finance. 131 Undergraduate Public Economics Emmanuel Saez UC Berkeley

Empirical Tools of Public Finance. 131 Undergraduate Public Economics Emmanuel Saez UC Berkeley Empirical Tools of Public Finance 131 Undergraduate Public Economics Emmanuel Saez UC Berkeley 1 DEFINITIONS Empirical public finance: The use of data and statistical methods to measure the impact of government

More information

The role of training in experimental auctions

The role of training in experimental auctions AUA Working Paper Series No. 2010-2 February 2010 The role of training in experimental auctions Andreas Drichoutis Department of Economics University of Ioannina, Greece adrihout@cc.uoi.gr Rodolfo M. Nayga,

More information

Analysis and Interpretation of Data Part 1

Analysis and Interpretation of Data Part 1 Analysis and Interpretation of Data Part 1 DATA ANALYSIS: PRELIMINARY STEPS 1. Editing Field Edit Completeness Legibility Comprehensibility Consistency Uniformity Central Office Edit 2. Coding Specifying

More information

Examining differences between two sets of scores

Examining differences between two sets of scores 6 Examining differences between two sets of scores In this chapter you will learn about tests which tell us if there is a statistically significant difference between two sets of scores. In so doing you

More information

Folland et al Chapter 4

Folland et al Chapter 4 Folland et al Chapter 4 Chris Auld Economics 317 January 11, 2011 Chapter 2. We won t discuss, but you should already know: PPF. Supply and demand. Theory of the consumer (indifference curves etc) Theory

More information

Regression Discontinuity Analysis

Regression Discontinuity Analysis Regression Discontinuity Analysis A researcher wants to determine whether tutoring underachieving middle school students improves their math grades. Another wonders whether providing financial aid to low-income

More information

Supporting Information

Supporting Information Supporting Information Burton-Chellew and West 10.1073/pnas.1210960110 SI Results Fig. S4 A and B shows the percentage of free riders and cooperators over time for each treatment. Although Fig. S4A shows

More information

Signalling, shame and silence in social learning. Arun Chandrasekhar, Benjamin Golub, He Yang Presented by: Helena, Jasmin, Matt and Eszter

Signalling, shame and silence in social learning. Arun Chandrasekhar, Benjamin Golub, He Yang Presented by: Helena, Jasmin, Matt and Eszter Signalling, shame and silence in social learning Arun Chandrasekhar, Benjamin Golub, He Yang Presented by: Helena, Jasmin, Matt and Eszter Motivation Asking is an important information channel. But the

More information

Lecture 2: Learning and Equilibrium Extensive-Form Games

Lecture 2: Learning and Equilibrium Extensive-Form Games Lecture 2: Learning and Equilibrium Extensive-Form Games III. Nash Equilibrium in Extensive Form Games IV. Self-Confirming Equilibrium and Passive Learning V. Learning Off-path Play D. Fudenberg Marshall

More information

Risky Choice Decisions from a Tri-Reference Point Perspective

Risky Choice Decisions from a Tri-Reference Point Perspective Academic Leadership Journal in Student Research Volume 4 Spring 2016 Article 4 2016 Risky Choice Decisions from a Tri-Reference Point Perspective Kevin L. Kenney Fort Hays State University Follow this

More information

Effects of Civil Society Involvement on Popular Legitimacy of Global Environmental Governance

Effects of Civil Society Involvement on Popular Legitimacy of Global Environmental Governance Effects of Civil Society Involvement on Popular Legitimacy of Global Environmental Governance Thomas Bernauer and Robert Gampfer Global Environmental Change 23(2) Supplementary Content Treatment materials

More information

Journal of Economic Behavior & Organization

Journal of Economic Behavior & Organization Journal of Economic Behavior & Organization 85 (2013) 20 34 Contents lists available at SciVerse ScienceDirect Journal of Economic Behavior & Organization j our nal ho me p age: www.elsevier.com/locate/jebo

More information

9 research designs likely for PSYC 2100

9 research designs likely for PSYC 2100 9 research designs likely for PSYC 2100 1) 1 factor, 2 levels, 1 group (one group gets both treatment levels) related samples t-test (compare means of 2 levels only) 2) 1 factor, 2 levels, 2 groups (one

More information

The Perils of Empirical Work on Institutions

The Perils of Empirical Work on Institutions 166 The Perils of Empirical Work on Institutions Comment by JONATHAN KLICK 1 Introduction Empirical work on the effects of legal institutions on development and economic activity has been spectacularly

More information

Using Experimental Methods to Inform Public Policy Debates. Jim Murphy Presentation to ISER July 13, 2006

Using Experimental Methods to Inform Public Policy Debates. Jim Murphy Presentation to ISER July 13, 2006 Using Experimental Methods to Inform Public Policy Debates Jim Murphy Presentation to ISER July 13, 2006 Experiments are just one more tool for the applied economist Econometric analysis of nonexperimental

More information

SAMPLING AND SAMPLE SIZE

SAMPLING AND SAMPLE SIZE SAMPLING AND SAMPLE SIZE Andrew Zeitlin Georgetown University and IGC Rwanda With slides from Ben Olken and the World Bank s Development Impact Evaluation Initiative 2 Review We want to learn how a program

More information

Summary Report: The Effectiveness of Online Ads: A Field Experiment

Summary Report: The Effectiveness of Online Ads: A Field Experiment Summary Report: The Effectiveness of Online Ads: A Field Experiment Alexander Coppock and David Broockman September 16, 215 This document is a summary of experimental findings only. Additionally, this

More information

Contributions and Beliefs in Liner Public Goods Experiment: Difference between Partners and Strangers Design

Contributions and Beliefs in Liner Public Goods Experiment: Difference between Partners and Strangers Design Working Paper Contributions and Beliefs in Liner Public Goods Experiment: Difference between Partners and Strangers Design Tsuyoshi Nihonsugi 1, 2 1 Research Fellow of the Japan Society for the Promotion

More information

The Limits of Inference Without Theory

The Limits of Inference Without Theory The Limits of Inference Without Theory Kenneth I. Wolpin University of Pennsylvania Koopmans Memorial Lecture (2) Cowles Foundation Yale University November 3, 2010 Introduction Fuller utilization of the

More information

Some Thoughts on the Principle of Revealed Preference 1

Some Thoughts on the Principle of Revealed Preference 1 Some Thoughts on the Principle of Revealed Preference 1 Ariel Rubinstein School of Economics, Tel Aviv University and Department of Economics, New York University and Yuval Salant Graduate School of Business,

More information

Subjects Motivations

Subjects Motivations Subjects Motivations Lecture 9 Rebecca B. Morton NYU EPS Lectures R B Morton (NYU) EPS Lecture 9 EPS Lectures 1 / 66 Subjects Motivations Financial Incentives, Theory Testing, and Validity: Theory Testing

More information

AP Psychology -- Chapter 02 Review Research Methods in Psychology

AP Psychology -- Chapter 02 Review Research Methods in Psychology AP Psychology -- Chapter 02 Review Research Methods in Psychology 1. In the opening vignette, to what was Alicia's condition linked? The death of her parents and only brother 2. What did Pennebaker s study

More information

Lecture Slides. Elementary Statistics Eleventh Edition. by Mario F. Triola. and the Triola Statistics Series 1.1-1

Lecture Slides. Elementary Statistics Eleventh Edition. by Mario F. Triola. and the Triola Statistics Series 1.1-1 Lecture Slides Elementary Statistics Eleventh Edition and the Triola Statistics Series by Mario F. Triola 1.1-1 Chapter 1 Introduction to Statistics 1-1 Review and Preview 1-2 Statistical Thinking 1-3

More information

Do Women Shy Away from Competition? Do Men Compete too Much?

Do Women Shy Away from Competition? Do Men Compete too Much? This work is distributed as a Discussion Paper by the STANFORD INSTITUTE FOR ECONOMIC POLICY RESEARCH SIEPR Discussion Paper No. 04-30 Do Women Shy Away from Competition? Do Men Compete too Much? By Muriel

More information

Risk Aversion in Games of Chance

Risk Aversion in Games of Chance Risk Aversion in Games of Chance Imagine the following scenario: Someone asks you to play a game and you are given $5,000 to begin. A ball is drawn from a bin containing 39 balls each numbered 1-39 and

More information

Topic 3: Social preferences and fairness

Topic 3: Social preferences and fairness Topic 3: Social preferences and fairness Are we perfectly selfish and self-centered? If not, does it affect economic analysis? How to take it into account? Focus: Descriptive analysis Examples Will monitoring

More information

b. Associate Professor at UCLA Anderson School of Management

b. Associate Professor at UCLA Anderson School of Management 1 The Benefits of Emergency Reserves: Greater Preference and Persistence for Goals having Slack with a Cost MARISSA A. SHARIF a and SUZANNE B. SHU b a. PhD Candidate at UCLA Anderson School of Management

More information

UNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN BOOKSTACKS

UNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN BOOKSTACKS UNIVERSITY OF ILLINOIS LIBRARY AT URBANA-CHAMPAIGN BOOKSTACKS Digitized by the Internet Archive in 2012 with funding from University of Illinois Urbana-Champaign http://www.archive.org/details/interactionofint131cald

More information

DO WOMEN SHY AWAY FROM COMPETITION? DO MEN COMPETE TOO MUCH?

DO WOMEN SHY AWAY FROM COMPETITION? DO MEN COMPETE TOO MUCH? DO WOMEN SHY AWAY FROM COMPETITION? DO MEN COMPETE TOO MUCH? Muriel Niederle and Lise Vesterlund February 21, 2006 Abstract We explore whether women and men differ in their selection into competitive environments.

More information

Applied Econometrics for Development: Experiments II

Applied Econometrics for Development: Experiments II TSE 16th January 2019 Applied Econometrics for Development: Experiments II Ana GAZMURI Paul SEABRIGHT The Cohen-Dupas bednets study The question: does subsidizing insecticide-treated anti-malarial bednets

More information

Introduction to Behavioral Economics Like the subject matter of behavioral economics, this course is divided into two parts:

Introduction to Behavioral Economics Like the subject matter of behavioral economics, this course is divided into two parts: Economics 142: Behavioral Economics Spring 2008 Vincent Crawford (with very large debts to Colin Camerer of Caltech, David Laibson of Harvard, and especially Botond Koszegi and Matthew Rabin of UC Berkeley)

More information

A Brief Introduction to Bayesian Statistics

A Brief Introduction to Bayesian Statistics A Brief Introduction to Statistics David Kaplan Department of Educational Psychology Methods for Social Policy Research and, Washington, DC 2017 1 / 37 The Reverend Thomas Bayes, 1701 1761 2 / 37 Pierre-Simon

More information

Further Properties of the Priority Rule

Further Properties of the Priority Rule Further Properties of the Priority Rule Michael Strevens Draft of July 2003 Abstract In Strevens (2003), I showed that science s priority system for distributing credit promotes an allocation of labor

More information

Belief Formation in a Signalling Game without Common Prior: An Experiment

Belief Formation in a Signalling Game without Common Prior: An Experiment Belief Formation in a Signalling Game without Common Prior: An Experiment Alex Possajennikov University of Nottingham February 2012 Abstract Using belief elicitation, the paper investigates the formation

More information

More Effort with Less Pay: On Information Avoidance, Belief Design and Performance

More Effort with Less Pay: On Information Avoidance, Belief Design and Performance More Effort with Less Pay: On Information Avoidance, Belief Design and Performance Nora Szech, KIT, WZB, CESifo joint with Steffen Huck, WZB, UCL, CESifo and Lukas Wenner, UCL, U Cologne CMU, June 2017

More information

Social Identity and Competitiveness

Social Identity and Competitiveness Social Identity and Competitiveness Marie-Pierre Dargnies First Draft October 25, 2009 1 Introduction Women are widely under-represented in top-level social positions. While several potential reasons may

More information

Pre-analysis plan Media and Motivation: The effect of performance pay on writers and content

Pre-analysis plan Media and Motivation: The effect of performance pay on writers and content Pre-analysis plan Media and Motivation: The effect of performance pay on writers and content Jared Gars and Emilia Tjernström University of Wisconsin, Madison May 15, 2017 1 Introduction Performance contracts

More information

Designed Beliefs and Performance: A Real-Effort Experiment

Designed Beliefs and Performance: A Real-Effort Experiment Designed Beliefs and Performance: A Real-Effort Experiment Steffen Huck (WZB & UCL) Nora Szech (KIT) Lukas M. Wenner (UCL) May 7, 2015 Abstract In a tedious real effort task, agents can choose to receive

More information

SUPPLEMENTAL MATERIAL

SUPPLEMENTAL MATERIAL 1 SUPPLEMENTAL MATERIAL Response time and signal detection time distributions SM Fig. 1. Correct response time (thick solid green curve) and error response time densities (dashed red curve), averaged across

More information

Responsiveness to feedback as a personal trait

Responsiveness to feedback as a personal trait Responsiveness to feedback as a personal trait Thomas Buser University of Amsterdam Leonie Gerhards University of Hamburg Joël van der Weele University of Amsterdam Pittsburgh, June 12, 2017 1 / 30 Feedback

More information

arxiv: v2 [cs.ai] 26 Sep 2018

arxiv: v2 [cs.ai] 26 Sep 2018 Manipulating and Measuring Model Interpretability arxiv:1802.07810v2 [cs.ai] 26 Sep 2018 Forough Poursabzi-Sangdeh forough.poursabzi@microsoft.com Microsoft Research Jennifer Wortman Vaughan jenn@microsoft.com

More information

Session 3: Dealing with Reverse Causality

Session 3: Dealing with Reverse Causality Principal, Developing Trade Consultants Ltd. ARTNeT Capacity Building Workshop for Trade Research: Gravity Modeling Thursday, August 26, 2010 Outline Introduction 1 Introduction Overview Endogeneity and

More information

Chapter 1 Review Questions

Chapter 1 Review Questions Chapter 1 Review Questions 1.1 Why is the standard economic model a good thing, and why is it a bad thing, in trying to understand economic behavior? A good economic model is simple and yet gives useful

More information

Motivated Errors. Christine L. Exley and Judd B. Kessler. May 31, Abstract

Motivated Errors. Christine L. Exley and Judd B. Kessler. May 31, Abstract Motivated Errors Christine L. Exley and Judd B. Kessler May 31, 2018 Abstract Behavioral biases that cause errors in decision making are often blamed on cognitive limitations. We show that biases can also

More information

Motivated Cognitive Limitations

Motivated Cognitive Limitations Motivated Cognitive Limitations Christine L. Exley and Judd B. Kessler May 3, 2018 Abstract Behavioral biases are often blamed on agents inherent cognitive limitations. We show that biases can also arise,

More information

CHAPTER ONE CORRELATION

CHAPTER ONE CORRELATION CHAPTER ONE CORRELATION 1.0 Introduction The first chapter focuses on the nature of statistical data of correlation. The aim of the series of exercises is to ensure the students are able to use SPSS to

More information

Carrying out an Empirical Project

Carrying out an Empirical Project Carrying out an Empirical Project Empirical Analysis & Style Hint Special program: Pre-training 1 Carrying out an Empirical Project 1. Posing a Question 2. Literature Review 3. Data Collection 4. Econometric

More information