Hierarchical maximum likelihood parameter estimation for cumulative prospect theory: Improving the reliability of individual risk parameter estimates


Hierarchical maximum likelihood parameter estimation for cumulative prospect theory: Improving the reliability of individual risk parameter estimates

Ryan O. Murphy, Robert H.W. ten Brincke

ETH Risk Center Working Paper Series ETH-RC-14-005

The ETH Risk Center, established at ETH Zurich (Switzerland) in 2011, aims to develop cross-disciplinary approaches to integrative risk management. The center combines competences from the natural, engineering, social, economic and political sciences. By integrating modeling and simulation efforts with empirical and experimental methods, the Center helps societies to better manage risk. More information can be found at:

ETH-RC-14-005

Hierarchical maximum likelihood parameter estimation for cumulative prospect theory: Improving the reliability of individual risk parameter estimates

Ryan O. Murphy, Robert H.W. ten Brincke

Abstract

Individual risk preferences can be identified by using decision models with tuned parameters that maximally fit a set of risky choices made by a decision maker. A goal of this model-fitting procedure is to isolate parameters that correspond to stable risk preferences. These preferences can be modeled as an individual difference, indicating a particular decision maker's tastes and willingness to tolerate risk. Using hierarchical statistical methods, we show significant improvements in the reliability of individual risk preference parameters over other common estimation methods. This hierarchical procedure uses population-level information (in addition to an individual's choices) to break ties (or near-ties) in the fit quality for sets of possible risk preference parameters. By breaking these statistical ties in a sensible way, researchers can avoid overfitting choice data and thus better measure individual differences in people's risk preferences.

Keywords: Prospect theory, Risk preference, Decision making under risk, Hierarchical parameter estimation, Maximum likelihood

Classifications: JEL Codes: D81, D3

URL: risk center wps/eth-rc-14-5

Notes and Comments: ETH Risk Center Working Paper Series

Hierarchical maximum likelihood parameter estimation for cumulative prospect theory: Improving the reliability of individual risk parameter estimates

Ryan O. Murphy, Robert H.W. ten Brincke

Chair of Decision Theory and Behavioral Game Theory, Swiss Federal Institute of Technology Zürich (ETHZ), Clausiusstrasse 5, 8006 Zurich, Switzerland

Abstract

Individual risk preferences can be identified by using decision models with tuned parameters that maximally fit a set of risky choices made by a decision maker. A goal of this model-fitting procedure is to isolate parameters that correspond to stable risk preferences. These preferences can be modeled as an individual difference, indicating a particular decision maker's tastes and willingness to tolerate risk. Using hierarchical statistical methods, we show significant improvements in the reliability of individual risk preference parameters over other common estimation methods. This hierarchical procedure uses population-level information (in addition to an individual's choices) to break ties (or near-ties) in the fit quality for sets of possible risk preference parameters. By breaking these statistical ties in a sensible way, researchers can avoid overfitting choice data and thus better measure individual differences in people's risk preferences.

Key words: Prospect theory, Risk preference, Decision-making under risk, Hierarchical parameter estimation, Maximum likelihood

Corresponding author. Email addresses: rmurphy@ethz.ch (Ryan O. Murphy), robert.tenbrincke@gess.ethz.ch (Robert H.W. ten Brincke)

Working paper, April 16, 2014

Introduction

People must often make choices among a number of different options for which the outcomes are not certain. Epistemic limitations can be precisely quantified using probabilities, and given a well-defined set of options, each option with its respective probability of fruition, the elements of risky decision making are established (Knight, 1921; Luce and Raiffa, 1957). Expected value maximization stands as the normative solution to risky choice problems, but people's choices do not always conform to this optimization principle. Rather, decision makers (DMs) reveal different preferences for risk, sometimes forgoing an option with a higher expectation in lieu of an option with lower variance (thus indicating risk aversion); in some cases DMs reveal the opposite preference too, thus indicating risk seeking. Behavioral theories of risky decision making (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992; Kahneman and Tversky, 2000) have been developed to isolate and highlight the order in these choice patterns and provide psychological insights into the underlying structure of DMs' preferences. This paper is about measuring those subjective preferences, and in particular finding a statistical estimation procedure that can increase the reliability of those measures, thus better capturing risk preferences and doing so at the individual level.

Example of a risky choice

A binary lottery (i.e., a gamble or a prospect) is a common tool for studying risky decision making, so common in fact that binary lotteries have been called the fruit flies of decision research (Lopes, 1983). In these simple risky choices, DMs are presented with monetary outcomes x_i, each associated with an explicit probability p_i. Consider for example the lottery in Table 1, offering a DM the choice between option A, a payoff of $1 with probability 0.62 or $83 with probability 0.38; or option B, a payoff of $37 with probability 0.41 or $24 with probability 0.59.
The decision maker is called upon to choose either option A or option B, and then the lottery is played out via a random process for real consequences.

EV maximization and descriptive individual-level modeling

The normative solution to the decision problem in Table 1 is straightforward. The goal of maximizing long-term monetary expectations is satisfied

option A: $1 with probability 0.62; $83 with probability 0.38
option B: $37 with probability 0.41; $24 with probability 0.59

Table 1: An example of a simple, one-shot risky decision. The choice is simply whether to select option A or option B. The expected value of option A is higher ($32.16) than that of option B ($29.33), but option A contains the possibility of an outcome with a relatively small value ($1). As it turns out, the majority of DMs prefer option B, even though it has a smaller expected value.

by selecting the option with the highest expected value. Although this decision policy has desirable properties, it is often not an accurate description of what people prefer, nor of what they choose when presented with risky options. As can be anticipated, different people have different tastes, and this heterogeneity includes preferences for risk as well. For example, the majority of incentivized DMs select option B from the lottery shown above, indicating, perhaps, that these DMs have some degree of risk aversion. However, the magnitude of this risk aversion is still unknown, and it cannot be estimated from only one choice resulting from a binary lottery. This limitation has led researchers to use larger sets of binary lotteries where DMs make many independent choices; from the overall pattern of choices, researchers can draw inferences about the DM's risk preferences. Risk preferences have often been modeled at the aggregate level, by fitting parameters to the most selected option across many individuals, as opposed to modeling preferences at the individual level (see Kahneman and Tversky, 1979; Tversky and Kahneman, 1992). This aggregate approach is a useful first step in that it helps establish stylized facts about human decision making and further facilitates the identification of systematicity in the deviations from EV maximization.
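The expected values quoted in Table 1 follow from a simple probability-weighted sum; the sketch below (variable names are ours, added for illustration) checks them.

```python
# Each option is a list of (outcome, probability) pairs from Table 1.
option_a = [(1, 0.62), (83, 0.38)]
option_b = [(37, 0.41), (24, 0.59)]

def expected_value(lottery):
    """Probability-weighted sum of a lottery's monetary outcomes."""
    return sum(x * p for x, p in lottery)

print(round(expected_value(option_a), 2))  # 32.16
print(round(expected_value(option_b), 2))  # 29.33
```

Option A has the higher expectation, yet most DMs choose option B, which is exactly the kind of pattern the parameterized models below are meant to capture.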
However, actual risk preferences exist at the individual level, and thus improving the resolution of measurement and modeling is a natural development in this research field. Substantial work exists along these lines already, for both the measurement of risk preferences and their application (e.g., Gonzalez and Wu, 1999; Holt and Laury, 2002; Abdellaoui, 2000; Fehr-Duda et al., 2006). The purpose of this paper is to add to that literature by developing and evaluating a hierarchical estimation method that improves the reliability of parameter estimates of risk preferences at the individual level. Moreover, we want to do so with a measurement tool that is readily usable and broadly applicable as a measure of individual

risk preferences.

Three necessary components for measuring risk preferences

There are three elementary components in measuring risk preferences, and these parts serve as the foundation for developing behavioral models of risky choice. These three components are: lotteries, models, and statistical estimation/fitting procedures. We explain each of these components below in general terms, and then provide detailed examples and discussion of the elements in the subsequent sections of the paper.

Lotteries

Choices among options reveal DMs' preferences (Samuelson, 1938; Varian, 2006). Lotteries are used to elicit and record risky decisions, and these resulting choice data serve as the input for the decision models and parameter estimation procedures. Here we focus on choices from binary lotteries (as in Stott, 2006; Rieskamp, 2008; Nilsson et al., 2011), but the elicitation of certainty equivalence values is another well-established method for quantifying risk preferences (see for example Zeisberger et al., 2010). Binary lotteries are arguably the simplest way for subjects to make choices and thus elicit risk preferences. One reason to use binary lotteries is that people appear to have problems with assessing a lottery's certainty equivalent (Lichtenstein and Slovic, 1971), although this simplicity comes at the cost of requiring a DM to make many binary choices to achieve the same degree of estimation fidelity that other methods purport to (e.g., the three-decision risk measure from Tanaka et al., 2010).

Decision model (non-expected utility theory)

Models may reflect the general structure (e.g., stylized facts) of DMs' aggregate preferences by invoking a latent construct like utility. These models also typically have free parameters that can be tuned to improve the accuracy of how they capture different choice patterns.
An individual's choices from the lotteries are fit to a model by tuning these parameters, thus identifying differences in revealed preferences and summarizing the pattern of choices (with as much accuracy as possible) with particular parameter values. These parameters can, for example, establish not just the existence of risk aversion, but can quantify the degree of this particular preference. This is useful in characterizing an individual decision maker and isolating cognitive mechanisms that underlie behavior (e.g., correlating risk preferences

with other variables, including measures of physiological processes such as in Figner and Murphy, 2010; Schulte-Mecklenbeck et al., 2014) as well as tuning a model to make predictions about what this DM will choose when presented with different options. Here we focus on the non-expected utility model of cumulative prospect theory (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992; Starmer, 2000) as our decision model. Prospect theory is arguably the most important and influential descriptive model of risky choice to date, and it has been used extensively in research related to model fitting and risky decision making.

Parameter estimation procedure

Given a set of risky choices and a particular decision model, the best-fitting parameter(s) can be identified via a statistical estimation procedure. A perfectly fitting parameterized model would exactly reproduce all of a DM's choices correctly (i.e., perfect correspondence). Obviously this is a lofty goal, and it is almost never observed in practice, as there almost certainly is measurement error in a set of observed choices. A decision model, with best-fitting parameters, can be evaluated in several different ways. One approach to evaluating goodness of fit would be to determine how many correct predictions the parameterized model makes of a DM's choices. For example, for a set of risky choices from binary lotteries, researchers may find that 60% of the choices are consistent with expected value maximization (EV maximization is the normative decision policy and trivially is a zero-parameter decision model). If researchers used a concave power function to transform values into utilities, this single-parameter model may increase consistency with the selected choices to 70%. Although this model scoring method is straightforward and intuitive, it has some stark shortcomings (as we will show later in the paper).
Other, more sophisticated methods, like maximum likelihood estimation, have been developed that allow for a more nuanced approach to evaluating the fit of a choice model (see for example Harless and Camerer, 1994; Hey and Orme, 1994). All of these different evaluation methods can also include both in-sample and out-of-sample tests, which can be useful to diagnose and mitigate the overfitting of choice data. The interrelationship between lotteries, a decision model, and an estimation procedure is shown in Figure 1. Lotteries provide stimuli and the resulting choices are the input for the models; a decision model and an estimation procedure tune parameters to maximize correspondence between the

[Figure 1 schematic: binary lotteries and the resulting choices at time 1 feed the model (prospect theory); parameter estimation yields week 1 parameters (α, λ, δ, γ), which are used for in-sample summarization and for out-of-sample prediction of the week 2 choices; reliability is assessed by comparing week 1 and week 2 parameters.]

Figure 1: On the left we find three choices for each of the binary lotteries that serve as input for the model. Using an estimation method we fit the model's risk preference parameters to the individual risky choices at time 1. These parameters, to some extent, summarize the choices made at time 1 and can reproduce those choices (depending on the quality of the fit). The out-of-sample predictive capacity of the model can be evaluated by comparing the predictions against actual choices at time 2. The reliability of the estimates is evaluated by comparing the time 1 parameter estimates to estimates obtained using the same estimation method on the time 2 data. This experimental design holds the lotteries and choices consistent, but varies the estimation procedures; moreover, this is done in a test-retest design which allows the computation and comparison of both in-sample and out-of-sample statistics.

model and the actual choices from the decision maker facing the lotteries. This process of eliciting choices can be repeated again using the same lotteries and the same experimental subjects. This test-retest design allows for both the development of in-sample parameter fitting (i.e., statistical explanation or summarization), as well as out-of-sample parameter fitting (i.e., prediction). Moreover, the test-retest reliability of the different parameter

estimates can be computed as a correlation, as two sets of parameters exist for each decision maker. This is a powerful experimental design as it can accommodate both fitting and prediction, and can further diagnose instances of overfitting, which can undermine psychological interpretations (see Gigerenzer and Gaissmaier, 2011).

Overfitting and bounding parameters

Estimation methods for multi-parameter models may be prone to overfitting and in doing so adjust to noise instead of real risk preferences (Roberts and Pashler, 2000). This can sometimes be observed when parameter values emerge that are highly atypical and extreme. A common solution to this problem is to set boundaries and limit the range of parameter values that are potentially estimated. Boundaries prevent extreme parameter values in the case of weak evidence and thus reduce overfitting, but on the downside they negate the possibility of reflecting extreme preferences altogether, even though these preferences may be real. Boundaries are also defined arbitrarily and may create serious estimation problems due to parameter interdependence. For example, setting a boundary at the lower range of one parameter may radically change the estimate of another parameter (e.g., restricting the range of probability distortion may unduly influence estimates of loss aversion) for one particular subject. The fact that a functional (mis)specification can affect other parameter estimates (see for example Nilsson et al., 2011) and that interaction effects between parameters have been found (Zeisberger et al., 2010) indicates that there are multiple ways in which prospect theory can account for preferences. This in turn may mean that choosing different boundaries for one or more parameters may affect other parameter estimates as well if a parameter estimate runs up against an imposed boundary.
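The test-retest reliability described above reduces to a correlation between the two sets of parameter estimates; the sketch below illustrates this with made-up numbers, not data from the study.

```python
import math

# Hypothetical alpha estimates for five DMs at time 1 and time 2
# (purely illustrative values).
alpha_t1 = [0.65, 0.80, 0.95, 0.70, 1.05]
alpha_t2 = [0.65, 0.85, 0.90, 0.75, 1.00]

def pearson_r(xs, ys):
    """Pearson correlation: the test-retest reliability statistic."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r(alpha_t1, alpha_t2), 3))  # close to 1: high reliability
```

A reliability near 1 indicates the estimation procedure recovers nearly the same preference parameters in both sessions; overfitted estimates typically show much lower correlations.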
To circumvent the pitfalls of arbitrary parameter boundaries, we use a hierarchical estimation method based on Farrell and Ludwig (2008) without such boundaries. At its core, this method uses estimates of the risk preferences of the whole sample to inform estimates of risk preferences at the individual level. In this paper we address to what degree an estimation method combining group-level information with individual-level information can represent individual risk preferences more reliably than using either individual or aggregate information exclusively.

Justification for using hierarchical estimation methods

The ultimate goal of this hierarchical estimation procedure is to obtain reliable estimates of individual risk preferences that can be used to make better predictions about risky choice behavior than other estimation methods. This is not modeling as a means to only maximize in-sample fit, but rather to maximize out-of-sample correspondence; moreover, these methods are a way to gain insights into what people are actually motivated by as they cogitate about risky options and make choices when confronting irreducible uncertainty. Ideally, the parameters should not just summarize choices, but also capture some of the psychological mechanisms that underlie risky decision making.

Other applications of hierarchical estimation methods

Other researchers have applied hierarchical estimation methods to risky choice modeling. Nilsson et al. (2011) applied a Bayesian hierarchical parameter estimation model to simple risky choice data. The hierarchical procedure outperformed maximum likelihood estimation in a parameter recovery test. The authors, however, did not test out-of-sample predictions nor the retest reliability of their parameter estimates. Wetzels et al. (2010) applied Bayesian hierarchical parameter estimation in a learning model and found it was more robust with regard to extreme estimates and misunderstanding of task dynamics. Scheibehenne and Pachur (2013) applied Bayesian hierarchical parameter estimation to risky choice data using a TAX model (Birnbaum and Chavez, 1997) but reported no improvement in parameter stability.

Other research examining parameter stability

There exists some research on estimated parameter stability. Glöckner and Pachur (2012) tested the reliability of parameter estimates using a non-hierarchical estimation procedure.
The results show that individual risk preferences are generally stable and that individual parameter values outperform aggregate values in terms of prediction. The authors concluded that the reliability of parameters suffered when extra model parameters were added. This contrasts with the work of Fehr-Duda and Epper (2012), who conclude, based on experimental data and a literature review, that those additional parameters (related to the probability weighting function) are necessary to reliably capture individual risk preferences. Zeisberger et al. (2010) also tested individual parameter reliability using a different type of risky

choice (certainty equivalent vs. binary choice). They found significant differences in parameter value estimates over time, but did not use a hierarchical estimation procedure.

Structure of the rest of this paper

In this paper we focus on evaluating different statistical estimation procedures for fitting individual choice data to risky decision models. To this end, we hold constant the set of lotteries DMs made choices with, and the functional form of the risky choice model. We then fit the same set of choice data using a variety of estimation methods, and contrast the results to differentiate their efficiency. We also compute the test-retest reliability of the parameter estimates that resulted from the different estimation procedures. This broad approach allows us to evaluate different estimation methods and draw substantiated conclusions about fit quality, as well as diagnose overfitting. We conclude with recommendations for estimating risk preferences at the individual level and briefly describe an online tool for measuring and estimating risk preferences that researchers are invited to use as part of their own research programs.

2. Stimuli and Experimental design

2.1. Participants

One hundred eighty-five participants from the subject pool at the Max Planck Institute for Human Development in Berlin volunteered to participate in two sessions (referred to hereafter as time 1 and time 2) that were administered approximately two weeks apart. After both sessions were conducted, complete data from both sessions were available for 142 participants; these subjects with complete datasets are retained for subsequent analysis. The experiment was incentive compatible and was conducted without deception. The data used here are the choice-only results from the Schulte-Mecklenbeck et al.
(2014) experiment, and we are indebted to these authors for their generosity in sharing their data.

2.2. Stimuli

In each session of the experiment individuals made choices from a set of 91 simple binary option lotteries. Each option has two possible outcomes between -100 and 100 that occur with known probabilities that sum to one. There were four types of lotteries, for which examples are shown in Table 2.

The four types are: gains only; lotteries with losses only; mixed lotteries with both gains and losses; and mixed-zero lotteries with one gain and one loss and zero (status quo) as the alternative outcome. The first three types were included to cover the spectrum of risky decisions, while the mixed-zero type allows for measuring loss aversion separately from risk aversion (Rabin, 2000; Wakker, 2005). The same 91 lotteries were used in both test-retest sessions of this research. The set was compiled of items used by Rieskamp (2008), Gaechter et al. (2007) and Holt and Laury (2002); these items are also used by Schulte-Mecklenbeck et al. (2014). Thirty-five lotteries are gain only, 25 are loss only, 25 are mixed, and 6 are mixed-zero. All of the lotteries are listed in Appendix A.

Lottery  Option A                   Option B                   Type
1        32 with probability ...    ... with probability ...   gain
2        -24 with probability ...   ... with probability ...   loss
3        58 with probability ...    ... with probability ...   mixed
4        -3 with probability ...    0 for sure                 mixed-zero

Table 2: Examples of binary lotteries from each of the four types.

2.3. Procedure

Participants received extensive instructions regarding the experiment at the beginning of the first session. All participants received 10 EUR as a guaranteed payment for participation in the research, and could earn more based on their choices. Participants then worked through several examples of risky choices to familiarize themselves with the interface of the MouselabWeb software (Willemsen and Johnson, 2011, 2014) that was used to administer the study. While the same 91 lotteries were used for both experimental sessions, the item order was randomized. Additionally, the order of the outcomes (top-bottom), the order of the options (A or B), and the orientation (A above B, A left of B) were randomized and stored during the first session. The exact opposite spatial representation was used in the second session, to mitigate order and presentation effects.
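The four lottery types described earlier can be told apart mechanically from the signs of the outcomes involved; this is a rough classifier we add for illustration (the paper itself does not specify one).

```python
def lottery_type(outcomes):
    """Classify a binary lottery from the monetary outcomes of both
    options: gain, loss, mixed, or mixed-zero (a gain and a loss with
    zero as an alternative outcome). Illustrative helper, not from
    the paper."""
    if 0 in outcomes and min(outcomes) < 0 < max(outcomes):
        return "mixed-zero"
    if min(outcomes) >= 0:
        return "gain"
    if max(outcomes) <= 0:
        return "loss"
    return "mixed"

print(lottery_type([32, 10, 20, 5]))   # gain
print(lottery_type([-30, 25, 0, 0]))   # mixed-zero
```

Note the mixed-zero check comes first: such lotteries contain both a gain and a loss, so they must be separated from plain mixed lotteries by the presence of the zero outcome.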

Incentive compatibility was implemented by randomly selecting one lottery at the end of each experimental session and paying the participant according to their choice on the selected item when it was played out with the stated probabilities. An exchange rate of 10:1 was used between experimental values and payments for choices. Thus, participants earned a fixed payment of 10 EUR plus one tenth of the outcome of one randomly selected lottery for each completed experimental session. As a result, participants earned about 30 EUR (approximately 40 USD) on average for participating in both sessions.

3. Model specification

We use cumulative prospect theory (Tversky and Kahneman, 1992) to model risk preferences. A two-outcome lottery L is valued in utility u(.) as the sum of its components, in which the monetary reward x_i is weighted by a value function v(.) and the associated probability p_i by a probability weighting function w(.). This is shown in Equation 1.

u(L) = v(x_1) w(p_1) + v(x_2) (1 − w(p_1))    (1)

Cumulative prospect theory has many possible mathematical specifications. These different functional specifications have been tested and reported in the literature (see for example Stott, 2006), and the functional forms and parameterizations we use here are justifiable given the preponderance of findings.

Value function

In general, power functions have been empirically shown to fit choice data better than many other functional forms at the individual level (Stott, 2006). Stevens (1957) also cites experimental evidence showing the merit of power functions in general for modeling psychological processes. Here we use a power value function as displayed in Equation 2.

Footnote 1: Outcome x_1, with its associated probability p_1, is the outcome with the highest value for lotteries in the positive domain, and the outcome with the lowest value for lotteries in the negative domain.
The order is irrelevant when lotteries consist of uniform gains or losses; in mixed lotteries, however, it matters. This ordering ensures that a cumulative distribution function is applied to the options systematically. See Tversky and Kahneman (1992) for details.

[Figure 2 shows two panels: v(x), subjective value, plotted against x, objective value; and w(p), subjective probability, plotted against p, objective probability.]

Figure 2: This figure shows a typical value function (left) and probability weighting function (right) from prospect theory. The parameters used to plot the solid lines are from Tversky and Kahneman (1992) and reflect the empirical findings at the aggregate level. The dashed lines represent risk-neutral preferences. The solid line for the value function is concave over gains and convex over losses, and further exhibits a kink at the reference point (in this case the origin) consistent with loss aversion. The probability weighting function overweights small probabilities and underweights large probabilities. It is worth noting that at the individual level, the plots can differ significantly from these aggregate-level plots.

v(x) = x^α           if x ≥ 0, with α > 0
v(x) = −λ(−x)^α      if x < 0, with α > 0, λ > 0    (2)

The α-parameter controls the curvature of the value function. If the value of α is below one, there are diminishing marginal returns in the domain of gains. If the value of α is one, the function is linear and consistent with risk neutrality (i.e., EV maximizing) in decision-making. For α values above one, there are increasing marginal returns for positive values of x. This pattern reverses in the domain of losses (values of x less than 0), which is referred to as the reflection effect. For this paper we use the same value function parameter α for both gains and losses, and our reasoning is as follows. First, this is parsimonious. Second, when using a power function with different parameters for gains and losses, loss aversion can no longer be defined as the magnitude of the kink at the reference point (Köbberling and Wakker, 2005) but would instead need

to be defined contingently over the whole curve (see footnote 2). This complicates interpretation and undermines clear separability between utility curvature and loss aversion, and further may cause modeling problems for mixed lotteries that include an outcome of zero. Problems of induced correlation between the loss aversion parameter and the curvature of the value function in the negative domain have furthermore been reported when using α_gain ≠ α_loss in a power utility model (Nilsson et al., 2011).

Probability weighting function

To capture subjective probability, we use Prelec's functional specification (Prelec, 1998; see Figure 5). Prelec's two-parameter probability weighting function can accommodate both inverse S-shaped as well as S-shaped weightings and has been shown to fit individual data well (Fehr-Duda and Epper, 2012; Gonzalez and Wu, 1999). Its specification can be found in Equation 3.

w(p) = exp(−δ(−ln(p))^γ),  δ > 0, γ > 0    (3)

The γ parameter controls the curvature of the weighting function. The psychological interpretation of the curvature is a diminishing sensitivity away from the end points: both 0 and 1 serve as necessary boundaries, and the further from these edges, the less sensitive individuals are to changes in probability (Tversky and Kahneman, 1992). The δ-parameter controls the general elevation of the probability weighting function. It is an index of how attractive lotteries are considered to be (Gonzalez and Wu, 1999) in general, and it corresponds to how optimistic (or pessimistic) an individual decision maker is. The use of Prelec's two-parameter weighting function over the original specification used in cumulative prospect theory (Tversky and Kahneman, 1992) requires additional explanation. In the original specification the point of intersection with the diagonal changes simultaneously with the shape of the weighting function, whereas in Prelec's specification the line always intersects at the same point if δ is kept constant (see footnote 3).
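Equations (1)-(3) can be combined in a few lines of code. In the sketch below, the α and λ defaults are the aggregate estimates reported by Tversky and Kahneman (1992); the δ and γ values are merely illustrative, since that paper used a different weighting function.

```python
import math

def v(x, alpha=0.88, lam=2.25):
    # Power value function, Eq. (2); alpha and lambda defaults are the
    # aggregate estimates from Tversky and Kahneman (1992).
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, delta=1.0, gamma=0.65):
    # Prelec's two-parameter weighting function, Eq. (3); gamma sets
    # curvature, delta sets elevation (illustrative values here).
    return math.exp(-delta * (-math.log(p)) ** gamma)

def u(lottery, alpha=0.88, lam=2.25, delta=1.0, gamma=0.65):
    # Eq. (1): lottery is ((x1, p1), (x2, p2)), with outcomes
    # pre-ordered as described in footnote 1.
    (x1, p1), (x2, _) = lottery
    return (v(x1, alpha, lam) * w(p1, delta, gamma)
            + v(x2, alpha, lam) * (1 - w(p1, delta, gamma)))

# Inverse S-shape: small probabilities overweighted, large underweighted;
# with delta = 1 the curve crosses the diagonal at p = 1/e.
assert w(0.05) > 0.05 and w(0.95) < 0.95
```

With δ = 1 and γ = 1 the weighting function reduces to w(p) = p, so u(.) collapses to the expected utility of the value function, which is a convenient sanity check.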
Footnote 2: It is possible to use α_gain ≠ α_loss by using an exponential utility function (Köbberling and Wakker, 2005).

Footnote 3: That point is 1/e ≈ 0.37 for δ = 1, which is consistent with some empirical evidence (Gonzalez and Wu, 1999; Camerer and Ho, 1994). The (aggregate) point of intersection is sometimes also found to be closer to 0.5 (Fehr-Duda and Epper, 2012), which is

However, changing the

point of intersection and curvature simultaneously induces a negative correlation with the value function parameter α, because both parameters capture similar characteristics, and it also does not allow for a wide variety of individual preferences (see Fehr-Duda and Epper, 2012). Furthermore, the original specification of the weighting function is not monotonic for γ < 0.279 (Ingersoll, 2008). While that low value is generally not in the range of reported aggregate parameters (Camerer and Ho, 1994), this non-monotonicity may become relevant when estimating individual preferences, where there is considerably more heterogeneity.

4. Estimation methods

Individual risk preferences are captured by obtaining a set of parameters that best fit the observed choices under a particular choice model. In this section we explain the workings of the simple direct maximization of the explained fraction (MXF) of choices, of standard maximum likelihood estimation (MLE), and of hierarchical maximum likelihood estimation (HML).

Direct maximization of explained fraction of choices (MXF)

Arguably the most straightforward way to capture an individual's preferences is by finding the set of parameters that maximizes the fraction of explained choices made over all the lotteries considered. By explained choices we mean the lotteries for which the utility of the chosen option exceeds that of the alternative option, given some decision model and parameter value(s). The term explained does not imply having established some deeper causal insight into the decision-making process, but is used here simply to refer to statistical explanation (e.g., a summarization or reproducibility) of choices given tuned parameters and a decision model. The goal of the MXF method is to find the set of parameters M = {α, λ, δ, γ} that maximizes the fraction of explained choices from an individual DM.
$$M_i = \arg\max_{M} \frac{1}{N} \sum_{i=1}^{N} c(y_i \mid M) \qquad (4)$$

[Footnote 3, continued: …consistent with Karmarkar's single-parameter weighting function (Karmarkar, 1979). A two-parameter extension of that function is provided by Goldstein and Einhorn's function (Goldstein and Einhorn, 1987), which appears to fit about equally well as both Prelec's and Gonzalez and Wu's (Gonzalez and Wu, 1999).]

By c(·) we denote the choice itself, as in Equation 5, in which u(·) is given by Equation 1. It returns 1 if the utility given the parameters is consistent with the actual choice of the subject, and 0 otherwise.

$$c(y_i \mid M) = \begin{cases} 1 & \text{if } A \text{ is chosen and } u(A) \geq u(B), \text{ or } B \text{ is chosen and } u(A) \leq u(B) \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$

This is a discrete problem because we use a finite number of binary lotteries. Solutions can be found using a grid-search (i.e., parameter-sweep) approach, combined with a fine-tuning algorithm that uses the best results of the grid search as a starting point and then refines the estimate as needed, improving the precision of the parameter values. Note that this process can be computationally intense for high-resolution estimates, since the full combination of four parameters over their full ranges has to be taken into consideration.

The MXF estimation method has some advantages. The fraction of explained choices is a simple metric that is directly related to the actual choices of a decision maker. It is easy to explain, and it is also robust to the presence of a few aberrant choices. This quality prevents the estimates from being unduly affected by a single decision, measurement error, or overfitting.

An important shortcoming of MXF as an estimation method is that the characteristics of the explained lotteries are not taken into consideration when fitting the choice data. Consider the lottery at the beginning of the paper (Table 1), in which the expected value of option A is higher than that of B, but just barely. Consider a decision maker whom we suspect to be generally risk neutral; the EV-maximizing policy would therefore dictate selecting option A over option B. But for argument's sake, suppose we observe the DM select option B; how much evidence does this provide against our conjecture of risk neutrality? Hardly any, one can conclude, given the small difference in utility between the options.
Now consider the same lottery, but the payoff for A is $483 instead of $83. Picking B over A is now a huge mistake for a risk-neutral decision maker, and such an observation would yield stronger evidence that would force one to revise a conjecture about the decision maker's risk neutrality. MXF as an estimation method does not distinguish between these two instances, as its simplistic right/wrong criterion lacks the nuance to integrate this additional information about options, choices, and the strength of evidence.

Another limitation of MXF is that it results in numerous ties for best-fitting parameters. In other words, many parameter sets result in exactly the same fraction of choices being explained. For several subjects the same fraction of choices is explained using very different parameters. For one subject, just over seventy percent of choices were explained by parameters that ranged from 0.6 to 1.25 for the curvature of the utility function, from 1.5 to 6 (and possibly beyond) for loss aversion, from 0.44 to 1.3 for the elevation of the weighting function, and from 0.8 to 2.5 for its curvature. We find such ties for various constellations of parameters (albeit somewhat smaller) even for subjects for whom we can explain up to ninety percent of all choices. Two other estimation methods (MLE and HML) avoid these problems, and we turn our attention to them next.

4.2. Maximum likelihood estimation (MLE)

A strict application of cumulative prospect theory dictates that even a trivial difference in utility leads the DM to always choose the option with the higher utility. However, even in early experiments with repeated lotteries, Mosteller and Nogee (1951) found that this is not the case with real decision makers. Choices were instead partially stochastic, with the probability of choosing the generally more favored option increasing as the utility difference between the options increased. This idea of random utility maximization has been developed by Luce (1959) and others (e.g., Harless and Camerer, 1994; Hey and Orme, 1994). In this tradition, we use a logistic function that specifies the probability of picking one option, depending on each option's utility. This formulation is displayed in Equation 6. A logistic function fits the data from Mosteller and Nogee (1951) well, for example, and has been shown generally to perform well with behavioral data (Stott, 2006).
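The MXF criterion and its tie problem described in Section 4.1 can be made concrete with a short sketch. The following is illustrative Python (the paper's implementation was in MATLAB and C), and the lotteries, choices, power value function, and linear-in-log-odds weighting function used here are hypothetical stand-ins rather than the paper's data or exact specification; loss aversion λ is omitted because all example outcomes are gains.

```python
import itertools

def weight(p, delta, gamma):
    # Two-parameter probability weighting (linear-in-log-odds form):
    # delta controls elevation, gamma controls curvature.
    if p in (0.0, 1.0):
        return p
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

def cpt_utility(x, p, alpha, delta, gamma):
    # Utility of a binary gain lottery "x with probability p, else 0".
    return weight(p, delta, gamma) * x ** alpha

def explained_fraction(params, lotteries, choices):
    # Fraction of choices consistent with the model (in the spirit of Eqs. 4-5).
    alpha, delta, gamma = params
    hits = 0
    for (xA, pA, xB, pB), y in zip(lotteries, choices):
        uA = cpt_utility(xA, pA, alpha, delta, gamma)
        uB = cpt_utility(xB, pB, alpha, delta, gamma)
        if (y == 0 and uA >= uB) or (y == 1 and uA <= uB):
            hits += 1
    return hits / len(lotteries)

# Hypothetical lotteries (xA, pA, xB, pB) and choices (0 = A chosen, 1 = B chosen).
lotteries = [(100, 0.5, 10, 1.0), (10, 1.0, 20, 0.05), (50, 0.9, 5, 1.0)]
choices = [0, 0, 0]

grid = list(itertools.product([0.8, 1.0, 1.2],   # alpha
                              [0.8, 1.0],        # delta
                              [0.8, 1.0]))       # gamma
scores = {g: explained_fraction(g, lotteries, choices) for g in grid}
best = max(scores.values())
ties = [g for g, s in scores.items() if s == best]
print(best, len(ties))  # several distinct parameter sets share the maximum
```

With choices this easy to rationalize, every grid point explains the same fraction of choices, so the argmax is a whole set of ties, which is exactly the degeneracy MXF suffers from.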
$$p(A \succ B) = \frac{1}{1 + e^{(u(B) - u(A))/\varphi}}, \qquad \varphi > 0 \qquad (6)$$

The parameter ϕ is an index of the sensitivity to differences in utility. Higher values of ϕ diminish the importance of the difference in utility, as illustrated in Figure 3. It is important to note that this parameter operates on the absolute difference in the utilities assigned to the two options. Two individuals with different risk attitudes but equal sensitivity will almost certainly end up with different ϕ parameters, simply because the differences in the options' utilities are also determined by the risk attitude. While the parameter is useful to help fit an individual's choice data, one cannot compare the values of ϕ across individuals unless both the lotteries and the individuals' risk attitudes are identical. The interpretation of the sensitivity parameter is therefore often ambiguous, and it should be considered simply an aid to the estimation method.

Figure 3: This figure shows two example logistic functions. Consider a lottery for which option B has a utility of 1 while option A's utility is plotted on the x-axis. Given the utility of B, the probability of picking A over B is plotted on the y-axis. In general, the option with the higher utility is more likely to be selected. In the instance when both options' utilities are equal, the probability of choosing each option is equivalent. The sensitivity of a DM is reflected in the parameter ϕ. A perfectly discriminating (and perfectly consistent) DM would have a step function as ϕ → 0; less perfect discrimination on behalf of DMs is captured by larger ϕ values.

In maximum likelihood estimation the goal is to maximize the likelihood of the observed outcome, which consists of the observed choices, given a set of parameters. The likelihood is expressed using the choice function above, yielding a stochastic specification. We use the notation M = {α, λ, δ, γ, ϕ}, where y_i denotes the choice for the i-th lottery.

$$M_i = \arg\max_{M} \prod_{i=1}^{N} c(y_i \mid M) \qquad (7)$$

By c(·) we denote the choice itself, where p(A ≻ B) is given by the logistic choice function in Equation 6.

$$c(y_i \mid M) = \begin{cases} p(A \succ B) & \text{if } A \text{ is chosen } (y_i = 0) \\ 1 - p(A \succ B) & \text{if } B \text{ is chosen } (y_i = 1) \end{cases} \qquad (8)$$

This approach overcomes some of the major shortcomings of the method discussed above. MLE takes the utility difference between the two options into account; it therefore does not only fit the number of choices predicted correctly, but also measures the quality of the fit for each lottery. Ironically, because it is the utility difference that drives the fit (not a count of correct predictions), the resulting best-fitting parameters may explain fewer choices than the MXF method. This is because MLE tolerates several smaller prediction mistakes instead of fewer, but very large, mistakes. The fact that the quality of the fit drives the parameter estimates may be beneficial for predictions, especially when the lotteries have not been carefully chosen and large mistakes have large consequences. Because the resulting likelihood surface is smoother (though still relatively lumpy), finding robust solutions is computationally less intense than with MXF.

On the downside, MLE is not highly robust to aberrant choices. If, by confusion or accident, the DM chooses an atypical option in a lottery that is greatly out of line with her other choices, the resulting parameters may be disproportionately affected by this single anomalous choice. While MLE does take the quality of the fit into account with respect to utility discrepancies, the fitting procedure has the simple goal of finding the parameter set that generates the best overall fit (even if the differences in fit among parameter sets are trivially small). This approach ignores the fact that there may be different parameter sets that fit the choices virtually equally well.
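As an illustrative sketch of Equations 6 to 8 (not the authors' implementation; the utility pairs below are hypothetical), the log-likelihood of a choice sequence under the logistic choice rule can be computed as follows, with ϕ entering as in Equation 6:

```python
import math

def choice_prob_A(uA, uB, phi):
    # Logistic choice rule (Equation 6); phi > 0, with larger phi
    # making choices less sensitive to the utility difference.
    return 1.0 / (1.0 + math.exp((uB - uA) / phi))

def log_likelihood(utils, choices, phi):
    # utils: list of (uA, uB) pairs under some candidate parameter set M;
    # choices: 0 if A was chosen, 1 if B was chosen (Equations 7 and 8).
    ll = 0.0
    for (uA, uB), y in zip(utils, choices):
        pA = choice_prob_A(uA, uB, phi)
        ll += math.log(pA if y == 0 else 1.0 - pA)
    return ll

# Two hypothetical utility assignments that both "explain" the same choice of A,
# but with different margins: the larger margin earns a higher log-likelihood.
print(log_likelihood([(2.0, 1.0)], [0], 1.0))
print(log_likelihood([(1.1, 1.0)], [0], 1.0))
```

This is precisely how MLE grades fit quality rather than merely counting correct predictions: both parameter sets predict the choice, but the one producing the larger utility margin is scored as more likely.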
Consider subject A in Figure 8, which shows the resulting distribution of parameter estimates when we change one single choice and re-estimate the MLE parameters. Especially for loss aversion and the elevation of the probability weighting function, we find that a single choice can make the difference between very different parameter values. The MLE method solves the tie-breaking problem of MXF, but does so by ascribing a great deal of importance to potentially trivial differences in fit quality. Radically different combinations of parameters may be identified by the procedure as nearly equivalently good, and the winning parameter set may differ from the other almost-as-good sets by only the slightest of margins in fit quality. This process of selecting the winning parameter set from among the space of plausible combinations ignores how realistic the sets may be, or how reflective they are of the DM's actual preferences. Moreover, the ill-behaved structure of the parameter space makes it a challenge to find reliable results, as different solutions can be nearly identical in fit quality yet located in radically different regimes of the parameter space. A minor change in the lotteries, or just one different choice, may move the resulting parameter estimates substantially. Furthermore, the MLE procedure does not consider the psychological interpretation of the parameters; it may therefore select parameters that attribute out-of-the-ordinary preferences to a DM on the basis of very little evidence distinguishing them from more typical preferences.

4.3. Hierarchical maximum likelihood estimation (HML)

The HML estimation procedure developed here is based on Farrell and Ludwig (2008). It is a two-step procedure that is explained below.

Step 1. In the first step, the likelihood of occurrence of each of the four model parameters in the population as a whole is estimated. These likelihoods of occurrence are captured using probability density distributions and reflect how likely (or, conversely, how out of the ordinary) a particular parameter value is in the population, given everyone's choices. These density distributions are found by solving the integral in Equation 9.
For each of the four cumulative prospect theory parameters we use a lognormal density distribution, as this distribution takes only positive values, is positively skewed, has only two distribution parameters, and is not too computationally intense. Because the sensitivity parameter ϕ is not independent of the other parameters, and because it has no clear psychological interpretation, we do not estimate a distribution of occurrence for sensitivity; instead, for this step we take the value found for the aggregate data with classical maximum likelihood estimation. [Footnote 4: It may be that a different approach with regard to the sensitivity parameter leads to better performance. We cannot judge this without data from a third re-test, as it requires estimation in a first session, calibration of the method using a second session, and true out-of-sample testing of this calibration in a third session. This may be interesting but is beyond the scope of the current paper.]

Figure 4: The two steps of the hierarchical maximum likelihood estimation procedure. In the first step we fit the data to four probability density distributions. In the second step we obtain the actual individual estimates. A priori, and in the absence of individual choice data, the small square has the highest likelihood, which is at the modal value of each parameter distribution. The individual choices are balanced against the likelihood of the occurrence of a parameter to obtain individual parameter estimates, depicted by the big squares.

We use the notation M = {α, λ, δ, γ, ϕ}, P_α = {µ_α, σ_α}, P_λ = {µ_λ, σ_λ}, P_δ = {µ_δ, σ_δ}, P_γ = {µ_γ, σ_γ}, P = {P_α, P_λ, P_δ, P_γ}; S and N
denote the number of participants and the number of lotteries, respectively.

$$P = \arg\max_{P} \prod_{s=1}^{S} \iiiint \left[ \prod_{i=1}^{N} c(y_{s,i} \mid M) \right] ln(\alpha \mid P_\alpha)\, ln(\lambda \mid P_\lambda)\, ln(\delta \mid P_\delta)\, ln(\gamma \mid P_\gamma)\; d\alpha\, d\lambda\, d\delta\, d\gamma \qquad (9)$$

This first step can be explained using a simplified example, which is a discrete version of Equation 9. For each of the four risk preference parameters, let us pick a set of values that covers a sufficiently large interval, for example {0.5, 1.0, ..., 5.0}. We then take all parameter combinations (the Cartesian product) of these four sets, i.e., α = 0.5, λ = 0.5, δ = 0.5, γ = 0.5; then α = 0.5, λ = 0.5, δ = 0.5, γ = 1.0; and so forth. Each of these combinations has a certain likelihood of explaining a subject's observed choices, displayed within the square brackets in Equation 9. However, not all of the parameter values in these combinations are equally supported by the data. It could, for example, be that the combination shown above that contains γ = 0.5 is 10 times more likely to explain the observed choices than the one with γ = 1.0. How likely each parameter value is in relation to another value is precisely what we intend to measure. The fact that one set of parameters is associated with a higher likelihood of explaining the observed choices (and is therefore more strongly supported by the data) can be reflected by adapting the four lognormal density functions of the model parameters. Equation 9 achieves this because the likelihood that a set of values explains the observed choices is multiplied by the weight put on this set of parameters through the density functions. The density functions, and the product thereof, denoted by ln(·) within the integrals, thus measure the relative weight of certain parameter values. The maximization is done over all participants simultaneously, under the constraint that each ln(·) remains a proper density distribution. Applying this step to our data leads to the density distributions displayed in Figure 4.

Step 2.
In this step we estimate individual parameters as in standard maximum likelihood estimation, but we now weight parameter sets by the likelihood of occurrence of these parameter values, as given by the density functions obtained in step 1. Those distributions, and both steps, are illustrated in Figure 4. This second step of the procedure is described in Equation 10, where M_i denotes M, as above, for subject i. The resulting parameters are driven not only by the individual decision data, but also by how likely it is that such a parameter combination occurs in the population.

$$M_i = \arg\max_{M_i} \left[ \prod_{i=1}^{N} c(y_i \mid M_i) \right] ln(\alpha \mid P_\alpha)\, ln(\lambda \mid P_\lambda)\, ln(\delta \mid P_\delta)\, ln(\gamma \mid P_\gamma) \qquad (10)$$

This HML procedure is motivated by the principle that extreme conclusions require extreme evidence. In the first step of the HML procedure, one simultaneously extracts density distributions of all parameter values using all choice data from the population of DMs. In the second step, the likelihood of a particular parameter value is determined by how well it fits the observed choices of an individual and, at the same time, is weighted by the likelihood of observing that particular parameter value in the population distribution (defined by the density functions obtained in step 1). The parameter set with the most likely combination of both of these elements is selected. The weaker the evidence, the more the parameter estimates are pulled toward the center of the density distributions. HML therefore counters maximum likelihood estimation's tendency to fit parameters to extreme choices by requiring stronger evidence to establish extreme parameter estimates. Thus, if a DM makes very consistent choices, the population parameter densities will have very little effect on the estimates. Conversely, if a DM's choices are more inconsistent, the population parameters will have more pull on the individual estimates. In the extreme, if a DM makes wildly inconsistent choices, the best parameter estimates for the DM will simply be those of the population average. This is illustrated in Figure 5, which compares the estimates for four representative participants under MLE (points) and HML (squares). Subject 1 is the same subject as in Figure 8. We find that HML estimates differ from MLE estimates, but generally remain within the 95% confidence intervals. This is the statistical tie-breaking mechanism we referred to in the previous section.
It is also possible that the differences in estimates are large and fall outside the univariate MLE confidence intervals (displayed by lines), as for subject 2. For the other two participants, HML's estimates are virtually identical to MLE's, irrespective of the confidence interval. This is largely driven by how consistently a DM's choices support particular parameter values. The estimation method has been implemented in both MATLAB and C. Parameter estimates in the second step were found using the simplex algorithm of Nelder and Mead (1965), initialized with multiple starting points.
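The pull of the population densities in step 2 can be illustrated with a one-dimensional sketch (illustrative Python with hypothetical numbers, not the authors' MATLAB/C code): a stand-in individual log-likelihood with a weak peak at λ = 4 is combined with the log of a lognormal population density, and the penalized maximum lands between the individual peak and the population mode.

```python
import math

def lognormal_logpdf(x, mu, sigma):
    # Log of the lognormal density used for the population-level distributions.
    return (-((math.log(x) - mu) ** 2) / (2 * sigma ** 2)
            - math.log(x * sigma * math.sqrt(2 * math.pi)))

def individual_ll(lam, weight=0.5, peak=4.0):
    # Stand-in individual log-likelihood for one parameter (loss aversion lambda):
    # a shallow quadratic peak represents weak, noisy individual evidence.
    return -weight * (lam - peak) ** 2

mu, sigma = math.log(1.5), 0.5               # hypothetical population distribution
grid = [0.5 + k * 0.01 for k in range(551)]  # lambda candidates from 0.5 to 6.0

# Plain MLE maximizes the individual log-likelihood alone; HML (step 2) adds
# the log population density, shrinking weakly supported extreme estimates.
mle_est = max(grid, key=individual_ll)
hml_est = max(grid, key=lambda lam: individual_ll(lam) + lognormal_logpdf(lam, mu, sigma))
print(round(mle_est, 2), round(hml_est, 2))  # HML is pulled toward the population mode
```

Making the individual peak sharper (a larger `weight`, i.e. more consistent choices) moves the HML estimate back toward the MLE estimate, mirroring the shrinkage behavior described above.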

Published in Management Science, Articles in Advance, pp. 1-19, ISSN 0025-1909 (print), ISSN 1526-5501 (online).


More information

Department of Economics Working Paper Series

Department of Economics Working Paper Series Department of Economics Working Paper Series The Common Ratio Effect in Choice, Pricing, and Happiness Tasks by Mark Schneider Chapman University Mikhael Shor University of Connecticut Working Paper 2016-29

More information

ELICITING RISK PREFERENCES USING CHOICE LISTS

ELICITING RISK PREFERENCES USING CHOICE LISTS ELICITING RISK PREFERENCES USING CHOICE LISTS DAVID J. FREEMAN, YORAM HALEVY AND TERRI KNEELAND Abstract. We study the effect of embedding pairwise choices between lotteries within a choice list on measured

More information

Evaluation Models STUDIES OF DIAGNOSTIC EFFICIENCY

Evaluation Models STUDIES OF DIAGNOSTIC EFFICIENCY 2. Evaluation Model 2 Evaluation Models To understand the strengths and weaknesses of evaluation, one must keep in mind its fundamental purpose: to inform those who make decisions. The inferences drawn

More information

A Race Model of Perceptual Forced Choice Reaction Time

A Race Model of Perceptual Forced Choice Reaction Time A Race Model of Perceptual Forced Choice Reaction Time David E. Huber (dhuber@psych.colorado.edu) Department of Psychology, 1147 Biology/Psychology Building College Park, MD 2742 USA Denis Cousineau (Denis.Cousineau@UMontreal.CA)

More information

Sawtooth Software. The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? RESEARCH PAPER SERIES

Sawtooth Software. The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? RESEARCH PAPER SERIES Sawtooth Software RESEARCH PAPER SERIES The Number of Levels Effect in Conjoint: Where Does It Come From and Can It Be Eliminated? Dick Wittink, Yale University Joel Huber, Duke University Peter Zandan,

More information

Attentional Theory Is a Viable Explanation of the Inverse Base Rate Effect: A Reply to Winman, Wennerholm, and Juslin (2003)

Attentional Theory Is a Viable Explanation of the Inverse Base Rate Effect: A Reply to Winman, Wennerholm, and Juslin (2003) Journal of Experimental Psychology: Learning, Memory, and Cognition 2003, Vol. 29, No. 6, 1396 1400 Copyright 2003 by the American Psychological Association, Inc. 0278-7393/03/$12.00 DOI: 10.1037/0278-7393.29.6.1396

More information

For general queries, contact

For general queries, contact Much of the work in Bayesian econometrics has focused on showing the value of Bayesian methods for parametric models (see, for example, Geweke (2005), Koop (2003), Li and Tobias (2011), and Rossi, Allenby,

More information

Pros. University of Chicago and NORC at the University of Chicago, USA, and IZA, Germany

Pros. University of Chicago and NORC at the University of Chicago, USA, and IZA, Germany Dan A. Black University of Chicago and NORC at the University of Chicago, USA, and IZA, Germany Matching as a regression estimator Matching avoids making assumptions about the functional form of the regression

More information

A short-but-efficient test for overconfidence and prospect theory. Experimental validation

A short-but-efficient test for overconfidence and prospect theory. Experimental validation MPRA Munich Personal RePEc Archive A short-but-efficient test for overconfidence and prospect theory. Experimental validation David Peon and Anxo Calvo and Manel Antelo University of A Coruna, University

More information

Stepwise Knowledge Acquisition in a Fuzzy Knowledge Representation Framework

Stepwise Knowledge Acquisition in a Fuzzy Knowledge Representation Framework Stepwise Knowledge Acquisition in a Fuzzy Knowledge Representation Framework Thomas E. Rothenfluh 1, Karl Bögl 2, and Klaus-Peter Adlassnig 2 1 Department of Psychology University of Zurich, Zürichbergstraße

More information

Which determines Dictating the Risk, risk preference or social image? Experimental evidence-

Which determines Dictating the Risk, risk preference or social image? Experimental evidence- Which determines Dictating the Risk, risk preference or social image? Experimental evidence- Tetsuya Kawamura a, Kazuhito Ogawa a,b and Yusuke Osaki a,c a Center for Experimental Economics, Kansai University

More information

3 CONCEPTUAL FOUNDATIONS OF STATISTICS

3 CONCEPTUAL FOUNDATIONS OF STATISTICS 3 CONCEPTUAL FOUNDATIONS OF STATISTICS In this chapter, we examine the conceptual foundations of statistics. The goal is to give you an appreciation and conceptual understanding of some basic statistical

More information

Daniel Schunk; Joachim Winter: The Relationship Between Risk Attitudes and Heuristics in Search Tasks: A Laboratory Experiment

Daniel Schunk; Joachim Winter: The Relationship Between Risk Attitudes and Heuristics in Search Tasks: A Laboratory Experiment Daniel Schunk; Joachim Winter: The Relationship Between Risk Attitudes and Heuristics in Search Tasks: A Laboratory Experiment Munich Discussion Paper No. 2007-9 Department of Economics University of Munich

More information

Author's personal copy

Author's personal copy Erkenn DOI 10.1007/s10670-013-9543-3 ORIGINAL ARTICLE Brad Armendt Received: 2 October 2013 / Accepted: 2 October 2013 Ó Springer Science+Business Media Dordrecht 2013 Abstract It is widely held that the

More information

Loss Aversion under Prospect Theory: A Parameter-Free Measurement 1

Loss Aversion under Prospect Theory: A Parameter-Free Measurement 1 Loss Aversion under Prospect Theory: A Parameter-Free Measurement 1 Mohammed Abdellaoui Maison de la Recherche de l ESTP, GRID, 30 avenue du Président Wilson, 94230 Cachan, France, abdellaoui@grid.ensam.estp.fr.

More information

Loss Aversion, Diminishing Sensitivity, and the Effect of Experience on Repeated Decisions y

Loss Aversion, Diminishing Sensitivity, and the Effect of Experience on Repeated Decisions y Journal of Behavioral Decision Making J. Behav. Dec. Making, 21: 575 597 (2008) Published online 8 May 2008 in Wiley InterScience (www.interscience.wiley.com).602 Loss Aversion, Diminishing Sensitivity,

More information

UNESCO EOLSS. This article deals with risk-defusing behavior. It is argued that this forms a central part in decision processes.

UNESCO EOLSS. This article deals with risk-defusing behavior. It is argued that this forms a central part in decision processes. RISK-DEFUSING BEHAVIOR Oswald Huber University of Fribourg, Switzerland Keywords: cognitive bias, control, cost of risk-defusing operators, decision making, effect of risk-defusing operators, lottery,

More information

Lec 02: Estimation & Hypothesis Testing in Animal Ecology

Lec 02: Estimation & Hypothesis Testing in Animal Ecology Lec 02: Estimation & Hypothesis Testing in Animal Ecology Parameter Estimation from Samples Samples We typically observe systems incompletely, i.e., we sample according to a designed protocol. We then

More information

WIF - Institute of Economic Research

WIF - Institute of Economic Research WIF - Institute of Economic Research Economics Working Paper Series Eidgenössische Technische Hochschule Zürich Swiss Federal Institute of Technology Zurich Gender, Financial Risk, and Probability Weights

More information

Chapter 11. Experimental Design: One-Way Independent Samples Design

Chapter 11. Experimental Design: One-Way Independent Samples Design 11-1 Chapter 11. Experimental Design: One-Way Independent Samples Design Advantages and Limitations Comparing Two Groups Comparing t Test to ANOVA Independent Samples t Test Independent Samples ANOVA Comparing

More information

Evaluating generalizability and parameter consistency in learning models

Evaluating generalizability and parameter consistency in learning models 1 Evaluating generalizability and parameter consistency in learning models By Eldad Yechiam Technion Israel Institute of Technology Jerome R. Busemeyer Indiana University, Bloomington IN To Appear in Games

More information

MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES OBJECTIVES

MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES OBJECTIVES 24 MULTIPLE LINEAR REGRESSION 24.1 INTRODUCTION AND OBJECTIVES In the previous chapter, simple linear regression was used when you have one independent variable and one dependent variable. This chapter

More information

Identification of Tissue Independent Cancer Driver Genes

Identification of Tissue Independent Cancer Driver Genes Identification of Tissue Independent Cancer Driver Genes Alexandros Manolakos, Idoia Ochoa, Kartik Venkat Supervisor: Olivier Gevaert Abstract Identification of genomic patterns in tumors is an important

More information

Cognitive modeling versus game theory: Why cognition matters

Cognitive modeling versus game theory: Why cognition matters Cognitive modeling versus game theory: Why cognition matters Matthew F. Rutledge-Taylor (mrtaylo2@connect.carleton.ca) Institute of Cognitive Science, Carleton University, 1125 Colonel By Drive Ottawa,

More information

Exploring Experiential Learning: Simulations and Experiential Exercises, Volume 5, 1978 THE USE OF PROGRAM BAYAUD IN THE TEACHING OF AUDIT SAMPLING

Exploring Experiential Learning: Simulations and Experiential Exercises, Volume 5, 1978 THE USE OF PROGRAM BAYAUD IN THE TEACHING OF AUDIT SAMPLING THE USE OF PROGRAM BAYAUD IN THE TEACHING OF AUDIT SAMPLING James W. Gentry, Kansas State University Mary H. Bonczkowski, Kansas State University Charles W. Caldwell, Kansas State University INTRODUCTION

More information

Placebo and Belief Effects: Optimal Design for Randomized Trials

Placebo and Belief Effects: Optimal Design for Randomized Trials Placebo and Belief Effects: Optimal Design for Randomized Trials Scott Ogawa & Ken Onishi 2 Department of Economics Northwestern University Abstract The mere possibility of receiving a placebo during a

More information

The effects of losses and event splitting on the Allais paradox

The effects of losses and event splitting on the Allais paradox Judgment and Decision Making, Vol. 2, No. 2, April 2007, pp. 115 125 The effects of losses and event splitting on the Allais paradox Bethany J. Weber Brain Imaging and Analysis Center Duke University Abstract

More information

Doing Quantitative Research 26E02900, 6 ECTS Lecture 6: Structural Equations Modeling. Olli-Pekka Kauppila Daria Kautto

Doing Quantitative Research 26E02900, 6 ECTS Lecture 6: Structural Equations Modeling. Olli-Pekka Kauppila Daria Kautto Doing Quantitative Research 26E02900, 6 ECTS Lecture 6: Structural Equations Modeling Olli-Pekka Kauppila Daria Kautto Session VI, September 20 2017 Learning objectives 1. Get familiar with the basic idea

More information

An Experimental Test of Loss Aversion and Scale Compatibility. Han Bleichrodt, imta, Erasmus University, Rotterdam, The Netherlands

An Experimental Test of Loss Aversion and Scale Compatibility. Han Bleichrodt, imta, Erasmus University, Rotterdam, The Netherlands An Experimental Test of Loss Aversion and Scale Compatibility Han Bleichrodt, imta, Erasmus University, Rotterdam, The Netherlands Jose Luis Pinto, Universitat Pompeu Fabra, Barcelona, Spain Address correspondence

More information

How Does Analysis of Competing Hypotheses (ACH) Improve Intelligence Analysis?

How Does Analysis of Competing Hypotheses (ACH) Improve Intelligence Analysis? How Does Analysis of Competing Hypotheses (ACH) Improve Intelligence Analysis? Richards J. Heuer, Jr. Version 1.2, October 16, 2005 This document is from a collection of works by Richards J. Heuer, Jr.

More information

SUPPLEMENTAL MATERIAL

SUPPLEMENTAL MATERIAL 1 SUPPLEMENTAL MATERIAL Response time and signal detection time distributions SM Fig. 1. Correct response time (thick solid green curve) and error response time densities (dashed red curve), averaged across

More information

UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2014

UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2014 UNIVERSITY of PENNSYLVANIA CIS 520: Machine Learning Final, Fall 2014 Exam policy: This exam allows two one-page, two-sided cheat sheets (i.e. 4 sides); No other materials. Time: 2 hours. Be sure to write

More information

How Different Choice Strategies Can Affect the Risk Elicitation Process

How Different Choice Strategies Can Affect the Risk Elicitation Process IAENG International Journal of Computer Science, 32:4, IJCS_32_4_13 How Different Choice Strategies Can Affect the Risk Elicitation Process Ari Riabacke, Mona Påhlman, Aron Larsson Abstract This paper

More information

Reevaluating evidence on myopic loss aversion: aggregate patterns versus individual choices

Reevaluating evidence on myopic loss aversion: aggregate patterns versus individual choices Theory Dec. DOI 10.1007/s18-009-9143-5 Reevaluating evidence on myopic loss aversion: aggregate patterns versus individual choices Pavlo R. Blavatskyy Ganna Pogrebna Springer Science+Business Media, LLC.

More information

Economics Bulletin, 2013, Vol. 33 No. 1 pp

Economics Bulletin, 2013, Vol. 33 No. 1 pp 1. Introduction An often-quoted paper on self-image as the motivation behind a moral action is An economic model of moral motivation by Brekke et al. (2003). The authors built the model in two steps: firstly,

More information

Reinforcement Learning : Theory and Practice - Programming Assignment 1

Reinforcement Learning : Theory and Practice - Programming Assignment 1 Reinforcement Learning : Theory and Practice - Programming Assignment 1 August 2016 Background It is well known in Game Theory that the game of Rock, Paper, Scissors has one and only one Nash Equilibrium.

More information

Introduction to Behavioral Economics Like the subject matter of behavioral economics, this course is divided into two parts:

Introduction to Behavioral Economics Like the subject matter of behavioral economics, this course is divided into two parts: Economics 142: Behavioral Economics Spring 2008 Vincent Crawford (with very large debts to Colin Camerer of Caltech, David Laibson of Harvard, and especially Botond Koszegi and Matthew Rabin of UC Berkeley)

More information

Glossary From Running Randomized Evaluations: A Practical Guide, by Rachel Glennerster and Kudzai Takavarasha

Glossary From Running Randomized Evaluations: A Practical Guide, by Rachel Glennerster and Kudzai Takavarasha Glossary From Running Randomized Evaluations: A Practical Guide, by Rachel Glennerster and Kudzai Takavarasha attrition: When data are missing because we are unable to measure the outcomes of some of the

More information

Adaptive Theory: Limited Neural Resource, Attention and Risk Taking

Adaptive Theory: Limited Neural Resource, Attention and Risk Taking Adaptive Theory: Limited Neural Resource, Attention and Risk Taking By CHENG Qiqi Draft: December 31, 2018 This paper presents a new descriptive theory for decision making under risk, called adaptive theory,

More information

The Boundaries of Instance-Based Learning Theory for Explaining Decisions. from Experience. Cleotilde Gonzalez. Dynamic Decision Making Laboratory

The Boundaries of Instance-Based Learning Theory for Explaining Decisions. from Experience. Cleotilde Gonzalez. Dynamic Decision Making Laboratory The Boundaries of Instance-Based Learning Theory for Explaining Decisions from Experience Cleotilde Gonzalez Dynamic Decision Making Laboratory Social and Decision Sciences Department Carnegie Mellon University

More information

Responsiveness to feedback as a personal trait

Responsiveness to feedback as a personal trait Responsiveness to feedback as a personal trait Thomas Buser University of Amsterdam Leonie Gerhards University of Hamburg Joël van der Weele University of Amsterdam Pittsburgh, June 12, 2017 1 / 30 Feedback

More information

Statistics and Probability

Statistics and Probability Statistics and a single count or measurement variable. S.ID.1: Represent data with plots on the real number line (dot plots, histograms, and box plots). S.ID.2: Use statistics appropriate to the shape

More information

Separation of Intertemporal Substitution and Time Preference Rate from Risk Aversion: Experimental Analysis

Separation of Intertemporal Substitution and Time Preference Rate from Risk Aversion: Experimental Analysis . International Conference Experiments in Economic Sciences 3. Oct. 2004 Separation of Intertemporal Substitution and Time Preference Rate from Risk Aversion: Experimental Analysis Ryoko Wada Assistant

More information

On the How and Why of Decision Theory. Joint confusions with Bart Lipman

On the How and Why of Decision Theory. Joint confusions with Bart Lipman On the How and Why of Decision Theory Joint confusions with Bart Lipman Caveats Owes much to many people with whom we have benefitted from discussing decision theory over the years. These include Larry

More information

Access from the University of Nottingham repository:

Access from the University of Nottingham repository: Murad, Zahra and Sefton, Martin and Starmer, Chris (2016) How do risk attitudes affect measured confidence? Journal of Risk and Uncertainty, 52 (1). pp. 21-46. ISSN 1573-0476 Access from the University

More information

How the Mind Exploits Risk-Reward Structures in Decisions under Risk

How the Mind Exploits Risk-Reward Structures in Decisions under Risk How the Mind Exploits Risk-Reward Structures in Decisions under Risk Christina Leuker (leuker@mpib-berlin-mpg.de) Timothy J. Pleskac (pleskac@mpib-berlin.mpg.de) Thorsten Pachur (pachur@mpib-berlin.mpg.de)

More information

Empowered by Psychometrics The Fundamentals of Psychometrics. Jim Wollack University of Wisconsin Madison

Empowered by Psychometrics The Fundamentals of Psychometrics. Jim Wollack University of Wisconsin Madison Empowered by Psychometrics The Fundamentals of Psychometrics Jim Wollack University of Wisconsin Madison Psycho-what? Psychometrics is the field of study concerned with the measurement of mental and psychological

More information

11/18/2013. Correlational Research. Correlational Designs. Why Use a Correlational Design? CORRELATIONAL RESEARCH STUDIES

11/18/2013. Correlational Research. Correlational Designs. Why Use a Correlational Design? CORRELATIONAL RESEARCH STUDIES Correlational Research Correlational Designs Correlational research is used to describe the relationship between two or more naturally occurring variables. Is age related to political conservativism? Are

More information

Loss Aversion and Scale Compatibility in Two-Attribute Trade-Offs

Loss Aversion and Scale Compatibility in Two-Attribute Trade-Offs Journal of Mathematical Psychology 46, 315 337 (2002) doi:10.1006/jmps.2001.1390 Loss Aversion and Scale Compatibility in Two-Attribute Trade-Offs Han Bleichrodt Erasmus University and Jose Luis Pinto

More information

No Aspiration to Win? An Experimental Test of the Aspiration Level Model *

No Aspiration to Win? An Experimental Test of the Aspiration Level Model * No Aspiration to Win? An Experimental Test of the Aspiration Level Model * June 27, 23 Enrico Diecidue, Moshe Levy, and Jeroen van de Ven Abstract In the area of decision making under risk, a growing body

More information

PSYCHOLOGICAL SCIENCE. Research Report. CONFLICT AND THE STOCHASTIC-DOMINANCE PRINCIPLE OF DECISION MAKING Adele Diederich 1 and Jerome R.

PSYCHOLOGICAL SCIENCE. Research Report. CONFLICT AND THE STOCHASTIC-DOMINANCE PRINCIPLE OF DECISION MAKING Adele Diederich 1 and Jerome R. Research Report CONFLICT AND THE STOCHASTIC-DOMINANCE PRINCIPLE OF DECISION MAKING Adele Diederich 1 and Jerome R. Busemeyer 2 1 Univesitat Oldenburg, Oldenburg, Germany, and 2 Indiana University Abstract

More information

Constructing Preference From Experience: The Endowment Effect Reflected in External Information Search

Constructing Preference From Experience: The Endowment Effect Reflected in External Information Search Journal of Experimental Psychology: Learning, Memory, and Cognition 2012, Vol. 38, No. 4, 1108 1116 2012 American Psychological Association 0278-7393/12/$12.00 DOI: 10.1037/a0027637 RESEARCH REPORT Constructing

More information

GROUP DECISION MAKING IN RISKY ENVIRONMENT ANALYSIS OF GENDER BIAS

GROUP DECISION MAKING IN RISKY ENVIRONMENT ANALYSIS OF GENDER BIAS GROUP DECISION MAKING IN RISKY ENVIRONMENT ANALYSIS OF GENDER BIAS Andrea Vasiľková, Matúš Kubák, Vladimír Gazda, Marek Gróf Abstract Article presents an experimental study of gender bias in group decisions.

More information

Prospect Theory and the Brain

Prospect Theory and the Brain C H A P T E R 11 Prospect Theory and the Brain Craig R. Fox and Russell A. Poldrack O U T L I N E Introduction to Prospect Theory 145 Historical Context 146 Prospect Theory 149 Applications to Riskless

More information

THE APPLICATION OF ORDINAL LOGISTIC HEIRARCHICAL LINEAR MODELING IN ITEM RESPONSE THEORY FOR THE PURPOSES OF DIFFERENTIAL ITEM FUNCTIONING DETECTION

THE APPLICATION OF ORDINAL LOGISTIC HEIRARCHICAL LINEAR MODELING IN ITEM RESPONSE THEORY FOR THE PURPOSES OF DIFFERENTIAL ITEM FUNCTIONING DETECTION THE APPLICATION OF ORDINAL LOGISTIC HEIRARCHICAL LINEAR MODELING IN ITEM RESPONSE THEORY FOR THE PURPOSES OF DIFFERENTIAL ITEM FUNCTIONING DETECTION Timothy Olsen HLM II Dr. Gagne ABSTRACT Recent advances

More information

The optimism bias may support rational action

The optimism bias may support rational action The optimism bias may support rational action Falk Lieder, Sidharth Goel, Ronald Kwan, Thomas L. Griffiths University of California, Berkeley 1 Introduction People systematically overestimate the probability

More information