
This article was downloaded by: [University of Saskatchewan Library]
On: 24 August 2012, At: 16:08
Publisher: Psychology Press (Informa Ltd, registered in England and Wales; registered office: Mortimer House, Mortimer Street, London W1T 3JH, UK)

Thinking & Reasoning
Publication details, including instructions for authors and subscription information.

Belief bias in informal reasoning
Valerie Thompson a & Jonathan St. B. T. Evans b
a Department of Psychology, University of Saskatchewan, Saskatoon, SK, Canada
b University of Plymouth, Plymouth, UK
Version of record first published: 11 Jun 2012

To cite this article: Valerie Thompson & Jonathan St. B. T. Evans (2012): Belief bias in informal reasoning, Thinking & Reasoning, 18:3.

THINKING & REASONING, 2012, 18 (3)

Belief bias in informal reasoning

Valerie Thompson 1 and Jonathan St. B. T. Evans 2
1 Department of Psychology, University of Saskatchewan, Saskatoon, SK, Canada
2 University of Plymouth, Plymouth, UK

In two experiments we tested the hypothesis that the mechanisms that produce belief bias generalise across reasoning tasks. In formal reasoning (i.e., syllogisms) judgements of validity are influenced by actual validity, believability of the conclusions, and an interaction between the two. Although apparently analogous effects of belief and argument strength have been observed in informal reasoning, the design of those studies does not permit an analysis of the interaction effect. In the present studies we redesigned two informal reasoning tasks, the Argument Evaluation Task (AET) and a Law of Large Numbers (LLN) task, in order to test the similarity of the phenomena concerned. Our findings provide little support for the idea that belief bias on formal and informal reasoning is a unitary phenomenon. First, there was no correlation across individuals in the extent of belief bias shown on the three tasks. Second, evidence for a belief by strength interaction was observed only on the AET, and under conditions not required for the comparable finding in syllogistic reasoning. Finally, we found that while conclusion believability strongly influenced assessments of argument strength, it had a relatively weak influence on the verbal justifications offered on the two informal reasoning tasks.

Keywords: Belief bias; Informal reasoning; Argument evaluation task; Law of large numbers; Syllogistic reasoning.

Correspondence should be addressed to Valerie Thompson, Department of Psychology, 9 Campus Drive, Saskatoon, SK, Canada, S7N 5A5. E-mail: valerie.thompson@usask.ca

The authors contributed equally to the preparation of this manuscript.
The research was funded by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada to Valerie Thompson. We would like to thank Shira Elqayam for her comments on an earlier draft of this manuscript. © 2012 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business.

Imagine a debate between two individuals you see on television. They have opposing views. Perhaps A is a politician arguing for the benefits of the free market and B, from a rival political party, is arguing the need for state intervention. Alternatively it might be an argument between two scientists, where A is promoting the benefits of genetically engineered crops and B warning of negative environmental impacts and potential risks. In both cases you will probably expect to witness something like this: A presents evidence for his case and B provides alternative evidence supporting her position. Then A attempts to discredit or diminish B's evidence, while B does the same to A. At the end of the debate neither has changed their views in the slightest, despite both being intelligent, well educated, and very knowledgeable about the topic. There are several possible cognitive biases that could operate in such arguments, but the one of interest here is belief bias. A number of authors, as we shall see, assume that there is a general bias for people to accept uncritically the conclusions of studies or evidence when the conclusion accords with their prior beliefs; by contrast, they tend to dismiss or discredit evidence that supports a conclusion that they do not agree with. This belief bias effect is pervasive and robust, and has been noted on a large variety of reasoning tasks, both formal and informal. The goal of the current paper is to test the hypothesis that these belief effects are a manifestation of a much more general bias that pervades everyday thinking and reasoning.

BELIEF BIAS IN FORMAL REASONING

The classical evidence for belief bias comes from syllogistic reasoning. In the paradigm introduced by Evans, Barston, and Pollard (1983), participants are asked to evaluate syllogisms comprising two premises and a stated conclusion.
The task is to judge whether or not the conclusion is valid, and the instructions leave little doubt that this judgement should be based on logical reasoning rather than prior belief: "You should answer this question on the assumption that all the information given is true... If you judge that the conclusion necessarily follows from the statements... you should answer yes, otherwise no." Problems are presented in four categories as follows:

No conflict (congruent):
  Valid arguments with believable conclusions (VB)
  Invalid arguments with unbelievable conclusions (IU)

Conflict (incongruent):
  Valid arguments with unbelievable conclusions (VU)
  Invalid arguments with believable conclusions (IB)

In Table 1 we show examples of the four types of syllogism used by Evans et al. (1983) together with the percentage rates of Yes responses.
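Given the pooled acceptance rates Evans et al. (1983) reported for these four categories (89% VB, 56% VU, 71% IB, 10% IU; see Table 1), the logic effect, the belief effect, and their interaction each reduce to a simple contrast. A minimal arithmetic sketch (the contrast definitions are a common convention, not the authors' own notation):

```python
# Pooled acceptance rates (%) for the four syllogism types
# (Evans et al., 1983; see Table 1).
rates = {"VB": 89, "VU": 56, "IB": 71, "IU": 10}

# Logic effect: valid conclusions accepted more often than invalid ones.
logic_effect = (rates["VB"] + rates["VU"]) / 2 - (rates["IB"] + rates["IU"]) / 2

# Belief effect: believable conclusions accepted more often than unbelievable ones.
belief_effect = (rates["VB"] + rates["IB"]) / 2 - (rates["VU"] + rates["IU"]) / 2

# Logic-by-belief interaction: belief bias is larger on invalid arguments.
belief_on_valid = rates["VB"] - rates["VU"]    # 33 percentage points
belief_on_invalid = rates["IB"] - rates["IU"]  # 61 percentage points
interaction = belief_on_invalid - belief_on_valid

print(logic_effect, belief_effect, interaction)  # 32.0 47.0 28
```

All three contrasts are sizeable, which is the pattern the text goes on to describe: effects of logic, of belief, and a belief effect nearly twice as large on invalid arguments.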

TABLE 1
Examples of four types of syllogism

Valid-believable (89% accepted)
  No police dogs are vicious
  Some highly trained dogs are vicious
  Therefore, some highly trained dogs are not police dogs

Valid-unbelievable (56% accepted)
  No nutritional things are inexpensive
  Some vitamin tablets are inexpensive
  Therefore, some vitamin tablets are not nutritional

Invalid-believable (71% accepted)
  No addictive things are inexpensive
  Some cigarettes are inexpensive
  Therefore, some addictive things are not cigarettes

Invalid-unbelievable (10% accepted)
  No millionaires are hard workers
  Some rich people are hard workers
  Therefore, some millionaires are not rich people

Examples of the four types of syllogism used by Evans et al. (1983), together with acceptance rates combined over three experiments (from Evans, 2007a, Table 4.1, p. 88, reproduced with permission).

It is apparent that people accept both far more valid than invalid inferences (suggesting logical reasoning) and far more believable than unbelievable conclusions (belief bias). However, inspection of the acceptance rates in Table 1 reveals a third finding: there is a large interaction between logic and belief, such that the belief bias is larger on invalid arguments. All of these effects are highly reliable and have been repeated many times since (for recent reviews, see Dube, Rotello, & Heit, 2010; Evans, 2007a; Klauer, Musch, & Naumer, 2000).

BELIEF BIAS IN INFORMAL TASKS

Several types of informal reasoning task appear to show similar belief biases. A good example, and one that is studied further in this paper, is the Argument Evaluation Task or AET (Stanovich & West, 1997). Participants are given a series of arguments from an imaginary character called Dale, together with a counter-argument and a refutation. Here is an example:

Dale's belief: 17-year-olds should have the legal right to drink alcoholic beverages.
Dale's premise or justification for belief: 17-year-olds are just as responsible as 19-year-olds, so they ought to be granted the same drinking rights as other adults.

Critic's counter-argument: 17-year-olds are three times more likely to be involved in an automobile accident while under the influence of alcohol than 19-year-olds (assume statement factually correct).

Dale's rebuttal to Critic's counter-argument: 17-year-olds will drink no matter what the law says (assume statement factually correct), so it is useless to try to legislate that they not drink.

Indicate the strength of Dale's rebuttal to Critic's counter-argument: A = Very Weak, B = Weak, C = Strong, D = Very Strong.

Problems like these clearly do not have the formal structure of syllogisms and do not support logically valid inferences. However, Stanovich and West (1997) sought to get around this by using a panel of expert judges to determine the strength (not validity) of Dale's arguments for each of the problems presented. In the above case, not surprisingly, the argument would be judged objectively weak. Participants' agreement with Dale's beliefs was rated by the participants themselves in a separate task. The statistical approach taken by Stanovich and West was multiple regression, on individual participants, where both subjective belief and objective argument strength were predictors and the rated strength of Dale's argument was the dependent variable. The mean beta weight of the belief predictor was .451, while that for the objective strength was .330. This appears to show a similar finding to the classical belief bias paradigm, where both belief in the conclusion and the actual logical validity of the argument (for which we substitute strength) contribute to participants' tendency to accept conclusions. Other informal reasoning tasks also show evidence of belief bias. For example, Klaczynski and Robinson (2000) recruited participants with differing religious beliefs or socio-economic backgrounds and constructed problems to be consistent or inconsistent with their beliefs.
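The per-participant regression approach described above for the AET can be sketched as follows. This is a toy illustration, not the authors' analysis code; the data are simulated, and the variable names and coefficients are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def standardized_betas(predictors, outcome):
    """OLS beta weights with predictors and outcome z-scored, in the spirit
    of the per-participant regressions of Stanovich and West (1997)."""
    xz = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0)
    yz = (outcome - outcome.mean()) / outcome.std()
    design = np.column_stack([np.ones(len(yz)), xz])
    coefs, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coefs[1:]  # drop the intercept

# Hypothetical data for one participant: 23 AET items, each with a
# prior-belief rating and an expert-judged argument-strength code.
n_items = 23
belief = rng.integers(1, 5, n_items).astype(float)    # subjective agreement with Dale
strength = rng.integers(0, 2, n_items).astype(float)  # expert code (0 = weak, 1 = strong)
rating = 0.5 * belief + 0.4 * strength + rng.normal(0.0, 0.5, n_items)

betas = standardized_betas(np.column_stack([belief, strength]), rating)
print(betas)  # one standardized weight per predictor for this participant
```

In the actual study these two weights were computed separately for each participant and then averaged, yielding the mean betas of .451 (belief) and .330 (strength) quoted above.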
They used two tasks based on the Law of Large Numbers (LLN) or experiment evaluation. In the LLN task participants rated conclusions based on samples of 1-3 individuals, and in the experiment evaluation task they were asked to evaluate research designs with methodological faults. Both tasks included conclusions that were consistent with beliefs, inconsistent, or neutral. In addition Klaczynski and Robinson asked participants to provide written justifications for their ratings, which were scored for whether they took into account sample size (LLN) or a flaw in the design (experiment evaluation). Unlike most studies of belief bias, which have focused on young adults in higher education, Klaczynski and Robinson also included middle-aged and elderly groups. They found little evidence of belief bias in the young adults, but much more in the older groups. At first glance these findings seem inconsistent with the strong evidence of belief bias in syllogistic reasoning, which is strongly marked in undergraduate students. However, an increase in belief bias in older adults has

also been recorded with the syllogistic paradigm (Gilinsky & Judd, 1994), and subsequent studies on LLN tasks have found a reliable belief bias in young adult groups as well (Klaczynski & Lavallee, 2005; Neilens, Handley, & Newstead, 2009). Findings from the Stanovich and West AET and the Klaczynski LLN tasks (the latter also included in our Experiment 2) do seem to support the claim that there is a general belief bias in informal reasoning which mirrors that in the syllogistic reasoning tradition. However, it is quite possible for belief bias on all three tasks to arise from different underlying mechanisms. One method for establishing similarities across tasks is to look for correlations in belief-based or evidence-based responding between tasks. For example, in a subsequent paper Sá, West, and Stanovich (1999) administered both the AET and the syllogistic belief bias paradigm to the same group of participants. The ability to focus on argument strength rather than belief on the AET had a small but significant negative correlation (−.236) with a measure of belief bias from the syllogistic reasoning task. In addition both measures correlated with a batch of thinking style measures, collectively known as the Active Open-minded Thinking test (AOT; Stanovich & West, 1997). However, we do not find this convincing evidence for a generalised belief bias. First, the correlation is small, with much variance unaccounted for. Second, a correlation across participants could reflect, as their data suggest, a common link to ability or cognitive style, which is not in itself informative about the actual cognitive processes involved in the two tasks. An alternative approach, and the one adopted here, is to determine the extent to which performance on the tasks is sensitive to the same variables.
For example, do we find evidence that argument strength and belief contribute to evaluations of conclusions on the AET in the same way as conclusion validity and believability do on the syllogistic task? As we outlined above, the initial evidence is favourable, as both tasks show sensitivity to strength/validity as well as belief. However, a signature of performance on the syllogistic task is the interaction between validity and belief; as we outline below, it is this interaction that forms the basis of much of the theorising about the mechanisms driving performance on that task. The alternative view is that the effect of belief on reasoning, like other content-related effects, may be task-specific (Thompson, 2000). Task-specific effects of content arise when task demands highlight the relevance of certain classes of information and shift attention away from others. Thus variables such as the availability of counterexamples or deontic relations may predict a large portion of variance on one conditional reasoning task, but relatively little on a logically equivalent task. This differential sensitivity to the content of the reasoning problems points to different underlying representations and processes (Thompson, 2000).

Unfortunately the current data do not tell us whether argument strength and conclusion believability will show a similar interactive pattern (with the effect of argument strength larger for unbelievable than believable conclusions). Such a finding would be evidence that the same processes underlie performance on both tasks. For the LLN the data are even more sparse. We do not know of any studies that manipulate sample size as well as belief, so we cannot judge the degree to which the evaluation of a putative conclusion reflects evidence strength or the interaction with belief. Thus the second goal of our studies was to develop a paradigm for studying informal reasoning problems that would provide a basis of comparison to the formal domain.

THEORIES OF BELIEF BIAS

The use of a common paradigm will allow us to consider a common theoretical framework for explaining performance on all three tasks. As a broad framework, dual process theories of reasoning have been invoked to explain all three tasks (Evans, 2008; Klaczynski & Robinson, 2000; Stanovich & West, 1997). On this view beliefs provide a fast, intuitive (Type 1) answer to a problem that may or may not be overturned by recourse to a more analytic (Type 2) form of thinking based on validity, argument strength, or evidence quality. Such an interpretation is consistent with the neural imaging evidence of conflict detection and resolution within this paradigm (De Neys, Vartanian, & Goel, 2008; Goel & Dolan, 2003; Tsujii & Watanabe, 2009). However, the story is complicated by the need to account for the interaction between logic and belief, for which several interpretations are possible within the dual process framework (see Table 2). Two early models were both discussed by Evans et al. (1983).
The idea of the selective scrutiny model is that people tend to accept believable conclusions uncritically and focus their reasoning efforts on the unbelievable ones (see also Klaczynski & Robinson, 2000). This explanation is consistent with the default-interventionist formulation of dual process theory, in which the Type 1 output is accepted unless something occurs to trigger Type 2 thinking (Evans, 2006). This model can explain why fallacies in syllogistic reasoning tend to be detected mostly on IU problems, producing the interaction. On VU problems, which are also scrutinised, there is no logical basis to reject the conclusion. This model assumes belief-first reasoning and can easily be extended to the other two tasks (indeed Klaczynski & Robinson, 2000, proposed a very similar model to explain performance on the LLN task). On this view, therefore, one would expect similar interactions on the three reasoning tasks, with the evidence of Type 2 thinking more pronounced for unbelievable conclusions, producing a larger discrimination of those conclusions on the basis of validity/argument strength/evidence quality.
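The selective scrutiny account can be summarised as a simple decision procedure, and even a toy version generates the characteristic belief by logic interaction. The sketch below is our own illustrative formalisation, not a model proposed in the literature; the function name and parameter values are arbitrary assumptions.

```python
def predicted_acceptance(valid, believable,
                         p_accept_unscrutinised=0.85, p_detect_logic=0.75):
    """Toy selective-scrutiny model: believable conclusions are accepted
    by default (Type 1); unbelievable ones trigger logical evaluation (Type 2)."""
    if believable:
        return p_accept_unscrutinised          # Type 1 output accepted uncritically
    # Scrutinised: acceptance tracks an imperfect logical evaluation
    return p_detect_logic if valid else 1 - p_detect_logic

# Predicted acceptance for the four cells (valid, believable)
cells = {(v, b): predicted_acceptance(v, b)
         for v in (True, False) for b in (True, False)}

belief_effect_valid = cells[(True, True)] - cells[(True, False)]      # approx. 0.10
belief_effect_invalid = cells[(False, True)] - cells[(False, False)]  # approx. 0.60
print(belief_effect_invalid > belief_effect_valid)  # True: larger belief bias on invalid
```

Because believable conclusions escape scrutiny whether valid or invalid, while unbelievable conclusions are sorted by logic, the model yields a small belief effect on valid arguments and a large one on invalid arguments, which is the qualitative shape of the interaction in Table 1.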

TABLE 2
Three theories of the belief by logic interaction in the syllogistic paradigm

(A) Selective scrutiny model (Evans, 1989; Evans et al., 1983)
  Is conclusion believable? If YES, (tend to) accept conclusion.
  If NO, (tend to) base response on logical reasoning.

(B) Misinterpreted necessity model (Evans, 1989; Evans et al., 1983)
  Is conclusion determinately valid? If YES, (tend to) accept it.
  If NO, (tend to) base response on believability of the conclusion.

(C) Mental model theory (Oakhill et al., 1989)
  (i) Selective scrutiny of unbelievable conclusions (described as a search for counterexample models)
  (ii) Response bias to choose believable over unbelievable conclusions

In contrast to the selective scrutiny model, the misinterpreted necessity model assumes logic-first reasoning. People try to prove the conclusion, but if they cannot they turn to belief, such that valid conclusions get accepted but invalid ones show a large belief bias. Although such a model could be extended to informal domains, unfortunately it does not provide a good account of the interaction on syllogistic tasks (Newstead, Pollard, Evans, & Allen, 1992). Moreover, both of the accounts described thus far fail to explain the smaller but generally reliable belief bias on valid arguments. That is, VB arguments are accepted significantly more often than VU arguments in most studies. To handle this finding it has been necessary to assume the existence of a response bias to accept believable conclusions and reject unbelievable ones. For example, the mental model theory of belief bias is essentially one of motivated reasoning (Oakhill & Johnson-Laird, 1985; Oakhill, Johnson-Laird, & Garnham, 1989) and thus fits with the general assumptions of the selective scrutiny model.
In the general theory people are said to form an initial model which combines premises and conclusions, but then to search for counterexamples to putative conclusions, thus being able to disprove potential fallacies. Belief bias and the belief by logic interaction were accounted for by a lack of effort to disprove believable conclusions, which is similar to selective scrutiny. However, Oakhill et al. (1989) were also forced, with some reluctance, to propose that a response bias was also in operation. This was because participants often refused to endorse valid-unbelievable syllogisms (or to generate conclusions for them in the authors' preferred paradigm) despite there being only one simple model connecting the premises to the conclusion. In effect this theory augments the selective scrutiny model with a response bias on valid arguments. A difficulty with the original mental model account is that there is little evidence that people search for counterexamples at all in syllogistic reasoning: instead the evidence suggests that they construct one model of the premises and say yes or no according to whether it includes the conclusion (Evans, Handley, Harper, & Johnson-Laird, 1999). Consequently more recent models have proposed that reasoners instead attempt to construct only a single model of the premises (Evans, 2000; Evans, Handley, & Harper, 2001; Klauer et al., 2000; Thompson, Striemer, Reikoff, Gunter, & Campbell, 2003), with differing assumptions about how processing proceeds from there. However, because they tend to rely on the concept of validity or other task-specific representational assumptions, it is not clear how well they might generalise to informal tasks. For example, Thompson et al.'s (2003) theory rests on the assumption that it is easier to construct a model for valid than invalid conclusions, and the other theories suggest that reasoners build models to confirm or disconfirm the validity of the provided conclusion. For this reason we suggest that the selective scrutiny model is the best basis for beginning our theorising about reasoning in informal tasks. We acknowledge that such a starting point is not without controversy. This model was recently challenged by Dube et al. (2010), who performed a signal detection analysis (in which validity was treated as a signal) and concluded that the logic by belief interaction can be completely accounted for by a response bias. We are sceptical of this account, and note the recent counter-argument to their claims by Klauer and Kellen (2011).
However, another difficulty arises from the observation that participants show consistently higher latencies for arguments with invalid but believable conclusions (Ball, Wade, & Quayle, 2006; Thompson et al., 2003; Thompson, Newstead, & Morley, 2011). This finding seems inconsistent with selective scrutiny, which does not predict extra processing effort in this case, although Stupple et al. (2011) have argued that these RT data can be reconciled within that framework. Nonetheless, as the selective scrutiny model is the only one that could generalise across both formal and informal tasks, we will use it as our starting point.

RATINGS VS WRITTEN EVALUATIONS

Also of interest in these studies is the analysis of verbal justifications. Justifications have been used in the study of LLN reasoning (Klaczynski & Robinson, 2000) and provide an additional source of information about the effect of belief and argument strength/evidence strength on reasoning. For example, Klaczynski and Robinson (2000) concluded that their justification

data were consistent with a selective scrutiny account of reasoning on the LLN task, in which reasoners selectively introduce normative principles on belief-incongruent problems. Their basic finding was that on belief-congruent problems reasoners made few references to sample size or experimental confounds in their justifications. On incongruent problems, however, such references increased. Additional evidence that justifications may provide fertile ground for the study of heuristic and analytic processes in reasoning comes from Neilens et al. (2009), who introduced a training manipulation (adapted from Fong, Krantz, & Nisbett, 1986). Without training a strong justification bias was shown, which disappeared with training (i.e., all conditions showed high attention to sample size). However, the belief bias in the rating of argument strength was, remarkably, quite unaffected by the training given in this study. These data suggest that analytic reasoning may contribute more to justifications than to ratings, meaning that we should expect to see large effects of argument strength/evidence quality on justifications, regardless of whether they are present in the ratings task.

EXPERIMENT 1

It is evident from the review of belief bias in syllogistic reasoning that the belief by logic interaction is the key finding that has vexed theorists. However, the studies of belief bias in informal reasoning have not previously been designed in such a way as to check for a comparable interaction between belief and argument strength. Of the models of syllogistic belief bias discussed (Table 2), only one, the selective scrutiny model, would seem to transfer readily, as it does not rely on detailed issues about logical validity. It is effectively the common-sense argument we gave for our opening example: people seem to be more critical of evidence when its implications are against their beliefs.
If this model applied to all kinds of belief bias, then we should find that people will be more accurate in identifying the strengths and weaknesses of arguments that go against their beliefs. If, on the other hand, logical validity is critical to the interaction, then there would be no reason to expect it to arise in informal reasoning. A strong argument does not, for example, preclude counterexamples, so a biased reasoner could still find reasons to reject it. Therefore in Experiment 1 we adapted the Argument Evaluation Task of Stanovich and West (1997) so that problems could be classified into four categories broken down by argument strength and conclusion believability, with an additional measure of informal reasoning that was not administered by Stanovich and West: on some of the problems we asked participants to give a qualitative account of their reasoning, by listing both strengths and weaknesses of Dale's arguments.

Method

Design. In a pre-test, participants rated their degree of agreement with Dale's beliefs for each of the 23 problems administered by Stanovich and West (1997). On the basis of these personal ratings and the expert evaluations of argument strength used by Stanovich and West, we selected eight AET problems for each participant, two each in the four categories of strong-believable (SB), strong-unbelievable (SU), weak-believable (WB), and weak-unbelievable (WU). These were presented in two blocks. Half of the participants were asked to list strengths and weaknesses in the first block and half in the second block.

Participants. The pre-test was administered to a large group of Introductory Psychology students who completed it along with several other, unrelated, measures. Those participants were asked if they were willing to complete another study. Of those who said yes, 38 students who rated two each of the strong and weak arguments as believable and unbelievable participated in the current study. They received partial course credit for participation.

Procedure. Participants were tested individually. Problems were presented one at a time on a computer; participants entered their ratings of the argument strength and then pressed a space bar to bring up the next problem. Argument strength was measured on a 4-point scale: A = Strongly Disagree, B = Disagree, C = Agree, D = Strongly Agree. Problems were presented in two blocks, each block containing one problem in each belief by strength cell. Participants provided written evaluations of the strengths and weaknesses of the arguments for one block of problems. These were recorded on a sheet of paper with spaces labelled for the information. The instructions were as follows: We are interested in your ability to evaluate counter-arguments. First, you will be presented with a belief held by an individual named Dale.
Following this, you will be presented with Dale's premise or justification for holding this particular belief. A Critic will then offer a counter-argument to Dale's justification for the belief. (We will assume that the Critic's statement is factually correct.) Finally, Dale will offer a rebuttal to the Critic's counter-argument. (We will assume that Dale's rebuttal is also factually correct.) You are to evaluate the strength of Dale's rebuttal to the Critic's counter-argument, regardless of your feelings about the original belief or Dale's premise. For the first four/last four problems, you will be asked to list both the strengths and weaknesses of Dale's rebuttal to the Critic's counter-argument. List as many strengths and weaknesses as you can possibly think of, but please try to give at least one strength and one weakness for each argument. Try to make your objections as complete as possible (i.e., more specific than "It is weak"). If you have any questions, please ask now.

(Footnote 1: Our pre-test differed somewhat from Stanovich and West's in that our rating scale included a "don't know" option.)

Results

The first dependent measure analysed was the participants' ratings of perceived argument strength. The design enabled us to break these down by two main factors: objective argument strength (taken from Stanovich & West, 1997) and subjective belief in the proposition supported by the argument, measured individually for each participant. Both factors were included in an ANOVA in addition to a third: the order in which participants were asked to provide strengths and weaknesses (first or second block). The data are plotted in Figure 1.

[Figure 1. Mean ratings of argument strength in Experiment 1, broken down by objective strength and subjective belief.]

Overall there was a large influence of belief, with believable arguments rated with a mean of 2.81 and unbelievable with a mean of 2.19, F(1, 36) = 33.95, MSE = .454, p < .001, ηp² = .485. Likewise, objectively strong arguments were rated more highly (2.85) than weak ones (2.17), F(1, 36) = 27.91, MSE = .619, p < .001, ηp² = .437. The overall strength by belief interaction was non-significant (F < 1), and the three-way interaction between belief, strength, and order was only marginal, F(1, 36) = 2.75, MSE = .266, p = .11, ηp² = .071.

Analysis of listed strengths and weaknesses. Our first analysis for the listings was based simply on a count of the number of strengths and weaknesses listed, broken down by believability, objective strength, and the

type of listing (strength or weakness). The means are shown in Table 3a. The ANOVA revealed just two significant effects. There was a main effect of type of listing, F(1, 36) = 4.17, MSE = .392, p < .05, ηp² = .104, such that people listed more weaknesses (mean 1.34) than strengths (1.18). There was also a significant interaction between listing type and argument strength, F(1, 36) = 10.61, MSE = .510, p < .01, ηp² = .23. Inspection of the means in Table 3a reveals that this was a cross-over interaction. Perhaps unsurprisingly, participants listed more strengths for strong arguments and more weaknesses for weak arguments. Interestingly, there was no significant effect of believability. If people were selectively scrutinising unbelievable arguments, we might have expected them to find more weaknesses in these cases, irrespective of objective argument strength.

TABLE 3
Analysis of listed strengths and weaknesses in Experiment 1

(a) Number of arguments listed: strengths and weaknesses, each broken down by believability (believable/unbelievable) and objective strength (strong/weak), with row means.
(b) Classification of arguments (%): strengths and weaknesses, each broken down by believability and objective strength, into assertion-based (Ass), reason-based (Reas), and other categories.

Ass = assertion-based argument. Reas = reason-based argument.

A qualitative analysis of the strengths and weaknesses listed was then performed by independent assessors, who classified these into three categories: assertion-based arguments, reason-based arguments, and other. For example, an assertion-based weakness might simply deny a premise of the argument even though it should be assumed. In the example given earlier, a participant might deny that 17-year-olds will drink regardless of the

law. A reason-based argument would typically question the relation between premise and conclusion. In this case the participant might argue that underage drinking should not be condoned and that the problem would be even worse if it were legalised. The other category mostly comprised statements that were not directly relevant to the argument; for example, a participant might comment that drunk-driving is a major problem that should be addressed.

The classifications of arguments are shown in Table 3b and were analysed using 2 (justification type) × 2 (argument strength) × 2 (believability) within-participants ANOVAs. Since the three measures are non-independent, we conducted the ANOVAs on just one measure: reason-based arguments (the others are included in the table for the sake of completeness). The striking pattern in Table 3b is that such arguments were given more frequently when listing weaknesses than when listing strengths, and this was confirmed by a highly significant main effect of listing type, F(1, 21) = 32.50, MSE = .201, p < .001, ηp² = .607. (Although we did not analyse it separately, we can see that there were correspondingly more assertion-based arguments for strengths.) The objective strength of the argument had no effect in this analysis (F < 1, ηp² = .031). There was, however, a tendency for people to give more reason-based arguments for unbelievable conclusions, F(1, 21) = 4.39, MSE = .083, p < .05, ηp² = .17, which is consistent with the predictions of the selective scrutiny model. However, the only other clearly significant effect was an interaction between belief and argument strength, F(1, 21) = 6.12, MSE = .055, p < .05, ηp² = .226. The trend for more reason-based arguments for unbelievable conclusions was more marked for objectively strong than weak arguments.

Discussion

In Experiment 1 we replicated the findings of Stanovich and West (1997) for the Argument Evaluation Test with a change of method.
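To make the belief by strength analysis concrete, the 2 × 2 cell structure behind these comparisons can be sketched as follows. All ratings here are invented toy data (each row is one participant's mean rating per cell), not values from the experiment:

```python
# Toy sketch of the 2 (belief) x 2 (strength) design used to test for a
# belief-by-strength interaction in rated argument strength. The ratings
# are invented illustration data, not the experiment's.
ratings = {
    ("believable", "strong"):   [3.2, 3.0, 3.4],
    ("believable", "weak"):     [2.6, 2.4, 2.5],
    ("unbelievable", "strong"): [2.7, 2.5, 2.6],
    ("unbelievable", "weak"):   [1.8, 1.9, 2.0],
}

def cell_mean(cell):
    values = ratings[cell]
    return sum(values) / len(values)

# Main effect of belief: believable minus unbelievable marginal means.
belief_effect = (
    (cell_mean(("believable", "strong")) + cell_mean(("believable", "weak"))) / 2
    - (cell_mean(("unbelievable", "strong")) + cell_mean(("unbelievable", "weak"))) / 2
)

# Main effect of strength: strong minus weak marginal means.
strength_effect = (
    (cell_mean(("believable", "strong")) + cell_mean(("unbelievable", "strong"))) / 2
    - (cell_mean(("believable", "weak")) + cell_mean(("unbelievable", "weak"))) / 2
)

# Interaction contrast: belief effect on weak minus belief effect on strong
# arguments. Selective scrutiny predicts a positive value; zero means the
# belief effect is the same at both levels of strength.
interaction = (
    (cell_mean(("believable", "weak")) - cell_mean(("unbelievable", "weak")))
    - (cell_mean(("believable", "strong")) - cell_mean(("unbelievable", "strong")))
)
```

An ANOVA adds an error term to these contrasts, but the cell means above are the quantities being compared when the main effects and interaction are tested.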
Using an ANOVA design on selected items, we showed that participants are strongly influenced by both objective argument strength and prior belief in their assessment of informal arguments for a proposition. The original study showed this by separately significant weights in regression analysis. While these two trends are analogous to the large effects of both validity and belief in the syllogistic reasoning paradigm, our method also enabled us to test for the key interaction between the two factors. On the question of whether belief bias is a general phenomenon, our findings were somewhat equivocal. We did not find an interaction analogous to that observed with syllogisms: that is, belief bias was not stronger for weak arguments than strong ones. The nature of participants' reasoning was explored by analyses of the strengths and weaknesses that people actually listed. This provided a

somewhat different picture from the analysis of rated argument strength. Participants quite rationally listed more strengths for strong arguments and more weaknesses for weak arguments, and these counts were unaffected by believability (Table 3a). On the other hand, when we classified the nature of the reasons listed (Table 3b) we did find a weak tendency for people to give more reason-based arguments for unbelievable conclusions, which significantly interacted with argument strength (being more marked on strong arguments). Again, the evidence suggests that participants are not biased to find more objections to unbelievable conclusions. However, it does seem that the type of argument they give is somewhat influenced by the believability of the argument. They are more likely to focus on the link from premise to conclusion for strong but unbelievable arguments. At best, there is some tenuous support for the selective scrutiny model to be found here.

EXPERIMENT 2

One reason for the failure to observe an interaction between argument strength and believability in Experiment 1 may have been lack of statistical power. In the current experiment we modified the procedure so that we could test large numbers of participants. In addition we included another informal reasoning task, the law of large numbers (LLN) task, adapted from Klaczynski and Robinson (2000) and discussed in the Introduction. This was also rendered into a format with argument strength and believability manipulated orthogonally so as to allow a test of an interaction. Both tasks also included verbal justifications to permit qualitative analysis. We also included a test of belief bias in the standard syllogistic reasoning paradigm. Thus we can also see whether measures of belief bias on different tasks correlate with each other across individual participants.

Method

Design. This was a completely within-participants design.
Each participant completed all three tasks, and provided written justifications for the AET and LLN tasks. Participants. A total of 179 undergraduate University of Saskatchewan students participated during a regularly scheduled class in partial fulfilment of a course requirement. Materials. Each participant was given a booklet including the three main tasks: syllogistic reasoning problems, argument evaluation, and law of

large numbers. (Two other cognitive tasks were included which are not reported here.) Participants were given both a block of syllogistic reasoning problems and a block of argument evaluation (AET)/law of large numbers (LLN) problems. The order of these two blocks of problems was counterbalanced across participants.

The block of syllogistic reasoning problems contained eight three-term multiple-model problems. Each problem consisted of two premises, followed by a conclusion of the form "Some … are not …". One premise was of the type "No … are …", and one premise was of the type "Some … are …". The two conclusion forms ("Some C are not A" and "Some A are not C") were presented equally often across problems. Half of the conclusions were believable and half were unbelievable; the believability of the conclusions was established in a rating study by Evans et al. (1983). Content was assigned to the A, B, and C terms such that the A and C terms referred to familiar categories (e.g., judges, police dogs). For a given participant, each problem contained a unique content; across participants, contents appeared equally often in the four belief × validity cells. To control for premise believability the B term was a nonsense term (e.g., Rewons, Likels) (Thompson, 1996). Half of each believability type was valid and half was invalid. The problems appeared in two blocks; order was determined using a Latin square. Examples appear in Appendix A.

The block of AET and LLN problems also contained eight problems. Participants were given the following instructions for this section: "On the following pages you will be asked to read and evaluate eight arguments. Read each argument carefully and answer the questions that follow. Please evaluate the arguments based solely on the evidence provided, putting aside your own knowledge and beliefs. Your thoughts are important to us, so be sure to express them clearly. You may take as long as you wish to solve these problems. However, do not rush through them. Take your time and think carefully."

Four of these problems were argument evaluation problems, which were adapted from the Argument Evaluation Task (Stanovich & West, 1997). These were rewritten into paragraph form and were followed by an expanded (1–9) rating scale (see Appendix B; scores on the three rating scales were highly correlated, so to avoid redundancy we computed and analysed the mean of the three ratings). The remaining four problems were LLN problems (also included in Appendix B). The four AET problems consisted of one believable strong problem, one unbelievable strong problem, one believable weak problem, and one unbelievable weak problem. The strength of the problems was determined using experts' ratings of the problems (Stanovich & West, 1997). The believability of the problems was

determined by the pretest described in Experiment 1. The problems were matched on belief and strength, so that the strong problems had equal strength ratings, believable problems had equal pretest ratings, and so on. Participants received the same four problems, the order of which was counterbalanced using a Latin square.

The four LLN problems consisted of one unbelievable small-sample problem, one unbelievable large-sample problem, one believable small-sample problem, and one believable large-sample problem. The believability of the conclusions was based on the results of a pilot study; the conclusions with the highest (believable) and lowest (unbelievable) ratings were used to construct problems. The content of the problems (i.e., the speaker, the subject of the problem, the believability of the problem, and the size of the sample) was completely counterbalanced, resulting in four booklet types. The four problems were counterbalanced within booklets using a Latin square, and the problem types (i.e., which group of four problems was given) were randomly distributed across booklets. (See Appendix C for one set of four problems.)

Scoring. The open-ended responses for the AET and LLN problems were coded by two independent raters in order to evaluate the participants' use of statistical reasoning. The AET justifications were coded as arguments or assertions in the manner described in Experiment 1. The LLN responses were coded according to whether participants made mention of a statistical principle, such as sample size or random variation, in their responses. For the sake of comparability we will use the term reason-based for cases in which reasoners provided a genuine argument (AET) or based their justification on statistical principles (LLN).

Procedure. Participants were tested in a large group and completed the questionnaires at their own pace.
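The Latin-square counterbalancing used throughout the Method can be illustrated with a generic cyclic construction. This is a sketch of the general technique only, not the authors' actual assignment of problems; the condition labels are hypothetical:

```python
def latin_square(conditions):
    """Build a cyclic Latin square: row i is the condition list rotated
    left by i, so every condition appears exactly once in each row
    (presentation order) and exactly once in each column (serial
    position)."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

# Hypothetical labels for four problem types: believable/strong,
# unbelievable/strong, believable/weak, unbelievable/weak.
orders = latin_square(["BS", "US", "BW", "UW"])
# Each row gives the presentation order for one counterbalancing group,
# so position-in-booklet effects are balanced across problem types.
```

A cyclic square balances serial position but not immediate sequence; published studies sometimes use balanced or randomly selected squares instead, and nothing here speaks to which variant the authors used.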
Results and discussion

We report first the analyses that are comparable across the three tasks: conclusions accepted for the syllogistic reasoning task, and rated strength of argument for the AET and LLN tasks, deferring for the moment the analyses of justifications for the last two tasks. In all three cases two-way ANOVAs were run with conclusion believability as one factor and validity (or strength) as the other. In the case of the syllogistic reasoning task we used the number of Yes decisions (accepting the conclusion as valid) as our dependent measure (see Figure 2). Our results indicated typical findings for this paradigm, with significant main effects of validity, F(1, 177), MSE = .147, p < .001, ηp² = .461, and belief, F(1, 177) = 89.95, MSE = .117,

p < .001, ηp² = .337, and a significant interaction between the two, F(1, 177) = 10.05, MSE = .084, p < .01, ηp² = .054. As is usual, the belief bias was more marked on invalid syllogisms.

Figure 2. Mean percentage of Yes decisions in the syllogistic reasoning belief bias task of Experiment 2.

The analysis of the AET is comparable to that for Experiment 1, with the mean ratings shown in Figure 3a. Note, however, that all participants listed justifications for all of their decisions in Experiment 2. As was the case in Experiment 1, there were main effects of argument strength, F(1, 175), MSE = 2.51, p < .001, ηp² = .494, and conclusion believability, F(1, 175) = 9.81, MSE = 2.14, p < .01, ηp² = .053; this time, however, the interaction was also reliable, F(1, 175) = 72.85, MSE = 1.95, p < .001, ηp² = .294. Note, however, that the effect size for believability is relatively small and only significant because of the large sample size used in this experiment. In fact, the effect disappeared on strong arguments (see Figure 3a) and is only present as a component of the interaction, which showed the belief bias to be restricted to weak arguments. Nevertheless, the interaction is of the shape expected by the selective scrutiny hypothesis, with the effect of belief larger for weak than strong arguments.

Finally, we performed a similar ANOVA for the LLN task. Again we found significant main effects for both argument strength, F(1, 175), MSE = 3.05, p < .001, ηp² = .587, and conclusion believability, F(1, 175) = 55.72, MSE = 3.08, p < .001, ηp² = .242, both large effects. In spite of the high statistical power, however, the interaction term was tiny and non-significant (F < 1, ηp² = .006). With the high power we can safely conclude that belief and strength do not interact in the LLN task.

Individual differences. In order to examine individual differences we extracted three underlying indices for each task for each individual participant.
For the syllogistic reasoning task these were:

Belief index: the number of Yes decisions for problems with believable conclusions minus the number of Yes decisions on problems with unbelievable conclusions.

Logic index: the number of Yes decisions for problems with valid conclusions minus the number of Yes decisions on problems with invalid conclusions.

Interaction index: the difference in size of the belief bias for invalid and valid problems, i.e., (invalid-believable − invalid-unbelievable) − (valid-believable − valid-unbelievable).

Figure 3. Mean ratings for AET and LLN tasks in Experiment 2, broken down by strength of argument and believability of conclusions. (a) Argument evaluation task. (b) Law of large numbers task.

For the AET and LLN tasks we computed comparable strength, belief, and interaction indices by subtracting the mean ratings between strong and weak arguments, and so on. We then computed correlations across participants

between tasks for each comparable index. In spite of the large sample none of these approached significance. For example, the correlations on belief indices were .124 (syllogism–AET), .077 (syllogism–LLN), and .072 (AET–LLN). The correlations for logic/strength indices were above zero, but very small in each case, and those for interaction indices close to zero. Thus we have failed to replicate the finding of a small but significant correlation between AET and a measure of belief bias (which we think is equivalent to our belief index) reported by Stanovich and West (1997). On the basis of our analysis there is no reason to believe that some individuals are consistently more belief biased than others in a way that generalises across the three tasks studied.

Analysis of justifications. Following Klaczynski and Robinson (2000) we asked participants to provide justifications for their ratings in the LLN task. We adopted the same procedure for the AET tasks to allow a compatible analysis of the two informal reasoning tasks, replacing the listing of strengths and weaknesses used for the latter in Experiment 1. Unlike in the previous study, reasons for decisions were given on all problems. The mean numbers of justifications offered on each task are shown in Table 4, and were analysed within a single ANOVA with task, belief, and argument strength as independent variables. For the AET, strength was determined in the same way as in Experiment 1. For the LLN task, strong arguments were those with larger sample sizes. There was sufficient power in our study to detect quite small effects. In this ANOVA there were significant main effects of task, F(1, 167) = 18.57, MSE = .230, p < .001, ηp² = .326, and believability, F(1, 167) = 8.53, MSE = .14, p < .01, ηp² = .049. There were more justifications offered on the LLN task, and overall more justifications for unbelievable than believable items, a small but significant trend.
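The belief, logic, and interaction indices defined earlier reduce to simple differences of response counts. A minimal sketch, using invented Yes-decision counts (two problems per cell) for one hypothetical participant:

```python
# Sketch of the three syllogism indices. Each value counts "yes"
# (conclusion accepted) decisions out of two problems per cell; the
# counts are invented toy data, not the experiment's.
yes = {
    ("valid", "believable"):     2,
    ("valid", "unbelievable"):   1,
    ("invalid", "believable"):   2,
    ("invalid", "unbelievable"): 0,
}

# Belief index: acceptances of believable minus unbelievable conclusions.
belief_index = (
    yes[("valid", "believable")] + yes[("invalid", "believable")]
    - yes[("valid", "unbelievable")] - yes[("invalid", "unbelievable")]
)

# Logic index: acceptances of valid minus invalid conclusions.
logic_index = (
    yes[("valid", "believable")] + yes[("valid", "unbelievable")]
    - yes[("invalid", "believable")] - yes[("invalid", "unbelievable")]
)

# Interaction index: belief bias on invalid minus belief bias on valid
# problems (positive when belief bias is stronger for invalid syllogisms).
interaction_index = (
    (yes[("invalid", "believable")] - yes[("invalid", "unbelievable")])
    - (yes[("valid", "believable")] - yes[("valid", "unbelievable")])
)
```

Correlating, say, the belief indices from two tasks across participants is then the test of whether belief bias generalises from one task to the other.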
TABLE 4 Total number of justifications given for the AET and LLN tasks in Experiment 2: believable and unbelievable items by strong and weak arguments, for (a) the argument evaluation task and (b) the law of large numbers task.

There was also a significant interaction between argument strength and task, F(1, 167) = 4.09, MSE = .210, p < .05, ηp² = .024, another small effect. It reflects

the fact that more justifications were offered for weak than strong arguments, but only on the AET task.

As in Experiment 1 we ran an ANOVA on the proportion of reason-based arguments offered by each participant, including both AET and LLN tasks in the same analysis, the other factors being argument strength and conclusion believability. These data are presented in Table 5. There was a very large effect of task, indicating that reason-based arguments were much more commonly observed on the LLN than the AET task, F(1, 166), MSE = .163, p < .001, ηp² = .719. There was also a significant effect of argument strength, such that participants offered more reason-based justifications when the argument evaluated was weak rather than strong, F(1, 166) = 35.40, MSE = .117, p < .001, ηp² = .176. Despite the large power, a trend for more reason-based arguments to be given with unbelievable conclusions fell well short of significance, F(1, 166) = 2.87, MSE = .338, p = .092, ηp² = .017, and the effect size was tiny.

The analyses of justifications suggested quite strong differences between the two tasks. There were more justifications offered for LLN than AET, and a considerably larger proportion of these were reason based. The one factor in common was the greater number of reason-based justifications for decisions made when arguments were objectively weak. Since such arguments were generally given weaker ratings (Figure 3), this probably reflects participants' objections to the weak evidence presented. The other striking feature of the analysis of justifications is how little they were influenced by conclusion believability, despite its large effects on the actual ratings of argument strength (Figure 3). In this respect we replicate the finding of Neilens et al. (2009) that verbal justifications appear more rational and normative than do intuitive ratings.
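The justification scoring behind these analyses amounts to a per-participant tally of coded responses. In this sketch the category labels follow the text ("assertion", "reason", "other"), while the coded responses themselves are invented illustration data:

```python
# Per-participant proportion of reason-based justifications on each task.
# Each list holds the coded category of one justification; the codes
# below are invented toy data for a single hypothetical participant.
coded = {
    "AET": ["assertion", "reason", "assertion", "other"],
    "LLN": ["reason", "reason", "reason", "assertion"],
}

def reason_proportion(labels):
    """Fraction of a participant's justifications coded as reason-based."""
    return labels.count("reason") / len(labels)

proportions = {task: reason_proportion(labels) for task, labels in coded.items()}
# These per-participant proportions are the dependent measure entered
# into the task x strength x believability ANOVA described in the text.
```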
The fact that justifications for the LLN were not sensitive to belief is inconsistent with Klaczynski and Robinson's (2000) finding, but parallels

TABLE 5 Analysis of verbal justifications in Experiment 2: classification (%) of justifications as assertion-based (Ass), reason-based (Reas), or other, for believable and unbelievable items by strong and weak arguments, on (a) the argument evaluation task and (b) the law of large numbers task.

Necessity, possibility and belief: A study of syllogistic reasoning

Necessity, possibility and belief: A study of syllogistic reasoning THE QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 2001, 54A (3), 935 958 Necessity, possibility and belief: A study of syllogistic reasoning Jonathan St. B.T. Evans, Simon J. Handley, and Catherine N.J.

More information

This is an Author's Original Manuscript of an article submitted to Behavior & Brain Science and may differ from the final version which is available here: http://journals.cambridge.org/action/displayabstract?frompage=online&aid=8242505

More information

J. St.B.T. Evans a, S. E. Newstead a, J. L. Allen b & P. Pollard c a Department of Psychology, University of Plymouth, Plymouth, UK

J. St.B.T. Evans a, S. E. Newstead a, J. L. Allen b & P. Pollard c a Department of Psychology, University of Plymouth, Plymouth, UK This article was downloaded by: [New York University] On: 27 April 2015, At: 14:56 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

Thinking & Reasoning Publication details, including instructions for authors and subscription information:

Thinking & Reasoning Publication details, including instructions for authors and subscription information: This article was downloaded by: [Umeå University Library] On: 07 October 2013, At: 11:46 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

A POWER STRUGGLE: BETWEEN- VS. WITHIN-SUBJECTS DESIGNS IN DEDUCTIVE REASONING RESEARCH

A POWER STRUGGLE: BETWEEN- VS. WITHIN-SUBJECTS DESIGNS IN DEDUCTIVE REASONING RESEARCH Psychologia, 2004, 47, 277 296 A POWER STRUGGLE: BETWEEN- VS. WITHIN-SUBJECTS DESIGNS IN DEDUCTIVE REASONING RESEARCH Valerie A. THOMPSON 1) and Jamie I. D. CAMPBELL 1) 1) University of Saskatchewan, Canada

More information

Dual Processes and Training in Statistical Principles

Dual Processes and Training in Statistical Principles Dual Processes and Training in Statistical Principles Helen L. Neilens (hneilens@plymouth.ac.uk) Department of Psychology, University of Plymouth, Drake Circus Plymouth, PL4 8AA UK Simon J. Handley (shandley@plymouth.ac.uk)

More information

Negations in syllogistic reasoning: Evidence for a heuristic analytic conflict

Negations in syllogistic reasoning: Evidence for a heuristic analytic conflict Negations in syllogistic reasoning: Evidence for a heuristic analytic conflict Item type Article Authors Stupple, Edward J. N.; Waterhouse, Eleanor F. Citation Stupple, Edward J. N., Waterhouse, Eleanor

More information

Dimitris Pnevmatikos a a University of Western Macedonia, Greece. Published online: 13 Nov 2014.

Dimitris Pnevmatikos a a University of Western Macedonia, Greece. Published online: 13 Nov 2014. This article was downloaded by: [Dimitrios Pnevmatikos] On: 14 November 2014, At: 22:15 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

To link to this article:

To link to this article: This article was downloaded by: [University of Kiel] On: 24 October 2014, At: 17:27 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

Syllogistic reasoning time: Disconfirmation disconfirmed

Syllogistic reasoning time: Disconfirmation disconfirmed Psychonomic Bulletin & Review 2003, 10 (1), 184-189 Syllogistic reasoning time: Disconfirmation disconfirmed VALERIE A. THOMPSON, CHRISTOPHER L. STRIEMER, RHETT REIKOFF, RAYMOND W. GUNTER, and JAMIE I.

More information

Thompson, Valerie A, Ackerman, Rakefet, Sidi, Yael, Ball, Linden, Pennycook, Gordon and Prowse Turner, Jamie A

Thompson, Valerie A, Ackerman, Rakefet, Sidi, Yael, Ball, Linden, Pennycook, Gordon and Prowse Turner, Jamie A Article The role of answer fluency and perceptual fluency in the monitoring and control of reasoning: Reply to Alter, Oppenheimer, and Epley Thompson, Valerie A, Ackerman, Rakefet, Sidi, Yael, Ball, Linden,

More information

Individual Differences and the Belief Bias Effect: Mental Models, Logical Necessity, and Abstract Reasoning

Individual Differences and the Belief Bias Effect: Mental Models, Logical Necessity, and Abstract Reasoning THINKING AND REASONING, 1999, THE 5 (1), BELIEF 1 28 BIAS EFFECT 1 Individual Differences and the Belief Bias Effect: Mental Models, Logical Necessity, and Abstract Reasoning Donna Torrens and Valerie

More information

Costanza Scaffidi Abbate a b, Stefano Ruggieri b & Stefano Boca a a University of Palermo

Costanza Scaffidi Abbate a b, Stefano Ruggieri b & Stefano Boca a a University of Palermo This article was downloaded by: [Costanza Scaffidi Abbate] On: 29 July 2013, At: 06:31 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

Not All Syllogisms Are Created Equal: Varying Premise Believability Reveals Differences. Between Conditional and Categorical Syllogisms

Not All Syllogisms Are Created Equal: Varying Premise Believability Reveals Differences. Between Conditional and Categorical Syllogisms Not All Syllogisms Are Created Equal: Varying Premise Believability Reveals Differences Between Conditional and Categorical Syllogisms by Stephanie Solcz A thesis presented to the University of Waterloo

More information

Anne A. Lawrence M.D. PhD a a Department of Psychology, University of Lethbridge, Lethbridge, Alberta, Canada Published online: 11 Jan 2010.

Anne A. Lawrence M.D. PhD a a Department of Psychology, University of Lethbridge, Lethbridge, Alberta, Canada Published online: 11 Jan 2010. This article was downloaded by: [University of California, San Francisco] On: 05 May 2015, At: 22:37 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered

More information

Laura N. Young a & Sara Cordes a a Department of Psychology, Boston College, Chestnut

Laura N. Young a & Sara Cordes a a Department of Psychology, Boston College, Chestnut This article was downloaded by: [Boston College] On: 08 November 2012, At: 09:04 Publisher: Psychology Press Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

An Inspection-Time Analysis of Figural Effects and Processing Direction in Syllogistic Reasoning

An Inspection-Time Analysis of Figural Effects and Processing Direction in Syllogistic Reasoning An Inspection-Time Analysis of Figural Effects and Processing Direction in Syllogistic Reasoning Edward J. N. Stupple (E.J.N.Stupple@derby.ac.uk) Department of Psychology, University of Derby Derby, DE3

More information

Back-Calculation of Fish Length from Scales: Empirical Comparison of Proportional Methods

Back-Calculation of Fish Length from Scales: Empirical Comparison of Proportional Methods Animal Ecology Publications Animal Ecology 1996 Back-Calculation of Fish Length from Scales: Empirical Comparison of Proportional Methods Clay L. Pierce National Biological Service, cpierce@iastate.edu

More information

Richard Lakeman a a School of Health & Human Sciences, Southern Cross University, Lismore, Australia. Published online: 02 Sep 2013.

Richard Lakeman a a School of Health & Human Sciences, Southern Cross University, Lismore, Australia. Published online: 02 Sep 2013. This article was downloaded by: [UQ Library] On: 09 September 2013, At: 21:23 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House,

More information

Advanced Projects R&D, New Zealand b Department of Psychology, University of Auckland, Online publication date: 30 March 2011

Advanced Projects R&D, New Zealand b Department of Psychology, University of Auckland, Online publication date: 30 March 2011 This article was downloaded by: [University of Canterbury Library] On: 4 April 2011 Access details: Access Details: [subscription number 917001820] Publisher Psychology Press Informa Ltd Registered in

More information

Why Does Similarity Correlate With Inductive Strength?

Why Does Similarity Correlate With Inductive Strength? Why Does Similarity Correlate With Inductive Strength? Uri Hasson (uhasson@princeton.edu) Psychology Department, Princeton University Princeton, NJ 08540 USA Geoffrey P. Goodwin (ggoodwin@princeton.edu)

More information

6. A theory that has been substantially verified is sometimes called a a. law. b. model.

6. A theory that has been substantially verified is sometimes called a a. law. b. model. Chapter 2 Multiple Choice Questions 1. A theory is a(n) a. a plausible or scientifically acceptable, well-substantiated explanation of some aspect of the natural world. b. a well-substantiated explanation

More information

To link to this article:

To link to this article: This article was downloaded by: [University of Notre Dame] On: 12 February 2015, At: 14:40 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office:

More information

Lora-Jean Collett a & David Lester a a Department of Psychology, Wellesley College and

Lora-Jean Collett a & David Lester a a Department of Psychology, Wellesley College and This article was downloaded by: [122.34.214.87] On: 10 February 2013, At: 16:46 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer House,

More information

Assessing the Belief Bias Effect With ROCs: It s a Response Bias Effect

Assessing the Belief Bias Effect With ROCs: It s a Response Bias Effect Psychological Review 010 American Psychological Association 010, Vol. 117, No. 3, 831 863 0033-95X/10/$1.00 DOI: 10.1037/a0019634 Assessing the Belief Bias Effect With ROCs: It s a Response Bias Effect

More information

The Influence of Activation Level on Belief Bias in Relational Reasoning

The Influence of Activation Level on Belief Bias in Relational Reasoning Cognitive Science (2012) 1 34 Copyright 2012 Cognitive Science Society, Inc. All rights reserved. ISSN: 0364-0213 print / 1551-6709 online DOI: 10.1111/cogs.12017 The Influence of Activation Level on Belief

More information

FULL REPORT OF RESEARCH ACTIVITIES. Background

FULL REPORT OF RESEARCH ACTIVITIES. Background FULL REPORT OF RESEARCH ACTIVITIES Background There has been a recent upsurge of interest in individual differences in reasoning which has been well summarised by Stanovich & West (2000). The reason for

More information

Chapter 02 Developing and Evaluating Theories of Behavior

Chapter 02 Developing and Evaluating Theories of Behavior Chapter 02 Developing and Evaluating Theories of Behavior Multiple Choice Questions 1. A theory is a(n): A. plausible or scientifically acceptable, well-substantiated explanation of some aspect of the

More information

Wild Minds What Animals Really Think : A Museum Exhibit at the New York Hall of Science, December 2011

Wild Minds What Animals Really Think : A Museum Exhibit at the New York Hall of Science, December 2011 This article was downloaded by: [Dr Kenneth Shapiro] On: 09 June 2015, At: 10:40 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

Journal of Experimental Psychology: Learning, Memory, and Cognition

Journal of Experimental Psychology: Learning, Memory, and Cognition Journal of Experimental Psychology: Learning, Memory, and Cognition Conflict and Bias in Heuristic Judgment Sudeep Bhatia Online First Publication, September 29, 2016. http://dx.doi.org/10.1037/xlm0000307

More information

Belief-based and analytic processing in transitive inference depends on premise integration difficulty

Belief-based and analytic processing in transitive inference depends on premise integration difficulty Memory & Cognition 2010, 38 (7), 928-940 doi:10.3758/mc.38.7.928 Belief-based and analytic processing in transitive inference depends on premise integration difficulty GLENDA ANDREWS Griffith University,

More information

When Falsification is the Only Path to Truth

When Falsification is the Only Path to Truth When Falsification is the Only Path to Truth Michelle Cowley (cowleym@tcd.ie) Psychology Department, University of Dublin, Trinity College, Dublin, Ireland Ruth M.J. Byrne (rmbyrne@tcd.ie) Psychology Department,

More information

Dual-Process Theory and Syllogistic Reasoning: A Signal Detection Analysis

Dual-Process Theory and Syllogistic Reasoning: A Signal Detection Analysis University of Massachusetts Amherst ScholarWorks@UMass Amherst Masters Theses 1911 - February 2014 Dissertations and Theses 2009 Dual-Process Theory and Syllogistic Reasoning: A Signal Detection Analysis

More information

Published online: 17 Feb 2011.

Published online: 17 Feb 2011. This article was downloaded by: [Iowa State University] On: 23 April 2015, At: 08:45 Publisher: Routledge Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered office: Mortimer

More information

The Regression-Discontinuity Design. Overview of a quasi-experimental design.
Is inferential reasoning just probabilistic reasoning in disguise? Henry Markovits & Simon Handley, University of Plymouth. Memory & Cognition, 2005, 33(7), 1315-1323.
Validity of Research Results (Chapter 8). Validity issues for quantitative and qualitative research.


Further Properties of the Priority Rule. Michael Strevens, draft of July 2003.
The Belief Bias Effect Is Aptly Named: A Reply to Klauer and Kellen (2011). Psychological Review, 2011, 118(1), 155-163. doi:10.1037/a0021774
Does momentary accessibility influence metacomprehension judgments? The influence of study-judgment lags on accessibility effects. Julie M. C. Baker et al. Psychonomic Bulletin & Review, 13(1).
Eliminative materialism. Michael Lacewing.
Learning Styles Questionnaire.
Why do Psychologists Perform Research? PSY 102 course notes.
Interactions between inferential strategies and belief bias. Henry Markovits, Janie Brisson, Pier-Luc de Chantal & Valerie A. Thompson. Memory & Cognition, 2017, 45, 1182-1192. doi:10.3758/s13421-017-0723-2
What is science? (2009).


Working Memory Span and Everyday Conditional Reasoning: A Trend Analysis. Wim De Neys, Walter Schaeken & Géry d'Ydewalle, K.U. Leuven.
Checking the counterarguments confirms that publication bias contaminated studies relating social class and unethical behavior. Gregory Francis, Purdue University.


Sample paper: a comparative evaluation of Meg Greenfield's "In Defense of the Animals" and Ron Kline's "A Scientist: I am the enemy".
Psychology Research Process. Notes on induction and logical processes in research.
Communication Research Practice Questions.
At least one problem with some formal reasoning paradigms. James R. Schmidt, University of Waterloo, & Valerie A. Thompson. Memory & Cognition, 2008, 36(1), 217-229. doi:10.3758/MC.36.1.217


Why to treat subjects as fixed effects. James S. Adelman, University of Warwick, & Zachary Estes, Bocconi University.
Content Effects in Conditional Reasoning: Evaluating the Container Schema. Amber N. Bloomfield & Lance J. Rips, Northwestern University.
Validity and Quantitative Research. RCS 6740 lecture notes.
Experimental Research Designs. PSYC 204 (Experimental Psychology) lecture notes.
Different developmental patterns of simple deductive and probabilistic inferential reasoning. Henry Markovits. Memory & Cognition, 2008, 36(6), 1066-1078. doi:10.3758/MC.36.6.1066
The role of theory in construction management: a call for debate. Seymour, Crook & Rooke. doi:10.1080/014461997373169
Category Size and Category-Based Induction. Aidan Feeney & David R. Gardiner, University of Durham.
Critical Thinking Assessment at MCC: How are we doing? Maura McCool, Metropolitan Community Colleges, Fall 2003.
Implicit Information in Directionality of Verbal Probability Expressions. Hidehito Honda & Kimihiko Yamagishi.
Chapter 11: Experimental Design: One-Way Independent Samples Design.
"Games and the Good". Notes on Hurka's claim that game-playing is an intrinsic good.
Misleading Postevent Information and the Memory Impairment Hypothesis: Comment on Belli and Reply to Tversky and Tuchin. Journal of Experimental Psychology: General, 1989, 118(1), 92-99.
Base Rates: Both Neglected and Intuitive. Gordon Pennycook, Dries Trippas, Simon J. Handley & Valerie A. Thompson. Journal of Experimental Psychology: Learning, Memory, and Cognition.


Assignment 4: True or Quasi-Experiment.
How People Estimate Effect Sizes: The Role of Means and Standard Deviations. Motoyuki Saito, Kwansei Gakuin University.
Durkheim. Notes on Rules of the Sociological Method.
Programme Specification: MSc/PGDip Forensic and Legal Psychology.
Measuring and Assessing Study Quality. Jeff Valentine, University of Louisville.


Examples of Feedback Comments: How to use them to improve your report writing.
Response to the ASA's statement on p-values: context, process, and purpose. Edward L. Ionides, Alexander Giessing, Yaacov Ritov & Scott E. Page.
The influence of (in)congruence of communicator expertise and trustworthiness on acceptance of CCS technologies. Emma ter Mors, Mieneke Weenig, Naomi Ellemers & Dancker Daamen, Leiden University.


Philosophical Issues in Medicine and Psychiatry, Part IV. James Lake, MD.
Highlighting Effect: The Function of Rebuttals in Written Argument. Ryosuke Onoda, The University of Tokyo.
Chapter 3: Attitude Change (objectives and outline).
An Experimental Investigation of Self-Serving Biases in an Auditing Trust Game: The Effect of Group Affiliation: Discussion. Shyam Sunder, Yale School of Management.
Intelligence and reasoning are not one and the same. Ira A. Noveck & Jérôme Prado, L2C2, CNRS-Université de Lyon.
Chapter 3: Method and Procedure.
The Role of Modeling and Feedback in Task Performance and the Development of Self-Efficacy. Skidmore College.
Egocentrism, Event Frequency, and Comparative Optimism: When What Happens Frequently Is More Likely to Happen to Me. Chambers et al. Personality and Social Psychology Bulletin. doi:10.1177/0146167203256870
CSC2130: Empirical Research Methods for Software Engineering. Steve Easterbrook, University of Toronto.
Decisions based on verbal probabilities: Decision bias or decision by belief sampling? Hidehito Honda, The University of Tokyo.
Does pure water boil when it's heated to 100 °C?: The Associative Strength of Disabling Conditions in Conditional Reasoning. Wim De Neys, K.U. Leuven.
Underlying Theory & Basic Issues. Dewayne E. Perry, ENS 623.
The Conference That Counts! March 2018.
Dual-Process Theories: Questions and Outstanding Issues. Valerie A. Thompson, University of Saskatchewan.
The Flynn effect and memory function. Sallie Baxendale.
Thinking and Intelligence (chapter learning objectives).
Chapter 3: Methodology.
What do Americans know about inequality? It depends on how you ask them. Kimmo Eriksson & Brent Simpson. Judgment and Decision Making, 2012, 7(6), 741-745.
Sleeping Beauty (the Sleeping Beauty problem).
Deciding whether a person has the capacity to make a decision: the Mental Capacity Act 2005. RMBI, April 2015.