Independence and Dependence in Human Causal Reasoning. Bob Rehder. Department of Psychology New York University New York NY 10003


Running Head: Independence and Dependence in Causal Reasoning

Independence and Dependence in Human Causal Reasoning

Bob Rehder
Department of Psychology
New York University
New York, NY

Send all correspondence to: Bob Rehder, Dept. of Psychology, 6 Washington Place, New York, New York. Phone: (212) bob.rehder@nyu.edu

Abstract

Causal graphical models (CGMs) have become a popular formalism used to model human causal reasoning and learning. The key property of CGMs is the causal Markov condition (CM), which stipulates patterns of independence and dependence among causally related variables. Five experiments found that while adults' causal inferences exhibited aspects of veridical causal reasoning, they also exhibited a small but tenacious tendency to violate CM. They also failed to exhibit discounting, in which the presence of one cause as an explanation of an effect makes the presence of another less likely. Instead, subjects exhibited a tendency to reason associatively, that is, to assume that the presence of one variable implies the presence of other, causally related variables, even when those other variables were (according to CM) conditionally independent. The rate of independence violations was unaffected by manipulations (e.g., response deadlines) known to influence fast and intuitive reasoning processes, suggesting that an associative response to a causal reasoning question is at least sometimes the product of careful and deliberate thinking. That about 60% of the erroneous associative inferences were made by about a quarter of the subjects indicates the presence of substantial individual differences in this tendency. There was also evidence that subjects' inferences were influenced by their assumptions about the presence of factors that disable causal relations and by their use of a conjunctive reasoning strategy.

Independence and Dependence in Human Causal Reasoning

People possess numerous beliefs about the causal structure of the world. They believe that sunrises make roosters crow, that smoking causes lung cancer, and that alcohol consumption leads to traffic accidents. The value of such knowledge lies in allowing one to infer more about a situation than can be directly observed. For example, one generates explanations by reasoning backward to ascertain the causes of the event at hand. One also reasons forward to predict what might happen in the future. On the basis of, say, a friend's inebriated state, we predict dire consequences if he were to drive, and thus hide his car keys in the nearest flowerpot. A large number of studies have now investigated how humans make causal inferences. One simple question is: When two variables, X and Y, are causally related, do people infer one from the other? Unsurprisingly, research has confirmed that they do, as X is deemed more likely in the presence of Y and vice versa (Fernbach, Darlow, & Sloman, 2010; Meder, Hagmayer, & Waldmann, 2008; 2009; Rehder & Burnett, 2005; see Rottman & Hastie, 2013, for a review). But causal inferences quickly become more complicated if just one additional variable is introduced. For example, suppose that X and Y are related to one another not directly but rather through a third variable Z. Under these conditions, how one should draw an inference between X and Y will depend on the direction of the causal relations that link them via Z. The three possibilities are presented in Figure 1. First, X and Y might both be effects of Z (Figure 1A), a topology often referred to as a common cause network. For example, a doctor might diagnose a disease (Z) on the basis of a particular symptom (X), and then also predict that the patient will soon exhibit another symptom characteristic of that disease (Y).
Second, the variables might form a causal chain in which X causes Z, which causes Y (Figure 1B). For example, a politician may calculate that pandering to the extreme wing of his or her party (X) will lead to their support (Z), but that that support will in turn galvanize members of the opposing party (Y). Finally, Z might be caused by X or Y, forming a common effect network (Figure 1C). For example, a police detective might release an individual (Y) suspected of murder (Z) upon discovering the murder weapon in the possession of another suspect (X).
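The different dependence patterns that the three topologies imply can be made concrete with a small numerical sketch. The following Python snippet is purely illustrative: the noisy-OR parameterization and all parameter values are my own choices, not anything taken from the article. It enumerates the joint distribution over the three binary variables under each topology and checks the signature pattern of each.

```python
from itertools import product

# Illustrative parameters (my own, not from the article):
c, m, b = 0.5, 0.8, 0.1   # root base rate, causal power, background rate

def noisy_or(*causes):
    # Each present cause, and a weak background, can independently produce
    # the effect; absent causes exert no influence (the "single sense").
    p = 1 - b
    for cause in causes:
        p *= 1 - m * cause
    return 1 - p

def bern(p, v):
    return p if v else 1 - p

# Joint distributions P(x, y, z) under each topology.
def common_cause(x, y, z):   # X <- Z -> Y  (Figure 1A)
    return bern(c, z) * bern(noisy_or(z), x) * bern(noisy_or(z), y)

def chain(x, y, z):          # X -> Z -> Y  (Figure 1B)
    return bern(c, x) * bern(noisy_or(x), z) * bern(noisy_or(z), y)

def common_effect(x, y, z):  # X -> Z <- Y  (Figure 1C)
    return bern(c, x) * bern(c, y) * bern(noisy_or(x, y), z)

def p_y(joint, **given):
    """P(Y = 1 | given) by enumerating the eight joint states."""
    num = den = 0.0
    for x, y, z in product((0, 1), repeat=3):
        state = {'x': x, 'y': y, 'z': z}
        if all(state[k] == v for k, v in given.items()):
            den += joint(x, y, z)
            num += joint(x, y, z) if y == 1 else 0.0
    return num / den

# Common cause and chain: once Z is known, X tells us nothing more about Y.
assert abs(p_y(common_cause, z=1, x=1) - p_y(common_cause, z=1, x=0)) < 1e-9
assert abs(p_y(chain, z=1, x=1) - p_y(chain, z=1, x=0)) < 1e-9
# Common effect: X and Y are marginally independent...
assert abs(p_y(common_effect, x=1) - p_y(common_effect, x=0)) < 1e-9
# ...but dependent given Z: the presence of X makes Y less likely.
assert p_y(common_effect, z=1, x=1) < p_y(common_effect, z=1, x=0)
```

These three contrasts are exactly the patterns formalized by the causal Markov condition discussed next: screening off in the common cause and chain networks, and explaining away in the common effect network.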

A formalism that specifies the permissible forms of causal inferences and that is generally accepted as normative is known as causal graphical models, hereafter CGMs (Glymour, 1998; Jordan, 1999; Koller & Friedman, 2009; Pearl, 1988; 2000; Spirtes et al., 1993). CGMs are types of Bayesian networks (or directed acyclic graphs) in which variables are represented as nodes and directed edges between those variables are interpreted as causal relations. Note that a CGM need not be complete in the sense that variables may have exogenous influences (i.e., hidden causes) that are not part of the model; however, these influences are constrained to be uncorrelated. This property, referred to as causal sufficiency (Spirtes et al., 1993, 2000), in turn has important implications for the sorts of inferences that are allowable. Specifically, causal sufficiency enables CGMs to stipulate the well-known causal Markov condition, hereafter referred to as CM, that specifies the conditions under which variables are conditionally dependent on or independent of one another (Hausman & Woodward, 1999; Pearl, 1988; 2000; Spirtes et al., 1993, 2000). The goal of this research is to test whether the causal inferences people make follow the predictions of CGMs, particularly whether they honor the constraints imposed by CM. This question is important because Bayes nets have become popular for modeling cognitive processes in numerous domains.
For example, CGMs have been used as psychological models of not only various forms of causal reasoning (Lee & Holyoak, 2008; Rehder & Burnett, 2005; Rehder, 2009; Shafto, Kemp, Bonawitz, Coley, & Tenenbaum, 2008), but also causal learning (Cheng, 1997; Gopnik, Glymour, Sobel, Schulz, & Kushnir, 2004; Griffiths & Tenenbaum, 2005; 2009; Lu, Yuille, Liljeholm, Cheng, & Holyoak, 2008; Sobel, Tenenbaum, & Gopnik, 2004; Waldmann, Holyoak, & Fratianne, 1995), interventions (Sloman & Lagnado, 2005; Waldmann & Hagmayer, 2005), and classification (Rehder, 2003; Rehder & Kim, 2009; 2010). Graphical models have also been used as models of non-causal structured knowledge, such as taxonomic hierarchies (Kemp & Tenenbaum, 2009). However, in all of these domains the inferential procedures that accompany Bayes nets and that are taken as candidate models of psychological processes rely on CM for their justification. Said differently: CM is at the heart of Bayes nets. Without it, any claim that knowledge is represented as a Bayes net amounts to no more than the claim that it consists of nodes connected with arrows. Thus, a demonstration that

humans sharply violate CM would have implications for the role that Bayes nets currently occupy in cognitive modeling. This article has the following structure. I first describe how CM constrains causal inferences. I then present previous empirical research that bears on the psychological question of whether humans violate CM. Five new experiments testing CM are then presented.

Implications of the Causal Markov Condition (CM)

For tractability, this article limits itself to restricted instances of the common cause, chain, and common effect networks. First, whereas nothing prevents CGMs from including continuous and ordinal variables, this work considers only binary variables that are either present or absent. Second, whereas CGMs can include inhibitory causal relations (a cause tends to prevent an effect) and relations that involve multiple variables, here I treat only simple facilitatory (or generative) relations between pairs of variables. Third, those causal relations have a single sense: The presence of the cause facilitates the presence of the effect, but the absence of the cause exerts no influence. Fourth, for the common effect network I will assume that X and Y are independent causes of Z. Under these assumptions, I demonstrate how CGMs constrain inferences for the three networks in Figure 1.

CM and Common Cause Networks

CM specifies the pattern of conditional independence that arises given knowledge of the state of a subset of variables in a network. Specifically, when that subset includes a variable's direct parents, that variable is conditionally independent of any variable that is neither a parent nor an effect. That is, conditioned on its parents, a variable is independent of its non-effects (i.e., its non-descendants) (Hausman & Woodward, 1999).
This condition has a natural causal interpretation: Apart from its descendants, one has learned as much as possible about a variable once one knows the state of all of its direct causes. Because non-descendants provide information about the variable only through the parents, the variable is said to be screened off by the parents from those non-descendants. Figure 2 illustrates this principle with the common cause network in Figure 1A by presenting

the eight distinct situations in which one may infer the state of Y as a function of the states of X and Z. In Figure 2, a 1 means a binary variable is present, 0 means that it's absent, and x means that its state is unknown. Y is always unknown and is the variable being inferred. The state of Y's parent cause Z is known to be present in situations A, B, and C, known to be absent in F, G, and H, and unknown in D and E. Situations also vary according to whether the other effect X is present, absent, or unknown. Because the labels X and Y are interchangeable in the common cause network, the situations in Figure 2 include those in which one infers X rather than Y. The eight situations in Figure 2 are arranged into equivalence classes I, II, III, and IV in which problems in the same class provide the same inferential support for Y. Equivalence classes I and IV illustrate CM. In class I, the state of Y's immediate parent Z is known (it is present) and so knowledge about the state of Y's non-descendants (namely, X) provides no additional information about Y. Said differently, because Z screens off Y from X, problem types A, B, and C all provide equivalent support for Y. Similarly, because the known (absent) value of Z screens Y off from X in problem types F, G, and H, they also provide equal support for Y. With some caveats, CGMs also predict that inferences in favor of the causally related value of Y become weaker as one moves from class I to IV. Problems in class I, in which Y's immediate cause Z is present, generally provide stronger support for Y than the single problem in class II (D), in which the state of Z is unknown but X is present. This is the case because the presence of X provides evidence for, but not a guarantee of, the presence of Z, and this less-than-certain belief in the presence of Z translates into a weaker inference to Y.
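The ordering of the equivalence classes can be checked numerically. The sketch below uses an illustrative noisy-OR parameterization of my own (nothing here comes from the article's materials) and computes P(Y = 1 | evidence) for representative situations from each class.

```python
from itertools import product

# Hypothetical parameterization of the common cause network Z -> X, Z -> Y.
c = 0.5   # P(Z = 1): base rate of the common cause
m = 0.8   # causal power of each Z -> effect link
b = 0.1   # background rate: an effect can occur even when Z is absent

def p_effect(z):
    # Noisy-OR combination of the cause (when present) and the background.
    return 1 - (1 - m * z) * (1 - b)

def joint(x, y, z):
    pz = c if z else 1 - c
    px = p_effect(z) if x else 1 - p_effect(z)
    py = p_effect(z) if y else 1 - p_effect(z)
    return pz * px * py

def p_y(**given):
    """P(Y = 1 | given) by brute-force enumeration of the joint."""
    num = den = 0.0
    for x, y, z in product((0, 1), repeat=3):
        state = {'x': x, 'y': y, 'z': z}
        if all(state[k] == v for k, v in given.items()):
            den += joint(x, y, z)
            num += joint(x, y, z) if y == 1 else 0.0
    return num / den

# Class I (situations A, B, C): once Z is known, X adds nothing about Y.
assert abs(p_y(z=1, x=1) - p_y(z=1, x=0)) < 1e-9

# Ordering of the equivalence classes I > II > III > IV:
print(p_y(z=1), p_y(x=1), p_y(x=0), p_y(z=0))
```

With these parameters the four classes come out ordered, as predicted. Setting b = 0 (so that an effect never occurs without Z) makes the links deterministically necessary, and the class I/II distinction then disappears.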
However, the distinction between these two classes depends on how the network is parameterized, that is, on the specific strengths of some of the causal relations. For example, when the causal link between X and Z is deterministically necessary (an effect is always accompanied by its cause because it has no other potential causes), then the presence of Z is certain in problem type D, and thus the presence of Y is as certain as in problem types A, B, and C. This possible collapse of classes I and II into a single class due to deterministic necessity is represented in Figure 2 with a dashed line. The single problem in class III (E) provides weaker support than D because the causally

related value of X is absent (suggesting that Z is absent, and thus so too is Y). Finally, the class of problems in which Z is known with certainty to be absent (F, G, and H) provides the weakest support for Y of all. However, when the causal link between X and Z is deterministically sufficient (a cause is always accompanied by its effect), then the absence of Z can be inferred with certainty, and thus the absence of Y is as certain as in problem types F, G, and H. This possible collapse of classes III and IV due to deterministic sufficiency is represented in Figure 2 with a double dashed line. With these exceptions noted, the question of whether inferences honor the ordering of equivalence classes in Figure 2 will be evaluated in the upcoming experiments.

CM and Chain Networks

The normative pattern of inferences when X, Y, and Z form a causal chain is presented in Figure 3, which shows the different situations in which one can predict Y as a function of X and Z. The analysis of the chain network is very similar to that of the common cause network. First, situations A, B, and C form an equivalence class because the known value of Z screens off Y from the non-descendant X (so that information about X is irrelevant to predicting Y). Next, situation D should support weaker inferences to Y than types A, B, or C, because the presence of X in D suggests but doesn't guarantee the presence of Z (unless the link between X and Z is deterministically sufficient, as discussed above). Situation E is weaker still, because the absence of X suggests the absence of Z (and thus the absence of Y). But, unless the link between X and Z is deterministically necessary (i.e., there are no alternative causes of Z), E will be stronger than situations F, G, and H, in which the absence of Z is known with certainty. Finally, types F, G, and H form an equivalence class because the value of Z screens off Y from X.
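The parallel class structure for the chain can be verified the same way. The following sketch again uses illustrative noisy-OR parameters of my own (not values from the article), now with the factorization P(X)P(Z|X)P(Y|Z):

```python
from itertools import product

# Hypothetical parameterization of the chain X -> Z -> Y.
c = 0.5   # P(X = 1): base rate of the initial cause
m = 0.8   # causal power of each link
b = 0.1   # background rate of each effect

def p_effect(cause):
    return 1 - (1 - m * cause) * (1 - b)   # noisy-OR with the background

def joint(x, z, y):
    px = c if x else 1 - c
    pz = p_effect(x) if z else 1 - p_effect(x)
    py = p_effect(z) if y else 1 - p_effect(z)
    return px * pz * py

def p_y(**given):
    num = den = 0.0
    for x, z, y in product((0, 1), repeat=3):
        state = {'x': x, 'z': z, 'y': y}
        if all(state[k] == v for k, v in given.items()):
            den += joint(x, z, y)
            num += joint(x, z, y) if y == 1 else 0.0
    return num / den

# Situations A, B, C: the known value of Z screens Y off from X.
assert abs(p_y(z=1, x=1) - p_y(z=1, x=0)) < 1e-9
# D (X present, Z unknown) is weaker than class I; E weaker still;
# F, G, H (Z known absent) are weakest of all.
print(p_y(z=1), p_y(x=1), p_y(x=0), p_y(z=0))
```

As in the common cause case, only the probabilistic (non-deterministic) links keep the four classes distinct.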
Note that whereas for a common cause network inferences to either X or Y are qualitatively equivalent, this is not the case in a chain network, because X is the initial cause and Y is the terminal effect. Nevertheless, the analysis of problems in which X rather than Y is the to-be-predicted variable parallels the one just presented, with problem types A, B, and C forming one equivalence class and F, G, and H another. Although differences between predicting the initial cause (X) as compared to the

final effect (Y) are not uninteresting, I will generally collapse over this distinction in what follows.

CM and Common Effect Networks

The common effect network in Figure 1C illustrates a second sort of constraint stipulated by CGMs. Whereas in common cause and chain networks knowledge of the state of Z renders X and Y independent, it has the opposite effect in common effect networks: X and Y are independent in the absence of knowledge of Z but become dependent when the state of Z is known. The nature of that dependency depends on how Z is functionally related to its causes. Although in general any functional form is possible (e.g., X and Y may be conjunctive causes of Z such that both must be present to produce Z, Y might disable the causal relation that links X and Z, etc.), as mentioned I focus on cases in which X and Y are independent, generative causes of Z. Under this assumption, Figure 4 presents the equivalence classes for a common effect network. Of course, the presence of the common effect Z in situations A, B, and C results in their providing stronger evidence in favor of the presence of a cause than the other types. But, among these three problems, the probability that a cause is present when the other cause is known to be absent (situation C) is larger than when its state is unknown (B), which in turn is larger than when it is known to be present (A). This phenomenon is referred to as discounting or explaining away. As mentioned, when the state of Z is unknown, X and Y are conditionally independent. For example, problem types E and D each provide equally strong inferences to Y because X, as an independent cause, provides no information about Y (and thus one's predictions regarding Y should correspond to its base rate, i.e., the probability with which one predicts Y in the absence of any information about X or Z). Finally, problem types F, G, and H also form an equivalence class.
This is the case because of the single sense interpretation of the causal relations, that is, the presence of X (or Y) causes the presence of Z but the absence of X (or Y) does not cause the absence of Z. Again, differences between some equivalence classes depend on the parameterization of the causal relations. When those relations are deterministically sufficient, class III collapses into IV. This is the case because the presence of variable X in problem type A completely accounts for the presence

of Z. Thus, the probability of Y in A should correspond to its base rate, as in problem types E and D.

Apparent Violations of CM in Psychological Research

Given the prominent use of CGMs in models of cognition, it is unsurprising that a number of investigators have recently asked whether adult human reasoners in fact honor the constraints imposed by CM. I now review three recent studies that bear on this question.

Walsh and Sloman (2008)

Walsh and Sloman (2008, Experiment 1; also see Park & Sloman, 2013; Walsh & Sloman, 2004) asked subjects to reason about a number of real-world vignettes that involved three variables related by causal knowledge into a common cause network. For example, subjects were told that worrying causes difficulty concentrating and that worrying also causes insomnia. They were then asked two inference questions. First, they were asked whether an individual had difficulty concentrating given that he or she was worried (this corresponds to situation type B in Figure 2). Next, they were asked whether a different individual had difficulty concentrating given that he or she was worried but didn't have insomnia (situation type C). Because the state of the common cause Z (worrying) is given in both questions, Y (difficulty concentrating) is screened off from the additional information provided about X (insomnia) in the second question. In fact, however, probability ratings were much higher for the first question than for the second. Although this result provides prima facie evidence against CM, results from a follow-up experiment suggested that subjects were reasoning with causal knowledge in addition to that emphasized by the experimenters.
Specifically, Walsh and Sloman found evidence that the absence of one of the effects in the second question led subjects to assume the presence of a shared disabler that not only explained why the effect X failed to occur but also led them to expect that it would prevent the presence of the other effect Y. For example, some subjects assumed that the absence of insomnia was due to the individual performing relaxation exercises, which in turn would also help prevent difficulty concentrating. This finding is important because inferences that violate CM for one CGM may no longer do

so if that CGM is elaborated to include hidden variables (i.e., variables that were not provided as part of the cover story and not explicitly mentioned as part of the inference question). The left panel of Figure 5A presents a common cause model elaborated to include the sort of hidden disabler (W) assumed by many of Walsh and Sloman's subjects. In the panel, arcs between two causal links represent interactive causes such that the causal influence of Z on X and Y depends on W; in particular, that influence is absent when W is present. Because in this network the state of one of Y's direct parents (W) is not known, Y is no longer screened off from X by Z; that is, because X (insomnia) provides information about W (relaxation exercises), it thus also provides information about Y (difficulty concentrating) even when the state of Z (worrying) is known. For this causal network, the Walsh and Sloman findings no longer constitute violations of CM. Said differently, representing the subjects' causal knowledge as a common cause network without potential disablers violates the causal sufficiency constraint described earlier: Because W is a causal influence that is common to both X and Y, omitting it means that exogenous influences are not uncorrelated. This in turn invalidates the expectations of independence stipulated by CM. More recent work (Park & Sloman, 2013) suggests that reasoners may also assume the presence of a shared disabler with chain networks, where the presence of W now disables the X→Z and Z→Y causal links (middle panel of Figure 5A). Later, I will present a fuller analysis of how causal inferences are influenced by the possible presence of a shared disabler for all three types of networks addressed in this article, including common effect networks (right panel of Figure 5A).
But for now, these findings illustrate how situations that may appear to be counterexamples to CM may turn out not to be when the causal relations are represented with greater fidelity. Of course, the study of Walsh and Sloman has revealed some interesting and important facts about causal reasoning. That people will respond to an apparent inconsistency in a causal situation by ad hoc elaboration of their causal model to include additional causal factors is a significant finding; so too is the fact that they then use this elaborated model to reason about new individuals. But for the reasons just given, what this study doesn't do is provide decisive evidence against CM.
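The effect of an unobserved shared disabler can be made concrete with a sketch. All parameters below are my own, purely illustrative choices; W follows the left panel of Figure 5A in switching off both of Z's links when present. With W marginalized out, conditioning on Z no longer renders X and Y independent:

```python
from itertools import product

# Common cause network Z -> X, Z -> Y with a hidden shared disabler W
# (hypothetical parameters, not taken from the article).
cz, cw = 0.5, 0.3   # base rates of Z and of the disabler W
m, b = 0.9, 0.1     # causal power when not disabled; background rate

def p_effect(z, w):
    # The Z -> effect link operates only when the disabler W is absent.
    return 1 - (1 - m * z * (1 - w)) * (1 - b)

def joint(x, y, z, w):
    pz = cz if z else 1 - cz
    pw = cw if w else 1 - cw
    px = p_effect(z, w) if x else 1 - p_effect(z, w)
    py = p_effect(z, w) if y else 1 - p_effect(z, w)
    return pz * pw * px * py

def p_y(**given):
    num = den = 0.0
    for x, y, z, w in product((0, 1), repeat=4):
        state = {'x': x, 'y': y, 'z': z, 'w': w}
        if all(state[k] == v for k, v in given.items()):
            den += joint(x, y, z, w)
            num += joint(x, y, z, w) if y == 1 else 0.0
    return num / den

# With W unobserved, Z no longer screens Y off from X: the absence of X
# suggests that W is present, which in turn makes Y much less likely.
print(p_y(z=1, x=1), p_y(z=1, x=0))
```

This is the formal sense in which the apparent Markov violation in situation type C is no violation at all once the disabler is part of the model: the pattern follows from CM applied to the elaborated network.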

Mayrhofer, Hagmayer, and Waldmann (2010)

In another test of causal reasoning, Mayrhofer et al. (2010; also see Mayrhofer & Waldmann, 2013) instructed subjects on scenarios involving mind-reading aliens. In all conditions, the thoughts of one alien (Gonz) could be transmitted to three others (Murks, Brxxx, and Zoohng). However, Mayrhofer et al. varied the cover story that was provided to subjects. In the sending condition, subjects were told that Gonz could transmit its thoughts into the heads of the other aliens. In the reading condition, they were told that the other aliens could read the thoughts of Gonz. Mayrhofer et al. described both scenarios as involving a common cause network (with Gonz as the common cause) and thus tested CM by asking subjects to predict the thoughts of one of the effect aliens (e.g., Murks) given the thoughts of Gonz and the remaining effects (Brxxx and Zoohng). They found that the effects were not independent: Subjects predicted that Murks was more likely to have the same thought as Gonz if Brxxx and Zoohng did also. Importantly, this effect was much stronger in the sending condition than in the reading condition. Rather than interpreting this as a violation of CM, however, Mayrhofer et al. noted that subjects were unlikely to have construed the situation as involving a simple common cause model. In the sending condition, it is natural to assume that Gonz's ability to send thoughts to three other aliens relied on some sort of common sending mechanism. This situation corresponds to the causal model in the left panel of Figure 5B, in which Gonz's sending mechanism is represented by the variable W. On this account, if, say, Brxxx doesn't share Gonz's thought, a likely reason is the malfunctioning of Gonz's sending mechanism, in which case Murks is also unlikely to share Gonz's thought. That is, in the model shown in the left panel of Figure 5B, an effect (e.g.
Y) is no longer screened off from another effect (X) by Z, because X provides information about W and thus about Y. Once again, the violation of causal sufficiency represented by neglecting a hidden common cause W means that the effects of that cause are not conditionally independent given Z. The much smaller violations of CM in the reading condition may have been due to subjects' belief that the process of reading depended mostly on some property of the reader itself (thus, the fact that Brxxx had trouble reading Gonz's thought provides no

reason to think that Murks would too). 1 Appeals to a shared mediator in a common cause network have been used to respond to supposed counterexamples to CM in the philosophical literature. For example, a situation presented by Cartwright (1993) involves two factories that both produce a chemical used to treat sewage but that operate on different days. However, whereas the process used by the first factory produces the useful chemical 100% of the time, the one used by the second sometimes fails to produce the chemical at all, yielding a terrible pollutant instead. Cartwright represents this situation as a common cause (Z, which of the two factories produced the chemical) producing two effects, X (the sewage-treating chemical) and Y (the pollutant), and observes that X and Y are not independent conditioned on Z (e.g., even if one knows that the second factory was in operation today, the presence of the pollutant implies the absence of the useful chemical). In response, Hausman and Woodward (1999) note that the situation is more accurately represented by the network shown in Figure 5B, in which the causal influence of the factory (Z) is mediated by some process (W), which in turn determines the likely presence of the chemical and the pollutant (X and Y). On this analysis, X and Y are only independent conditioned on W, and thus Cartwright's scenario fails to serve as a counterexample to CM (also see Salmon, 1984, and Sober, 1987, for similar problems with similar solutions). Later I will consider how inferences with chain and common effect networks are influenced by the assumed presence of mediating processes (the middle and right panels of Figure 5B). These examples again illustrate how failing to include relevant causal factors can invalidate the patterns of conditional independence stipulated by CM. Of course, the results from Mayrhofer et al.
are important insofar as they reveal how subjects' construal of agency in a situation (i.e., which actor initiates an event) can influence their causal model and thus the inferences they draw. But those findings fail to shed light on whether such inferences in fact honor CM.

1 Mayrhofer et al. themselves followed Walsh and Sloman by modeling these results as involving a shared disabler, one that was stronger in the sending versus the reading condition. Later I demonstrate that shared disablers and mediators make the same predictions for the causal reasoning problems presented in the current experiments.

Rehder and Burnett (2005)

Finally, Rehder and Burnett tested CM by instructing subjects on categories with features that were linked by causal relations. These categories were artificial, that is, they denoted entities that don't really exist. For example, subjects who learned Lake Victoria Shrimp were told that such shrimp have a number of typical or characteristic features (e.g., a high quantity of the ACh neurotransmitter, a slow flight response, an accelerated sleep cycle, etc.). Subjects were then presented with individual category members with one or more missing features (i.e., stimulus dimensions whose values were unknown) and asked to predict one of those features. Importantly, these experiments went beyond those of Walsh and Sloman (2008) and Mayrhofer et al. (2010) in two ways. First, they tested a larger number of causal network topologies. In addition to common cause networks (i.e., one feature causes all others), subjects were tested on causal chains, common effect networks (one feature caused by all others), and a control condition in which subjects were not instructed on any interfeature causal relations. Second, they tested a wider variety of materials. In addition to biological kinds like Lake Victoria Shrimp, subjects also learned nonliving natural kinds, artifacts, and blank materials (in which the categories were described merely as some sort of object and the features were the letters A, B, etc.). Rehder and Burnett found that subjects appeared to violate CM in their causal inferences. These violations occurred for all causal network topologies and all types of materials. The pattern was the same in all conditions: Predictions were stronger to the extent that the item had more typical category features, even when those additional features were (according to CM) conditionally independent of the to-be-predicted feature.
Unlike the violations of CM described above, these results cannot be attributed to subjects' assuming the presence of disablers (Figure 5A) or shared mediators (Figure 5B), because those knowledge structures do not explain the violations for all network types tested by Rehder and Burnett (a claim I will demonstrate later). Nevertheless, just as in the previous studies, Rehder and Burnett accounted for their results by appealing to subjects' use of additional knowledge. Specifically, they proposed that reasoners assume that categories possess underlying properties or mechanisms that

produce or generate a category's observable properties. This situation is represented in Figure 5C, in which W serves as a shared generative cause of a category's typical features. Because one can reason from X to Y (or vice versa) via W, in Figure 5C X and Y are conditionally dependent even given the state of Z. The common cause W also explains the inferences Rehder and Burnett found in the acausal control condition (not shown in Figure 5C): Although not directly causally related to one another, features are nonindependent because they are all indirectly related via W. One might ask where these beliefs about categories' underlying mechanisms come from. In Rehder and Burnett's experiments they were unlikely to have originated with the categories themselves because artificial categories like Lake Victoria Shrimp don't really exist. They were also unlikely to have originated from more general knowledge associated with superordinate categories like shrimp or all biological kinds. For example, although researchers have suggested that people view biological kinds as possessing essential properties that generate, or cause, perceptual features (Gelman, 2003; Medin & Ortony, 1989), Rehder and Burnett's results were also obtained with nonbiological kinds and artifacts (and blank materials in which the ontological domain was unspecified). Accordingly, Rehder and Burnett concluded that people possess a domain-general assumption that categories' typical features are brought about by hidden causal mechanisms, that is, even without knowing what those mechanisms might be. For present purposes, the important point is that CM was again rescued when the investigators assumed that subjects reasoned with a causal model elaborated to include a shared generative cause.
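The underlying-mechanism account can be sketched the same way as the disabler and mediator accounts. Here W is a hidden generative cause of two features, X and Y, that have no direct causal link between them, as in the acausal control condition; all parameters are my own, purely illustrative choices.

```python
from itertools import product

# Two features X and Y generated by a hidden category mechanism W
# (hypothetical parameters, not taken from the article).
cw = 0.5            # base rate of the hidden mechanism W
m, b = 0.8, 0.1     # causal power of W; background rate of each feature

def p_feature(w):
    return 1 - (1 - m * w) * (1 - b)   # noisy-OR of W and the background

def joint(x, y, w):
    pw = cw if w else 1 - cw
    px = p_feature(w) if x else 1 - p_feature(w)
    py = p_feature(w) if y else 1 - p_feature(w)
    return pw * px * py

def p_y(**given):
    num = den = 0.0
    for x, y, w in product((0, 1), repeat=3):
        state = {'x': x, 'y': y, 'w': w}
        if all(state[k] == v for k, v in given.items()):
            den += joint(x, y, w)
            num += joint(x, y, w) if y == 1 else 0.0
    return num / den

# Typicality effect: even with no direct causal link, the presence of one
# characteristic feature raises the inferred probability of another,
# because both are diagnostic of the hidden mechanism W.
print(p_y(x=1), p_y(), p_y(x=0))
```

Because every observed feature is diagnostic of W, adding typical features monotonically strengthens the inference, which is the qualitative pattern Rehder and Burnett reported.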
In summary, the preceding review reveals that apparent violations of CM can be explained away by appealing to additional knowledge structures that people might have brought to bear on the causal inference. Of course, that prior knowledge can influence reasoning is hardly surprising given the long history of research showing how beliefs affect performance on supposedly formal (content-free) reasoning problems. The belief bias effect refers to reasoners' tendency to more readily accept the conclusion of a syllogistic reasoning problem as valid if it is believed to be true (Evans, Barston, & Pollard, 1983; see Evans, Handley, & Bacon, 2009, for an analogous effect with conditional reasoning). Closer to home, suppression effects arise when conditional statements (if p then q) are

interpreted causally and the reasoner can easily retrieve counterexamples to the rule that imply the presence of alternative causes or disabling conditions (Byrne, 1989; Byrne, Espino, & Santamaria, 1999; Cummins, 1995; Cummins, Lubart, Alksnis, & Rist, 1991; De Neys, Schaeken, & d'Ydewalle, 2003a, 2003b; Evans, Handley, & Bacon, 2009; Frosch & Johnson-Laird, 2011; Goldvarg & Johnson-Laird, 2001; Markovits & Quinn, 2002; Quinn & Markovits, 1998, 2002; Verschueren, Schaeken, & d'Ydewalle, 2005). Just as in these previous lines of research, reasoners' prior beliefs complicate the assessment of whether people honor an important rule of formal reasoning, in this case the Markov condition.

Addressing the Challenges in Testing CM

The preceding review not only reveals that there currently exists no empirical evidence that decisively demonstrates whether people honor CM in their causal inferences, it also illustrates the methodological difficulties involved in testing CM. In essence, demonstrating that CM is psychologically false involves proving a negative, namely, that there exist no additional knowledge structures that subjects might plausibly bring to bear that could account for the observed inferences. Nevertheless, I will argue that this condition is satisfied by the upcoming experiments. University undergraduates were taught three binary variables and two causal relations in the domains of economics, meteorology, and sociology. For example, the economic variables were interest rates (which they were told could be low or high), trade deficits (small or large), and retirement savings (low or high). The binary variables in each of the three domains are shown in Table 1. Subjects were provided with no information about the base rates of the variables (e.g., subjects in the domain of economics were told only that some economies have low interest rates and that some have high interest rates).
The causal relations specified how the sense of one variable caused another (e.g., low interest rates cause small trade deficits). The senses of the variables that were described as causally related were randomized over participants (e.g., some participants were told that low interest rates cause small trade deficits, others that low interest rates cause large trade deficits, still others that high interest rates cause small trade deficits, etc.). The causal relationships formed either a common

cause, chain, or common effect causal network and were accompanied by descriptions of the mechanisms by which one variable produces another. See Table 2 for examples of the causal mechanisms in the domain of economics. Subjects were then presented with pairs of concrete situations (e.g., two particular economies) and asked to judge, on the basis of the states of the other variables in the situations, in which Y (or X) was more likely to be present.

These materials address several of the issues that have made previous tests of CM inconclusive. First, the domains of economics, meteorology, and sociology are relatively technical domains about which university students have little prior knowledge, reducing the probability that they will elaborate their causal models with disablers (Figure 5A), shared mediators (Figure 5B), shared generative causes (Figure 5C), or any other types of knowledge structures. But even if they have such knowledge, its influence will be canceled out by randomizing over subjects which variable senses are described as causally related. For example, if subjects who are taught the common cause knowledge in Table 2 tend to believe that the two effects (small trade deficits and high retirement savings) cause one another (or that they share a hidden common cause, hidden enablers and disablers, etc.), then those subjects will appear to violate CM. But these results will be canceled out by those of other subjects who are taught that low interest rates cause large trade deficits instead of small ones. As an additional safeguard, an experiment below eliminates the potential use of prior knowledge altogether by use of blank materials (the variables are given the generic labels "A," "B," and "C").
Second, the description of each causal mechanism made it clear that those mechanisms are unrelated and thus independent (e.g., the two causal mechanisms in Table 2 indicate that the processes by which interest rates affect trade deficits and retirement savings are unrelated). These descriptions not only further rule out the shared mediator interpretation of common cause networks that has received attention (Cartwright, 1993; Hausman & Woodward, 1999; Mayrhofer et al., 2010; Salmon, 1984; Sober, 1987), they rule out the analogous interpretations of chain and common effect networks as well (Figure 5B). As yet another safeguard against these interpretations, experiments below will explicitly instruct participants that each causal mechanism operates independently. Finally, these materials address the possibility that apparent violations of CM could arise from

an abstract (domain general) assumption that all variables are related in the manner assumed by Rehder and Burnett (2005). Although they suggested that typical category features are causally related via underlying mechanisms (Figure 5C), the present materials are not categories, and so there is no basis for assuming that only certain dimension values (the typical ones) are causally related while others are not. For example, for the materials in Table 2, even if subjects had the general expectation that interest rates, trade deficits, and retirement savings are (somehow) causally related, there is no basis for them to think that those causal relations are limited to generative causes between only certain variable senses (in Table 2, between low interest rates, small trade deficits, and high retirement savings) and not others (e.g., between low interest rates and large trade deficits). There is also no reason to assume that some of those relations aren't inhibitory rather than generative. That is, the mere expectation that variables are causally related is not sufficient to explain any particular pattern of nonindependence.

Although these steps provide a first line of defense against the use of the alternative structures in Figure 5, it is still possible to conceive of reasons why subjects might elaborate their causal models in ways that are specific to the materials they are taught. For example, the processes involved in comprehending the causal relations are likely to include a search of memory for related knowledge. It is conceivable that this search is biased to turn up only additional generative causal relations between the variable senses, perhaps yielding one of the shared generative cause models in Figure 5C. It might also lead reasoners to think of processes that might mediate the causal relations, yielding one of the models in Figure 5B.
Once the test phase of the experiment begins, subjects may elaborate their models in response to the particular scenarios they are presented with, just as Walsh and Sloman's (2008) subjects apparently introduced shared disablers to account for cause-present/effect-absent situations (Figure 5A). These sorts of context-specific model elaborations might produce apparent violations of independence despite the randomization of the materials if different knowledge structures are retrieved in the different randomized conditions. Of course, versions of these accounts that assume that these elaborations consist of concrete pieces of domain knowledge will be addressed by the experiment that tests blank materials. Yet one might still wonder whether subjects assume the presence of such structures

without any concrete idea of what they might be. Accordingly, later I will present a theoretical analysis of each of the alternative models in Figure 5 to assess their potential as accounts of the causal inferences made in the following experiments.

Experiment 1

The purpose of Experiment 1 is to assess whether people's causal inferences honor the basic patterns of independence and dependence stipulated by CGMs. Each participant was taught the three causal networks in Figure 1, one each in the domains of economics, meteorology, and sociology. A forced-choice procedure was used in which participants were presented with a pair of situations and asked to choose which was more likely to possess a particular variable value. Participants could also select a third, "equally likely," response indicating that neither situation was more likely than the other to have that value. The choice problems were limited to those necessary to assess the key predictions of conditional independence and dependence made by the three causal networks in Figure 1: A vs. B, B vs. C, D vs. E, F vs. G, and G vs. H. In the common cause and chain conditions, subjects should choose D over E but choose the "equally likely" alternative otherwise (Figures 2 and 3). In the common effect condition they should prefer B over A and C over B but choose "equally likely" otherwise (Figure 4). These predictions are summarized in the left hand side of Figure 6.

Method

Materials. The three binary variables in the domains of economics, meteorology, and sociology are shown in Table 1. In each domain participants were taught two causal relationships forming either a common cause, chain, or common effect network. Each causal link was described as the sense of one variable (e.g., low interest rates) causing another (e.g., small trade deficits) and was accompanied by a short description of the mechanism responsible for the causal relationship (Table 2).
The senses of the variables that were described as causally related were randomized for each participant. The complete list of causal relationships used to construct the common cause, chain, and common effect networks in each domain is presented in Appendix A.

Design. Choice problem (A vs. B, B vs. C, D vs. E, F vs. G, and G vs. H) and causal network were manipulated as within-subject variables. In addition, there were two between-subject counterbalancing factors. First, the order in which the three causal networks were presented was either ceh, hce, or ehc (c = common cause, h = chain, e = common effect). Second, the order in which the three domains were presented was either mes, sme, or esm (m = meteorology, e = economics, s = sociology). As a result, each causal network was instantiated in each of the three domains, and was learned as the first, second, or third network, an equal number of times.

Participants. Sixty-three New York University undergraduates received course credit for participating in this experiment. They were assigned in equal numbers to the between-subject counterbalancing conditions.

Procedure. Experimental sessions were conducted by computer. For each domain, participants first studied several screens of information about the domain and then performed the inference test. The initial screens presented a cover story and a description of the domain's three variables and their two values. Subsequent screens presented the two causal relationships and their associated causal mechanisms. Participants also observed a diagram depicting the topology of the causal links (common cause, chain, or common effect). When ready, participants took a multiple-choice test on the knowledge they had just studied. While taking the test, participants were free to return to the information screens they had studied; however, doing so obligated them to retake the test. The only way to pass the test and proceed to subsequent phases was to complete it without error and without returning to the initial information screens for help. The feature inference phase then presented participants with the five types of choice problems.
The two examples were presented one above the other and participants were asked which was more likely to have a particular value for one of the unknown variables. For example, the list of variables for one economy might be "Low interest rates," "Small trade deficits," and "???" (indicating that the value of the third variable, retirement savings, was unknown), whereas those for the second economy might be "Low interest rates," "???," and "???"; participants would then be asked which economy was more likely to have high retirement savings. Possible responses were 1 for the first example, 2 for the second

example, and 3 for "equally likely." There were two versions of each of the five types of choice problems, one in which the participant was asked to choose which example was more likely to have Y (as shown in Figs. 2-4) and a corresponding version in which they were asked which was more likely to have X. To average over any bias for choosing the top or bottom example, each of these 10 problems was presented twice, with the order of the two examples reversed. The order of these 20 problems was randomized for each participant.

Results

To construct a single choice score that summarizes subjects' responses, choices in favor of the first alternative (e.g., A in A vs. B) were coded as 1, those in favor of the second (B) were coded as 0, and an "equally likely" response was coded as .5. Initial analyses revealed that choice scores were unaffected by either domain or the order in which the causal networks were presented. Accordingly, subjects' choices are presented in Table 3 and their choice scores are presented on the right hand side of Figure 6 collapsed over these factors. Figure 6 reveals that responses in the common cause and chain conditions were approximately equal and substantially different from those in the common effect condition. This observation was supported by statistical analysis. A 3 x 5 ANOVA with causal network and choice problem type as factors yielded an overall effect of choice problem type, F(4, 248) = 40.6, MSE = .041, p < .0001, and an interaction between problem type and network, F(8, 496) = 5.5, MSE = .025, p < .0001. However, whereas the interaction between problem type and the contrast between the common cause and chain networks combined versus the common effect network was significant (p < .0001), the interaction between the common cause and chain networks was not (p > .20). Accordingly, I discuss the common cause and chain conditions together and then the common effect condition.
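The choice-score coding just described can be stated compactly in code; the responses below are made up purely for illustration.

```python
def choice_score(responses):
    """Mean choice score for one problem type: a choice of the first
    alternative counts as 1, the second as 0, and "equally likely" as 0.5."""
    coding = {"first": 1.0, "second": 0.0, "equal": 0.5}
    return sum(coding[r] for r in responses) / len(responses)

# Four hypothetical responses to one problem type:
print(choice_score(["first", "equal", "first", "second"]))  # 0.625
```

Under this coding, a score of .5 corresponds to the normative "equally likely" answer on the independence problems, which is why deviations from .5 are the statistic of interest.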
Common cause and chain results. On one hand, the common cause and chain choice scores in Figures 6A and 6B exhibit some of the properties of normative causal reasoning specified by CGMs. When asked whether situation D or E was more likely to have the causally relevant value of Y (or X), most participants chose D (choice scores of .79 and .90 in the common cause and chain conditions,

respectively), consistent with the predictions of the normative model. Both of these scores were significantly different from .50, t(62) = 9.20 and 17.33, ps < .0001. This result indicates that in both conditions participants were willing to engage in indirect inferences, that is, from X to Y or from Y to X when the state of Z was unknown. Unfortunately, participants violated CM on the remaining problems. Recall that when the state of Z is known, the state of X (Y) should have no influence on whether Y (X) is present. In fact, the average choice scores on these problems (A vs. B, B vs. C, F vs. G, and G vs. H) were .57 and .62 in the common cause and chain conditions, respectively, t(62) = 4.57 and 7.12, ps < .0001. That is, in subjects' minds, the presence of one variable made the presence of the other more likely even when those variables were supposedly screened off from one another. Nevertheless, the fact that these scores were lower than those for the D vs. E problem indicates that subjects exhibited some sensitivity to the difference between independent and dependent problems, t(62) = 6.99 and 11.45 in the common cause and chain conditions, respectively, ps < .0001.

Common effect results. Recall that an important property of common effect networks is discounting, in which the presence of one cause of an effect makes the presence of another less likely. Discounting implies that B should be preferred in the A vs. B choice problem and that C should be preferred in the B vs. C problem. Figure 6C shows that subjects instead exhibited the opposite pattern (preferring A in the first problem and B in the second, choice scores of .56 and .58, respectively). These scores were significantly greater than .50, t(62) = 4.07, p < .001. On the independent problems (D vs. E, F vs. G, and G vs. H), the average choice score (.57) was also significantly greater than .50, t(62) = 4.22, p < .001. Only the score for the F vs.
G problem (.52) was not significantly greater than .50.

Individual differences. It is important to assess whether Experiment 1's group results were manifested consistently by all participants or instead arose only as a result of averaging over individuals with different response profiles. In fact, cluster analyses revealed two subgroups of participants with qualitatively different responses. The responses of one cluster of 18 participants, shown in the left side of Figure 7, were virtually identical for all three causal networks. That is, 29% of the participants

(labeled "associative reasoners" for reasons discussed below) showed no sensitivity to causal direction and usually chose the alternative in which more causally related variables were present. The other cluster of 45 participants, labeled "causal reasoners" in the right side of Figure 7, instead demonstrated a sensitivity to causal direction by generating different responses in the common effect condition as compared to the common cause and chain conditions. They also committed many fewer violations of CM: These individuals chose the correct "equally likely" response alternative on 78% of independent choice problems, as compared to 41% for the associative reasoners. Nevertheless, when they didn't respond correctly, even these individuals were more likely to choose the alternative in which more causally related variables were present. As a result, their choice scores continued to be significantly above chance on a number of independent problems (e.g., B vs. C in the common cause and chain conditions and D vs. E in the common effect condition).

Discussion

The results from Experiment 1 paint a mixed picture. On one hand, when reasoning with common cause or chain networks, participants correctly inferred that the states of X and Y covaried when the state of Z was unknown. But participants also committed numerous violations of CM in which the subjective probability of one variable was influenced by the state of another that was supposedly conditionally independent. And, rather than discounting when reasoning with a common effect structure, they were more likely to predict the presence of a cause when another cause was already present. As reviewed above, previous explanations of independence violations have assumed that reasoners make use of causal knowledge in addition to that supplied by the experimenter.
However, in Experiment 1 the use of specific prior knowledge was minimized by use of unfamiliar domains and by randomizing which variable senses were described as causally related. Abstract, domain general expectations that variables are (somehow) causally related also fail to explain the results because subjects had no reason to think that variables were related via generative but not also inhibitory causal relations. Additional tests of the role of prior knowledge will be presented starting in Experiment 3.
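For concreteness, the normative patterns that subjects' choices violated can be derived by enumeration over a parameterized CGM. The sketch below uses arbitrary illustrative parameters (a .5 base rate, .8 causal strength, and a .1 background rate under a noisy-OR parameterization); it is not a model of the data, only the CGM arithmetic: in a chain, Z screens X off from Y, whereas in a common effect network a present cause discounts the other.

```python
from itertools import product

# Illustrative parameters only (not estimated from the experiment):
BASE, STRENGTH, BACKGROUND = 0.5, 0.8, 0.1   # base rate, causal power, background

def link(cause):
    """Noisy-OR probability of an effect given one generative cause."""
    return 1 - (1 - BACKGROUND) * (1 - STRENGTH * cause)

def chain_joint(x, z, y):
    """Chain network X -> Z -> Y."""
    px = BASE if x else 1 - BASE
    pz = link(x) if z else 1 - link(x)
    py = link(z) if y else 1 - link(z)
    return px * pz * py

def common_effect_joint(x, z, y):
    """Common effect network X -> Z <- Y with independent noisy-OR causes."""
    px = BASE if x else 1 - BASE
    py = BASE if y else 1 - BASE
    pz_on = 1 - (1 - BACKGROUND) * (1 - STRENGTH * x) * (1 - STRENGTH * y)
    pz = pz_on if z else 1 - pz_on
    return px * pz * py

def p_y(joint, x=None, z=None):
    """P(Y = 1 | X = x, Z = z) by enumeration (None = unobserved)."""
    num = den = 0.0
    for xx, zz, yy in product((0, 1), repeat=3):
        if x is not None and xx != x:
            continue
        if z is not None and zz != z:
            continue
        p = joint(xx, zz, yy)
        den += p
        num += p * yy
    return num / den

# Chain: with Z unknown, X raises the probability of Y (indirect inference) ...
print(p_y(chain_joint, x=1) > p_y(chain_joint, x=0))                         # True
# ... but once Z is known, X is irrelevant to Y (the Markov condition):
print(abs(p_y(chain_joint, z=1, x=1) - p_y(chain_joint, z=1, x=0)) < 1e-12)  # True
# Common effect: given Z, a present X makes Y *less* likely (discounting):
print(p_y(common_effect_joint, z=1, x=1) < p_y(common_effect_joint, z=1, x=0))  # True
```

Subjects' modal error amounts to applying the first comparison where the second should hold, that is, treating X as evidence for Y even when Z is known.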


More information

EXPERIMENTAL RESEARCH DESIGNS

EXPERIMENTAL RESEARCH DESIGNS ARTHUR PSYC 204 (EXPERIMENTAL PSYCHOLOGY) 14A LECTURE NOTES [02/28/14] EXPERIMENTAL RESEARCH DESIGNS PAGE 1 Topic #5 EXPERIMENTAL RESEARCH DESIGNS As a strict technical definition, an experiment is a study

More information

Bayes and blickets: Effects of knowledge on causal induction in children and adults. Thomas L. Griffiths. University of California, Berkeley

Bayes and blickets: Effects of knowledge on causal induction in children and adults. Thomas L. Griffiths. University of California, Berkeley Running Head: BAYES AND BLICKETS Bayes and blickets: Effects of knowledge on causal induction in children and adults Thomas L. Griffiths University of California, Berkeley David M. Sobel Brown University

More information

Do I know that you know what you know? Modeling testimony in causal inference

Do I know that you know what you know? Modeling testimony in causal inference Do I know that you know what you know? Modeling testimony in causal inference Daphna Buchsbaum, Sophie Bridgers, Andrew Whalen Elizabeth Seiver, Thomas L. Griffiths, Alison Gopnik {daphnab, sbridgers,

More information

Structure mapping in spatial reasoning

Structure mapping in spatial reasoning Cognitive Development 17 (2002) 1157 1183 Structure mapping in spatial reasoning Merideth Gattis Max Planck Institute for Psychological Research, Munich, Germany Received 1 June 2001; received in revised

More information

Observational Category Learning as a Path to More Robust Generative Knowledge

Observational Category Learning as a Path to More Robust Generative Knowledge Observational Category Learning as a Path to More Robust Generative Knowledge Kimery R. Levering (kleveri1@binghamton.edu) Kenneth J. Kurtz (kkurtz@binghamton.edu) Department of Psychology, Binghamton

More information

MBios 478: Systems Biology and Bayesian Networks, 27 [Dr. Wyrick] Slide #1. Lecture 27: Systems Biology and Bayesian Networks

MBios 478: Systems Biology and Bayesian Networks, 27 [Dr. Wyrick] Slide #1. Lecture 27: Systems Biology and Bayesian Networks MBios 478: Systems Biology and Bayesian Networks, 27 [Dr. Wyrick] Slide #1 Lecture 27: Systems Biology and Bayesian Networks Systems Biology and Regulatory Networks o Definitions o Network motifs o Examples

More information

Why Does Similarity Correlate With Inductive Strength?

Why Does Similarity Correlate With Inductive Strength? Why Does Similarity Correlate With Inductive Strength? Uri Hasson (uhasson@princeton.edu) Psychology Department, Princeton University Princeton, NJ 08540 USA Geoffrey P. Goodwin (ggoodwin@princeton.edu)

More information

Functionalist theories of content

Functionalist theories of content Functionalist theories of content PHIL 93507 April 22, 2012 Let s assume that there is a certain stable dependence relation between the physical internal states of subjects and the phenomenal characters

More information

Wason's Cards: What is Wrong?

Wason's Cards: What is Wrong? Wason's Cards: What is Wrong? Pei Wang Computer and Information Sciences, Temple University This paper proposes a new interpretation

More information

MS&E 226: Small Data

MS&E 226: Small Data MS&E 226: Small Data Lecture 10: Introduction to inference (v2) Ramesh Johari ramesh.johari@stanford.edu 1 / 17 What is inference? 2 / 17 Where did our data come from? Recall our sample is: Y, the vector

More information

Relations between premise similarity and inductive strength

Relations between premise similarity and inductive strength Psychonomic Bulletin & Review 2005, 12 (2), 340-344 Relations between premise similarity and inductive strength EVAN HEIT University of Warwick, Coventry, England and AIDAN FEENEY University of Durham,

More information

with relevance to recent developments in computational theories of human learning. According

with relevance to recent developments in computational theories of human learning. According Causality and Imagination Caren M. Walker & Alison Gopnik University of California, Berkeley Abstract This review describes the relation between the imagination and causal cognition, particularly with

More information

OCW Epidemiology and Biostatistics, 2010 David Tybor, MS, MPH and Kenneth Chui, PhD Tufts University School of Medicine October 27, 2010

OCW Epidemiology and Biostatistics, 2010 David Tybor, MS, MPH and Kenneth Chui, PhD Tufts University School of Medicine October 27, 2010 OCW Epidemiology and Biostatistics, 2010 David Tybor, MS, MPH and Kenneth Chui, PhD Tufts University School of Medicine October 27, 2010 SAMPLING AND CONFIDENCE INTERVALS Learning objectives for this session:

More information

Expectations and Interpretations During Causal Learning

Expectations and Interpretations During Causal Learning Journal of Experimental Psychology: Learning, Memory, and Cognition 2011, Vol. 37, No. 3, 568 587 2011 American Psychological Association 0278-7393/11/$12.00 DOI: 10.1037/a0022970 Expectations and Interpretations

More information

VIRGINIA LAW REVIEW IN BRIEF

VIRGINIA LAW REVIEW IN BRIEF VIRGINIA LAW REVIEW IN BRIEF VOLUME 96 JUNE 15, 2010 PAGES 35 39 REPLY GOOD INTENTIONS MATTER Katharine T. Bartlett * W HILE writing the article to which Professors Mitchell and Bielby have published responses,

More information

Working Memory Span and Everyday Conditional Reasoning: A Trend Analysis

Working Memory Span and Everyday Conditional Reasoning: A Trend Analysis Working Memory Span and Everyday Conditional Reasoning: A Trend Analysis Wim De Neys (Wim.Deneys@psy.kuleuven.ac.be) Walter Schaeken (Walter.Schaeken@psy.kuleuven.ac.be) Géry d Ydewalle (Géry.dYdewalle@psy.kuleuven.ac.be)

More information

Studying the effect of change on change : a different viewpoint

Studying the effect of change on change : a different viewpoint Studying the effect of change on change : a different viewpoint Eyal Shahar Professor, Division of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona

More information

Interpreting Instructional Cues in Task Switching Procedures: The Role of Mediator Retrieval

Interpreting Instructional Cues in Task Switching Procedures: The Role of Mediator Retrieval Journal of Experimental Psychology: Learning, Memory, and Cognition 2006, Vol. 32, No. 3, 347 363 Copyright 2006 by the American Psychological Association 0278-7393/06/$12.00 DOI: 10.1037/0278-7393.32.3.347

More information

Causal learning in an imperfect world

Causal learning in an imperfect world Causal learning in an imperfect world Reliability, background noise and memory David Lagnado Neil Bramley Contents 1. Introduction to causal learning Causal models Learning a causal model Interventions

More information

Goodness of Pattern and Pattern Uncertainty 1

Goodness of Pattern and Pattern Uncertainty 1 J'OURNAL OF VERBAL LEARNING AND VERBAL BEHAVIOR 2, 446-452 (1963) Goodness of Pattern and Pattern Uncertainty 1 A visual configuration, or pattern, has qualities over and above those which can be specified

More information

Introduction to Behavioral Economics Like the subject matter of behavioral economics, this course is divided into two parts:

Introduction to Behavioral Economics Like the subject matter of behavioral economics, this course is divided into two parts: Economics 142: Behavioral Economics Spring 2008 Vincent Crawford (with very large debts to Colin Camerer of Caltech, David Laibson of Harvard, and especially Botond Koszegi and Matthew Rabin of UC Berkeley)

More information

The Role of Causal Models in Analogical Inference

The Role of Causal Models in Analogical Inference Journal of Experimental Psychology: Learning, Memory, and Cognition 2008, Vol. 34, No. 5, 1111 1122 Copyright 2008 by the American Psychological Association 0278-7393/08/$12.00 DOI: 10.1037/a0012581 The

More information

The Role of Causal Models in Reasoning Under Uncertainty

The Role of Causal Models in Reasoning Under Uncertainty The Role of Causal Models in Reasoning Under Uncertainty Tevye R. Krynski (tevye@mit.edu) Joshua B. Tenenbaum (jbt@mit.edu) Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology

More information

Gold and Hohwy, Rationality and Schizophrenic Delusion

Gold and Hohwy, Rationality and Schizophrenic Delusion PHIL 5983: Irrational Belief Seminar Prof. Funkhouser 2/6/13 Gold and Hohwy, Rationality and Schizophrenic Delusion There are two plausible departments of rationality: procedural and content. Procedural

More information

Implicit Information in Directionality of Verbal Probability Expressions

Implicit Information in Directionality of Verbal Probability Expressions Implicit Information in Directionality of Verbal Probability Expressions Hidehito Honda (hito@ky.hum.titech.ac.jp) Kimihiko Yamagishi (kimihiko@ky.hum.titech.ac.jp) Graduate School of Decision Science

More information

Data and Statistics 101: Key Concepts in the Collection, Analysis, and Application of Child Welfare Data

Data and Statistics 101: Key Concepts in the Collection, Analysis, and Application of Child Welfare Data TECHNICAL REPORT Data and Statistics 101: Key Concepts in the Collection, Analysis, and Application of Child Welfare Data CONTENTS Executive Summary...1 Introduction...2 Overview of Data Analysis Concepts...2

More information

On the diversity principle and local falsifiability

On the diversity principle and local falsifiability On the diversity principle and local falsifiability Uriel Feige October 22, 2012 1 Introduction This manuscript concerns the methodology of evaluating one particular aspect of TCS (theoretical computer

More information

Content Effects in Conditional Reasoning: Evaluating the Container Schema

Content Effects in Conditional Reasoning: Evaluating the Container Schema Effects in Conditional Reasoning: Evaluating the Container Schema Amber N. Bloomfield (a-bloomfield@northwestern.edu) Department of Psychology, 2029 Sheridan Road Evanston, IL 60208 USA Lance J. Rips (rips@northwestern.edu)

More information

Necessity, possibility and belief: A study of syllogistic reasoning

Necessity, possibility and belief: A study of syllogistic reasoning THE QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 2001, 54A (3), 935 958 Necessity, possibility and belief: A study of syllogistic reasoning Jonathan St. B.T. Evans, Simon J. Handley, and Catherine N.J.

More information

Misleading Postevent Information and the Memory Impairment Hypothesis: Comment on Belli and Reply to Tversky and Tuchin

Misleading Postevent Information and the Memory Impairment Hypothesis: Comment on Belli and Reply to Tversky and Tuchin Journal of Experimental Psychology: General 1989, Vol. 118, No. 1,92-99 Copyright 1989 by the American Psychological Association, Im 0096-3445/89/S00.7 Misleading Postevent Information and the Memory Impairment

More information

Causal Determinism and Preschoolers Causal Inferences

Causal Determinism and Preschoolers Causal Inferences Causal Determinism and Preschoolers Causal Inferences Laura E. Schulz (lschulz@mit.edu) Department of Brain and Cognitive Sciences, MIT, 77 Massachusetts Avenue Cambridge, MA 02139, USA Jessica Sommerville

More information

Children s causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers

Children s causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers Cognitive Science 28 (2004) 303 333 Children s causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers David M. Sobel a,, Joshua B. Tenenbaum b, Alison Gopnik

More information

Validity and Quantitative Research. What is Validity? What is Validity Cont. RCS /16/04

Validity and Quantitative Research. What is Validity? What is Validity Cont. RCS /16/04 Validity and Quantitative Research RCS 6740 6/16/04 What is Validity? Valid Definition (Dictionary.com): Well grounded; just: a valid objection. Producing the desired results; efficacious: valid methods.

More information

A Computational Model of Counterfactual Thinking: The Temporal Order Effect

A Computational Model of Counterfactual Thinking: The Temporal Order Effect A Computational Model of Counterfactual Thinking: The Temporal Order Effect Clare R. Walsh (cwalsh@tcd.ie) Psychology Department, University of Dublin, Trinity College, Dublin 2, Ireland Ruth M.J. Byrne

More information

VALIDITY OF QUANTITATIVE RESEARCH

VALIDITY OF QUANTITATIVE RESEARCH Validity 1 VALIDITY OF QUANTITATIVE RESEARCH Recall the basic aim of science is to explain natural phenomena. Such explanations are called theories (Kerlinger, 1986, p. 8). Theories have varying degrees

More information

Color naming and color matching: A reply to Kuehni and Hardin

Color naming and color matching: A reply to Kuehni and Hardin 1 Color naming and color matching: A reply to Kuehni and Hardin Pendaran Roberts & Kelly Schmidtke Forthcoming in Review of Philosophy and Psychology. The final publication is available at Springer via

More information

Different developmental patterns of simple deductive and probabilistic inferential reasoning

Different developmental patterns of simple deductive and probabilistic inferential reasoning Memory & Cognition 2008, 36 (6), 1066-1078 doi: 10.3758/MC.36.6.1066 Different developmental patterns of simple deductive and probabilistic inferential reasoning Henry Markovits Université du Québec à

More information

Lecture 4: Research Approaches

Lecture 4: Research Approaches Lecture 4: Research Approaches Lecture Objectives Theories in research Research design approaches ú Experimental vs. non-experimental ú Cross-sectional and longitudinal ú Descriptive approaches How to

More information

Category Size and Category-Based Induction

Category Size and Category-Based Induction Category Size and Category-Based Induction Aidan Feeney & David R. Gardiner Department of Psychology University of Durham, Stockton Campus University Boulevard Stockton-on-Tees, TS17 6BH United Kingdom

More information

PLS 506 Mark T. Imperial, Ph.D. Lecture Notes: Reliability & Validity

PLS 506 Mark T. Imperial, Ph.D. Lecture Notes: Reliability & Validity PLS 506 Mark T. Imperial, Ph.D. Lecture Notes: Reliability & Validity Measurement & Variables - Initial step is to conceptualize and clarify the concepts embedded in a hypothesis or research question with

More information

A Cue Imputation Bayesian Model of Information Aggregation

A Cue Imputation Bayesian Model of Information Aggregation A Cue Imputation Bayesian Model of Information Aggregation Jennifer S. Trueblood, George Kachergis, and John K. Kruschke {jstruebl, gkacherg, kruschke}@indiana.edu Cognitive Science Program, 819 Eigenmann,

More information

Convergence Principles: Information in the Answer

Convergence Principles: Information in the Answer Convergence Principles: Information in the Answer Sets of Some Multiple-Choice Intelligence Tests A. P. White and J. E. Zammarelli University of Durham It is hypothesized that some common multiplechoice

More information

Audio: In this lecture we are going to address psychology as a science. Slide #2

Audio: In this lecture we are going to address psychology as a science. Slide #2 Psychology 312: Lecture 2 Psychology as a Science Slide #1 Psychology As A Science In this lecture we are going to address psychology as a science. Slide #2 Outline Psychology is an empirical science.

More information

Chapter 11. Experimental Design: One-Way Independent Samples Design

Chapter 11. Experimental Design: One-Way Independent Samples Design 11-1 Chapter 11. Experimental Design: One-Way Independent Samples Design Advantages and Limitations Comparing Two Groups Comparing t Test to ANOVA Independent Samples t Test Independent Samples ANOVA Comparing

More information

Artificial Intelligence Programming Probability

Artificial Intelligence Programming Probability Artificial Intelligence Programming Probability Chris Brooks Department of Computer Science University of San Francisco Department of Computer Science University of San Francisco p.1/25 17-0: Uncertainty

More information

The Role of Causality in Judgment Under Uncertainty. Tevye R. Krynski & Joshua B. Tenenbaum

The Role of Causality in Judgment Under Uncertainty. Tevye R. Krynski & Joshua B. Tenenbaum Causality in Judgment 1 Running head: CAUSALITY IN JUDGMENT The Role of Causality in Judgment Under Uncertainty Tevye R. Krynski & Joshua B. Tenenbaum Department of Brain & Cognitive Sciences, Massachusetts

More information

IN PRESS: MEMORY AND COGNITION

IN PRESS: MEMORY AND COGNITION IN PRESS: MEMORY AND COGNITION ------------------------------------------------------------------------ The texture of causal construals: Domain specific biases shape causal inference from discourse ------------------------------------------------------------------------

More information

Perception LECTURE FOUR MICHAELMAS Dr Maarten Steenhagen

Perception LECTURE FOUR MICHAELMAS Dr Maarten Steenhagen Perception LECTURE FOUR MICHAELMAS 2017 Dr Maarten Steenhagen ms2416@cam.ac.uk Last week Lecture 1: Naive Realism Lecture 2: The Argument from Hallucination Lecture 3: Representationalism Lecture 4: Disjunctivism

More information

Lecture 2: Learning and Equilibrium Extensive-Form Games

Lecture 2: Learning and Equilibrium Extensive-Form Games Lecture 2: Learning and Equilibrium Extensive-Form Games III. Nash Equilibrium in Extensive Form Games IV. Self-Confirming Equilibrium and Passive Learning V. Learning Off-path Play D. Fudenberg Marshall

More information

In this chapter we discuss validity issues for quantitative research and for qualitative research.

In this chapter we discuss validity issues for quantitative research and for qualitative research. Chapter 8 Validity of Research Results (Reminder: Don t forget to utilize the concept maps and study questions as you study this and the other chapters.) In this chapter we discuss validity issues for

More information

You can use this app to build a causal Bayesian network and experiment with inferences. We hope you ll find it interesting and helpful.

You can use this app to build a causal Bayesian network and experiment with inferences. We hope you ll find it interesting and helpful. icausalbayes USER MANUAL INTRODUCTION You can use this app to build a causal Bayesian network and experiment with inferences. We hope you ll find it interesting and helpful. We expect most of our users

More information

Time as a Guide to Cause

Time as a Guide to Cause Journal of Experimental Psychology: Learning, Memory, and Cognition 2006, Vol. 32, No. 3, 451 460 Copyright 2006 by the American Psychological Association 0278-7393/06/$12.00 DOI: 10.1037/0278-7393.32.3.451

More information

Christopher Guy Lucas. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy. Psychology.

Christopher Guy Lucas. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy. Psychology. Acquired Abstract knowledge in causal induction: a hierarchical Bayesian approach by Christopher Guy Lucas A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor

More information

Lassaline (1996) applied structural alignment directly to the issue of category-based inference. She demonstrated that adding a causal relation to an

Lassaline (1996) applied structural alignment directly to the issue of category-based inference. She demonstrated that adding a causal relation to an Wu, M., & Gentner, D. (1998, August). Structure in category-based induction. Proceedings of the Twentieth Annual Conference of the Cognitive Science Society, 1154-115$, Structure in Category-Based Induction

More information

Is it possible to give a philosophical definition of sexual desire?

Is it possible to give a philosophical definition of sexual desire? Issue 1 Spring 2016 Undergraduate Journal of Philosophy Is it possible to give a philosophical definition of sexual desire? William Morgan - The University of Sheffield pp. 47-58 For details of submission

More information

Journal of Experimental Psychology: Learning, Memory, and Cognition

Journal of Experimental Psychology: Learning, Memory, and Cognition Journal of Experimental Psychology: Learning, Memory, and Cognition Repeated Causal Decision Making York Hagmayer and Björn Meder Online First Publication, June 11, 2012. doi: 10.1037/a0028643 CITATION

More information

On A Distinction Between Access and Phenomenal Consciousness

On A Distinction Between Access and Phenomenal Consciousness On A Distinction Between Access and Phenomenal Consciousness By BRENT SILBY Department of Philosophy University of Canterbury New Zealand Copyright (c) Brent Silby 1998 www.def-logic.com/articles In his

More information

Non-categorical approaches to property induction with uncertain categories

Non-categorical approaches to property induction with uncertain categories Non-categorical approaches to property induction with uncertain categories Christopher Papadopoulos (Cpapadopoulos@psy.unsw.edu.au) Brett K. Hayes (B.Hayes@unsw.edu.au) Ben R. Newell (Ben.Newell@unsw.edu.au)

More information

A conversation with Professor David Chalmers, May 20, 2016 Participants

A conversation with Professor David Chalmers, May 20, 2016 Participants A conversation with Professor David Chalmers, May 20, 2016 Participants Professor David Chalmers Professor of Philosophy, New York University (NYU) Luke Muehlhauser Research Analyst, Open Philanthropy

More information

Is inferential reasoning just probabilistic reasoning in disguise?

Is inferential reasoning just probabilistic reasoning in disguise? Memory & Cognition 2005, 33 (7), 1315-1323 Is inferential reasoning just probabilistic reasoning in disguise? HENRY MARKOVITS and SIMON HANDLEY University of Plymouth, Plymouth, England Oaksford, Chater,

More information