Structural Approach to Bias in Meta-analyses


Original Article

Received 26 July 2011; Revised 22 November 2011; Accepted 12 December 2011. Published online 2 February 2012 in Wiley Online Library (wileyonlinelibrary.com). DOI: 10.1002/jrsm.52

Structural Approach to Bias in Meta-analyses

Ian Shrier*

Centre for Clinical Epidemiology and Community Studies, Jewish General Hospital, Lady Davis Institute for Medical Research, McGill University, Montreal, Canada

*Correspondence to: Ian Shrier, MD, PhD, Centre for Clinical Epidemiology and Community Studies, Jewish General Hospital, 3755 Cote Ste-Catherine Rd, Montreal, QC H3T 1E2, Canada. E-mail: ian.shrier@mcgill.ca

Methods to calculate bias-adjusted estimates for meta-analyses are becoming more popular. The objective of this paper is to use the structural approach to bias and causal diagrams to show that (i) the current use of bias-adjusted estimating tools may sometimes introduce bias rather than reduce it and (ii) the Cochrane collaboration risk of bias tool, which was designed for randomized studies, is also applicable to non-randomized studies with only minimal changes. Causal diagrams are used to illustrate each of the items in the current risk of bias tool and how they apply to both randomized and non-randomized studies. With the exception of confounding by indication, the structure of all potential biases present in non-randomized studies may also be present in randomized studies. In addition, causal diagrams demonstrate important limitations to the methods currently being developed to provide bias-adjusted estimates of individual studies in meta-analyses. Finally, causal diagrams can be helpful in deciding when it is appropriate to combine studies in a meta-analysis of non-randomized studies even though the studies may use different adjustment sets. Copyright 2012 John Wiley & Sons, Ltd.

Keywords: bias; meta-analyses; diagram

Introduction

The fundamental objective of evidence-based medicine (EBM) is to enable clinicians and policy makers to make causal inferences about exposures in order to develop or prescribe effective health interventions. In general, randomized trials are stronger than non-randomized studies in establishing causation [1,2]. In randomized studies, the treatment and control groups are expected to have equal distributions of all prognostic factors except for the exposure of interest; therefore, any difference in outcome is likely due to the treatment. In non-randomized studies, there may be systematic differences between the treatment and control groups, and these differences may be just as likely as the treatment to explain the difference in outcome. That said, both randomized and non-randomized studies can have flaws in methodology and analysis that result in the calculated treatment effect being systematically different from the true treatment effect [1-5]; that is, the study may provide a biased estimate of the true treatment effect. Because EBM requires the synthesis of evidence from different studies, researchers conducting meta-analyses must evaluate biases within each individual study, as well as biases created during the process of evidence synthesis.

Several groups have been working on advanced methods that quantify the direction and magnitude of biases in order to provide bias-adjusted estimates for meta-analyses [3,4,6-8]. The ultimate objective of these methods is to improve the estimated causal effect for meta-analyses limited to randomized studies and meta-analyses limited to non-randomized studies. In addition, because the source of the bias is not important once corrected for, bias-adjusted estimates would also allow for meta-analyses that combine randomized and non-randomized studies together. There are two general approaches to calculate bias-adjusted estimates.
First, reviewers may provide estimates of the magnitude and direction (along with uncertainty) of each type of possible bias in each particular study on a study-by-study basis. The effects of the different biases are then combined mathematically for each reviewer, and the overall estimate is obtained by combining the results across reviewers [3,4]. Currently, these estimates are obtained by expert opinion. An alternative proposed method would use results from meta-epidemiological studies to estimate the usual bias associated with each type of potential bias (e.g., allocation concealment) and apply this usual bias to each of the studies in the meta-analysis [9].

In order for bias-adjusted estimate methods to provide valid estimates, one must be able to list all the potential biases and understand how they interact. For randomized studies, the list is fairly limited. Several bias lists for randomized studies are available and contain similar items; this article focuses on the Cochrane collaboration's qualitative risk of bias tool (Table 1) [5,10], because the Cochrane collaboration produces a large number of meta-analyses and now requires the risk of bias tool in all future meta-analyses. Although there is general agreement on the lists of bias in randomized studies, the issue appears more complex in non-randomized studies. For example, Chavalarias and Ioannidis [11] found 235 bias terms in the literature. Although some suggestions for a framework have been made [3,12], these lists do not cover the full range of traditional epidemiological biases and do not account for interaction between biases. For bias-adjusted estimates to be practical in a meta-analysis, one needs to develop an approach that addresses both the multitude of biases and the interaction of biases.

Table 1. The Cochrane risk of bias tool [10].

Domain: Sequence generation
Description: Describe the method used to generate the allocation sequence in sufficient detail to allow an assessment of whether it should produce comparable groups.
Review authors' judgement: Was the allocation sequence adequately generated?

Domain: Allocation concealment
Description: Describe the method used to conceal the allocation sequence in sufficient detail to determine whether intervention allocations could have been foreseen in advance of, or during, enrolment.
Review authors' judgement: Was allocation adequately concealed?

Domain: Blinding of participants, personnel, and outcome assessors (assessments should be made for each main outcome, or class of outcomes)
Description: Describe all measures used, if any, to blind study participants and personnel from knowledge of which intervention a participant received. Provide any information relating to whether the intended blinding was effective.
Review authors' judgement: Was knowledge of the allocated intervention adequately prevented during the study?

Domain: Incomplete outcome data (assessments should be made for each main outcome, or class of outcomes)
Description: Describe the completeness of outcome data for each main outcome, including attrition and exclusions from the analysis. State whether attrition and exclusions were reported, the numbers in each intervention group (compared with total randomized participants), reasons for attrition/exclusions where reported, and any re-inclusions in analyses performed by the review authors.
Review authors' judgement: Were incomplete outcome data adequately addressed?

Domain: Selective outcome reporting
Description: State how the possibility of selective outcome reporting was examined by the review authors and what was found.
Review authors' judgement: Are reports of the study free of suggestion of selective outcome reporting?

Domain: Other sources of bias
Description: State any important concerns about bias not addressed in the other domains in the tool. If particular questions/entries were pre-specified in the review's protocol, responses should be provided for each question/entry.
Review authors' judgement: Was the study apparently free of other problems that could put it at a high risk of bias?

One promising alternative for assessing bias is to use causal diagrams and the structural approach to bias [13-15]. For those who are unfamiliar with causal diagrams, the diagrams encode causal relationships between variables. A unidirectional arrow from X to Y means that variable X causes variable Y; the absence of an arrow means X does not cause Y. If X and Y are both caused by Z, then Z is said to be a common cause (or "ancestor") of X and Y. If Z and L both cause X, then X is said to be a common effect (or "collider") of Z and L (and a descendent of both Z and L). Causal diagrams need to include all variables that are common causes of any two variables in the diagram but do not need to include variables that are a cause of the exposure alone or a cause of the outcome alone [13]. When using causal diagrams to choose among different regression models, each variable included in a regression model would also need to be included in the diagram.

Using causal diagrams, each traditional epidemiological bias can be grouped into one of four categories [13,16-18]. These biases are briefly explained below and fully explained in Appendix 1.

1. Confounding bias. This may occur when there is a common cause of exposure and outcome and one has not blocked the bias through conditioning on a sufficient set of covariates [13].

2. Collider-stratification bias (also known as selection bias). This may occur when one conditions on a common effect of two different causes. This is especially important because it means that including a covariate in a model can create bias rather than minimize it even if the covariate (i) is associated with exposure, (ii) is associated with outcome, (iii) changes the effect estimate when included in the model, and (iv) is not affected by exposure. Thus, standard epidemiological rules for deciding which variables to include in a model to reduce bias are not sufficient, and one must understand the causal relationships to know whether including a covariate is likely to reduce or increase bias [13].

3. Measurement error bias. This may occur with measurement error [17,18], where measurement error means that the measured value of a variable is not reflective of its true value (e.g., misclassified exposure or outcome) and may be due to either random or systematic error.

4. Over-adjustment bias. This may occur when one conditions on a variable that lies within the causal pathway, or on a marker for a variable within the causal pathway [16,19].

Causal diagrams have been used to explain biases within original research studies related to randomized controlled trials (e.g., loss to follow-up), cohort studies (e.g., possible common causes of exposure and outcome), and case-control studies (e.g., Berkson's bias) [13]. Once a causal diagram is drawn, it is possible to follow some simple rules to determine which covariates should be included in the analysis in order to obtain an unbiased estimate [20,21], assuming the causal diagram drawn is correct. In addition, several software programs have been developed to help identify which covariate sets minimize bias and which covariate sets introduce bias [22,23].

One important limitation of using causal diagrams is that one never knows whether one has drawn the correct causal diagram. That said, this actually represents a strength of the causal diagram approach, because it requires authors to make their underlying assumptions transparent, which is an essential component of any systematic review or meta-analysis. This is explained in greater depth later. In brief, we have previously suggested that where several causal diagrams are plausible, authors should display each of the causal diagrams and associated analyses [20]. Whether or not one chooses to draw the causal diagrams, bias will be minimized or not depending on the true causal diagram. If an investigator chooses not to draw out the plausible causal diagrams and still applies an analytical strategy (including propensity scores), it simply means that the investigator is deciding that one particular causal diagram is the most appropriate (i.e., the one in which his/her analytical strategy will minimize bias) without making the underlying assumptions and reasoning available to the reader.
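Of the four categories, collider-stratification bias is the least intuitive. A minimal simulation (an illustrative Python sketch, not from the original article) shows the structure: two causes that are marginally independent become associated once one conditions on their common effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
z = rng.normal(size=n)          # cause 1 (e.g., exposure)
l = rng.normal(size=n)          # cause 2 (e.g., an unrelated factor)
x = z + l + rng.normal(size=n)  # common effect of z and l (a collider)

print(np.corrcoef(z, l)[0, 1])        # ~0: z and l are marginally independent
s = x > 1.0                           # condition on (select by) the collider
print(np.corrcoef(z[s], l[s])[0, 1])  # clearly negative: association created by conditioning
```

Neither cause affects the other, yet within strata of the collider they are associated; this is why the traditional covariate-selection rules listed in item 2 above are not sufficient on their own.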
Readers should understand that causal diagrams do not affect the validity of any randomized or non-randomized study. Unknown common causes of exposure and outcome can never be ruled out completely in a non-randomized study, whereas the randomization algorithm is the only possible cause of exposure in a well-conducted randomized trial. In addition, when there is unmeasured confounding in observational studies, including instrumental variables (or indeed any variable that is more strongly associated with exposure than with the outcome, i.e., even some true confounders) in an analytical model may actually amplify the bias in the results compared with not including these variables [24]. The objective of causal diagrams is to help investigators choose analytical strategies that will minimize bias in both randomized and non-randomized studies and to make the potential biases in each study design apparent. Readers interested in learning more about causal diagrams are referred elsewhere for more extensive background [13-19,21,25].

Given that bias-adjusted methods for combining studies in meta-analyses are becoming more formalized, and that causal diagrams help make the underlying assumptions of analytical models transparent, the overall objective of this paper is to show how causal diagrams can be used to (i) demonstrate that, with minor modifications, the current Cochrane risk of bias tool provides an applicable framework for all study designs and (ii) highlight contexts where the bias-adjusted estimate methods currently being proposed for meta-analyses may inadvertently introduce bias rather than correct for it.

Each of the causal diagrams in this paper includes the following features. If the bias exists in both randomized and non-randomized studies, the term "allocation process" is used; if the bias is different in randomized and non-randomized studies, the factors responsible for the allocation to exposure group are stated. Allocation factors cause subjects to be assigned to exposure groups, which causes exposure of participants (called "true exposure" because not all participants will comply). In the causal diagrams shown, exposure is a cause of the outcome. When a variable is noted with an asterisk (*), it refers to the observed measure of the variable, which includes the associated measurement error [26].

Cochrane risk of bias tool

The premise of this paper is that causal diagrams suggest that the current Cochrane risk of bias tool (see Table 1 for a full description of the tool), developed for randomized trials, also provides an appropriate foundation for assessment of biases in non-randomized studies. This framework lists five general sources of bias, with an additional "other" category [10]: sequence generation, allocation concealment, blinding (participants, personnel, and outcome assessors), incomplete outcome data, and selective outcome reporting. Each of the following sections lists a Cochrane risk of bias tool item, describes a causal diagram for the item, illustrates differences between randomized and non-randomized studies where appropriate, and explains the usefulness for authors, readers, and teachers of meta-analyses.

Sequence generation

Sequence generation refers to the process leading to group assignment. In a truly randomized trial, the sequence is randomly generated (caused), usually by a computer algorithm, and is causally unrelated to the outcome. The sequence generation creates a group assignment, which causes investigators to inform subjects which group they are assigned to, which affects the probability that subjects will actually be exposed or not, which in turn causes a change in the probability of the outcome occurring (assuming exposure has an effect). The key to sequence generation is that the group assignment should be causally unrelated to the outcome. However, some studies labeled as randomized may not use truly random assignments. When this occurs, a bias may be introduced if the assignment is not random with respect to the outcome. Therefore, all risk of bias tools (including the Cochrane risk of bias tool) ask authors of systematic reviews to assess the likelihood of this occurring.

In Figure 1, the sequence generation of one of the included randomized studies was based (caused) on the month of birth; that is, it was not random, and the study should not have been called randomized. In this example, all participants born in the first half of the year are assigned to one group, and those born in the second half of the year are assigned to another group (one group will be older than the other). If age is a prognostic factor (Figure 1), then confounding bias exists because age is a common cause of both exposure and outcome [13].

Figure 1. A causal diagram for the sequence generation item in the Cochrane risk of bias tool [10]. In an appropriately randomized study, the sequence generation does not cause the outcome (i.e., the arrow from unmeasured factor to outcome would not exist). In some studies labeled randomized, the sequence generation may in fact be due to a factor that is also a cause of the outcome; that is, it is not a true randomized study. For example, if one allocates treatment by date of birth (e.g., participants born January-June are in one group and participants born July-December are in another group) and age is a cause of the outcome, confounding bias exists because age is a common cause of both exposure and outcome.

Inappropriate sequence generation can sometimes be accounted for in an analysis, but this is not usually done. First, if the authors had been aware of the problem, they would have avoided it. Second, in the example given above, residual confounding would always exist because each group has completely different months of birth with no overlap. That said, the authors of a meta-analysis might not believe age affects the outcome. If so, they would draw a similar causal diagram but delete the arrow between age and outcome. If this causal diagram were correct and one applied the methods previously mentioned that determine whether bias exists [20,21], one would conclude that the allocation method used did not introduce any bias even though it was not randomized. Therefore, in a meta-analysis using bias-adjusted methodology, no adjustment would be necessary for this item in this study. Drawing the causal diagram clearly delineates the assumptions and the underlying reasoning for why one should, or should not, adjust for the sequence generation.
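To make the birth-month example concrete, the following simulation (an illustrative Python sketch; the effect sizes are arbitrary) shows how allocation by birth month transmits the age effect into the naive group comparison.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
birth_month = rng.integers(1, 13, size=n)
age = 12 - birth_month + rng.normal(scale=0.5, size=n)    # born earlier in the year -> older
treated = (birth_month <= 6).astype(float)                # "sequence generation" by birth month
outcome = 1.0 * treated + 0.3 * age + rng.normal(size=n)  # true treatment effect = 1.0

# Naive comparison mixes the treatment effect with the age effect,
# because age is a common cause of group assignment and outcome.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()
print(naive)  # ~2.8, far from the true 1.0
```

Note that no within-study adjustment can fully rescue this design: the groups have completely non-overlapping birth months, so conditioning on birth month leaves no comparable participants, matching the residual-confounding point above.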
Allocation concealment

Lack of allocation concealment was originally associated with a ~30% overestimation of treatment effect and has received a great deal of attention as a potential bias [8,9,27-31]. In reality, it is a problem related to blinding. Blinding has many components; one can blind (1) investigators, (2) patients, (3) those responsible for assigning participants to exposure groups, (4) those responsible for allocating treatment, (5) those responsible for carrying out all study processes, and/or (6) those responsible for assessing outcome. Allocation concealment refers only to blinding of the person allocating the treatment. The Cochrane risk of bias tool and other bias adjustment methods [4,9,10] list this potential bias separately from other types of blinding. Therefore, blinding of those unrelated to allocation concealment is discussed separately in the next section (associated with Figure 3). The causal diagrams in Figure 2 suggest that the underlying mechanism of bias related to allocation concealment could be either confounding bias (Figure 2a) or collider-stratification bias (Figure 2b).

Figure 2. A causal diagram for the allocation concealment item in the Cochrane risk of bias tool [10], which is due to un-blinding of the person allocating the treatment (un-blinding of the investigator, patient, outcome assessor, or other study personnel is explained in Figure 3). In A, the allocator has knowledge about the health of the participant (which is affected by causal factors of the outcome), and this causes the allocator to cheat and allocate certain participants to exposure groups that they were not actually assigned to. In this diagram, participant prognosis is the common cause of exposure and outcome. In B, poor research training of the investigators leads to a protocol in which the allocator was not blinded. In addition, poor research training also leads to other quality issues in the study (e.g., follow-up procedures are different for the two groups). Because follow-up procedures are caused by both exposure and poor research training (i.e., follow-up procedures are a common effect), a conditional association is created between exposure and poor research training (and therefore also between allocation concealment and outcome). Note that there is no arrow from allocation concealment to exposure because this can only occur through fraud (illustrated in A). There is no arrow from allocation concealment to outcome because this can only occur through un-blinding of an outcome assessor or the patient, and that is illustrated in Figure 3.

For pedagogical purposes, we restrict the discussion to randomized studies because almost all non-randomized studies do not have allocation concealment. In Figure 2a, the person allocating the treatment knows the health status of the participants (allocation not concealed) and therefore may decide to give a different treatment than what was randomly assigned. This is a form of fraud. Although fraud could occur at any level of a study, it is illustrated here because allocation concealment has received so much attention, and it is in fact the only possible cause of the bias in this context. To minimize this bias, some have argued that elaborate tactics (e.g., central randomization) are necessary to prevent the allocator from determining what the assignment is, and that if simple mechanisms are used (e.g., allocation using envelopes), the risk of bias is high [31-33]. However, these elaborate solutions have not been tested, and in theory, it is not clear that they would be successful. For example, an allocator who wants a subject assigned to a particular group could simply assign the subject even if central randomization said not to, and then switch the assignment for a subsequent subject to create the correct balance between groups. Furthermore, if the fraud begins with the investigators (to show a treatment is beneficial/harmful despite the science), the investigators would likely lie about allocation concealment in their report; the risk of bias would remain high despite the report that elaborate mechanisms were used.

Another possibility, suggested by Wood et al. [9], is that lack of allocation concealment may simply be a marker for generally poor study quality. If true, the current suggestions on how to account for this bias in meta-analyses may increase rather than minimize bias. Figure 2b is a causal diagram showing why this would occur.
According to this theory, poor research training would increase the probability (cause) that the study investigators used different follow-up procedures (i.e., number of follow-up visits, methods to detect outcome, stringency in quality control) for each of the exposure groups. In the causal diagram, this is expressed by including an arrow from poor research training to follow-up procedures and from exposure to follow-up procedures. Therefore, the follow-up procedure represents a collider between exposure group and poor research training. Furthermore, in this theory, allocation concealment cannot cause exposure by any method except fraud; because this was illustrated in Figure 2a, it is not repeated here, and there is no arrow from allocation concealment to exposure. Finally, if the person allocating treatment also happens to be the person responsible for assessing outcome, there might be measurement error bias, but this is due to un-blinding of the outcome assessor (discussed with Figure 3) and not due to lack of allocation concealment.

What are the implications of the causal structure in Figure 2b? First, there is a conditional association between exposure and poor research training; that is, a non-causal association (collider-stratification bias) between allocation concealment and outcome is created if any of the follow-up procedures (i.e., the collider) are included in the model. Second, if all other study methods are appropriate (i.e., if the arrow from poor training to follow-up procedures is removed), allocation concealment is not a cause of bias; any method that provided an allocation concealment bias-adjusted estimate for such a study [4,6-9] would introduce bias rather than minimize it. Third, if poor follow-up procedures were present and bias-adjusted estimates were calculated for these poor follow-up procedures, any further adjustment for allocation concealment would again introduce bias. Finally, if training on general study methods were improved, lack of allocation concealment would likely become unimportant, because it is just a marker for (as opposed to a cause of) bias. Indeed, improved training may be the reason for the observed reduction in the effect of allocation concealment bias (published by the same group of authors) from 30% in 2001 [34] to 17% in 2008 [9], and for the finding that a simple method of sealed, sequentially numbered opaque envelopes yielded effect estimates equivalent to central randomization [31]. That said, providing bias-adjusted estimates based on allocation concealment alone (i.e., no other bias adjustments are made) could theoretically be an efficient method, but only under the assumption that allocation concealment is a good marker for the combined total of all biases related to methodological design.
In conclusion, causal diagrams illustrate that providing bias-adjusted estimates for allocation concealment in a meta-analysis requires careful thought, and that such bias-adjusted estimates should probably not include any other adjustments for methodological quality.
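As a rough illustration of the "usual bias" style of adjustment discussed above, the sketch below subtracts a meta-epidemiological ratio of odds ratios from a study's estimate on the log scale and propagates the added uncertainty. All numbers are hypothetical (the 0.83 ratio of odds ratios corresponds to the 17% average overestimation cited above [9]), and this is a simplified sketch of the general idea rather than any group's exact method.

```python
import numpy as np

# Hypothetical study result (log odds ratio scale).
log_or_observed = np.log(0.70)
se_observed = 0.15

# Hypothetical "usual bias" for unconcealed allocation from
# meta-epidemiological studies: ratio of odds ratios ~0.83 [9].
log_ror = np.log(0.83)
se_ror = 0.10

# Additive correction on the log scale; uncertainty propagates by
# summing variances (assuming the two estimates are independent).
log_or_adjusted = log_or_observed - log_ror
se_adjusted = np.sqrt(se_observed**2 + se_ror**2)

print(np.exp(log_or_adjusted), se_adjusted)
```

As the text argues, applying such a correction to a study in which allocation concealment was merely a marker, rather than a cause, of bias would shift the estimate away from the truth rather than toward it.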

Blinding

Bias due to un-blinding the person responsible for assigning exposure (participants or clinicians/investigators) is illustrated in Figures 3a and 3b, and bias due to un-blinding the person responsible for determining outcome (assessor, participant) is illustrated in Figure 3c. An unblinded analyst could also create bias through reporting bias (discussed later) or through deliberately inappropriate analyses (again equivalent to fraud and not shown as a causal diagram). Finally, un-blinding of any other study personnel (e.g., those applying an intervention or entering data) could also create bias, but these are not part of risk of bias tools and are not discussed here. In brief, bias would be created if these un-blinded individuals affected the application of the intervention, changed the expectations of the participant (equivalent to un-blinding the participant), or affected the measured outcome (equivalent to un-blinding the outcome assessor or analyst).

Figure 3. A causal diagram for the blinding item in the Cochrane risk of bias tool [10]. In A, the bias associated with the placebo effect is shown. This bias can occur in both randomized and non-randomized studies, and therefore, the generic term "allocation process" is used to assign exposure groups. The bias only occurs if the unblinded participant had prior beliefs about the effects of a particular exposure, and these beliefs actually cause a change in the outcome (e.g., through psychobiological or behavioral mechanisms). Note that there is no bias if one is interested in the total causal effect of the exposure (which includes psychobiological effects). In B, the biases associated with lack of blinding of the investigator/clinician/participant when one is not interested in including psychobiological effects (i.e., non-placebo effects) are shown. The effects are different for randomized studies ("factors causing exposure other than randomization" not present) and non-randomized studies ("randomization" factor not present). In non-randomized studies, the probability of exposure changes if the clinician (and/or participant) believes that the exposure will affect the outcome. If the reason for these beliefs is the presence/absence of factors that cause the outcome, there is confounding bias (confounding by indication). If the reasons for these beliefs are unrelated to the outcome (i.e., the arrows from causal outcome factors to clinician/participant beliefs would be absent), there is no confounding bias. A separate bias occurs if the participant is assigned to one exposure group and then later changes because they become aware of their prognosis. In C, assessor non-blinding from any one of several causes can cause differences between the classified outcome (denoted "outcome*") and the true outcome (denoted "outcome"), which is known as measurement error. In addition, the way questions are asked/information is obtained can lead to differences between classified exposure and true exposure (recall bias); the classified exposure ("exposure*") is also caused by assignment to exposure ("exposure"). These biases occur regardless of whether the allocation process is randomized or not.

Figure 3a illustrates potential biases due to placebo effects (defined here as psychobiological effects) when a participant is unblinded in a study with truly objective outcomes; that is, response bias (where a subject responds in a particular way to please the researchers) is not possible because the participant is not the assessor. The causal diagram illustrates that the intention-to-treat analysis is biased in unblinded studies if one is interested in the effects of assigned exposure acting only through the biological effect (i.e., blinding eliminates the arrow between exposure group and participant beliefs of assigned exposure because participants do not know which exposure group they are in). However, there is no bias in unblinded studies when using an intention-to-treat analysis if (i) there are no psychobiological effects on the outcome (i.e., no arrow emanating from participant beliefs), (ii) participants can be convinced that the two treatments are likely to have the same effect (or, equivalently, participants do not have expectations that one treatment is superior; again, no arrow emanating from participant beliefs), or (iii) one is interested in the total causal effect of assigning exposure (where the total causal effect includes behavior change and psychobiological effects, such as when one is interested in whether the color and shape of a pill affects the outcome [35]). In addition, using the causal diagrams and applying the rules previously mentioned that help determine which covariates should be included in the analysis in order to obtain an unbiased estimate [20,21], bias from the lack of blinding can be minimized if the patients' prior beliefs about the exposure are measured before the exposure is given and then conditioned on.

Figure 3b illustrates the non-placebo biases associated with un-blinding the investigator/clinician/participant. In general, one can group these biases into two categories: confounding by indication and misclassification (measurement error). First, confounding by indication occurs when the participant is assigned an exposure specifically because of the presence/absence of causal factors for the outcome. The presence/absence of these causal outcome factors affects the clinician's beliefs about exposure effects (or the beliefs of the participant who accepts/refuses a treatment). This information then leads to prescribing (or accepting) a particular treatment. The causal outcome factors therefore represent a common cause of both exposure and outcome (confounding bias). Note that other, more complicated forms of confounding by indication exist (Figures 6c and 6d in Ref. [13]), where conditioning on a common effect induces confounding bias because it creates a conditional common cause of exposure and outcome [36].
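The confounding-by-indication structure can be simulated directly (an illustrative Python sketch with a single, accurately measured indication here called severity; the variable and effect sizes are hypothetical): the unadjusted comparison makes a beneficial treatment look harmful, while conditioning on the indication approximately recovers the true effect, anticipating the analysis-stage discussion below.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
severity = rng.normal(size=n)               # indication: a causal factor of the outcome
p_treat = 1 / (1 + np.exp(-2 * severity))   # sicker patients are more likely to be treated
treated = rng.random(n) < p_treat
outcome = -0.5 * treated + 1.0 * severity + rng.normal(size=n)  # true effect = -0.5 (beneficial)

# Unadjusted comparison: confounded by severity, so treatment looks harmful.
print(outcome[treated].mean() - outcome[~treated].mean())

# Conditioning on the indication (here by stratifying on severity) gives ~ -0.5.
cuts = np.quantile(severity, np.linspace(0, 1, 21)[1:-1])
strata = np.digitize(severity, cuts)
effects = [outcome[(strata == s) & treated].mean() - outcome[(strata == s) & ~treated].mean()
           for s in range(20)]
print(np.mean(effects))
```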
A slightly different bias occurs when the participant becomes unblinded after the exposure group has already been assigned. At this point, there would be measurement error bias if the participant decided to take the un-assigned exposure (traditionally known as contamination) or simply stopped the assigned exposure; biases due to dropping out of the study are discussed with Figure 4. All of these contexts can occur in both randomized and non-randomized studies.

Figure 3c addresses the issue of an unblinded assessor; similar effects are expected in both randomized and non-randomized studies. In the causal diagram, this is just a form of measurement error because the unblinded assessor is reporting a value for the outcome that is different from the true value. If the assessor is part of the research team, this may be called detection bias (because one group is being more accurately classified). If the assessor is the participant him/herself and the outcome is subjective, it may be called response bias [37] (because the participant answers positively just to please the research team even if they do not feel better). Similarly, in a case-control study, the participant may recall exposures differently if they are un-blinded (often called recall bias, but it could also be called response bias if the motivation was to please the research team). Regardless of the name or of who is responsible for the misclassification, causal diagrams illustrate that all of these are cases of measurement error bias.

Summarizing the causal diagrams in Figures 3a-c, the structure of the bias due to unblinded assessors is identical in randomized and non-randomized studies (frequency and magnitude may differ), except for some case-control analyses. Bias due to unblinded participants is identical for the placebo effect and contamination. The only structural difference between randomized and non-randomized studies (i.e., the probability of the different biases may also be different) is confounding by indication, which can occur for known or unknown reasons.

Can confounding by indication be minimized at the analysis stage? Using causal diagrams [15], confounding by indication is minimized (i.e., the result is equivalent to the intention-to-treat analysis of a randomized study) by conditioning on a correct set of accurately measured covariates that blocks the confounding bias; one set of covariates that minimizes bias when included in the model is simply the indications (i.e., causes) for treatment. This should be achievable for studies on diseases where the indications are well described and understood [38]. For conditions where indications are ambiguous, prospective studies can obtain the indications from the physician or patient at the time the treatment is prescribed [38] and condition on them in the statistical model to provide unbiased estimates. The only limitation of this approach is that precision is decreased if the model includes variables that are not also causes of the outcome [39].

When conducting a meta-analysis, investigators are clearly dependent on what information the original studies collected and reported and on the analyses conducted. One serious limitation of meta-analyses of non-randomized studies using traditional methods is that different studies will use different sets of covariates in the adjusted analysis. Currently, investigators not using the bias-adjusted estimate methods previously described simply report meta-analysis results after combining studies with different covariate adjustment sets, but never explain why they believe it is appropriate to do so. Causal diagrams force the investigator to make these assumptions explicit. For example, when the causal relationships between exposure and disease are well understood, and it is realistic to assume that a particular causal model has a high probability of being correct, it is logical to combine studies that use different covariate adjustment sets if each of the covariate adjustment sets minimizes bias (see Appendix 1 for an example). When the causal relationships between exposure and disease are not well understood, investigators should draw all the causal diagrams they believe are realistic. The same process as above (i.e., when only one causal diagram is probable) is then applied to each of the causal diagrams as a sensitivity analysis. It may be that certain studies should be combined according to one causal diagram but not according to a different causal diagram; causal diagrams therefore increase the transparency of the underlying assumptions being applied by the investigator doing the meta-analysis. Of course, methods that provide bias-adjusted estimates of the individual studies also depend on the true causal relationships, and therefore, causal diagrams would increase the transparency of these meta-analyses as well.
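The claim that studies using different covariate adjustment sets can estimate the same quantity, provided each set blocks the biasing paths, can be checked numerically under an assumed diagram. In this hypothetical linear example (a Python sketch; the variables U, Z, E, Y and their coefficients are illustrative), the only biasing path is E <- Z <- U -> Y, so adjusting for Z alone or for U alone both recover the true effect; two studies adjusting for these different sets could reasonably be combined.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
u = rng.normal(size=n)                        # underlying common cause (e.g., disease severity)
z = u + rng.normal(size=n)                    # measured variable on the backdoor path
e = 0.8 * z + rng.normal(size=n)              # exposure, caused by z
y = 0.5 * e + 1.0 * u + rng.normal(size=n)    # true exposure effect = 0.5

def exposure_coefficient(covariates):
    """OLS coefficient of exposure, adjusting for the given covariates."""
    X = np.column_stack([e] + covariates + [np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

print(exposure_coefficient([]))    # unadjusted: biased (backdoor path open)
print(exposure_coefficient([z]))   # adjusting for z blocks the path: ~0.5
print(exposure_coefficient([u]))   # adjusting for u also blocks it:  ~0.5
```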
Incomplete outcome data

The issues concerning incomplete outcome data that have previously been discussed by others will not be repeated here (see Figure 6 in Ref. [13]). The bias occurs when there is differential loss to follow-up between the assignment groups, which is known as informative censoring in traditional epidemiological teaching [13]. Another example of informative censoring not previously discussed (Figure 4) is the over-adjustment bias that occurs when loss to follow-up is a marker of side effects due to exposure (i.e., one is effectively conditioning on an effect of the exposure).

Figure 4. A causal diagram for the incomplete outcome data item in the Cochrane risk of bias tool. This causal diagram demonstrates an example where one is conditioning on a descendent of a variable lying along the causal pathway (over-adjustment bias [16]). These biases occur regardless of whether the allocation process is randomized or not. Other causal diagrams related to incomplete outcome data can be found in Ref. [13]; some apply to both randomized and non-randomized studies, and others apply only to non-randomized studies.

Incomplete outcome data bias may occur in both randomized and non-randomized studies. The bias is probably more frequently present in non-randomized studies, because some of the causal diagrams describing the bias are specific to non-randomized studies (see Figure 6 in Ref. [13]) but none are specific to randomized studies. Using causal diagrams clearly shows that obtaining the reasons why individuals dropped out of the study is essential to estimating bias. If the reason is not an effect of exposure, there is no bias. If the reason is an effect of exposure, non-regression methods of analysis may be more appropriate [40-42]. With respect to meta-analyses, causal diagrams are helpful pedagogically to illustrate the bias but less helpful from a practical viewpoint at this time.

Selective outcome reporting

The selective outcome reporting row in the risk of bias tool is designed to evaluate whether some outcomes are specifically excluded from the published literature (which can occur based on either author or editor decisions). From a causal diagram approach, there is little difference between selective reporting bias and publication bias: both are due to incomplete reporting of results in the published literature, publication bias simply being the extreme where no results are published. As such, the causal diagram in Figure 5 begins with the study results, which cause investigators/editors to choose which outcomes to emphasize (e.g., based on statistical significance, on whether results support the investigators' theories, and so on). The choice of outcomes may also cause a change in the probability of publication (either by the authors or the editors), and this can affect the results of future meta-analyses. If the study protocol were available, one would be able to properly evaluate whether this occurred. In essence, selective reporting bias and publication bias are examples of incomplete outcome data, but at the meta-analysis level rather than the individual study level. With respect to meta-analyses, not including data because of the results represents an example of over-adjustment bias (conditioning on an effect of the exposure, which here is the results). Attempts to simply include unpublished data do not remove bias, because study quality (leading to study biases) represents a common cause of the ability to publish the study and the study results.

Figure 5. A causal diagram for the selective reporting item in the Cochrane risk of bias tool [10] (also applicable to publication bias, which is the extreme of selective reporting and occurs when none of the results of a study are reported). Each individual study is analogous to a cross-over study (because each study represents an observation with data for both comparison conditions). The results of the studies cause investigators (or editors) to emphasize some of the study outcomes but not others, which leads to the inclusion of only some results in the publication. In addition, study quality will affect the amount of bias in the study, which will affect both the study results and the probability of publishing. Including only published material is effectively conditioning on an effect of exposure. Including all the unpublished results will lead to bias unless one can appropriately condition on the study biases.

Therefore, the best approach would be to obtain all the unpublished data and adjust for study quality (all internal biases) in order to obtain valid bias-adjusted estimates [4,6-9,43], or to qualitatively report on these biases if the expertise for bias-adjusted estimate calculations is not available. However, this is often not feasible or possible, and some authors have recently suggested a Bayesian approach that borrows information about publication bias from other meta-analyses [44]. In essence, this is conceptually similar to the previously discussed methods for allocation concealment, where one uses "usual bias" estimates obtained from other meta-analyses [9]. Readers should note that publication bias can occur for other reasons unrelated to study biases, and these are not included in Figure 5 (e.g., studies written by authors whose first language is not English may be more difficult to publish in English language journals), because they do not represent a common cause of two variables in the causal diagram and therefore are not necessary for the internal validity of the study (in this case, the meta-analysis). However, omitting such studies can affect generalizability and may still represent a concern depending on the research question.
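The conditioning-on-results structure of selective reporting is easy to demonstrate. In the simulation below (an illustrative Python sketch), every individual outcome estimate is unbiased, yet averaging only the positive, statistically significant ones produces a spurious pooled effect even though the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n_outcomes = 10_000
true_effect = 0.0
se = 0.2
estimates = rng.normal(true_effect, se, size=n_outcomes)  # one unbiased estimate per outcome
z = estimates / se
reported = z > 1.96   # only positive, "significant" outcomes get emphasized/published

print(estimates.mean())            # ~0: the full set of outcomes is unbiased
print(estimates[reported].mean())  # ~0.47: conditioning on the results creates a spurious effect
```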
Other biases

Figures 6 and 7 illustrate that many of the biases attributed solely to non-randomized study designs also occur in randomized trials, usually because of poor design. These biases are termed study-level biases because they occur at the level of the study group (i.e., each participant in the same group is affected the same way) rather than at the participant level. Because they occur at the study group level, they may be more easily thought of as due to clustering or co-interventions.

Non-randomized studies that use control groups identified by a different period (pre-post studies, historical controls) or location are considered to have a high probability of bias. This is because other factors (e.g., level of health care) may have changed over time (or location), and these other factors might be responsible for the observed effect. In essence, this is just a specific example of assigning causal relations when there are co-interventions. For example, if a cluster randomized study has only two clusters, there may be co-interventions that occur by chance (Figure 6) [45]. In this case, it is not possible to know whether the randomized exposure or the co-intervention is responsible for the causal effect.

Figure 6. A causal diagram for the biases that occur when the exposure and control groups are chosen based on time (e.g., historical controls, pre-post studies) or location. The choice of exposure applies to all individuals within the group, and the same choice may also cause a second exposure (i.e., the equivalent of a co-intervention). If the second exposure also causes the outcome, it is not possible to disentangle the effects of the exposure of interest from those of the co-intervention. This effect is also observed in cluster randomized trials with few clusters. See text for an example.

With respect to historical control studies, these simply represent a two-cluster study with clustering due to time. If there are no co-interventions related to the outcome within the context of the study (e.g., health care does not generally change over the span of weeks or months), then historical controls could be appropriate, and bias needs to be evaluated on a study-by-study basis. From a meta-analysis perspective, causal diagrams suggest this bias occurs in both randomized and non-randomized studies (albeit more frequently in non-randomized studies). Therefore, any risk of bias tool that assesses biases in randomized studies in order to calculate bias-adjusted estimates in a meta-analysis must account for this bias as well.

Another commonly cited bias for pre-post studies is regression to the mean (Figure 7). If the pre-post study were well designed, with the intervention applied based on factors unrelated to the prognosis of patients, the probability of the "pre" group having a higher mean value than the "post" group would be exactly the same as in any randomized trial, that is, a chance association. That said, pre-post studies might be conducted because clinicians notice an unusually high rate/proportion of an undesirable outcome at a particular period (e.g., a high injury rate). The clinicians might then create an intervention to reduce the high rate and compare post-intervention results with pre-intervention results. This violates the basic principle that one should never define an exposure group based on the outcome (the principle also underlying recall bias in Figure 3c); in this case, the "pre" group was defined by its unusually high injury rate. More generically, one expects variability in the outcome rate over time, and time is the clustering factor in a pre-post study. This chance association at one point in time causes investigators to conduct the study, which creates a non-causal association between exposure and outcome. Here, causal diagrams are most useful for pedagogy and explain how regression to the mean is simply due to inappropriately allowing an outcome to determine exposure level.

Figure 7. A causal diagram for regression to the mean. In this example, exposure groups are clustered by time. The random variability in the outcome means that the response of the exposure group will vary over time by chance (indicated by a dotted bidirectional arrow, which suggests an association due to unmeasured causes [15]). The bias occurs when the outcome of one clustered group is unusually high, and this causes the investigators to decide to conduct the study; exposure should never be determined based on outcome. Because investigators use the group with the known high values that occurred by chance, it is very probable that the comparison group will have lower values simply by chance.
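A simulation makes the regression-to-the-mean mechanism explicit (an illustrative Python sketch; the Poisson rates are arbitrary): when the "pre" period is chosen because its injury count looked alarming, the next period is almost guaranteed to look better with no intervention at all.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sims, n_periods = 10_000, 12
counts = rng.poisson(lam=50, size=(n_sims, n_periods))  # injury counts: no trend, no intervention

worst = counts.argmax(axis=1)                      # "pre" period chosen because it looks alarming
pre = counts[np.arange(n_sims), worst]
post = counts[np.arange(n_sims), (worst + 1) % n_periods]  # the following period (wrapping for simplicity)

print(pre.mean(), post.mean())  # "pre" well above 50, "post" ~50: apparent improvement by chance alone
```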
Finally, biases associated with the selection of controls within case-control studies are often considered difficult to identify. However, causal diagrams have been used to show that they usually represent bias due to conditioning on a common effect (collider-stratification bias) [13]. For example, Berkson's bias occurs when an investigator chooses controls from the same hospital where the cases were admitted, which is often done in order to select controls from the same socio-economic area as the cases (i.e., individuals who attend the same hospital are likely to live in proximity to each other). However, conditioning on a common effect of two variables (e.g., hospitalization is caused by several different diseases), or on one of its descendants, creates a conditional association between the causes of these diseases. This means that any exposure that results in hospitalization will be associated with the outcome of interest even if it is not a cause of the outcome of interest. Because bias due to conditioning on a common effect can also occur when there is loss to follow-up in randomized studies [13], there is nothing unique about the bias in case-control studies. More importantly, if one chooses the controls for a case-control study appropriately (e.g., using incidence density sampling in a case-control study nested within a cohort study), there is no conditioning on a common effect and there is no bias. Therefore, from a risk of bias tool perspective, the relevant question for a case-control study is whether the control group was specifically identified because they experienced a particular outcome that is also caused by the exposure of interest. Because a cohort study would never select participants based on an outcome that occurred after exposure, neither should a case-control study (with the exception that controls cannot have the outcome of interest). From a meta-analysis perspective, causal diagrams should help investigators realize that collider-stratification bias in randomized trials creates the same problems as collider-stratification bias in non-randomized studies.

Implications for bias assessment tools in meta-analyses

At a fundamental level, the causal diagrams in Figures 1-7 illustrate that the only bias that occurs exclusively in non-randomized studies is the process of treatment allocation leading to confounding by indication; all the other general categories of biases differ only in the frequency or magnitude with which they occur. This supports our previous work arguing the same principles from a traditional epidemiological approach [38]. Given the above, any effective tool used to assess bias in randomized studies or to calculate bias-adjusted estimates in meta-analyses must already include all biases except confounding by indication, and only slight modifications would be necessary to make it appropriate for non-randomized studies. One possible series of modifications to the popular Cochrane risk of bias tool is shown in Table 2, with the additions necessary to


More information

Meta-Analysis. Zifei Liu. Biological and Agricultural Engineering

Meta-Analysis. Zifei Liu. Biological and Agricultural Engineering Meta-Analysis Zifei Liu What is a meta-analysis; why perform a metaanalysis? How a meta-analysis work some basic concepts and principles Steps of Meta-analysis Cautions on meta-analysis 2 What is Meta-analysis

More information

About Reading Scientific Studies

About Reading Scientific Studies About Reading Scientific Studies TABLE OF CONTENTS About Reading Scientific Studies... 1 Why are these skills important?... 1 Create a Checklist... 1 Introduction... 1 Abstract... 1 Background... 2 Methods...

More information

Version No. 7 Date: July Please send comments or suggestions on this glossary to

Version No. 7 Date: July Please send comments or suggestions on this glossary to Impact Evaluation Glossary Version No. 7 Date: July 2012 Please send comments or suggestions on this glossary to 3ie@3ieimpact.org. Recommended citation: 3ie (2012) 3ie impact evaluation glossary. International

More information

Appraising the Literature Overview of Study Designs

Appraising the Literature Overview of Study Designs Chapter 5 Appraising the Literature Overview of Study Designs Barbara M. Sullivan, PhD Department of Research, NUHS Jerrilyn A. Cambron, PhD, DC Department of Researach, NUHS EBP@NUHS Ch 5 - Overview of

More information

ISPOR Task Force Report: ITC & NMA Study Questionnaire

ISPOR Task Force Report: ITC & NMA Study Questionnaire INDIRECT TREATMENT COMPARISON / NETWORK META-ANALYSIS STUDY QUESTIONNAIRE TO ASSESS RELEVANCE AND CREDIBILITY TO INFORM HEALTHCARE DECISION-MAKING: AN ISPOR-AMCP-NPC GOOD PRACTICE TASK FORCE REPORT DRAFT

More information

Systematic reviews and meta-analyses of observational studies (MOOSE): Checklist.

Systematic reviews and meta-analyses of observational studies (MOOSE): Checklist. Systematic reviews and meta-analyses of observational studies (MOOSE): Checklist. MOOSE Checklist Infliximab reduces hospitalizations and surgery interventions in patients with inflammatory bowel disease:

More information

Checklist for Randomized Controlled Trials. The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews

Checklist for Randomized Controlled Trials. The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews Checklist for Randomized Controlled Trials http://joannabriggs.org/research/critical-appraisal-tools.html www.joannabriggs.org

More information

Glossary. Ó 2010 John Wiley & Sons, Ltd

Glossary. Ó 2010 John Wiley & Sons, Ltd Glossary The majority of the definitions within this glossary are based on, but are only a selection from, the comprehensive list provided by Day (2007) in the Dictionary of Clinical Trials. We have added

More information

Downloaded from:

Downloaded from: Arnup, SJ; Forbes, AB; Kahan, BC; Morgan, KE; McKenzie, JE (2016) The quality of reporting in cluster randomised crossover trials: proposal for reporting items and an assessment of reporting quality. Trials,

More information

Checklist for Randomized Controlled Trials. The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews

Checklist for Randomized Controlled Trials. The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews Checklist for Randomized Controlled Trials http://joannabriggs.org/research/critical-appraisal-tools.html www.joannabriggs.org

More information

GLOSSARY OF GENERAL TERMS

GLOSSARY OF GENERAL TERMS GLOSSARY OF GENERAL TERMS Absolute risk reduction Absolute risk reduction (ARR) is the difference between the event rate in the control group (CER) and the event rate in the treated group (EER). ARR =

More information

9.0 L '- ---'- ---'- --' X

9.0 L '- ---'- ---'- --' X 352 C hap te r Ten 11.0 10.5 Y 10.0 9.5 9.0 L...- ----'- ---'- ---'- --' 0.0 0.5 1.0 X 1.5 2.0 FIGURE 10.23 Interpreting r = 0 for curvilinear data. Establishing causation requires solid scientific understanding.

More information

Evidence Informed Practice Online Learning Module Glossary

Evidence Informed Practice Online Learning Module Glossary Term Abstract Associations Attrition Bias Background and Significance Baseline Basic Science Bias Blinding Definition An abstract is a summary of a research article. It usually includes the purpose, methods,

More information

Are the likely benefits worth the potential harms and costs? From McMaster EBCP Workshop/Duke University Medical Center

Are the likely benefits worth the potential harms and costs? From McMaster EBCP Workshop/Duke University Medical Center CRITICAL REVIEW FORM FOR THERAPY STUDY Did experimental and control groups begin the study with a similar prognosis? Were patients randomized? Was randomization concealed? Were patients analyzed in the

More information

CRITICAL APPRAISAL OF MEDICAL LITERATURE. Samuel Iff ISPM Bern

CRITICAL APPRAISAL OF MEDICAL LITERATURE. Samuel Iff ISPM Bern CRITICAL APPRAISAL OF MEDICAL LITERATURE Samuel Iff ISPM Bern siff@ispm.unibe.ch Contents Study designs Asking good questions Pitfalls in clinical studies How to assess validity (RCT) Conclusion Step-by-step

More information

Checklist of Key Considerations for Development of Program Logic Models [author name removed for anonymity during review] April 2018

Checklist of Key Considerations for Development of Program Logic Models [author name removed for anonymity during review] April 2018 Checklist of Key Considerations for Development of Program Logic Models [author name removed for anonymity during review] April 2018 A logic model is a graphic representation of a program that depicts

More information

Evidence Based Practice

Evidence Based Practice Evidence Based Practice RCS 6740 7/26/04 Evidence Based Practice: Definitions Evidence based practice is the integration of the best available external clinical evidence from systematic research with individual

More information

Chapter 02. Basic Research Methodology

Chapter 02. Basic Research Methodology Chapter 02 Basic Research Methodology Definition RESEARCH Research is a quest for knowledge through diligent search or investigation or experimentation aimed at the discovery and interpretation of new

More information

Quantitative Methods. Lonnie Berger. Research Training Policy Practice

Quantitative Methods. Lonnie Berger. Research Training Policy Practice Quantitative Methods Lonnie Berger Research Training Policy Practice Defining Quantitative and Qualitative Research Quantitative methods: systematic empirical investigation of observable phenomena via

More information

CHECK-LISTS AND Tools DR F. R E Z A E I DR E. G H A D E R I K U R D I S TA N U N I V E R S I T Y O F M E D I C A L S C I E N C E S

CHECK-LISTS AND Tools DR F. R E Z A E I DR E. G H A D E R I K U R D I S TA N U N I V E R S I T Y O F M E D I C A L S C I E N C E S CHECK-LISTS AND Tools DR F. R E Z A E I DR E. G H A D E R I K U R D I S TA N U N I V E R S I T Y O F M E D I C A L S C I E N C E S What is critical appraisal? Critical appraisal is the assessment of evidence

More information

Learning objectives. Examining the reliability of published research findings

Learning objectives. Examining the reliability of published research findings Examining the reliability of published research findings Roger Chou, MD Associate Professor of Medicine Department of Medicine and Department of Medical Informatics and Clinical Epidemiology Scientific

More information

CONSORT 2010 checklist of information to include when reporting a randomised trial*

CONSORT 2010 checklist of information to include when reporting a randomised trial* CONSORT 2010 checklist of information to include when reporting a randomised trial* Section/Topic Title and abstract Introduction Background and objectives Item No Checklist item 1a Identification as a

More information

Systematic reviews: From evidence to recommendation. Marcel Dijkers, PhD, FACRM Icahn School of Medicine at Mount Sinai

Systematic reviews: From evidence to recommendation. Marcel Dijkers, PhD, FACRM Icahn School of Medicine at Mount Sinai Systematic reviews: From evidence to recommendation Session 2 - June 18, 2014 Going beyond design, going beyond intervention: The American Academy of Neurology (AAN) Clinical Practice Guideline process

More information

A Brief Introduction to Bayesian Statistics

A Brief Introduction to Bayesian Statistics A Brief Introduction to Statistics David Kaplan Department of Educational Psychology Methods for Social Policy Research and, Washington, DC 2017 1 / 37 The Reverend Thomas Bayes, 1701 1761 2 / 37 Pierre-Simon

More information

4/10/2018. Choosing a study design to answer a specific research question. Importance of study design. Types of study design. Types of study design

4/10/2018. Choosing a study design to answer a specific research question. Importance of study design. Types of study design. Types of study design Choosing a study design to answer a specific research question Importance of study design Will determine how you collect, analyse and interpret your data Helps you decide what resources you need Impact

More information

Clinical Epidemiology I: Deciding on Appropriate Therapy

Clinical Epidemiology I: Deciding on Appropriate Therapy Clinical Epidemiology I: Deciding on Appropriate Therapy 1 Clinical Scenario [UG2B] 65 yo man controlled HTN 6-mo Hx cardioversion-resistant afib Benefit vs risk of long-term anticoagulation:? prevent

More information

CHECKLIST FOR EVALUATING A RESEARCH REPORT Provided by Dr. Blevins

CHECKLIST FOR EVALUATING A RESEARCH REPORT Provided by Dr. Blevins CHECKLIST FOR EVALUATING A RESEARCH REPORT Provided by Dr. Blevins 1. The Title a. Is it clear and concise? b. Does it promise no more than the study can provide? INTRODUCTION 2. The Problem a. It is clearly

More information

Checking the counterarguments confirms that publication bias contaminated studies relating social class and unethical behavior

Checking the counterarguments confirms that publication bias contaminated studies relating social class and unethical behavior 1 Checking the counterarguments confirms that publication bias contaminated studies relating social class and unethical behavior Gregory Francis Department of Psychological Sciences Purdue University gfrancis@purdue.edu

More information

Special guidelines for preparation and quality approval of reviews in the form of reference documents in the field of occupational diseases

Special guidelines for preparation and quality approval of reviews in the form of reference documents in the field of occupational diseases Special guidelines for preparation and quality approval of reviews in the form of reference documents in the field of occupational diseases November 2010 (1 st July 2016: The National Board of Industrial

More information

Revised Cochrane risk of bias tool for randomized trials (RoB 2.0) Additional considerations for cross-over trials

Revised Cochrane risk of bias tool for randomized trials (RoB 2.0) Additional considerations for cross-over trials Revised Cochrane risk of bias tool for randomized trials (RoB 2.0) Additional considerations for cross-over trials Edited by Julian PT Higgins on behalf of the RoB 2.0 working group on cross-over trials

More information

GRADE, Summary of Findings and ConQual Workshop

GRADE, Summary of Findings and ConQual Workshop GRADE, Summary of Findings and ConQual Workshop To discuss Introduction New JBI Levels of Evidence and Grades of Recommendation Moving towards GRADE Summary of Findings tables Qualitative Levels Conclusion

More information

Overview and Comparisons of Risk of Bias and Strength of Evidence Assessment Tools: Opportunities and Challenges of Application in Developing DRIs

Overview and Comparisons of Risk of Bias and Strength of Evidence Assessment Tools: Opportunities and Challenges of Application in Developing DRIs Workshop on Guiding Principles for the Inclusion of Chronic Diseases Endpoints in Future Dietary Reference Intakes (DRIs) Overview and Comparisons of Risk of Bias and Strength of Evidence Assessment Tools:

More information

The Research Roadmap Checklist

The Research Roadmap Checklist 1/5 The Research Roadmap Checklist Version: December 1, 2007 All enquires to bwhitworth@acm.org This checklist is at http://brianwhitworth.com/researchchecklist.pdf The element details are explained at

More information

School of Population and Public Health SPPH 503 Epidemiologic methods II January to April 2019

School of Population and Public Health SPPH 503 Epidemiologic methods II January to April 2019 School of Population and Public Health SPPH 503 Epidemiologic methods II January to April 2019 Time: Tuesday, 1330 1630 Location: School of Population and Public Health, UBC Course description Students

More information

Clinical Research Scientific Writing. K. A. Koram NMIMR

Clinical Research Scientific Writing. K. A. Koram NMIMR Clinical Research Scientific Writing K. A. Koram NMIMR Clinical Research Branch of medical science that determines the safety and effectiveness of medications, devices, diagnostic products and treatment

More information

Trials and Tribulations of Systematic Reviews and Meta-Analyses

Trials and Tribulations of Systematic Reviews and Meta-Analyses Trials and Tribulations of Systematic Reviews and Meta-Analyses Mark A. Crowther and Deborah J. Cook St. Joseph s Hospital, Hamilton, Ontario, Canada; McMaster University, Hamilton, Ontario, Canada Systematic

More information

UNIT 5 - Association Causation, Effect Modification and Validity

UNIT 5 - Association Causation, Effect Modification and Validity 5 UNIT 5 - Association Causation, Effect Modification and Validity Introduction In Unit 1 we introduced the concept of causality in epidemiology and presented different ways in which causes can be understood

More information

Measuring and Assessing Study Quality

Measuring and Assessing Study Quality Measuring and Assessing Study Quality Jeff Valentine, PhD Co-Chair, Campbell Collaboration Training Group & Associate Professor, College of Education and Human Development, University of Louisville Why

More information

Why do Psychologists Perform Research?

Why do Psychologists Perform Research? PSY 102 1 PSY 102 Understanding and Thinking Critically About Psychological Research Thinking critically about research means knowing the right questions to ask to assess the validity or accuracy of a

More information

Teaching critical appraisal of randomised controlled trials

Teaching critical appraisal of randomised controlled trials Teaching critical appraisal of randomised controlled trials Dr Kamal R. Mahtani BSc PhD MBBS PGDip MRCGP Deputy Director Centre for Evidence Based Medicine University of Oxford November 2014 1 objectives

More information

Outline. What is Evidence-Based Practice? EVIDENCE-BASED PRACTICE. What EBP is Not:

Outline. What is Evidence-Based Practice? EVIDENCE-BASED PRACTICE. What EBP is Not: Evidence Based Practice Primer Outline Evidence Based Practice (EBP) EBP overview and process Formulating clinical questions (PICO) Searching for EB answers Trial design Critical appraisal Assessing the

More information

Evidence Based Medicine

Evidence Based Medicine Hamadan University of medical sciences School of Public Health Department of Epidemiology Evidence Based Medicine Amin Doosti-Irani, PhD in Epidemiology 10 March 2017 a_doostiirani@yahoo.com 1 Outlines

More information

Trial Designs. Professor Peter Cameron

Trial Designs. Professor Peter Cameron Trial Designs Professor Peter Cameron OVERVIEW Review of Observational methods Principles of experimental design applied to observational studies Population Selection Looking for bias Inference Analysis

More information

COMMITTEE FOR PROPRIETARY MEDICINAL PRODUCTS (CPMP) POINTS TO CONSIDER ON MISSING DATA

COMMITTEE FOR PROPRIETARY MEDICINAL PRODUCTS (CPMP) POINTS TO CONSIDER ON MISSING DATA The European Agency for the Evaluation of Medicinal Products Evaluation of Medicines for Human Use London, 15 November 2001 CPMP/EWP/1776/99 COMMITTEE FOR PROPRIETARY MEDICINAL PRODUCTS (CPMP) POINTS TO

More information

Title:Continuity of GP care is associated with lower use of complementary and alternative medical providers A population-based cross-sectional survey

Title:Continuity of GP care is associated with lower use of complementary and alternative medical providers A population-based cross-sectional survey Author's response to reviews Title:Continuity of GP care is associated with lower use of complementary and alternative medical providers A population-based cross-sectional survey Authors: Anne Helen Hansen

More information

Delfini Evidence Tool Kit

Delfini Evidence Tool Kit General 1. Who is sponsoring and funding the study? What are the affiliations of the authors? Study Design Assessment Internal Validity Assessment Considerations: This can be helpful information and is

More information

Beyond the intention-to treat effect: Per-protocol effects in randomized trials

Beyond the intention-to treat effect: Per-protocol effects in randomized trials Beyond the intention-to treat effect: Per-protocol effects in randomized trials Miguel Hernán DEPARTMENTS OF EPIDEMIOLOGY AND BIOSTATISTICS Intention-to-treat analysis (estimator) estimates intention-to-treat

More information

Special Features of Randomized Controlled Trials

Special Features of Randomized Controlled Trials Special Features of Randomized Controlled Trials Bangkok 2006 Kenneth F. Schulz, PhD, MBA Critical Methodological Elements in RCTs Randomization Avoiding and handling exclusions after trial entry Blinding

More information

Chapter Three Research Methodology

Chapter Three Research Methodology Chapter Three Research Methodology Research Methods is a systematic and principled way of obtaining evidence (data, information) for solving health care problems. 1 Dr. Mohammed ALnaif METHODS AND KNOWLEDGE

More information

PHO MetaQAT Guide. Critical appraisal in public health. PHO Meta-tool for quality appraisal

PHO MetaQAT Guide. Critical appraisal in public health. PHO Meta-tool for quality appraisal PHO MetaQAT Guide Critical appraisal in public health Critical appraisal is a necessary part of evidence-based practice and decision-making, allowing us to understand the strengths and weaknesses of evidence,

More information

School of Dentistry. What is a systematic review?

School of Dentistry. What is a systematic review? School of Dentistry What is a systematic review? Screen Shot 2012-12-12 at 09.38.42 Where do I find the best evidence? The Literature Information overload 2 million articles published a year 20,000 biomedical

More information

Module 5. The Epidemiological Basis of Randomised Controlled Trials. Landon Myer School of Public Health & Family Medicine, University of Cape Town

Module 5. The Epidemiological Basis of Randomised Controlled Trials. Landon Myer School of Public Health & Family Medicine, University of Cape Town Module 5 The Epidemiological Basis of Randomised Controlled Trials Landon Myer School of Public Health & Family Medicine, University of Cape Town Introduction The Randomised Controlled Trial (RCT) is the

More information

Assignment 4: True or Quasi-Experiment

Assignment 4: True or Quasi-Experiment Assignment 4: True or Quasi-Experiment Objectives: After completing this assignment, you will be able to Evaluate when you must use an experiment to answer a research question Develop statistical hypotheses

More information

The comparison or control group may be allocated a placebo intervention, an alternative real intervention or no intervention at all.

The comparison or control group may be allocated a placebo intervention, an alternative real intervention or no intervention at all. 1. RANDOMISED CONTROLLED TRIALS (Treatment studies) (Relevant JAMA User s Guides, Numbers IIA & B: references (3,4) Introduction: The most valid study design for assessing the effectiveness (both the benefits

More information

Representation and Analysis of Medical Decision Problems with Influence. Diagrams

Representation and Analysis of Medical Decision Problems with Influence. Diagrams Representation and Analysis of Medical Decision Problems with Influence Diagrams Douglas K. Owens, M.D., M.Sc., VA Palo Alto Health Care System, Palo Alto, California, Section on Medical Informatics, Department

More information

Epidemiologic Methods and Counting Infections: The Basics of Surveillance

Epidemiologic Methods and Counting Infections: The Basics of Surveillance Epidemiologic Methods and Counting Infections: The Basics of Surveillance Ebbing Lautenbach, MD, MPH, MSCE University of Pennsylvania School of Medicine Nothing to disclose PENN Outline Definitions / Historical

More information

AOTA S EVIDENCE EXCHANGE CRITICALLY APPRAISED PAPER (CAP) GUIDELINES Annual AOTA Conference Poster Submissions Critically Appraised Papers (CAPs) are

AOTA S EVIDENCE EXCHANGE CRITICALLY APPRAISED PAPER (CAP) GUIDELINES Annual AOTA Conference Poster Submissions Critically Appraised Papers (CAPs) are AOTA S EVIDENCE EXCHANGE CRITICALLY APPRAISED PAPER (CAP) GUIDELINES Annual AOTA Conference Poster Submissions Critically Appraised Papers (CAPs) are at-a-glance summaries of the methods, findings and

More information

How to interpret results of metaanalysis

How to interpret results of metaanalysis How to interpret results of metaanalysis Tony Hak, Henk van Rhee, & Robert Suurmond Version 1.0, March 2016 Version 1.3, Updated June 2018 Meta-analysis is a systematic method for synthesizing quantitative

More information

Guidelines for Writing and Reviewing an Informed Consent Manuscript From the Editors of Clinical Research in Practice: The Journal of Team Hippocrates

Guidelines for Writing and Reviewing an Informed Consent Manuscript From the Editors of Clinical Research in Practice: The Journal of Team Hippocrates Guidelines for Writing and Reviewing an Informed Consent Manuscript From the Editors of Clinical Research in Practice: The Journal of Team Hippocrates 1. Title a. Emphasize the clinical utility of the

More information

Clinical Epidemiology for the uninitiated

Clinical Epidemiology for the uninitiated Clinical epidemiologist have one foot in clinical care and the other in clinical practice research. As clinical epidemiologists we apply a wide array of scientific principles, strategies and tactics to

More information

Washington, DC, November 9, 2009 Institute of Medicine

Washington, DC, November 9, 2009 Institute of Medicine Holger Schünemann, MD, PhD Chair, Department of Clinical Epidemiology & Biostatistics Michael Gent Chair in Healthcare Research McMaster University, Hamilton, Canada Washington, DC, November 9, 2009 Institute

More information

Chapter 13. Experiments and Observational Studies

Chapter 13. Experiments and Observational Studies Chapter 13 Experiments and Observational Studies 1 /36 Homework Read Chpt 13 Do p312 1, 7, 9, 11, 17, 20, 25, 27, 29, 33, 40, 41 2 /36 Observational Studies In an observational study, researchers do not

More information

Generalization and Theory-Building in Software Engineering Research

Generalization and Theory-Building in Software Engineering Research Generalization and Theory-Building in Software Engineering Research Magne Jørgensen, Dag Sjøberg Simula Research Laboratory {magne.jorgensen, dagsj}@simula.no Abstract The main purpose of this paper is

More information

In this chapter we discuss validity issues for quantitative research and for qualitative research.

In this chapter we discuss validity issues for quantitative research and for qualitative research. Chapter 8 Validity of Research Results (Reminder: Don t forget to utilize the concept maps and study questions as you study this and the other chapters.) In this chapter we discuss validity issues for

More information

Conducting Strong Quasi-experiments

Conducting Strong Quasi-experiments Analytic Technical Assistance and Development Conducting Strong Quasi-experiments Version 1 May 2015 This report was prepared for the Institute of Education Sciences (IES) by Decision Information Resources,

More information

Revised Cochrane risk of bias tool for randomized trials (RoB 2.0)

Revised Cochrane risk of bias tool for randomized trials (RoB 2.0) Revised Cochrane risk of bias tool for randomized trials (RoB 2.0) Edited by Julian PT Higgins, Jelena Savović, Matthew J Page, Jonathan AC Sterne on behalf of the development group for RoB 2.0 20 th October

More information

Standard Methods for Quality Assessment of Evidence

Standard Methods for Quality Assessment of Evidence Drug Use Research & Management Program Oregon State University, 500 Summer Street NE, E35, Salem, Oregon 97301 1079 Phone 503 947 5220 Fax 503 947 1119 Standard Methods for Quality Assessment of Evidence

More information

Results. NeuRA Hypnosis June 2016

Results. NeuRA Hypnosis June 2016 Introduction may be experienced as an altered state of consciousness or as a state of relaxation. There is no agreed framework for administering hypnosis, but the procedure often involves induction (such

More information

Confounding and Bias

Confounding and Bias 28 th International Conference on Pharmacoepidemiology and Therapeutic Risk Management Barcelona, Spain August 22, 2012 Confounding and Bias Tobias Gerhard, PhD Assistant Professor, Ernest Mario School

More information

Evidence-based practice tutorial Critical Appraisal Skills

Evidence-based practice tutorial Critical Appraisal Skills Evidence-based practice tutorial Critical Appraisal Skills Earlier evidence based practice tutorials have focussed on skills to search various useful sites on the internet for evidence. Anyone who has

More information

Studying the effect of change on change : a different viewpoint

Studying the effect of change on change : a different viewpoint Studying the effect of change on change : a different viewpoint Eyal Shahar Professor, Division of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona

More information

What is indirect comparison?

What is indirect comparison? ...? series New title Statistics Supported by sanofi-aventis What is indirect comparison? Fujian Song BMed MMed PhD Reader in Research Synthesis, Faculty of Health, University of East Anglia Indirect comparison

More information

Study Design. Svetlana Yampolskaya, Ph.D. Summer 2013

Study Design. Svetlana Yampolskaya, Ph.D. Summer 2013 Study Design Svetlana Yampolskaya, Ph.D. Summer 2013 Study Design Single point in time Cross-Sectional Research Multiple points in time Study only exists in this point in time Longitudinal Research Study

More information