Statistical considerations in indirect comparisons and network meta-analysis

Statistical considerations in indirect comparisons and network meta-analysis
Said Business School, Oxford, UK, March 18-19, 2013
Cochrane Comparing Multiple Interventions Methods Group, Oxford Training event, March 2013

Handout S4-L: Validity of indirect comparisons
Deborah Caldwell, School of Social and Community Medicine, University of Bristol

Acknowledgements: Georgia Salanti

Criticism of indirect comparison
An indirect comparison respects randomisation, but it is not randomized evidence: the treatment comparisons are not randomly assigned across studies. Indirect comparison is a special type of regression (using the comparison as the explanatory variable), and meta-regression and subgroup analysis provide observational evidence, as the characteristic they regress on hasn't been randomized across studies.
Is direct evidence preferable to indirect evidence? Should we use indirect comparison only in the absence of direct evidence?

Assumption underlying indirect/mixed comparison
A single assumption underlies indirect and mixed comparison. Its conceptual definition is transitivity; its manifestation in the data is consistency.

Transitivity
An underlying assumption when the indirect estimate μ^I_BC is calculated is that one can learn about B versus C via A, i.e. that the anchor treatment A is transitive. This is sometimes an untestable assumption, but you can evaluate its plausibility clinically and epidemiologically.

Transitivity assumption
In the literature this assumption has often been referred to as the similarity assumption (e.g. Song, BMJ 2009; Donegan et al., PLoS One 2010).
1. The term transitivity better describes the aim of the assumption (to compare two treatments via a third one) (Salanti, 2012).
2. Similarity reduces to homogeneity for a single head-to-head comparison, whereas transitivity clearly refers to more than two comparisons.
3. Similarity may wrongly suggest that similarity is required for all characteristics of trials and patients across the evidence base, when in reality a valid indirect comparison can be obtained even when studies are dissimilar in characteristics that are not effect modifiers.
The violation of this assumption is often referred to in statistical models as treatment-by-trial interaction.

Transitivity requires... (1)
The anchor treatment A must be similarly defined when it appears in AB and AC trials, e.g. a treatment given at different doses but with no systematic difference in the average dose of A across the AB and AC comparisons. Is this plausible when A is a placebo given in different forms or via different mechanisms, e.g. injection versus pill?

Transitivity requires... (1)
Example: When comparing different fluoride treatments, a comparison between fluoride toothpaste and fluoride rinse can be made via placebo. However, placebo toothpaste and placebo rinse might not be comparable, as the mechanical function of brushing might have a different effect on the prevention of caries. If this is the case, the transitivity assumption is doubtful (Salanti 2009).
Note that transitivity is violated when the anchor treatment differs systematically between trials (not randomly). Consequently, the definition of the nodes in the treatment network is a challenging issue with important implications for the joint analysis.
[Network diagram: Toothpaste, Rinse, Gel, Varnish, No treatment, Placebo]

Transitivity requires... (2)
That AC trials do not have B arms and AB trials do not have C arms. Another way to think about the transitivity assumption is to consider these missing arms missing at random (Lu and Ades 2006). However, evidence in many medical areas shows the choice of comparator is not always random (Heres et al., Am J Psychiatry 2006; Rizos et al., JCE 2011; Salanti et al., Ann Intern Med 2008): often placebo or a suboptimal intervention is preferred to a more realistic alternative such as an established effective treatment. If the choice of the comparator is associated, directly or indirectly, with the relative effectiveness of the interventions, then the assumption of transitivity will be violated.

Transitivity requires... (3)
...that AC and AB trials do not differ with respect to the distribution of effect modifiers.
[Figure: effectiveness plotted against a trial-level variable for treatments A, B and C, shown for AC and AB trials.]

Transitivity requires... (3)
This formulation facilitates evaluation of the transitivity assumption, e.g. by examining the distribution of effect modifiers of the relative treatment effects in AC and AB trials. Clinicians and methodologists who aim to synthesize evidence from many comparisons should identify possible effect modifiers a priori and compare their distributions across comparisons.
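Comparing effect-modifier distributions across comparisons can be sketched in a few lines; the trial-level mean ages below are made-up values for illustration only:

```python
from statistics import mean, stdev

# Hypothetical trial-level mean ages (a presumed effect modifier),
# grouped by the comparison each trial informs
trials = {
    "A vs B": [62, 65, 59, 64],
    "A vs C": [63, 61, 66, 60],
}

summary = {}
for comparison, ages in trials.items():
    summary[comparison] = (mean(ages), stdev(ages))
    print(f"{comparison}: mean age {summary[comparison][0]:.1f} "
          f"(SD {summary[comparison][1]:.1f})")

# Similar distributions across comparisons support (but cannot prove)
# transitivity; a marked imbalance would call it into question.
```

In practice the same tabulation would be done for every pre-specified effect modifier (age, dose, disease severity, year of randomisation) and every comparison in the network.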

Transitivity requires... (3)
[Figure: distributions of an effect modifier across trials, shown separately for the Placebo vs B and Placebo vs C comparisons.]

[Figure: Fluorides, characteristics of placebo-controlled trials — year of randomisation (1954-1994) by treatment: toothpaste (T), gel (G), rinse (R), varnish (V).]

Transitivity requires... (3)
This formulation facilitates evaluation of the transitivity assumption: examine the distribution of effect modifiers of the relative treatment effects for similarity in AC and AB trials. Clinicians and methodologists who aim to synthesize evidence from many comparisons should identify possible effect modifiers a priori and compare their distributions across comparisons.
It is important to note, however, that the transitivity assumption holds for the mean effect sizes μ^D_AB and μ^D_AC, not for individual study results; that is, it concerns the mean summary effects for AC and AB. Consequently, an effect modifier that differs across studies belonging to the same comparison but has a similar distribution across comparisons will not violate the transitivity assumption. For example, if age is an effect modifier and AC trials differ in terms of mean age of participants (which will present as heterogeneity in the AC studies), but the same variability is observed in the set of AB trials, then transitivity may hold even though age is an effect modifier.

Transitivity requires... (3)
[Figure: effectiveness by age for Alfacalcidol + Ca, Vitamin D + Ca, Calcitriol + Ca, and Ca alone; Richy et al., Calcif Tissue 2005.]

Transitivity requires... (4)
...that all treatments are jointly randomizable. This consideration is fundamental and should be addressed when building the evidence network.
The assumption of transitivity could be violated if interventions have different indications. Example: treatment A is a chemotherapy regimen administered as a second-line treatment, whereas treatments B and C can be given as either first- or second-line treatment; we cannot assume that participants in a BC trial could have been randomized in an AC trial!
Treatments can be comparable in theory but not in practice. Example: interferon and natalizumab are used for relapsing-remitting MS patients, whereas mitoxantrone is used for patients with progressive disease. However, the evidence supporting this clinical tradition is not solid, and it would be appealing to compare the three treatments.

Consistency
Direct and indirect evidence are in agreement. In a triangle of treatments A, B and C, the indirect estimate μ^I_BC and the direct estimate μ^D_BC can be combined into a mixed estimate μ^M_BC.

Consistency
Direct and indirect evidence are in agreement:
μ^D_AC − μ^D_AB = μ^I_BC = μ^D_BC
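Numerically, the anchored indirect estimate and its uncertainty follow directly from the two direct estimates. A minimal sketch, using hypothetical effect sizes and standard errors chosen purely for illustration:

```python
from math import sqrt

# Hypothetical direct estimates (e.g. log odds ratios) and standard errors
mu_d_AB, se_AB = 0.30, 0.10   # direct A vs B
mu_d_AC, se_AC = 0.55, 0.12   # direct A vs C

# Anchored indirect comparison of B vs C via the common comparator A
mu_i_BC = mu_d_AC - mu_d_AB
# Variances of independent estimates add; randomisation within each
# trial is respected, but the contrast itself is not randomized
se_i_BC = sqrt(se_AB**2 + se_AC**2)

ci_BC = (mu_i_BC - 1.96 * se_i_BC, mu_i_BC + 1.96 * se_i_BC)
print(f"indirect B vs C: {mu_i_BC:.2f}, SE {se_i_BC:.3f}, "
      f"95% CI ({ci_BC[0]:.2f}, {ci_BC[1]:.2f})")
```

Because the variances add, the indirect estimate is always less precise than either of the direct estimates it is built from.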

Consistency = transitivity across a loop
In a simple triangular loop, consistency holds when transitivity can be assumed for at least two of the three nodes A, B and C (since if A and B are transitive, then C is transitive as well).

Consistency means...
- That each treatment in the loop pertains to a fixed definition, independently of its comparator.
- That the missing treatments in each trial in the loop are missing at random.
- That all sets of trials grouped by comparison are similar with respect to the distribution of effect modifiers.
- That there are no differences between observed and unobserved effects for any comparison in the loop beyond those attributed to heterogeneity.

[Figure: study-specific effects θ_i,AC across the AC, BC and AB comparisons, showing the observed arms and the arms that would have been observed had they been included.]
Consistency: observed and unobserved estimates do not differ beyond what can be explained by heterogeneity.

Statistical consistency
Consistency is a property of a closed loop (a path that starts and ends at the same node) or cycle (as in graph theory). A statistically significant difference between μ^D_BC and μ^I_BC typically defines statistical inconsistency. Consistency can be evaluated statistically by comparing μ^D_BC and μ^I_BC in a simple z-test (often called the Bucher method). Alternatively, one can estimate the inconsistency as IF = μ^D_BC − μ^I_BC (often called the inconsistency factor) and its confidence interval. If consistency holds, it may be reasonable to pool μ^D_BC and μ^I_BC.

Estimating inconsistency
In an ABC loop of evidence:
IF = μ^I_BC − μ^D_BC (and equivalently via the AC or AB side of the loop)
var(IF) = var(μ^I_BC) + var(μ^D_BC)
95% CI: IF ± 1.96 √var(IF)
If the 95% CI excludes zero, there is statistically significant inconsistency. A test for H0: IF = 0 is
z = IF / √var(IF) ~ N(0,1)

Example: Gel versus Toothpaste
Indirect SMD_GvsT = −0.15 (−0.27, −0.03)
Direct SMD_GvsT = 0.04 (−0.17, 0.25)
Inconsistency factor = 0.19 (−0.05, 0.43)
Is it important? You can apply a z-test: z = 1.55, p = 0.12.
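The Gel versus Toothpaste numbers above can be reproduced in a few lines; this is a minimal sketch of the Bucher approach, back-calculating the standard errors from the reported 95% confidence intervals:

```python
from math import sqrt
from statistics import NormalDist

def se_from_ci(lo, hi):
    """Back-calculate a standard error from a 95% confidence interval."""
    return (hi - lo) / (2 * 1.96)

# Gel vs Toothpaste (SMD) as reported on the slide
smd_ind, se_ind = -0.15, se_from_ci(-0.27, -0.03)  # indirect estimate
smd_dir, se_dir = 0.04, se_from_ci(-0.17, 0.25)    # direct estimate

IF = smd_dir - smd_ind               # inconsistency factor
se_IF = sqrt(se_ind**2 + se_dir**2)  # variances of independent estimates add
ci_IF = (IF - 1.96 * se_IF, IF + 1.96 * se_IF)
z = IF / se_IF
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"IF = {IF:.2f} (95% CI {ci_IF[0]:.2f} to {ci_IF[1]:.2f})")
print(f"z = {z:.2f}, p = {p:.2f}")  # matches the slide up to rounding
```

The confidence interval for IF crosses zero and p = 0.12, so the test does not detect statistically significant inconsistency here — but recall from the later slides that a non-significant result is not proof of consistency.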

Consistency in practice
There are examples of indirect comparisons in the literature where, the key assumption not having been met, authors have concluded that indirect comparisons may not be valid (e.g. Chou 2006). Fears persist that indirect comparisons may systematically over- or under-estimate treatment effects compared to direct comparisons (Bucher 1997, Mills 2011). However, such concerns may be misplaced: given that inconsistency is a property of a loop of evidence, an apparent over-estimation of treatment efficacy on one side of the triangle network (e.g. μ^I_BC) may represent an under-estimation on another (μ^I_AC).

Empirical evidence
Song (2011) examined 112 independent 3-treatment networks and detected 16 cases of statistically significant discrepancies.
Veroniki et al. (2013) examined 315 loops, and up to 10% were inconsistent; the result depends on the estimator of heterogeneity, and inconsistency is more probable in loops with comparisons informed by a single study.
Veroniki et al. (2013) also examined 40 networks, and one in eight was found statistically inconsistent.

Issues with statistical estimation of consistency (1)
In a traditional meta-analysis, a statistically non-significant Q test should not be interpreted as evidence of homogeneity. Similarly, a non-significant inconsistency test result should not be taken as proof of the absence of inconsistency; the methodological and clinical plausibility of the consistency assumption should be considered further. The test for inconsistency may have low power, so the analyst must be extremely cautious when interpreting non-significant IFs.
The lack of direct evidence (an "open" triangle) makes the statistical evaluation of consistency impossible, but the transitivity assumption is still needed to derive the indirect estimate!

Issues with statistical estimation of consistency (2)
Inference from the test depends on:
- The amount of heterogeneity
- Whether random or fixed effects are used to derive direct estimates
- The estimator of heterogeneity (MM, REML, SJ, etc.)
- Whether the same or different heterogeneity parameters are used for the three comparisons AB, AC, BC

Statistical consistency and heterogeneity
[Figure: forest plots of the AC, BC, AB and indirect AB estimates under (a) fixed-effect analysis and (b) random-effects analysis.]

What to do when statistically significant inconsistency is found?
Action: Check the data
- Heterogeneity: studies that stand out in the forest plot are checked for data extraction errors.
- Inconsistency: using simple loop inconsistency you can identify studies with data extraction errors; inconsistency in loops where a comparison is informed by a single study is particularly suspicious for data errors.
Action: Try to bypass it
- Heterogeneity: there is empirical evidence that some effect measures are associated with larger heterogeneity than others (Deeks 2002; Friedrich et al. 2011).
- Inconsistency: empirical evidence suggests that the choice of effect measure for dichotomous outcomes does not impact statistical inconsistency (Veroniki et al. 2013).

What to do when statistically significant inconsistency is found?
Action: Resign to it
- Heterogeneity: investigators may decide not to undertake meta-analysis in the presence of excessive heterogeneity.
- Inconsistency: investigators may decide not to synthesize the network in the presence of excessive inconsistency.
Action: Encompass it
- Heterogeneity: apply random-effects meta-analysis.
- Inconsistency: apply models that relax the consistency assumption by adding an extra loop-specific random effect (Higgins et al. 2012; Lu & Ades 2006).*
*However, just as random effects are not a remedy for excessive heterogeneity and should be applied only for unexplained heterogeneity, inconsistency models should be employed to reflect inconsistency in the results, not to adjust for it.

What to do when statistically significant inconsistency is found?
Action: Explore it
- Heterogeneity: use pre-specified variables in a subgroup analysis or meta-regression.
- Inconsistency: split the network into subgroups or use network meta-regression to account for differences across studies and comparisons. Specify the variables in the protocol, including bias-related characteristics.

Summary
- The assumption of consistency underlies the indirect and mixed comparison process.
- Transitivity refers to the validity of the indirect comparison and can be evaluated conceptually.
- Statistical evaluation of consistency can take place in a closed loop.
- Care is needed when interpreting the results of a consistency test, as issues of heterogeneity and power may limit its usefulness.
- Conceptual evaluation of the consistency assumption should include: checking for effect modifiers that differ across comparisons; checking the definition of each node/treatment; checking whether the choice of comparators is random.

References
Salanti G. Indirect and mixed-treatment comparison, network, or multiple-treatments meta-analysis: many names, many benefits, many concerns for the next generation evidence synthesis tool. Research Synthesis Methods 2012;3(2):80.
Baker SG, Kramer BS. The transitive fallacy for randomized trials: if A bests B and B bests C in separate trials, is A better than C? BMC Med Res Methodol 2002;2(1):13.
Bucher HC, Guyatt GH, Griffith LE, Walter SD. The results of direct and indirect treatment comparisons in meta-analysis of randomized controlled trials. J Clin Epidemiol 1997;50(6):683-691.
Deeks JJ. Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes. Stat Med 2002;21(11):1575-1600.
Song F, Loke YK, Walsh T, et al. Methodological problems in the use of indirect comparisons for evaluating healthcare interventions: survey of published systematic reviews. BMJ 2009;338:b1147.
Donegan S, Williamson P, Gamble C, Tudur-Smith C. Indirect comparisons: a review of reporting and methodological quality. PLoS One 2010;5(11):e11054.
Mills EJ, Ghement I, O'Regan C, Thorlund K. Estimating the power of indirect comparisons: a simulation study. PLoS One 2011;6(1):e16237.
Song F, Xiong T, Parekh-Bhurke S, et al. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study. BMJ 2011;343.
Veroniki AA, Vasiliadis HS, Higgins JP, Salanti G. Evaluation of inconsistency in networks of interventions. Int J Epidemiol 2013.
