Comparative Effectiveness Research Collaborative Initiative (CER-CI) PART 1: INTERPRETING OUTCOMES RESEARCH STUDIES FOR HEALTH CARE DECISION MAKERS


ASSESSING NETWORK META-ANALYSIS STUDIES: A PROPOSED MEASUREMENT TOOL FOR HEALTH CARE DECISION MAKERS

Interpreting Indirect Treatment Comparison Studies for Health Care Decision Makers Task Force Forum: Monday, June 4th, ISPOR 17th International Meeting, Washington, DC (AMCP/ISPOR/NPC)

Moderator and Speakers:
Jeroen Jansen, PhD, Chair, Interpreting Indirect Treatment Comparison Studies for Health Care Decision Makers Task Force; VP, Health Economics & Outcomes Research, MAPI Consultancy, Boston, MA, USA
Jessica Daw, PharmD, MBA, Member, Interpreting Indirect Treatment Comparison Studies for Health Care Decisions Task Force; Director, Clinical Pharmacy, UPMC Health Plan, Pittsburgh, PA, USA

Forum, June 4, 2012

Agenda
- Background
- Objective
- Process
- Draft Assessment Tool
- Next steps
- Q&A

Background
[Bar charts, not reproduced: responses from an online survey of Xcenda's Managed Care Network (September 2010; N = 49) to "Over the next 2 years, how likely is your plan to invest more in CER?" and "Over the next 2 years, how likely is your plan to request more CER from industry?", each rated on a scale from 1 (not likely) to 5 (very likely).]

Background
- Clinicians, patients, and health-policy makers often need to decide which treatment is best.
- There is growing interest in measures and methodologies to help ensure that health care decision making is better informed by the results of relevant evidence.
- The number of available treatments tends to increase over time. Unfortunately, robustly designed RCTs that simultaneously compare all interventions of interest are almost never available.
- Network meta-analysis extends standard pairwise meta-analysis by including multiple pairwise comparisons across a range of interventions.
- Network meta-analysis provides indirect comparisons for interventions that have not been studied head-to-head.

Background (cont'd)
- The number of published indirect treatment comparisons and network meta-analyses is increasing.
- Given the relevance of network meta-analysis for informing health care and coverage decision making, there is great interest in improving decision makers' understanding of these studies and increasing their use.
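The anchored indirect comparison underlying network meta-analysis can be illustrated with the classic Bucher approach: when trials compare A versus C and B versus C, an indirect A-versus-B estimate is obtained through the common comparator C. A minimal sketch in Python; all numbers are hypothetical illustration values, not data from this forum:

```python
import math

# Bucher-style anchored indirect comparison on the log odds-ratio scale.
# All inputs below are hypothetical illustration values.

def indirect_comparison(d_AC, se_AC, d_BC, se_BC):
    """Indirect A-vs-B effect via common comparator C.

    d_AC: log odds ratio of A vs C; d_BC: log odds ratio of B vs C.
    Returns the indirect log odds ratio of A vs B and its standard error.
    """
    d_AB = d_AC - d_BC                      # effects anchored on C cancel out
    se_AB = math.sqrt(se_AC**2 + se_BC**2)  # variances add, so precision is lower
    return d_AB, se_AB

d_AB, se_AB = indirect_comparison(d_AC=-0.50, se_AC=0.15, d_BC=-0.20, se_BC=0.18)
lo, hi = d_AB - 1.96 * se_AB, d_AB + 1.96 * se_AB
print(f"indirect log OR (A vs B): {d_AB:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Note that the variance of the indirect estimate is the sum of the two direct variances, which is why indirect evidence is typically less precise than a head-to-head trial of the same size.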

Objective
To develop a measurement tool for the assessment of the credibility (validity) and relevance of a network meta-analysis.

AMCP, NPC, ISPOR Comparative Effectiveness Research Collaborative Initiative (CER-CI)
The aim is to enhance the usefulness of CER to improve patient health outcomes:
- Guidance and practical tools to help P&T members critically appraise CER studies to inform decision making
- Guidance to industry on what kinds of evidence payers want to see and how that evidence will be considered in decision making
- Greater uniformity and transparency in the use and evaluation of CER for coverage and decision making
Joint AMCP, ISPOR, NPC Meeting, October 22, 2010

Comparative Effectiveness Research Collaborative Initiative (CER-CI), Part 1
Develop a set of assessment tools to assess the credibility and relevance of non-experimental studies:
- Prospective observational studies
- Retrospective observational studies
- Modeling studies
- Network meta-analysis
Specific questions pertinent to the type of study; consistency across the different tools.

Interpreting Indirect Treatment Comparison Studies for Health Care Decision Makers Task Force (AMCP, NPC, ISPOR)
Chair: Jeroen Jansen, PhD, VP, Health Economics & Outcomes Research, MAPI Consultancy, Boston, MA, USA
Members:
- Sherry Andes, BSPharm, PharmD, BCPS, BCPP, PAHM, Clinical Pharmacist, Drug Information Services, informedRx, an SXC Company, Louisville, Kentucky, USA
- Jessica Daw, PharmD, MBA, Director, Clinical Pharmacy, UPMC Health Plan, Pittsburgh, PA, USA
- Vijay Joish, BS Pharm, MS, PhD, Deputy Director, HEOR, Bayer Pharmaceuticals, Wayne, New Jersey, USA
- Joseph C. Cappelleri, PhD, MS, MPH, Senior Director, Pfizer Inc, New London, Connecticut, USA
- Thomas Trikalinos, MD, PhD, Research Associate in Health Services Policy & Practice, Brown University, Providence, Rhode Island, USA
- Georgia Salanti, PhD, Assistant Professor, Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece

Steps for Development of the Tool
1. Outlining requirements ("desiderata"):
   - Relevant for decision making
   - Total evidence base
   - Appropriate statistical methods
   - Transparency
   - Understandable and easy to use
2. Draft items/questions along with a glossary
3. Modified Delphi approach
4. Modification (if needed)

Key Decisions
- Level of expertise needed
- Granularity
- Organization
- Scoring

Level of Expertise Required of the Assessor
- A basic understanding of study design and elementary epidemiological concepts is required.
- Some jargon is retained in the tool:
  - To gain acceptance from a wide range of users, including experts
  - To allow assessors to become familiar with the language of ITC/NMA
- It is expected that, after some practice, non-experts will understand the key terms used in the instrument.
- A glossary and supporting document explain the concepts in a non-technical manner.

Granularity
- The tool needs to be sufficiently specific to help decision makers judge validity and relevance.
- More specific questions will help non-experts in their assessment.
- Too many questions will compromise the practicality of the tool.

Networks of Evidence
Source: Jansen JP, Fleurence R, Devine B, et al. Interpreting indirect treatment comparisons and network meta-analysis for health-care decision making: report of the ISPOR Task Force on Indirect Treatment Comparisons Good Research Practices: part 1. Value Health 2011;14:417-28.

Two Main Questions
Credible? (Internal validity)
- The extent to which the study appropriately answers the question it is designed or intended to answer.
- The extent to which the results are affected by bias due to the design of the study and the conduct of the analysis.
Relevant?
- Relevance addresses the extent to which the results of the study, if accurate, apply to the setting of interest to the decision maker.
- Addresses issues of population, interventions, endpoints, and policy-relevant differences.

Organization
By domain?
- Credibility (internal validity)
- Relevance
By journal article format / reporting sequence?
- Abstract, Background, Methods, Findings, Interpretation
Other?

Draft Instrument: 3 Categories, 27 Questions
- Relevance for the decision problem: 5 questions (1 on validity; 4 on relevance). Focus areas: population, intervention, outcomes.
- Assessment of adequacy of methodology and validity: 14 questions on validity. Focus areas: good search practices; quality of included studies; outcome selection; evidence network; systematic differences in effect modifiers across comparisons; consistency; appropriate statistical methods; explaining heterogeneity.
- Completeness of reporting: 8 questions on relevance. Focus areas: presentation of the network; individual study results; results of the NMA for all comparisons; direct effects separate from indirect effects; ranking; subgroup effects; conclusions in line with results and limitations.

Relevance for the Decision Problem
1. Has the target population been clearly defined?
2. Is the target population relevant for the decision problem?
3. Have all the interventions relevant for the decision problem been included? (Decision Set)
4. Have additional interventions been included beyond those relevant for the decision problem? (Evidence Set)
5. Have relevant outcomes been included?

Decision Set vs. All Evidence Set

Assessment of Adequacy of Methodology and Validity
6. Were good systematic literature search and study selection practices followed?
7. Have all randomized controlled trials involving at least two treatments of the Evidence Set for the population of interest been included?
8. Is there a possibility of bias induced by the compromised quality of included studies?
9. Is there a possibility of bias induced by the selection of outcomes?

Assessment of Adequacy of Methodology and Validity (cont'd)
10. Are the treatments compared in the network meta-analysis all part of one connected network of trials?
11. Are there systematic differences in treatment-effect modifiers across the different types of treatment comparisons in this network?* Was this justified by the researchers before seeing the results of the individual studies?

Imbalance in Treatment-Effect Modifiers Causes Bias in Indirect Comparisons
Source: Jansen JP, Schmid CH, Salanti G. Directed acyclic graphs can help understand bias in indirect and mixed treatment comparisons. J Clin Epidemiol (in press).

Imbalance in Treatment-Effect Modifiers Causes Bias in Indirect Comparisons
Source: Jansen JP. Network meta-analysis of individual and aggregate level data. (under review)

Assessment of Adequacy of Methodology and Validity (cont'd)
12. Were statistical methods used that synthesize the results of trial comparisons? (No "naive comparisons.")
13. If direct and indirect comparisons are available for pairwise contrasts (i.e., closed loops), was agreement (i.e., consistency) evaluated or discussed?
14. In the presence of consistency between direct and indirect evidence, were both included in the analysis?
15. Were any systematic differences in potential treatment-effect modifiers across comparisons and (unexplained) inconsistency taken into account in the network meta-analysis?

Imbalance in Treatment-Effect Modifiers Causes Inconsistency in Mixed Treatment Comparisons
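The consistency check raised in question 13 can be sketched as follows: in a closed A-B-C loop, the direct A-versus-B estimate is compared with the indirect estimate formed via comparator C, and the difference (the inconsistency factor) is tested against zero. A minimal sketch with hypothetical numbers, not results from any study discussed here:

```python
import math

# Consistency check in a closed A-B-C loop (a Bucher-style z-test).
# Effects are on the log odds-ratio scale; all numbers are hypothetical.

def inconsistency_test(d_direct, se_direct, d_indirect, se_indirect):
    """Return the inconsistency factor, its SE, and a two-sided z statistic."""
    w = d_direct - d_indirect                        # inconsistency factor
    se_w = math.sqrt(se_direct**2 + se_indirect**2)  # independent sources, so variances add
    z = w / se_w
    return w, se_w, z

w, se_w, z = inconsistency_test(d_direct=-0.40, se_direct=0.12,
                                d_indirect=-0.30, se_indirect=0.23)
print(f"inconsistency factor {w:.2f} (SE {se_w:.2f}), z = {z:.2f}")
# |z| < 1.96 suggests no statistically significant disagreement, although a
# non-significant test does not prove consistency (the test has low power).
```

The low power of such tests is one reason the tool also asks (question 15) whether differences in effect modifiers across comparisons were examined, rather than relying on the statistical test alone.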

Assessment of Adequacy of Methodology and Validity (cont'd)
16. Was a valid rationale provided for the use of random-effects or fixed-effects models?
17. If a random-effects model was used, were assumptions about heterogeneity explored or discussed?
18. If there are indications of heterogeneity, were subgroup analyses or meta-regression analyses with pre-specified covariates performed?

Completeness of Reporting
19. Is a graphical or tabular representation of the evidence network provided, with information on the number of RCTs per direct comparison?
20. Are relevant study and patient characteristics (e.g., effect modifiers) reported or available?
21. Are the individual study results reported?
22. Are direct results reported separately?
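The fixed-effects versus random-effects choice behind question 16 hinges on between-study heterogeneity, often summarized by the variance tau^2. One common estimator is DerSimonian-Laird; a minimal sketch for a single pairwise comparison, using hypothetical study estimates rather than data from the forum:

```python
import math

# DerSimonian-Laird random-effects pooling for one pairwise comparison.
# Study effects are hypothetical log odds ratios with standard errors.

effects = [-0.55, -0.30, -0.80, -0.10]
ses = [0.20, 0.25, 0.30, 0.22]

def dersimonian_laird(effects, ses):
    """Return (pooled effect, its SE, tau^2) under a random-effects model."""
    w = [1 / s**2 for s in ses]                                  # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_re = [1 / (s**2 + tau2) for s in ses]                      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, math.sqrt(1 / sum(w_re)), tau2

pooled, se, tau2 = dersimonian_laird(effects, ses)
print(f"pooled log OR {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```

A tau^2 meaningfully above zero indicates that a fixed-effects model understates uncertainty, which is exactly the situation questions 17 and 18 probe: was the heterogeneity explored, and was it explained with pre-specified covariates?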

Completeness of Reporting (cont'd)
23. Are all pairwise contrasts between interventions, as obtained with the network meta-analysis, reported along with measures of uncertainty?
24. Is a ranking of interventions provided, given the reported treatment effects and their uncertainty, by outcome?
25. Are subgroup effects or the impact of important patient characteristics on treatment effects reported?
26. Are conclusions in line with the research questions, the results, and the validity/limitations of the analyses?

Probabilistic Interpretation of the Uncertainty
Source: Salanti G, Ades AE, Ioannidis JP. Graphical methods and numerical summaries for presenting results from multiple-treatment meta-analysis: an overview and tutorial. J Clin Epidemiol 2011;64:163-71.
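The probabilistic ranking behind question 24 can be sketched by drawing repeatedly from (approximately normal) distributions of the treatment effects and counting how often each treatment ranks best. The effects and standard errors below are hypothetical, with lower values taken to mean a better outcome:

```python
import random

# Rank probabilities from simulated draws of treatment effects.
# Effects are hypothetical log odds ratios versus reference treatment C
# (lower is better); C itself is fixed at zero.
random.seed(1)
treatments = {"A": (-0.50, 0.15), "B": (-0.30, 0.20), "C": (0.0, 0.0)}

n_draws = 20_000
best_counts = {t: 0 for t in treatments}
for _ in range(n_draws):
    draws = {t: random.gauss(mu, se) for t, (mu, se) in treatments.items()}
    best = min(draws, key=draws.get)   # lowest effect ranks first
    best_counts[best] += 1

for t in sorted(best_counts):
    print(f"P({t} is best) = {best_counts[t] / n_draws:.2f}")
```

This is the intuition behind rank-probability plots such as those in the Salanti et al. tutorial cited above: the ranking carries the uncertainty of the effect estimates with it, so a treatment can rank first most often while still having a substantial chance of ranking lower.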

Scoring
- Each question will be scored Yes / No / Unclear, with annotation if necessary.
- No weighting of items or quantitative scoring.
- A total score for validity (credibility) and a total score for relevance, based on the number of items scored Yes.
- Fatal flaws will be based on the scoring of some of the validity items.

Next Steps
- Finalize the glossary and supporting document
- Circulate to external experts in NMA for feedback
- Circulate to a wider audience of decision makers and the ISPOR community
- Write the manuscript
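The scoring scheme above can be sketched as a simple tally: count the Yes answers per category, and flag a fatal flaw when a designated validity item is scored No. Which items belong to each category, and which count as fatal flaws, are hypothetical placeholders here; the forum did not finalize that mapping:

```python
# Tallying the Yes/No/Unclear scoring scheme described above.
# The fatal-flaw item set is illustrative, not taken from the tool.

FATAL_FLAW_ITEMS = {10, 12}   # hypothetical: disconnected network, naive comparison

def score(answers, validity_items, relevance_items):
    """answers: {item_number: 'Yes'|'No'|'Unclear'}. Returns a summary dict."""
    yes = {i for i, a in answers.items() if a == "Yes"}
    return {
        "validity_score": len(yes & validity_items),
        "relevance_score": len(yes & relevance_items),
        "fatal_flaw": any(answers.get(i) == "No" for i in FATAL_FLAW_ITEMS),
    }

answers = {1: "Yes", 2: "Yes", 5: "Yes", 10: "Yes", 12: "No", 13: "Unclear"}
summary = score(answers, validity_items={10, 12, 13}, relevance_items={1, 2, 5})
print(summary)   # {'validity_score': 1, 'relevance_score': 3, 'fatal_flaw': True}
```

Note how the unweighted tally matches the stated design: Unclear answers simply do not add to a score, and a single No on a fatal-flaw item overrides an otherwise high validity count.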

Questions?
Thank you
http://www.ispor.org/taskforces/interpretingorsforhcDecisionMakersTFx.asp