Prospective Observational Study Assessment Questionnaire


PROSPECTIVE OBSERVATIONAL STUDY QUESTIONNAIRE TO ASSESS RELEVANCE AND CREDIBILITY TO INFORM HEALTHCARE DECISION-MAKING: AN ISPOR-AMCP-NPC GOOD PRACTICE TASK FORCE REPORT

DRAFT REPORT FOR REVIEW, VERSION September 5, 2013

ABSTRACT

Evidence-based health-care decisions are best informed by comparisons of all relevant interventions used to treat conditions in specific patient populations. Randomized controlled trials (RCTs) are often sought after but are frequently absent or, when available, lack information on a diverse range of populations and practice settings. Increasingly, prospective observational studies are being performed to help fill these gaps. However, widespread adoption of evidence from prospective observational studies has been limited by a variety of factors, including the lack of consensus regarding accepted principles for their evaluation and interpretation. A Task Force was commissioned by ISPOR, as part of a collaboration with AMCP and NPC, to develop a questionnaire to assist decision makers in evaluating prospective observational studies. The intent was to promote a structured approach that reduces the potential for subjective interpretation of evidence and promotes consistency in decision-making. The questionnaire consists of 31 questions divided into two domains: relevance and credibility. Relevance addresses the extent to which the findings, if accurate, apply to the setting of interest to the decision maker. Credibility addresses the extent to which the study findings accurately answer the study question or hypothesis. The questionnaire provides a guide for assessing the degree of confidence that should be placed in evidence from prospective observational studies and promotes awareness of the subtleties involved in evaluating such studies. It is anticipated that user feedback will permit periodic evaluation and modification of the questionnaire. The goal is to make the questionnaire as useful as possible to the healthcare decision-making community.

Keywords: questionnaire, checklist, decision-making, validity, credibility, relevance, bias, confounding

BACKGROUND TO THE TASK FORCE

[To be added]

INTRODUCTION

Four Good Practices Task Forces developed consensus-based questionnaires to help decision makers evaluate 1) prospective and 2) retrospective observational studies, 3) network meta-analysis (indirect treatment comparison), and 4) decision-analytic modeling studies with greater uniformity and transparency. [1-3] The primary audiences for these questionnaires are assessors and reviewers of health care research studies for health technology assessment, drug formulary, and health care services decisions, who have varying levels of knowledge and expertise. This report focuses on the questionnaire to assess the relevance and credibility of prospective observational studies.

Although randomized controlled trials (RCTs) are often sought after to inform health system decisions, there is increasing recognition of the limitations of relying on RCTs alone. These studies may be absent, owing to financial, ethical, or time constraints, or they may lack sufficient information for decision-making in real-world settings with diverse populations and practice patterns. Other types of studies, such as observational, modeling, and network meta-analysis studies, are increasingly sought to fill this gap. [4] However, there may be barriers to the use of these studies because few accepted principles exist for their evaluation and interpretation. There is a need for transparent and uniform ways to assess their quality. [5] A structured approach reduces the potential for subjectivity to influence the interpretation of evidence and can promote consistency in decision-making. [6]

Previous tools, including grading systems, scorecards, and checklists, have been developed to facilitate structured approaches to critically appraising clinical research. [7, 8] Some are elaborate, requiring software and deliberation among a broad range of experts [9]; others are very simple, using scoring systems, and are best suited to randomized clinical trials. [10] The Task Force believed that a questionnaire that was easy to use and not time-consuming, given a basic understanding of epidemiology, would create awareness of issues related to alternative study designs and could be widely promoted.

Development of this questionnaire was informed by prior efforts and by several guiding principles derived from a focus group of end users (e.g., payers) that provided input at the 2012 Annual AMCP Meeting. First, the questionnaires had to be easy to use and not time-consuming for individuals without a broad range of expertise or in-depth knowledge of study design and statistics. Second, the questionnaires had to be sufficiently comprehensive to promote awareness of the appropriate application of different study designs to decision-making; we also sought to produce a questionnaire that would prompt users to obtain additional education on the underlying methodologies. Lastly, use of the questionnaire would need to be supported by comprehensive educational programs.

The Task Force defines prospective observational studies as those in which participants are not randomized or otherwise assigned to an exposure and for which the outcomes of interest occur after study commencement (which includes creation of a study protocol and analysis plan, and study initiation). They are often longitudinal in nature. Exposure to any of the interventions being studied may have been recorded before study initiation, such as when a prospective observational study uses an existing registry cohort. Exposure may include a pharmaceutical intervention, surgery, a medical device, a prescription, or a decision to treat. This definition contrasts with retrospective observational studies, which use existing data sources in which both exposure and outcomes have already occurred. [11]

Questionnaire Development

The first issue was whether the questionnaires developed by the four Task Forces (including the one discussed in this paper) should be linked to checklists, scorecards, or annotated scorecards. Concerns were raised that a scoring system may be misleading if it does not have adequate measurement properties; scoring systems have been shown to be problematic in the interpretation of randomized trials. [12] An alternative to a scorecard is a checklist. However, the Task Force members believed that checklists might also mislead users, because a study may satisfy all of the elements of a checklist and still harbor fatal flaws. Moreover, users might tend to count the number of elements present, converting it into a score, and then apply the score to their overall assessment of the evidence. In addition, the strength of a study finding may depend on other evidence that addresses the specific issue or the decision being made. A questionnaire without an accompanying score or checklist was felt to be the best way to allow analysts to be aware of the strengths and weaknesses of each piece of evidence and to apply their own reasoning. Questions were developed based on a review of items in previous questionnaires and guidance documents, previous ISPOR Task Force recommendations [11], and methods and reporting guidance. [13-16] Through user testing and consensus, items from previous efforts were grouped into conceptual domains.

The questionnaire is divided into two main sections, Relevance and Credibility, based on the key elements essential to evaluating comparative effectiveness evidence. Four identical questions were developed for the relevance section of this and the other questionnaires developed by the four Task Forces. Credibility is further divided into several key domains.

For this questionnaire to obtain broad acceptance, and based on the focus group recommendations, it was limited to approximately 30 questions. Whenever possible, efforts were made to avoid jargon and to employ similar wording across all four questionnaires. Also, as shown in Figure 1, there is substantial overlap in the design and flow of this questionnaire and the one developed for retrospective observational studies.

Figure 1

Upon completing the questions in the relevance section, users are asked to rate whether the study is sufficient or insufficient for inclusion. If a study is not considered sufficiently relevant, the user may opt to truncate the review of its credibility. In the credibility section, users rate each domain as a strength, a weakness, or neutral. Based upon these evaluations, the user then similarly rates the credibility of the research study as either sufficient or insufficient to inform decision making. For some questions in the credibility section, a user is notified that a fatal flaw has been detected. The presence of a fatal flaw suggests significant opportunities for the findings to be misleading; consequently, the decision maker should use extreme caution in applying the findings to inform decisions. However, the occurrence of a fatal flaw does not prevent a user from completing the questionnaire, nor does it require the user to judge the evidence as insufficient for use in decision making. The presence of a fatal flaw is intended to raise a strong caution and should be carefully considered when the overall body of evidence is reviewed.

The questionnaire concludes with a summary in a structured paragraph, as follows:

In evaluating this study, I made the following judgments: I found the study (relevant/not relevant) for decision making because I considered that the population/interventions/outcomes/setting (applied/did not apply) to the decision I am informing. I found the study (credible/not credible) for decision making because:
o There (were/were not any) fatal flaws, i.e., critical elements that call into question the validity of the findings. The presence of a fatal flaw suggests significant opportunities for the findings to be misleading and misinterpreted; extreme caution should be used in applying the findings to inform decisions. The following domains contained fatal flaws:
o There are strengths and weaknesses in the study: The following domains were evaluated as strengths: The following domains were evaluated as weaknesses:

Questionnaire Items

Questions that fall under the main categories of relevance and credibility appear in Table 1. Explanations of each question, along with specific definitions, are provided in the following section to facilitate understanding of the appropriate use of the questionnaire.

Table 1

Relevance

Relevance addresses whether the results of the study apply to the setting of interest to the decision maker. It addresses issues of external validity (population, comparators, endpoints, timeframe) and the direction and magnitude of difference meaningful to the decision maker. There is no correct answer for relevance. Relevance is determined by each decision maker, and the relevance assessment of one decision maker will not necessarily apply to other decision makers.

Is the population relevant?

This question addresses whether the population analyzed in the study sufficiently matches the population of interest to the decision maker. Population characteristics to consider include demographics such as age, gender, nationality, and ethnicity; risk factors such as average blood pressure, cholesterol levels, and body mass index; behaviors such as smoking; stage/severity of the condition; past and current treatments for the condition; and clinical issues such as co-morbidities.

Are any relevant interventions missing?

This question addresses whether the interventions analyzed in the study include those of interest to the decision maker and whether all relevant comparators have been considered. Intervention characteristics need to be specified at a detailed level. For technologies, this includes the device specification and the technique used (e.g., for osteoporosis screening: was dual-energy X-ray absorptiometry (DEXA) or some other scanning method used, and which sites were measured, such as spine, hip, or wrist). For drugs and biologics, were the doses, durations, and mode of administration specified? Are other pertinent issues discussed, such as the skill level of the provider (which can be particularly important for surgical interventions), post-treatment monitoring and care, and duration of follow-up? In addition, analysts should consider to what extent alternative modes of care (e.g., another intervention or standard care) match the modes of care in the decision setting.

Are the outcomes relevant?

This question asks what outcomes are assessed in the study and whether those outcomes are meaningful to the decision maker. Outcomes such as cardiovascular events (e.g., rates of myocardial infarction or stroke), patient functioning, or health-related quality of life (e.g., scores from the Short Form-36 health survey or EuroQol-5D instruments) may be more relevant than surrogate or intermediate endpoints (e.g., cholesterol levels).

Is the context (settings and practice patterns) applicable?

The context of the study refers to factors that may influence the generalizability of the study findings to other settings. Factors to consider include the study time frame, the payer setting, provider characteristics, and the geographic area. Some or all of these factors may differ from the setting to which the user wants to apply the study results; if differences in these factors are suspected to influence treatment response, this should inform the user's judgment of the extent to which the findings can be applied to another setting.

Credibility

Credibility addresses the extent to which the study accurately answers the question it is designed or intended to answer and is determined by the design and conduct of the study. It addresses issues of internal validity, error, and confounding. For example, the observed effect of a new treatment may be due to the degree to which patients were followed and their outcomes reliably measured, and not to differences in treatment effectiveness. Appropriate study design and analytic approaches can better separate the contribution of the intervention to observed outcomes from that of other factors. The credibility section of the questionnaire was divided into the following domains: design, data, analysis, reporting, interpretation, and conflicts of interest.

Design

Were the study hypotheses or goals prespecified a priori?

As stated in a prior ISPOR task force report, one strength of clinical trials is the requirement for a study protocol that specifies inclusion criteria for subjects, primary and secondary outcomes, and the analytic approach. Although there are differing views regarding a priori specification of a research hypothesis when conducting observational research, prior specification minimizes the risk of cherry-picking interesting findings and the related issue of observing spurious findings because of multiple hypothesis testing. [11] For these reasons, we recommend the practice of a priori specification of the research question, study design, and data-analysis plan in a formal study protocol to assure end users that the results were not the product of data-mining.

This is not an indictment of data-mining per se; rather, data-mining is more appropriate for hypothesis generation than for hypothesis testing. Evidence that pre-specified hypotheses were formally stated in a protocol includes registration on a publicly available website, such as clinicaltrials.gov, or evidence of a review procedure or process, which may include disclosure of an institutional review board (IRB) procedure.

Were the comparison groups concurrently observed?

Concurrent comparators are derived from the same sample of subjects and are followed over the same time frame; this approach avoids time-related confounding. Alternatively, historical controls are derived from a sample of subjects from a time period prior to the treatment being compared. Concurrent comparators add more strength to research findings, although the use of historical comparators can be justified when a treatment becomes a standard of care and nearly all subjects who could be treated receive the new treatment, or when there are perceived ethical barriers to withholding the new treatment. Thus, a "no" answer to this question does not automatically invalidate the credibility of study findings.

Was there evidence that a formal study protocol, including an analysis plan, was specified prior to executing the study?

Altering study definitions, subject selection criteria, model specification, and other study procedures can have dramatic effects on study findings and permit reported results to be influenced by investigator biases. [17-19] Ideally, study reports would be fully transparent about planned study procedures. They should report findings based on the original plan, be fully transparent regarding post-hoc changes to the analysis plan, and justify those post-hoc alterations. Unfortunately, the conventional reporting practices of observational studies are rarely detailed enough to allow readers to adequately assess what was pre-planned and what was not. [20] The best evidence that the study had an analytic plan would be a comparison of the study report with its registry data; however, only a small fraction of observational studies are registered. Alternatively, users can rely on terms commonly used when describing the methods, such as "pre-specified" and "planned analyses." The results can also indicate pre-specified objectives and analyses by declaring which results were pre-planned and which were post-hoc. If the study reports "post-hoc" analyses, the user may cautiously infer that the other analyses were pre-planned. Other indicators that an analysis plan was developed prior to study execution include the reporting of an IRB review or grant awards specific to the research study. A pre-specified analysis plan cannot be assumed when a study offers no indication that such a plan was developed beforehand.

Was sample size and statistical power to detect a difference addressed?

An observational study attempts to create a comparison across two groups of data just as its randomized counterpart does, and it therefore still requires a sample size or power calculation if the results are to be applied to a different study population. [11] Without this, the reader is left with insufficient information as to whether the detectable difference should have been expected, based on the anticipated size of the effect, in advance of the study.
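For illustration only (this sketch is not part of the Task Force questionnaire), the following code shows the kind of calculation this question looks for: the per-group sample size needed to detect a difference between two event proportions, using the standard normal-approximation formula. The proportions, alpha, and power are arbitrary planning assumptions, not values from this report.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size to detect a difference between two proportions
    (two-sided test, equal group sizes, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical planning values: 12% event risk with the comparator vs. 9% with the new treatment.
print(n_per_group(0.12, 0.09))  # roughly 1,600 subjects per group
```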

Was a study design employed to minimize or account for confounding?

Some study designs provide stronger protection against the confounding that may occur in the absence of randomization. These include inception cohorts, new-user designs, the use of multiple comparator groups, matching designs, and assessment of outcomes not thought to be affected by the interventions compared (see Box 1). A variety of considerations factor into the choice of study design, including cost, feasibility, and ethical considerations.

Box 1: Study Designs Employed to Minimize the Effect of Confounding Variables

Inception cohorts are designated groups of persons assembled at a common time early in the development of a specific clinical disorder (e.g., at first exposure to the putative cause or at initial diagnosis) who are followed thereafter.

A new-user design begins by identifying all of the patients in a defined population (in terms of both people and time) who start a course of treatment with the study medication. Study follow-up for endpoints begins at precisely the same time as initiation of therapy (t0). The study is further restricted to patients with a minimum period of non-use (washout) prior to t0.

Matching designs involve a deliberate process of making a study group and a comparison group comparable with respect to factors that are extraneous to the purpose of the investigation but that might interfere with the interpretation of the study's findings.

Assessment of outcomes thought not to be affected by the interventions compared may permit an assessment of residual confounding. [11]

Was the follow-up period of sufficient duration to detect differences?

The length of time required to detect differences varies with the condition studied and the difference in impact of the comparator interventions. The more quickly and frequently outcome events occur, the shorter the required duration of follow-up (e.g., asthma exacerbations occur more frequently than hip fractures and can therefore be detected with shorter observation periods). The duration of follow-up is related to the power of the study and its ability to detect differences. Feasibility and cost limitations may affect the duration of follow-up in prospective observational studies; however, this should not affect the assessment of the credibility of the findings.
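As a concrete, hypothetical illustration of the new-user design described in Box 1, the sketch below derives each patient's t0 from dispensing records and requires that a washout window be observable before t0. The column names, the 180-day washout, and the pandas-based approach are assumptions made for this example only.

```python
import pandas as pd

# Hypothetical dispensing records: one row per fill of the study medication.
fills = pd.DataFrame({
    "patient_id": [1, 1, 2, 3],
    "fill_date": pd.to_datetime(["2012-03-01", "2012-06-01", "2012-01-15", "2012-09-10"]),
})
# Hypothetical enrollment starts, used to verify the washout window is observable.
enroll = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "enroll_start": pd.to_datetime(["2011-01-01", "2011-12-01", "2010-05-01"]),
})

WASHOUT_DAYS = 180  # assumed minimum period of non-use before t0

# t0 = first observed fill of the study medication for each patient.
t0 = (fills.groupby("patient_id", as_index=False)["fill_date"].min()
          .rename(columns={"fill_date": "t0"}))

# Keep only patients enrolled long enough before t0 to confirm non-use during the washout.
cohort = t0.merge(enroll, on="patient_id")
cohort = cohort[cohort["t0"] - cohort["enroll_start"] >= pd.Timedelta(days=WASHOUT_DAYS)]
print(cohort[["patient_id", "t0"]])
```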

Were the sources, criteria, and methods for selecting participants appropriate to address the study questions/hypotheses?

The sources, criteria, and methods for selecting participants should be similar for the different groups of patients being assessed, whether the groups are defined by outcome (case-control studies) or by exposure (cohort studies). Bias can be introduced if patient selection varies by comparator group, or if the data source or the methods for assessing or selecting patient groups vary. [21] The data source should also provide some assurance that key measures are reliably recorded. Key questions in assessing the sources, criteria, and methods for selecting patients include whether an adequate rationale was provided for key inclusion and exclusion criteria and whether approaches were employed to ensure comparability of treatment and control groups. Approaches to ensure comparability of comparator groups include matching designs and analytic approaches such as propensity scoring. [22] These are described in detail in prior ISPOR task force reports. [11]

Data

Were the data sources sufficient to support the study?

The data sources should contain valid measures of treatment, outcomes, and covariates for confounding variables; possess a large enough sample; and cover a sufficient duration to detect differences. Often a single data source will not contain all the information necessary to conduct the study, and linkage to other data sources may be necessary. [23] This is a broad criterion to assess, as the quality of the study data and their use in executing the study are influenced by multiple factors. Questions to consider include whether all outcomes, exposures, predictors, potential confounders, and effect modifiers were clearly defined; whether the sources of data and the methods of assessment for each variable of interest were described; whether the assessment methods and/or definitions were the same across treatment groups; whether the reliability and validity of the data were described, including any data quality checks and data cleaning procedures; and whether reliable and valid data were available on important confounders or effect modifiers.

Was exposure defined and measured in a valid way?

Exposure to treatment is ideally documented by evidence that patients actually took the medication, though this is rarely available. [24] Exposure may be documented by evidence of a prescription being written, a prescription being filled, a claim being filed, or measures of medication possession. The most basic level of exposure measurement determines whether a subject received (or did not receive) a treatment, but exposure may also be defined in terms of the intensity of exposure, including the dose and/or duration of treatment(s). Less reliable evidence of treatment exposure is obtained through patient self-report methods. [25]
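The sketch below, offered only as an illustration of the propensity-scoring approach mentioned above for making comparator groups comparable, estimates a propensity score with logistic regression and forms 1:1 nearest-neighbor matches. The simulated covariates, the scikit-learn model, and the caliper are assumptions for this example, not methods specified by the Task Force.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: two baseline covariates and a non-randomized treatment indicator.
n = 500
X = rng.normal(size=(n, 2))  # e.g., standardized age and a comorbidity score
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1]))))

# Step 1: propensity score = estimated probability of treatment given covariates.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbor matching on the propensity score, without replacement.
caliper = 0.05  # assumed maximum allowed difference in propensity score
treated_idx = np.where(treated == 1)[0]
control_idx = list(np.where(treated == 0)[0])
pairs = []
for t in treated_idx:
    if not control_idx:
        break
    diffs = np.abs(ps[control_idx] - ps[t])
    j = int(np.argmin(diffs))
    if diffs[j] <= caliper:
        pairs.append((t, control_idx.pop(j)))  # each control is used at most once

print(f"{len(pairs)} matched pairs out of {treated_idx.size} treated subjects")
```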

Have the primary outcomes been defined and measured in a valid way?

Selection of primary outcomes is perhaps the most critical part of study design. [11, 18] Outcomes that are more directly observable, such as hospitalization and death, may require less sophisticated validation approaches than outcomes that are more difficult to observe or that rely on investigator classification and subjectivity, such as time to referral or medication adherence. Primary outcomes can be documented in a patient chart (paper or electronic medical record), inferred from claims (e.g., hospitalization for myocardial infarction), documented in a diary, or reported in a face-to-face interview. Documentation in a chart is considered the most reliable. [16]

Was the follow-up time similar among comparison groups, or were differences in follow-up accounted for in the analyses?

Patients may discontinue medications for many reasons, including lack of effectiveness and adverse effects. Differential follow-up between treatment groups can introduce bias into observed treatment effects. Some differences in follow-up are inevitable. A credible study will explain, as well as possible, the reasons for these differences and will use appropriate statistical techniques (e.g., censoring) to minimize the impact on hypothesis testing.

Analyses

Was there a thorough assessment of potential measured and unmeasured confounders?

The choice and effectiveness of treatments may be affected by the practice setting, the health-care environment, the experience of health care providers, and the medical history of patients (for a more detailed discussion, see the 2012 ISPOR prospective observational studies Task Force report). Treatment inferences from observational studies are all potentially biased by imbalances across treatment groups on confounding variables, whether those variables are observed or not. [11, 18] Confounding can be controlled statistically using a wide range of multivariate approaches; however, if a statistical model excludes a confounding variable, the estimates of treatment effects suffer from omitted-variable bias in nearly all analyses, except when a valid instrumental-variables approach is used. [26, 27] When assessing a research study, one should look for evidence that the authors considered all potential confounding factors and conducted a literature review to identify variables known to influence the outcome variable. The literature search strategy for identifying confounders is not typically reported in manuscripts; [14] however, a table of potential confounders with citations to previous research describing these associations is sometimes provided and would suggest an explicit search for known confounding variables. In addition to a literature search, credible research should use clinical judgment or consensus techniques to identify confounders. Often the data will not contain information on some confounding variables (e.g., race, income level, exercise level), and these will be omitted from the analysis. [26] When the analysis does not include key confounders, a credible research report will discuss their potential impact, including the direction and magnitude of the potential bias.
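A minimal sketch of how the two issues above, differential follow-up handled through censoring and adjustment for measured confounders, are often addressed together is a proportional hazards model. The lifelines library, the simulated data, and the chosen covariates are assumptions for this illustration, not methods prescribed by this report.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # one commonly used Python survival-analysis library

rng = np.random.default_rng(1)
n = 300

# Hypothetical analytic file: a treatment indicator, a measured confounder (age),
# exponential event times, and administrative censoring at 365 days.
age = rng.normal(65, 8, n)
treatment = rng.binomial(1, 0.5, n)
hazard = 0.002 * np.exp(0.4 * treatment + 0.02 * (age - 65))
event_time = rng.exponential(1 / hazard)
followup = np.minimum(event_time, 365)        # subjects without the event are censored
event = (event_time <= 365).astype(int)       # 0 = censored at end of observation

df = pd.DataFrame({"followup_days": followup, "event": event,
                   "treatment": treatment, "age": age})

# Hazard ratio for treatment adjusted for the measured confounder; censored subjects
# contribute their follow-up time without an event rather than being discarded.
cph = CoxPHFitter().fit(df, duration_col="followup_days", event_col="event")
cph.print_summary()
```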

Were analyses of subgroups and/or interaction effects reported for comparison groups?

Exploring and identifying heterogeneous treatment effects, or effect modification, is one of the potential advantages of large observational studies. Interaction occurs when the association of one exposure with the outcome differs in the presence of another exposure or factor. The most common and basic approaches for identifying heterogeneous treatment effects are to conduct subgroup analyses or to incorporate interaction terms in the analysis. [28] Interactions may be explored using additive or multiplicative approaches, in which the differences in effect depart from either the sum of the effects of the two factors or exposures, or their product. Caution is warranted when the main treatment effect is not significantly associated with the outcome but significant subgroup results are reported (which could be the result of a series of post-hoc subgroup analyses).

Were sensitivity analyses performed to assess the effect of key assumptions or definitions on outcomes?

A variety of decisions must be made in designing a study, including how populations, interventions, and outcomes are defined; how missing data are handled; how outliers are handled; and to what extent unmeasured confounders may influence the results. A credible study will indicate which of these decisions had an important impact on the results of the analyses and will report the effect of using a reasonable range of alternative choices. Key issues to consider include whether sensitivity analyses were reported using different statistical approaches and according to key definitions; whether the analysis accounted for outliers and examined their impact in a sensitivity analysis; and whether the authors discussed or conducted a sensitivity analysis to estimate the effect of unmeasured confounders on the observed difference in effect.

Reporting

When methods are described in sufficient detail, others can replicate the analysis on similar data sets or reproduce the results if given access to the study data set. An adequate description delineates all key assumptions and explains why a methodological approach was selected over alternatives. Although several checklists have been developed to address adequacy of reporting, a straightforward consideration is whether the user of a study can understand precisely how the study authors arrived at their findings.

Was the number of individuals screened at each stage of the selection process reported?

Reporting the number of individuals screened at each stage of the selection process is important for assessing potential selection bias. The final analyzable sample can be most easily interpreted when a text description or a flow diagram describes the initial pool of potential subjects and the sample remaining after each inclusion and exclusion criterion is applied. This allows the reader to more easily assess the extent to which the analyzable sample may differ from the target population and which criteria materially influenced the final sample.
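The sketch below shows one simple way to produce the stage-by-stage attrition counts described above. The selection criteria, column names, and counts are hypothetical examples invented for illustration, not requirements of the questionnaire.

```python
import pandas as pd

# Hypothetical source population with attributes used by the selection criteria.
pop = pd.DataFrame({
    "age": [45, 67, 72, 59, 81, 66, 70, 52],
    "new_user": [True, True, False, True, True, True, False, True],
    "enrolled_12mo": [True, True, True, False, True, True, True, False],
})

steps = [
    ("Initial pool", lambda d: d),
    ("Age 65 or older", lambda d: d[d["age"] >= 65]),
    ("New user of study drug", lambda d: d[d["new_user"]]),
    ("Continuously enrolled 12 months", lambda d: d[d["enrolled_12mo"]]),
]

# Apply each criterion in sequence and record the remaining sample size,
# producing the text equivalent of a study flow diagram.
current = pop
for label, criterion in steps:
    current = criterion(current)
    print(f"{label}: {len(current)} subjects remain")
```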

Were the descriptive statistics of the study participants adequately reported?

Descriptive statistics include demographics, co-morbidities, and other potential confounders reported by treatment group, which enable the reader to assess the potential for confounding. Large differences on key confounding measures may suggest a higher likelihood of residual confounding even after adjustment for observable characteristics.

Did the authors describe the statistical uncertainty of their findings?

There will always be uncertainty when evaluating outcomes in observational research, because estimates are based on population samples. Uncertainty from sampling should be presented in the form of either a Bayesian credible interval (a range of values likely to contain the true size of the effect given the data) or a confidence interval (a range of values likely to contain the true value at a stated level, e.g., 95%). P values can provide some sense of uncertainty but are not sufficient for re-analysis. Because they are a product of both uncertainty and the magnitude of the observed effect, they can be misleading when either the sample or the effect size is large.

Did the authors describe and report the key components of their statistical approaches?

The authors should fully describe their statistical approach and provide citations for any nuanced econometric or statistical methods. Questions to consider in assessing the adequacy of the statistical reporting include whether the authors used statistical techniques to examine the effect of multiple variables simultaneously (i.e., multivariate analysis) and whether they discussed how well the models predict what they are intended to predict. Authors may report statistics such as r-squared, pseudo r-squared, c-statistics, and c-indices to demonstrate the predictive capacity of the statistical model used. Other key items of statistical reporting relate to techniques used to adjust for multiple analyses of the same data, reporting of unadjusted estimates of treatment effects, and reporting of the full regression model in either the publication or an appendix. Techniques commonly employed in prospective observational studies include propensity score methods and instrumental variable methods; these more involved techniques may require more extensive reporting of their development, use, and evaluation. A guiding principle for statistical reporting is to "describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results." [18]
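As an example of one of the predictive-capacity statistics mentioned above, the sketch below computes a c-statistic (area under the ROC curve) for a fitted logistic model. The simulated data and the scikit-learn functions used are illustrative assumptions, not part of this report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Hypothetical data: two covariates and a binary outcome partly driven by them.
n = 1000
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(0.9 * X[:, 0] - 0.6 * X[:, 1])))
y = rng.binomial(1, p)

model = LogisticRegression().fit(X, y)
predicted = model.predict_proba(X)[:, 1]

# The c-statistic is the probability that a randomly chosen case receives a higher
# predicted risk than a randomly chosen non-case (0.5 indicates no discrimination).
print(f"c-statistic: {roc_auc_score(y, predicted):.2f}")
```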

Were confounder-adjusted estimates of treatment effects reported?

Confounder-adjusted estimates of treatment effects can be obtained in a variety of ways. Most commonly, treatment effects are estimated from the coefficient of the treatment variable(s) in a multivariate regression, or from systems of equations that include a set of control covariates representing potential confounders. Treatment effects may also be estimated by taking differences between propensity-matched treated and comparison subjects. Any non-randomized study must report confounder-adjusted estimates if it attempts to make inferences about treatment effects. Unadjusted estimates should also be reported to allow comparison with the adjusted results.

Was the extent of missing data reported?

Missing data create considerable opportunity to introduce bias into estimates of treatment effects. Missing data can occur in prospective observational studies, as many studies rely on secondary data sources that depend on routine data entry. Because the reason for the missing data may be related to the reason for the observed treatment effects, the extent of missing data should be reported. The potential for bias from missing data can be further explored in sensitivity analyses or through analyses that attempt to correct for missing data by making assumptions.

Were absolute and relative measures of treatment effect reported?

Reporting the effect of treatment(s) in both absolute and relative terms gives the decision maker the greatest understanding of the magnitude of the effect. [29] Absolute measures of effect include differences in proportions, means, and rates, as well as the number needed to treat (NNT) and the number needed to harm (NNH, sometimes expressed as the number needed to treat to harm, NNTH); these should be reported for a meaningful time period. Relative measures of effect include odds ratios (ORs), incidence rate ratios, relative risks, and hazard ratios (HRs).
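To make the distinction between absolute and relative measures concrete, the sketch below computes several common effect measures from a hypothetical 2x2 summary of events over a fixed follow-up period. The counts are invented solely for illustration.

```python
# Hypothetical counts over one year of follow-up.
events_treated, n_treated = 30, 1000   # 3.0% event risk on treatment
events_control, n_control = 50, 1000   # 5.0% event risk on comparator

risk_t = events_treated / n_treated
risk_c = events_control / n_control

risk_difference = risk_c - risk_t                              # absolute risk reduction
nnt = 1 / risk_difference                                      # number needed to treat
relative_risk = risk_t / risk_c
odds_ratio = (risk_t / (1 - risk_t)) / (risk_c / (1 - risk_c))

print(f"Absolute risk difference: {risk_difference:.3f}")      # 0.020
print(f"Number needed to treat:   {nnt:.0f}")                  # 50
print(f"Relative risk:            {relative_risk:.2f}")        # 0.60
print(f"Odds ratio:               {odds_ratio:.2f}")           # 0.59
```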

Interpretation

Were the results consistent with prior known information, and if not, was an adequate explanation provided?

To aid interpretation of the research, study authors should undertake a thorough review of the literature to compare their findings with previous findings that explored the same or similar objectives. Authors should provide plausible explanations for disparate findings, identify methodological differences, and/or advance a theoretical or biologic rationale for the differences, including whether overlooked but important factors may have led to findings that differ in direction or magnitude.

Are the observed treatment effects considered clinically meaningful?

In analyses of large observational studies, relatively minor differences between treatment groups can sometimes attain statistical significance because of the large sample sizes. The results should be interpreted not only in terms of their statistical association but also in terms of the clinical importance of the magnitude of effect. Some authors may identify previously established minimal clinically important differences to support their assertions. Additionally, the larger the observed treatment effect, the smaller the chance that residual confounding could change a significant finding to a null finding.

Are the conclusions fair and balanced?

Overstating the implications of study results is commonly encountered in the literature. [30] The study should be fully transparent in describing the study limitations and, importantly, how those limitations could influence the direction and magnitude of the findings and ultimately the study conclusions. Users of a study should consider whether the conclusions are cautious and appropriate given the objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence. Authors should discuss the limitations of the study, including the potential direction and magnitude of any potential bias, to help users understand the degree to which these limitations may influence the study findings.

Was the impact of unmeasured confounding factors discussed?

Unmeasured confounding is always a potential issue in any observational research framework. Unmeasured confounding is more likely to bias the results when patients may be channeled to one treatment over another in studies that investigate known or likely outcomes. Outcome events that are less well known or unsuspected are less likely to be considered in the clinician's or patient's decision process and, in turn, are less likely to suffer from confounding. A good discussion will identify factors thought to be confounders that were not recorded or were unmeasurable and will indicate the potential direction of the bias. A credible study would ideally assess residual confounding through simulations that explore how strongly a confounder would have to be correlated with treatment and outcome to move the results to a null finding.

Conflicts of Interest

Two questions were used in the conflicts of interest domain in all of the questionnaires: Were there any potential conflicts of interest? If there were potential conflicts of interest, were steps taken to address them? Potential conflicts of interest may include financial interests in the study results, desire for professional recognition, or other non-monetary incentives. Steps to address potential conflicts of interest include disclosure of any potential conflicts of interest, involvement of third parties in the design, conduct, and analysis of the study, and agreements that provide researchers independence from funding entities (including the freedom to publicly disseminate results). [31]
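Related to the unmeasured-confounding question above, one widely cited shorthand, not prescribed by this report, is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need to have with both treatment and outcome to fully explain away an observed association. The sketch below implements the point-estimate formula; the example risk ratio is hypothetical.

```python
from math import sqrt

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio (point estimate).
    Risk ratios below 1 are inverted first, since the formula assumes rr >= 1."""
    if rr < 1:
        rr = 1 / rr
    return rr + sqrt(rr * (rr - 1))

# Hypothetical observed risk ratio of 1.8: an unmeasured confounder would need
# risk-ratio associations of about 3.0 with both treatment and outcome to fully
# explain away the observed effect.
print(f"E-value: {e_value(1.8):.2f}")
```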

DISCUSSION

User Testing

Following testing by members of the Assessing Prospective and Retrospective Observational, Indirect Treatment Comparison, and Modeling Studies Task Forces, and subsequent modification of the questionnaire based on these tests, a revised questionnaire was made available to volunteers from the payer community as well as the pharmaceutical industry. Each volunteer was asked to test one questionnaire using three studies and to rate them accordingly. The studies had previously been rated by the Task Force as good, medium, or poor quality. [32] Ninety-three volunteers were solicited to participate; 65 participated, and 24 individuals were assigned to the prospective observational study testing. The prospective observational study questionnaire response rate was 72%. Although there were not enough users to perform a formal psychometric evaluation, the good-quality study was generally rated as sufficient with respect to relevance and credibility, while the poor-quality study was generally rated as not sufficient. For the question "Is the study sufficiently or insufficiently credible to include in the body of evidence?" there was 59% agreement among the ratings provided; ratings were not provided 15% of the time. Multi-rater agreement exceeded 80% for most of the credibility domains. Few users completed the supplementary questions.

Educational Needs

Internationally, the resources and expertise available to inform health care decision makers vary widely. While there is broad experience in evaluating evidence from RCTs, there is less experience with, and greater skepticism regarding, the value of other types of evidence. [11, 18] However, the volume and variety of real-world evidence are increasing rapidly with the growing adoption of electronic medical records (EMRs) and the linkage of claims data with laboratory, imaging, and EMR data. Volume, variety, and velocity (the speed at which data are generated) are three of the hallmarks of the era of big data in health care. The amount of information from these sources could easily eclipse that from RCTs in coming years. It is therefore an ideal time for health care decision makers, and those who support evidence evaluation, to enhance their ability to evaluate this information.

Although there is skepticism about the value of evidence from observational, network meta-analysis/indirect treatment comparison, and modeling studies, these studies continue to fill important gaps in a knowledge base important to payers, providers, and patients. ISPOR has provided good research practice recommendations on what rigorous design, conduct, and analysis look like for these sources of evidence. [11] The questionnaires, including the one discussed in this report, are an extension of those recommendations and serve as a platform to assist the decision maker in understanding what a systematic evaluation of this research requires. By understanding what a systematic, structured approach to the appraisal of this research entails, decision makers may become generally more sophisticated in their use of this evidence. To that end, we anticipate that additional educational efforts and promotion of these questionnaires will be developed and made available to members. In addition, an interactive (i.e., web-based) questionnaire would facilitate uptake and support the educational goal that the questionnaire serves.

REFERENCES

1. Jansen JP, et al. An indirect treatment comparison and network meta-analysis study questionnaire to assess study relevance and credibility to inform healthcare decision-making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value in Health 17:1.
2. Martin, et al. A retrospective observational study questionnaire to assess relevance and credibility to inform healthcare decision-making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value in Health 17:1.
3. Caro JJ, et al. A modeling study questionnaire to assess study relevance and credibility to inform healthcare decision-making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value in Health 17:1.
4. Garrison LP Jr, Neumann PJ, Erickson P, et al. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force report. Value Health 2007;10:326-35.
5. Brixner DI, Holtorf AP, Neumann PJ, Malone DC, Watkins JB. Standardizing quality assessment of observational studies for decision making in health care. JMCP 2009;15(3):275-281.
6. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol 2011;64(4):401-6.
7. Atkins D, Eccles M, Flottorp S, Guyatt GH, Henry D, et al. Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. The GRADE Working Group. BMC Health Serv Res 2004;4:38. doi:10.1186/1472-6963-4-38.
8. Glasziou P. Assessing the quality of research. BMJ 2004;328:39-41. doi:10.1136/bmj.328.7430.39.

9. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924-926. doi:10.1136/bmj.39489.470347.AD.
10. Moher D, Jadad AR, Tugwell P. Assessing the quality of randomized controlled trials. Current issues and future directions. Int J Technol Assess Health Care 1996;12:195-208.
11. Berger ML, Dreyer N, Anderson F, et al. Prospective observational studies to assess comparative effectiveness: the ISPOR Good Research Practices Task Force report. Value Health 2012;15:217-230.
12. Jüni P, Witschi A, Bloch R, Egger M. The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 1999;282:1054-1060.
13. GRACE Principles. Available from: http://www.graceprinciples.org/. Accessed August 9, 2013.
14. STROBE checklist. Available from: http://www.strobe-statement.org/fileadmin/strobe/uploads/checklists/strobe_checklist_v4_cohort.pdf. Accessed August 9, 2013.
15. ENCePP. Available from: http://www.encepp.eu/standards_and_guidances/documents/enceppguideofmethstandardsinpe.pdf. Accessed August 9, 2013.
16. AHRQ User's Guide for Developing a Protocol for Observational Comparative Effectiveness Research. Available from: http://www.ncbi.nlm.nih.gov/books/nbk126190/. Accessed August 9, 2013.
17. Thomas L, Peterson ED. The value of statistical analysis plans in observational research: defining high-quality research from the start. JAMA 2012;308(8):773-4.
18. Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force report--Part II. Value Health 2009;12(8):1053-61.
19. Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force report--Part I. Value Health 2009;12(8):1044-52.
20. Altman DG. Statistics in medical journals: some recent trends. Stat Med 2000;19:3275-3289.
21. Ellenberg JH. Selection bias in observational and experimental studies. Stat Med 1994;13(5-7):557-67.

22. Stuart EA. Matching methods for causal inference: a review and a look forward. Stat Sci 2010;25(1):1-21.
23. Bohensky MA, Jolley D, Sundararajan V, Evans S, Pilcher DV, Scott I, et al. Data linkage: a powerful research tool with potential problems. BMC Health Serv Res 2010;10(1):346.
24. Gordis L. Conceptual and methodologic problems in measuring patient compliance. In: Haynes B, Taylor DW, Sackett DL, eds. Compliance in Health Care. Baltimore: The Johns Hopkins University Press, 1979:23-45.
25. Vermeire E, Hearnshaw H, Van Royen P, Denekens J. Patient adherence to treatment: three decades of research. A comprehensive review. J Clin Pharm Ther 2001;26(5):331-42.
26. Schneeweiss S. Sensitivity analysis and external adjustment for unmeasured confounders in epidemiologic database studies of therapeutics. Pharmacoepidemiol Drug Saf 2006;15(5):291-303.
27. Fewell Z, Smith GD, Sterne JAC. The impact of residual and unmeasured confounding in epidemiologic studies: a simulation study. Am J Epidemiol 2007;166(6):646-55.
28. Rosenbaum PR. Heterogeneity and causality: unit heterogeneity and design sensitivity in observational studies. Am Stat 2005;59(2):147-52.
29. Forrow L, Taylor WC, Arnold RM. Absolutely relative: how research results are summarized can affect treatment decisions. Am J Med 1992;92:121-4.
30. Gøtzsche PC. Readers as research detectives. Trials 2009;10(1):2.
31. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS)--explanation and elaboration: a report of the ISPOR Health Economic Evaluations Publication Guidelines Good Reporting Practices Task Force. Value Health 2013;16:231-50.
32. Simonton CA, Brodie B, Cheek B, Krainin F, Metzger C, Hermiller J, Juk S, Duffy P, Humphrey A, Nussbaum M, Laurent S. Comparative clinical outcomes of paclitaxel- and sirolimus-eluting stents. J Am Coll Cardiol 2007;50(13).

Figure 1. Prospective observational study assessment questionnaire flowchart.

Table 1. Questionnaire to assess the relevance and credibility of a prospective observational study

The questionnaire consists of 31 questions related to the relevance and credibility of a prospective observational study. The relevance questions relate to the usefulness of the prospective observational study for informing health care decision making. Each question is scored as Yes / No / Can't Answer. Based on the scoring of the individual questions, the overall relevance of the prospective observational study is judged as Sufficient or Insufficient. If the prospective observational study is considered sufficiently relevant, its credibility is then assessed. Credibility is captured with questions in the following six domains: Design, Data, Analysis, Reporting, Interpretation, and Conflicts of Interest. Each question is scored as Yes / No / Can't Answer. Based on the number of questions scored satisfactorily in each domain, an overall judgment of the strength of each domain is provided: Strength / Neutral / Weakness / Fatal flaw. If any one of the items scored as "no" results in a fatal flaw, the overall domain is scored as a fatal flaw and the study may have serious