RETROSPECTIVE OBSERVATIONAL STUDY QUESTIONNAIRE TO ASSESS RELEVANCE AND CREDIBILITY TO INFORM HEALTH CARE DECISION MAKING: AN ISPOR-AMCP-NPC GOOD PRACTICE TASK FORCE REPORT

DRAFT REPORT FOR REVIEW VERSION AUGUST 22, 2013

ABSTRACT

(To be added)

Keywords: questionnaire, checklist, decision-making, validity, credibility, relevance, bias, confounding

BACKGROUND TO THE TASK FORCE

(To be added)

INTRODUCTION

Four Good Practices Task Forces developed a consensus-based set of questionnaires to help decision makers evaluate 1) prospective and 2) retrospective observational studies, 3) network meta-analysis (Indirect Treatment Comparison), and 4) decision analytic modeling studies with greater uniformity and transparency.[1-3] The primary audiences of these questionnaires are assessors and reviewers of health care research studies for health technology assessment, drug formulary, and health care services decisions, who have varying levels of knowledge and expertise. This report focuses on the questionnaire to assess the relevance and credibility of prospective observational studies.

Although randomized controlled trials (RCTs) are often sought to inform health system decisions, there is increasing recognition of the limitations of relying on RCTs alone. These studies may be absent due to financial, ethical, or time limitations, or they may lack sufficient information for decision-making in a real-world setting encompassing a diverse range of populations and practice settings. Other types of studies, such as observational, modeling, and network meta-analysis, are increasingly sought to fill this gap.[4] However, there may be barriers to the use of these studies due to the limited number of accepted principles for their evaluation and interpretation. There is a need for transparent and uniform ways to assess their quality.[5] A structured approach reduces the potential for subjectivity to influence the interpretation of evidence and can promote consistency in decision-making.[6]

Previous tools, including grading systems, scorecards, and checklists, have been developed to facilitate structured approaches to critically appraising clinical research.[7, 8] Some are elaborate, requiring software and deliberation among a broad range of experts [9]; others are very simple, using scoring systems, and are best suited for randomized clinical trials.[10] It was believed that a questionnaire that was easy to use, not time-consuming, and usable with a basic understanding of epidemiology would create awareness of issues related to alternative study designs and could be widely promoted.

Development of this questionnaire was informed by prior efforts and by several guiding principles derived from the focus group payors' comments described above. First, questionnaires had to be easy to use and not time-consuming for individuals without a broad range of expertise and without in-depth study design and statistical knowledge. Second, questionnaires had to be sufficiently comprehensive to promote awareness of the appropriate application of different study designs to decision-making; we also sought to produce a questionnaire that would prompt users to obtain additional education on the underlying methodologies. Lastly, the use of the questionnaire would need to be facilitated by comprehensive educational programs.

The Task Force defines prospective observational studies as ones in which participants are not randomized or otherwise assigned to an exposure, and for which the consequential outcomes of interest occur after study commencement (including creation of a study protocol and analysis plan, and study initiation). They are often longitudinal in nature. Exposure to any of the interventions being studied may have been recorded before study initiation, such as when a prospective observational study uses an existing registry cohort. Exposure may include a pharmaceutical intervention, surgery, medical device, prescription, or decision to treat. This definition contrasts with retrospective observational studies, which use existing data sources in which both exposure and outcomes have already occurred.[11]

QUESTIONNAIRE DEVELOPMENT

The first issue was whether the questionnaires should be linked to checklists, scorecards, or annotated scorecards. Concerns were raised that a scoring system may be misleading if it does not have adequate measurement properties. Scoring systems have been shown to be problematic in the interpretation of randomized trials.[12] An alternative to a scorecard is a checklist. However, the Task Force members believed that checklists might also mislead users because a study may satisfy all of the elements of a checklist and still harbor fatal flaws. Moreover, users might have the tendency to count up the number of elements present, converting it into a score, and then apply the score to their overall assessment of the evidence. In addition, the strength of a study finding may depend on other evidence that addresses the specific issue or the decision being made. A questionnaire without an accompanying score or checklist was felt to be the best way to allow analysts to be aware of the strengths and weaknesses of each piece of evidence and to apply their own reasoning.

Questions were developed based on a review of items in previous questionnaires, guidance documents, and previous ISPOR Task Force recommendations [11] as well as methods and reporting guidances.[13-16] Through user testing and consensus, items from previous efforts were grouped into conceptual domains.

The questionnaire is divided into two main sections, Relevance and Credibility, based on the key elements essential to evaluating comparative effectiveness evidence. Four similar questions were developed for the relevance section across all questionnaires. Credibility is further divided into several key domains. For this questionnaire to obtain broad acceptance, and based on the focus group recommendations, it was limited to approximately 30 questions. Whenever possible, efforts were made to avoid the use of jargon and to employ similar wording across all four questionnaires. Also, as shown in Figure 1, there is substantial overlap in the design and flow between this questionnaire and the one developed for retrospective observational studies.

FIGURE 1

Upon completion of the questions in the relevance section, users are asked to rate whether the study is sufficient or insufficient for inclusion. If a study is not considered sufficiently relevant, a user can then opt to truncate the review of its credibility. In the credibility category, users rate each domain as a strength, a weakness, or neutral. Based upon these evaluations, the user then similarly rates the credibility of the research study as either sufficient or insufficient to inform decision making. For some questions in the credibility section, a user would be notified that they had detected a fatal flaw. The presence of a fatal flaw suggests significant opportunities for the findings to be misleading. Consequently, the decision maker should use extreme caution in applying the findings to inform decisions. However, the occurrence of a fatal flaw does not prevent a user from completing the questionnaire, nor does it require the user to judge the evidence as insufficient for use in decision making. The presence of a fatal flaw is intended to raise a strong caution and should be carefully considered when the overall body of evidence is reviewed.

The questionnaire includes a summary in a structured paragraph as follows:

In evaluating this study, I made the following judgments:

I found the study (relevant/not relevant) for decision making because I considered that the population/interventions/outcomes/setting (applied/did not apply) to the decision I am informing.

I found the study (credible/not credible) for decision making because:

o There (were/were not any) fatal flaws, i.e., critical elements that call into question the validity of the findings. The presence of a fatal flaw suggests significant opportunities for the findings to be misleading and misinterpreted; extreme caution should be used in applying the findings to inform decisions. The following domains contained fatal flaws:

o There are strengths and weaknesses in the study:
The following domains were evaluated as strengths:
The following domains were evaluated as weaknesses:

QUESTIONNAIRE ITEMS

Questions that fall under the main categories of relevance and credibility appear in Table 1. Explanations of each question, along with specific definitions, are provided in the following section to facilitate understanding of the appropriate use of the questionnaire.

TABLE 1

RELEVANCE

Relevance addresses whether the results of the study apply to the setting of interest to the decision-maker. It addresses issues of external validity (population, comparators, endpoints, timeframe) and the direction and magnitude of difference meaningful to the decision maker. There is no correct answer for relevance. Relevance is determined by each decision-maker, and the relevance assessment made by one decision-maker will not necessarily apply to other decision-makers.

1. Is the population relevant?

This question addresses whether the population analyzed in the study sufficiently matches the population of interest to the decision-maker. Population characteristics to consider include demographics such as age, gender, nationality, and ethnicity; risk factors such as average blood pressure, cholesterol levels, and body mass index; behaviors such as smoking; stage/severity of the condition; past and current treatments for the condition; and clinical issues such as co-morbidities.

2. Are any relevant interventions missing?

This question addresses whether the interventions analyzed in the study match the ones of interest to the decision-maker and whether all relevant comparators have been considered. Intervention characteristics to consider include: technology characteristics (e.g., if the issue is screening for osteoporosis, is the technology dual-energy X-ray absorptiometry (DEXA) of the spine? DEXA of the wrist? Another scanning method?); technique (e.g., if the issue is endovascular aneurysm repair, does it involve an iliac branch device?); dose of a drug or biologic; duration of treatment; mode of administration (e.g., oral, intravenous); skill level of the provider; post-treatment monitoring and care; and duration of follow-up. In addition, analysts should consider to what extent alternative modes of care (e.g., another intervention or standard care) match the modes of care in the decision setting.

3. Are the outcomes relevant?

This question asks what outcomes are assessed in the study and whether those outcomes are meaningful to the decision maker. Outcomes such as cardiovascular events (e.g., rates of myocardial infarction or stroke), patient functioning, or health-related quality of life (e.g., scores from the Short Form-36 health survey or EuroQol-5D instruments) may be more relevant than surrogate or intermediate endpoints (e.g., cholesterol levels).

4. Is the context (settings & practice patterns) applicable?

The context of the study refers to factors that may influence the generalizability of the study findings to other settings. Factors that should be considered may include the study time frame, the payer setting, provider characteristics, or the geographic area. Some or all of these factors may differ from those of the population to which the user wants to apply the study results; however, if it is suspected that differences in these factors may influence the treatment response, this should inform the user's judgment of the extent to which the findings can be applied to another setting.

CREDIBILITY

Credibility addresses the extent to which the study accurately answers the question it is designed or intended to answer, and it is determined by the design and conduct of the study. It addresses issues of internal validity, error, and confounding. For example, the observed effect of a new treatment may be due to the degree to which patients were followed and their outcomes reliably measured, and not due to differences in treatment effectiveness. Appropriate study design and analytic approaches can better separate the contribution of the intervention to observed outcomes from that of other factors. The credibility section of the questionnaire was divided into the following domains: design, data, analysis, reporting, interpretation, and conflicts of interest.

DESIGN

5. Were the study hypotheses or goals pre-specified a priori?

'A priori' in this context refers to the development of the objectives or study plan prior to the execution of the study. This is often a difficult point to assess in published reports since many authors do not explicitly report which, if any, objectives were pre-specified. The best evidence that a study developed 'a priori' objectives would be a comparison of the study report with the trial registry where the study was registered; however, only a small fraction of observational studies are registered in trial registries. Alternatively, users could rely on terms commonly used in reporting observational studies that indicate objectives were pre-specified 'a priori.' Phrases and terms such as 'pre-specified' or 'planned analyses,' when describing the objectives, methods, or even the results, can indicate pre-specified objectives. Other indicants that the objectives may have been developed prior to study execution would be reporting of IRB review or grant awards specific to the research study. 'A priori' objectives cannot be assumed when a study does not offer any indication that the objectives were developed beforehand.

6. If one or more comparison groups were used, were they concurrent comparators, or was the use of historical comparison group(s) justified?

Concurrent comparators are derived from the same sample of subjects and are followed over the same time frame. Historical controls are derived from a sample of subjects from a time period prior to the treatment being compared. Concurrent comparators should be used where possible; however, the use of historical comparators can be justified when a treatment becomes a standard of care and nearly all subjects that could be treated receive the new treatment, or when it becomes unethical to withhold the new treatment.

7. Was there evidence that a formal study protocol, including an analysis plan, was pre-specified and guided the execution of the study?

Altering study definitions, subject inclusion/exclusion criteria, model specification, and other study procedures can have dramatic effects on study findings and permit reported results to be influenced by investigator biases.[17-19] Ideally, study reports would be fully transparent about pre-planned study procedures. They should report findings based on the original plan, be fully transparent regarding post-hoc changes to the analysis plan, and justify those post-hoc alterations. Unfortunately, the conventional reporting practices of observational studies are rarely detailed enough to allow readers to adequately assess what was pre-planned and what was not.[20] The best evidence that a study had an analytic plan would be a comparison of the study report with the trial registry where the study was registered; however, only a small fraction of observational studies are registered in trial registries. Alternatively, users could rely on terms commonly used, such as 'pre-specified' and 'planned analyses,' when describing the methods. The results can also indicate pre-specified objectives and analyses by declaring which results were pre-planned and which were post-hoc. If the study reports 'post-hoc' analyses, the user may cautiously infer that the other analyses were pre-planned. Other indicants that an analysis plan was developed prior to study execution would be the reporting of an IRB review or grant awards specific to the research study. A pre-specified analysis plan cannot be assumed when a study does not offer any indication that such a plan was developed beforehand.

8. Were sample size and statistical power addressed?

An observational study attempts to approximate a randomized study, and therefore a sample size or power calculation should be performed in advance of the study.[11] In the retrospective framework, where subjects are not prospectively recruited, sample size estimates do not dictate the number of subjects to be included in the study, as the investigator will typically include all available subjects recorded in the data source and follow them for as long as subject data are recorded. However, a sample size estimate / power calculation will enable readers to interpret 'null' findings. A study that reports a null finding of no difference between comparators but has low power (e.g., < 80%) is difficult to interpret, since the finding may be due to inadequate power or to a true null result.

9. Was a study design employed to minimize or account for confounding, such as inception cohorts, new-user designs, multiple comparator groups, matching designs, or assessment of outcomes not thought to be impacted by the interventions compared?

Some study designs can provide stronger methods to deal with potential confounding that may occur due to the lack of randomization. A confounder is a factor that distorts the true relationship of the study variables of central interest by virtue of being related to the outcome of interest, but extraneous to the study question and unequally distributed among the groups being compared. A variety of considerations will factor into the choice of study design, including cost, feasibility, and ethical considerations.

Box 1: Study Designs Employed to Minimize the Effect of Confounding Variables

Inception cohorts are designated groups of persons assembled at a common time early in the development of a specific clinical disorder (e.g., at first exposure to the putative cause or at initial diagnosis), who are followed thereafter.

A new-user design begins by identifying all of the patients in a defined population (both in terms of people and time) who start a course of treatment with the study medication. Study follow-up for endpoints begins at precisely the same time as initiation of therapy (t0). The study is further restricted to patients with a minimum period of non-use (washout) prior to t0. New-user designs mitigate channeling bias by excluding subjects that tolerate existing treatments.

Matching designs include a deliberate process of making a study group and a comparison group comparable with respect to factors that are extraneous to the purpose of the investigation, but which might interfere with the interpretation of the study's findings. For example, in case-control studies, individual cases might be matched or paired with a specific control on the basis of comparable age, sex, clinical features, or a combination of these.

Assessment of outcomes thought not to be impacted by the interventions compared, or 'falsification tests,' may permit an assessment of residual confounding.[11]

10. Were the sources, criteria, and methods for selecting participants appropriate to address the study hypotheses?

The sources, criteria, and methods for selecting participants should be similar for the different groups of patients being assessed, whether they are defined based on outcome (case-control studies) or based on exposure (cohort studies). Bias can be introduced if patient selection varies by comparator group, or if the data source or the methods for assessing or selecting patient groups vary.[21] Also, the data source should provide some level of assurance that key measures are reliably recorded. Some key questions in assessing the sources, criteria, and methods for selecting patients include:
1. Was there an adequate rationale provided for key inclusion and exclusion criteria?
2. Was there assurance that subject encounters or data were adequately recorded over the entire study time frame for each subject?

Depending on the data source, the assurance that relevant health information was captured will vary. Administrative data sources should be checked to ensure subjects had continuous eligibility for health benefits and were eligible to receive all relevant sources of care; not all persons eligible for insurance will be eligible for pharmacy or mental health benefits, for example. Studies that rely on provider-supplied data such as EMRs should attempt to ensure that persons are receiving care in the provider network and not from other providers.

11. Was the study sample restricted so that comparison groups would be sufficiently similar to each other (e.g., indications for treatment)?

One of the most common starting points to enhance the comparability of treatment groups is to restrict subjects to a common set of conditions or patient characteristics. For example, if one were to compare beta blockers and diuretics as antihypertensive therapy, it would be important to restrict both treated groups to those without any evidence of previous cardiovascular disease, including angina, since beta blockers are indicated for angina (a strong risk factor for subsequent MI) whereas diuretics are not, and an unrestricted analysis would suffer from confounding by indication.
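As a minimal illustration of this kind of restriction, the hypothetical Python/pandas sketch below restricts an antihypertensive cohort to subjects without prior cardiovascular disease before comparing beta blockers and diuretics. The file and column names are invented for the example and are not part of the Task Force recommendations.

```python
import pandas as pd

# Hypothetical analytic file; column names are illustrative only.
cohort = pd.read_csv("antihypertensive_cohort.csv")

# Restrict both treatment groups to subjects without evidence of prior
# cardiovascular disease (including angina) so the beta-blocker and
# diuretic groups are more comparable with respect to indication.
restricted = cohort[
    (cohort["prior_cvd"] == 0)
    & (cohort["prior_angina"] == 0)
    & (cohort["treatment"].isin(["beta_blocker", "diuretic"]))
].copy()

# Reporting counts before and after restriction also supports question 20
# (numbers of individuals at each stage of subject selection).
print("subjects before restriction:", len(cohort))
print("subjects after restriction:", len(restricted))
print(restricted["treatment"].value_counts())
```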

DATA

12. Was exposure defined and measured in a valid way?

Exposure to treatment is ideally documented by evidence that patients actually took the medication or received some form of treatment, though this is rarely available.[22] Exposure may be documented by evidence of a prescription being written or filled, or a claim being filed, or it may be expressed in continuous terms such as the medication possession ratio. The most basic level of exposure measurement defines whether a subject received (or did not receive) a treatment, but exposure may also be defined in terms of the intensity of exposure, including the dose and/or duration of treatment(s). A valid exposure measure will have low levels of exposure misclassification. Exposure misclassification may occur when the data used to define exposure are errantly recorded; for example, subjects may be misclassified when they experience a treatment but are defined as unexposed in a study, and vice versa. One specific form of exposure misclassification, common in retrospective analyses of claims data, may occur when drug treatments are not covered by a health plan and persons obtain treatments without filing claims. Drugs that can be acquired over the counter (such as lifestyle drugs that are not commonly reimbursed), treatments that may have some stigma associated with them, or treatments that are inexpensive enough that claims may not be filed are more prone to exposure misclassification.

Some key questions one should consider when assessing the validity of the study's exposure measure include:
1. Does the study describe the data source(s) used for the ascertainment of exposure (e.g., pharmacy dispensing, general practice prescribing, claims data, self-report, face-to-face interview)?
2. Does the study describe how exposure is defined and measured (e.g., operational details for defining and categorizing exposure)?
3. Does the study discuss the validity of exposure measurement (e.g., precision, accuracy, prospective ascertainment, exposure information recorded before the outcome occurred, use of a validation sub-study)?

The latter question regarding the accuracy of the study's exposure measure can be addressed by a sub-study comparing alternative data sources or by citing previous validation studies of the study's exposure measures.

13. Have the primary outcomes been defined and measured in a valid way?

Ideally, some evidence of the validity of the outcome measure definitions (which often includes measures of sensitivity, specificity, and positive predictive value) should be described. Researchers may conduct a sub-study to verify the accuracy of their outcome definitions using a more detailed source, such as a review of electronic medical records, chart reviews, or patient/provider surveys, and then report the performance of their outcome definitions. More often, investigators may cite previous studies that validated outcome measures/definitions against an alternative source. Some judgment may be exercised in assessing the validity of an outcome measure. Outcome measure definitions that have been used in previous analyses permit comparison between studies but do not ensure validity.

Some outcome measures that are measured objectively, such as inpatient episodes derived from an administrative claims database, may be presumed to be valid since there are strong financial incentives for these services to be recorded and they are not subject to personal judgment. The validity of other measures, in particular those that are subject to clinical judgment such as patient functioning or symptoms such as pain or severity of disease, will be context-specific and more prone to errors in measurement. Additional steps such as blinded assessment or the use of multiple clinician assessments should be considered.

14. Were the data sources sufficient to support the study?

The data source should contain valid measures of treatment, outcome, and covariates for confounding variables, possess a large enough sample, and be of sufficient duration to detect differences. Often a single data source will not contain all the information necessary to conduct the study, and linkages to other data sources may be necessary; those data linkages should be described well enough to assess their validity. This is a broad concept to assess, as the quality of the study data and its use in executing the study are influenced by multiple factors. Questions to consider in assessing these criteria include:
1. Were all outcomes, exposures, predictors, potential confounders, and effect modifiers clearly defined? (Give diagnostic criteria, if applicable.)
2. Were sources of data and details of methods of assessment for each variable of interest described?
3. Were the assessment methods and/or definitions the same across treatment groups?
4. Have the reliability and validity of the data been described, including any data quality checks and data cleaning procedures?
5. Were reliable and valid data available on important confounders or effect modifiers?

15. Was the follow-up time similar among comparison groups, OR were differences in follow-up accounted for in the analysis?

Persons who are followed for longer durations are more likely to experience outcome events, and the duration of follow-up must be accounted for in the analysis. Restricting the sample to only those subjects with the same follow-up (e.g., one year) is one approach to account for person-time in the analysis. However, this approach can lead to immortal time bias (also referred to as survivorship bias), where subjects that die or otherwise lose enrollment in the study are excluded, which may be differential across treatment groups. The more the duration of follow-up can be influenced by treatment group selection, the greater the prospect that immortal time bias will influence the associations. An alternative approach is to allow subjects to be followed for varying durations and to use a survival analytic approach. However, this does not eliminate the possibility of immortal time bias if selection into a cohort depends on the accrual of some person-time, which may not be an overt criterion but rather a de facto requirement for being exposed to a treatment. A good study will explain these issues as well as possible and use appropriate statistical techniques (e.g., time-to-event methods such as Cox proportional hazards) to mitigate their impact.
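The survival-analysis approach mentioned above is sketched below in hypothetical Python code (using the pandas and lifelines packages; the file name, column names, and 0/1 coding of covariates are assumptions for illustration). It compares crude incidence rates per 1,000 person-years and fits a Cox proportional hazards model so that differing follow-up is accounted for rather than ignored; it is an illustration, not a prescribed method.

```python
import pandas as pd
from lifelines import CoxPHFitter  # third-party survival analysis package

# Hypothetical analytic file: one row per subject with follow-up time in
# years, an event indicator (0/1), a numeric treatment indicator (0/1),
# and numerically coded covariates such as age and sex.
df = pd.read_csv("cohort_followup.csv")

# Crude incidence rates per 1,000 person-years by treatment group, which
# account for groups being followed for different lengths of time.
person_years = df.groupby("treatment")["followup_years"].sum()
events = df.groupby("treatment")["event"].sum()
print(1000 * events / person_years)

# Cox proportional hazards model: follow-up time enters the likelihood
# directly, so differential follow-up is handled in the analysis.
cph = CoxPHFitter()
cph.fit(df[["followup_years", "event", "treatment", "age", "sex"]],
        duration_col="followup_years", event_col="event")
cph.print_summary()
```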

ANALYSES

16. Were sensitivity analyses performed to assess key assumptions or definitions?

A variety of decisions must be made in designing a study. How are the populations, interventions, and outcomes defined? How are missing data dealt with? How are outliers dealt with? To what extent may unmeasured confounders influence the results? A good study will indicate which of these decisions had an important impact on the results of the analyses and will report the effect of using a reasonable range of alternative choices. Key issues to consider include:
1. Were sensitivity analyses reported using different statistical approaches?
2. Were sensitivity analyses reported to test the impact of key definitions?
3. Did the analysis account for outliers and examine their impact in a sensitivity analysis?
4. Did the authors discuss or conduct a sensitivity analysis to estimate the effect of unmeasured confounders on the observed difference in effect?

17. Was there a thorough assessment of potential measured and unmeasured confounders?

Carefully identifying all potential confounders is a critical step in any observational comparative effectiveness research study. Confounders are variables or factors that are correlated with both treatment and outcome variables but lie outside the causal pathway between treatment and outcome. Confounders may be an artifact of common epidemiologic biases such as channeling or indication bias; for example, persons with more severe disease may be more likely to receive one treatment over another. Treatment inferences from observational studies are all potentially biased by imbalances across treatment groups on confounding variables, whether the variables are observed or not. Confounding can be controlled statistically using a wide range of multivariate approaches; however, if a statistical model excludes a confounding variable, the estimates of treatment effects suffer from omitted variable bias in nearly all analyses except when a valid instrumental variables approach is undertaken. When assessing a research study, one should look to see that the authors have considered all potential confounding factors and conducted a literature review to identify variables that are known to influence the outcome variable. The literature search strategy for identifying confounders is not typically reported in manuscripts; however, a table of potential confounders with citations to previous research describing these associations is sometimes available in a manuscript and would be suggestive of an explicit search to identify known confounding variables.

In addition to a literature search, researchers should use clinical judgment to identify confounders. Often the data will not contain information on some confounding variables (e.g., race, income, exercise), and these will be omitted variables in the analysis. When the analysis does not include key confounders, their potential impact should be discussed, including the direction and magnitude of the potential bias.

18. Were analyses of subgroups or interactions of effects reported for comparison groups?

Exploring and identifying heterogeneous treatment effects, or effect modification, is one of the potential advantages of large observational studies. Interaction occurs when the association of one exposure differs in the presence of another exposure or factor. The most common and basic approaches for identifying heterogeneous treatment effects are to conduct subgroup analyses or to incorporate interaction terms with the treatment variable.[23] Interactions may be explored using additive or multiplicative approaches, in which the differences in effect depart from either the sum of the effects of the two factors or exposures, or their multiplicative effect. Caution is warranted when the main treatment effect is not significantly associated with the outcome but significant subgroup results are reported (which could have been the result of a series of post-hoc subgroup analyses).

REPORTING

19. Were the methods described in sufficient detail?

Replication is one of the hallmarks of the scientific process, and given the increasing availability of retrospective data sources, it is increasingly possible to replicate many retrospective studies, whether using the same data source or a similar one. Ideally, a published manuscript should describe the study in sufficient detail to enable a knowledgeable person to replicate the study with the same sample criteria, study measures, and analytic approach. However, the word limits imposed by many journals in their printed versions often do not permit enough space to describe a study sufficiently to enable replication. When a study is not described in sufficient detail, an online appendix or a statement indicating that a technical appendix will be made available upon request would be sufficient.

20. Were the numbers of individuals at each stage of the subject selection process reported after applying inclusion and exclusion criteria?

Reporting the number of individuals screened at each stage of the selection process is important for assessing the potential for selection bias among participants. The final analyzable sample should be described in the text or displayed as a flow diagram describing the initial pool of potential subjects and the sample after each inclusion and exclusion criterion is applied. This will allow the reader to assess the extent to which the analyzable sample may differ from the target population and which criteria materially influenced the final sample.

21. Were the descriptive statistics of study participants adequately reported?

Descriptive statistics include demographics, comorbidity, and other potential confounders reported by treatment group, which enable the reader to assess the potential for confounding. Large differences on key confounding measures may suggest a higher likelihood of residual confounding even after adjusting for the observable characteristics.

22. Did the authors describe the uncertainty of their findings through reporting confidence intervals and/or p-values?

There will always be uncertainty when evaluating outcomes in observational research, and that uncertainty should be presented in the form of either a p-value (the probability that an association at least as strong as the one observed would arise by chance if no true association exists) or a confidence interval (a range of values that contains the true estimate with a pre-defined probability, e.g., 95%).

23. Did the authors describe and report the key components of their statistical approaches?

The authors should fully describe their statistical approach and provide citations that fully describe any nuanced econometric or statistical methods. Some factors to consider in assessing the adequacy of the statistical reporting include:
1. If the authors utilize multivariate statistical techniques, do they discuss how well the models predict what they are intended to predict (e.g., R-squared, pseudo R-squared, c-statistics, and c-indices)?
2. Were any modeling assumptions or multiplicity adjustments (e.g., the Bonferroni correction) addressed?
3. Were unadjusted estimates of treatment effects (outcomes) reported?
4. Was the full regression model (not just the adjusted treatment effects) available in the publication or at least in an appendix?
5. If propensity score methods were used, were the methods appropriately described? This includes the method of developing the propensity score, the use of the propensity score (matching, weighting, regression, etc.), and the evaluation of the propensity score (e.g., standardized differences before and after matching); a sketch of this approach follows this list.
6. If instrumental variable methods were used, were the methods appropriately described (rationale for the instrumental variable, evaluation of the strength of the instrument)?
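Item 5 can be made concrete with a minimal, hypothetical Python sketch (pandas and scikit-learn; the file and column names are invented) of one common propensity score workflow: estimating the score, deriving inverse-probability-of-treatment weights, and checking covariate balance with standardized differences. It illustrates the kind of reporting a reader should look for rather than a recommended implementation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical analytic file: 'treated' is 0/1; covariates are baseline confounders.
df = pd.read_csv("analytic_file.csv")
covariates = ["age", "sex", "comorbidity_score", "baseline_cost"]

# 1. Develop the propensity score: probability of treatment given covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Use the propensity score, here as stabilized inverse-probability weights.
p_treated = df["treated"].mean()
df["weight"] = np.where(df["treated"] == 1,
                        p_treated / df["ps"],
                        (1 - p_treated) / (1 - df["ps"]))

# 3. Evaluate the propensity score: standardized differences before weighting.
def standardized_difference(x_treated, x_control):
    pooled_sd = np.sqrt((x_treated.var() + x_control.var()) / 2)
    return (x_treated.mean() - x_control.mean()) / pooled_sd

for cov in covariates:
    smd = standardized_difference(df.loc[df["treated"] == 1, cov],
                                  df.loc[df["treated"] == 0, cov])
    print(f"{cov}: unweighted standardized difference = {smd:.3f}")
```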

24. Were confounder-adjusted estimates of treatment effects reported?

Confounder-adjusted estimates of treatment effects can be obtained in a variety of ways. Most commonly, treatment effects are estimated from the coefficient of a treatment variable(s) in a multivariate regression, or a system of equations, that includes a set of control covariates representing potential confounders. Treatment effects may also be estimated by taking differences between propensity-matched treated and comparison subjects. Any non-randomized study must report confounder-adjusted estimates if it attempts to make any inference regarding a treatment comparison. Unadjusted estimates should also be reported to allow for comparison with the adjusted results.

25. Were the extent of missing data, and how they were handled, reported?

Unlike clinical trials or prospective registries, it is often difficult to assess when and for whom data are missing because subjects are rarely enrolled in a study with pre-determined data collection time points. When data are clearly missing, for example when a prescription claim is identified in a claims database but the days' supply is missing from the claim, the frequency with which this occurs should be reported. Additionally, the methods for handling the missing data should be described, including whether subjects were excluded, missing values were imputed, or analyses were stratified.

26. Were absolute and relative measures of treatment effects reported?

Reporting the effect of treatment(s) in both absolute and relative terms provides the decision maker with the greatest understanding of the magnitude of the effect.[24] Absolute measures of effect include differences in proportions, means, and rates, as well as the number needed to harm (NNH) and number needed to treat (NNT), and should be reported for a meaningful time period. Relative measures of effect are defined as a ratio of rates, proportions, or other measures and include odds ratios (ORs), incidence rate ratios, relative risks, and hazard ratios (HRs).
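As a worked illustration of the distinction between absolute and relative measures, the short Python sketch below computes them from a hypothetical 2x2 table of one-year outcomes; the counts are invented for illustration only.

```python
# Hypothetical one-year outcome counts (invented numbers for illustration).
events_new, n_new = 30, 1000      # new treatment: 3.0% event risk
events_std, n_std = 45, 1000      # standard treatment: 4.5% event risk

risk_new = events_new / n_new
risk_std = events_std / n_std

# Absolute measures
risk_difference = risk_std - risk_new   # 0.015, i.e., 1.5 percentage points
nnt = 1 / risk_difference               # ~67 patients treated for one year
                                        # to prevent one event

# Relative measures
relative_risk = risk_new / risk_std     # ~0.67
odds_ratio = (events_new / (n_new - events_new)) / (events_std / (n_std - events_std))

print(f"Risk difference: {risk_difference:.3f} (NNT about {nnt:.0f})")
print(f"Relative risk:   {relative_risk:.3f}")
print(f"Odds ratio:      {odds_ratio:.3f}")
```

The same relative risk corresponds to very different absolute benefits as the baseline risk changes, which is why the questionnaire asks for both.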

INTERPRETATION

27. Were the results consistent with prior known information? If not, was an adequate explanation provided for the inconsistent results?

The authors should undertake a thorough review of the literature to identify and present all known previous findings exploring the same or similar objectives, and then contrast their findings with them. When studies differ in the direction of the findings, the study should offer plausible explanations for the disparate findings and identify methodological differences and/or advance a theoretical or biologic rationale for the differences. Key questions to consider include: Is the direction of the effect in this study comparable to similar studies? Is the magnitude of effect in this study comparable to similar studies? If the direction or size of effect is different, have the authors provided an adequate explanation? Is the explanation plausible? Are there other, more important factors more likely to explain the difference?

28. Are the results (differences demonstrated) considered clinically meaningful?

In analyses of large databases, relatively minor differences between treatment groups can sometimes attain statistical significance because of the large sample sizes. The results should be interpreted not only in terms of their statistical association, but also by the magnitude of effect in terms of clinical or economic importance. Additionally, the larger the observed treatment effect, the smaller the chance that residual confounding could change a significant finding to a null finding. Key questions to consider include: Was the magnitude of the differences in results between groups large? Have the statistical findings been interpreted in terms of their clinical or economic relevance?

29. Are the conclusions fair and balanced?

Overstating the implications of study results is commonly encountered in the literature. The study should be fully transparent in describing the study limitations and, importantly, how those limitations could influence the direction and magnitude of the findings and ultimately the study conclusions. Key considerations include: Was a cautious/appropriate overall interpretation of the results presented, considering the objectives, limitations, multiplicity of analyses, results from similar studies, and other relevant evidence? Were the limitations of the study, including the direction and magnitude of any potential bias, reported?

30. Was the impact of unmeasured confounding factors discussed?

Unmeasured confounding is always a potential issue in any observational research framework. Unmeasured confounding is more likely to bias the results when patients might be channeled to one treatment over another in studies that investigate known or likely outcomes. Outcome events that are lesser known or unsuspected are less likely to be considered in the clinician's or patient's decision process and, in turn, are less likely to suffer from confounding. A good discussion will identify factors thought to be confounders that were not recorded or were unmeasurable, and will identify the potential direction of the bias. Ideally, residual confounding could be assessed through simulations to explore how strongly a confounder would have to be correlated with treatment and outcome to move the results to a null finding.

CONFLICTS OF INTEREST

31. Were there any potential conflicts of interest?

Potential conflicts of interest may be present for many reasons, including financial interests in the study results, desire for professional recognition, or the desire to do favors for others.

32. If there were potential conflicts of interest, were steps taken to address these?

Steps to address potential conflicts of interest include disclosure of any potential conflicts of interest, involvement of third parties in the design, conduct, and analysis of studies, and agreements that provide independence of researchers (including freedom to publicly disseminate results) from funding entities.[25]

DISCUSSION

User Testing: Following testing by members of the Assessing Prospective and Retrospective Observational, Indirect Treatment and Modeling Studies Task Forces, and subsequent modification of the questionnaire based on these tests, a revised questionnaire was made available to volunteers from the payer community as well as the pharmaceutical industry. Each volunteer was asked to test one questionnaire using three studies and rate them accordingly. The studies had previously been rated by the Task Force as good quality, medium quality, or poor quality. Ninety-three volunteers were solicited to participate, with 65 participating and 24 individuals assigned to the retrospective observational study testing. The retrospective observational study questionnaire response rate was 70%. Although there were not enough users to perform a formal psychometric evaluation, the good quality study was generally rated as sufficient with respect to relevance and credibility, while the poor quality study was generally rated as not sufficient.

Based on the answers to the question "Is the study sufficiently or insufficiently credible to include in the body of evidence?", there was 54% agreement overall among the ratings provided; ratings were not provided 36% of the time. Multi-rater agreement exceeded 80% for most of the credibility domains. Few users completed the supplementary questions.

Educational Needs: Internationally, the resources and expertise available to inform health care decision makers vary widely. While there is broad experience in evaluating evidence from RCTs, there is less experience and greater skepticism regarding the value of other types of evidence.[11, 18] However, the volume and variety of real-world evidence are increasing rapidly with the growing adoption of electronic medical records (EMRs) and the linkage of claims data with laboratory, imaging, and EMR data. Volume, variety, and velocity (the speed at which data are generated) are three of the hallmarks of the era of big data in health care. The amount of information from these sources could easily eclipse that from RCTs in coming years. This implies it is an ideal time for health care decision makers, and those who support evidence evaluation, to enhance their ability to evaluate this information.

Although there is skepticism about the value of evidence from observational, network meta-analysis/indirect treatment comparison, and modeling studies, these studies continue to fill important gaps in a knowledge base important to payers, providers, and patients. ISPOR has provided Good Research Practices recommendations on what rigorous design, conduct, and analysis look like for these sources of evidence.[11] These questionnaires are an extension of those recommendations and serve as a platform to assist the decision maker in understanding what a systematic evaluation of this research requires. By understanding what a systematic, structured approach to the appraisal of this research entails, decision makers should, it is hoped, become more sophisticated users of this evidence. To that end, we anticipate that additional educational efforts and promotion of these questionnaires will be developed and made available to members. In addition, an interactive (i.e., web-based) questionnaire would facilitate uptake and support the educational goals of the questionnaire.

REFERENCES

1. Jansen JP, et al. An indirect treatment comparison and network meta-analysis study questionnaire to assess study relevance and credibility to inform healthcare decision-making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value in Health, 17:1.
2. Martin, et al. A retrospective observational study questionnaire to assess relevance and credibility to inform healthcare decision-making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value in Health, 17:1.
3. Caro JJ, et al. A modeling study questionnaire to assess study relevance and credibility to inform healthcare decision-making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value in Health, 17:1.
4. Garrison LP Jr, Neumann PJ, Erickson P, et al. Using real-world data for coverage and payment decisions: the ISPOR Real-World Data Task Force report. Value Health 2007;10:326-35.
5. Brixner DI, Holtorf AP, Neumann PJ, Malone DC, Watkins JB. Standardizing quality assessment of observational studies for decision making in health care. JMCP 2009;15(3):275-281.
6. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology 2011 Apr;64(4):401-6.
7. Atkins D, Eccles M, Flottorp S, Guyatt GH, Henry D, et al. (2004) Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. The GRADE Working Group. BMC Health Serv Res 4:38. doi:10.1186/1472-6963-4-38.
8. Glasziou P (2004) Assessing the quality of research. BMJ 328:39-41. doi:10.1136/bmj.328.7430.39.
9. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, et al. (2008) GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 336:924-926. doi:10.1136/bmj.39489.470347.ad.
10. Moher D, Jadad AR, Tugwell P (1996) Assessing the quality of randomized controlled trials. Current issues and future directions. Int J Technol Assess Health Care 12:195-208.
11. Berger ML, Dreyer N, Anderson F, et al. Prospective observational studies to assess comparative effectiveness: the ISPOR Good Research Practices Task Force report. Value Health 2012;15:217-230.
12. Jüni P, Witschi A, Bloch R, Egger M (1999). The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 282:1054-1060.
13. GRACE Principles. Available from: http://www.graceprinciples.org/ Accessed on: August 9, 2013.
14. STROBE checklist. Available from: http://www.strobe-statement.org/fileadmin/strobe/uploads/checklists/strobe_checklist_v4_cohort.pdf Accessed on: August 9, 2013.
15. ENCePP. Available from: http://www.encepp.eu/standards_and_guidances/documents/enceppguideofmethstandardsinpe.pdf Accessed on: August 9, 2013.

16. AHRQ User Guide for Developing a Protocol for Observational Comparative Effectiveness Research. http://www.ncbi.nlm.nih.gov/books/nbk126190/. Accessed on: August 9, 2013.
17. Thomas L, Peterson ED. The value of statistical analysis plans in observational research: defining high-quality research from the start. JAMA 2012 Aug 22;308(8):773-4.
18. Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the International Society for Pharmacoeconomics and Outcomes Research Good Research Practices for Retrospective Database Analysis Task Force Report--Part II. Value Health 2009 Dec;12(8):1053-61.
19. Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report--Part I. Value Health 2009 Dec;12(8):1044-52.
20. Altman DG (2000) Statistics in medical journals: some recent trends. Stat Med 19:3275-3289.
21. Ellenberg JH. Selection bias in observational and experimental studies. Statistics in Medicine 1994;13(5-7):557-67.
22. Gordis L. (1979) Conceptual and methodologic problems in measuring patient compliance. In: Haynes B, Taylor DW, Sackett DL, eds. Compliance in Health Care. Baltimore: The Johns Hopkins University Press, 23-45.
23. Rosenbaum PR. Heterogeneity and causality: unit heterogeneity and design sensitivity in observational studies. Am Stat 2005 May;59(2):147-52.
24. Forrow L, Taylor WC, Arnold RM. Absolutely relative: how research results are summarized can affect treatment decisions. Am J Med 1992;92:121-4.
25. Husereau D, Drummond M, Petrou S, et al. Consolidated health economic evaluation reporting standards (CHEERS): explanation and elaboration: a report of the ISPOR Health Economic Evaluations Publication Guidelines Good Reporting Practices Task Force. Value Health 2013;16:231-50.

Figure 1. Retrospective observational study assessment questionnaire flowchart.

TABLE 1. Questionnaire to assess the relevance and credibility of a retrospective observational study

The questionnaire consists of 31 questions related to the relevance and credibility of a retrospective observational study. Relevance questions relate to the usefulness of the prospective observational study to inform health care decision making. Each question will be scored as Yes / No / Can't Answer. Based on the scoring of the individual questions, the overall relevance of the prospective observational study needs to be judged as Sufficient or Insufficient. If the retrospective observational study is considered sufficiently relevant, its credibility is then assessed. Credibility is captured with questions in the following 6 domains: Design, Data, Analysis,