Observational Studies and PCOR: What are the right questions?

Presenters: Robin Newhouse Steven Goodman David Hickam


Observational Studies and PCOR: What are the right questions?
Steven Goodman, MD, MHS, PhD, Stanford University, PCORI MC
Steve.goodman@stanford.edu

A tale of two cities? "It was the best of times, it was the worst of times, it was the age of Clinical Trials, it was the age of Observational studies, it was the epoch of Data Mining, it was the epoch of Pre-specification, it was the season of Discovery, it was the season of Decision, it was the spring of Effectiveness, it was the winter of Harms, we had Truth before us, we had Lies before us, we were all going direct to Causality, we were all going direct the other way--in short, the period was the present period, and some of its noisiest funders and policy makers insisted on its being perceived, for good or for evil, with a superlative degree of scientific rigor."

These questions appear almost daily

If you're a runner, you might have noticed this surprising headline from the April 5 edition of the Guardian: "Brisk walk healthier than running - scientists." Or maybe you saw this one, which ran in Health magazine the very same day: "Want to lose weight? Then run, don't walk: Study." Both articles described the work of a herpetologist-turned-statistician at the Lawrence Berkeley National Laboratory named Paul T. Williams, who, this month, achieved a feat that's exceedingly rare in mainstream science: he used exactly the same dataset to publish two opposing findings.

Foundational equations: E = mc^2; e^(iπ) = -1

Foundational equation of Epidemiology: Pr(Outcome | X = x) = Pr(Outcome | Set(X = x)). In English: the probability of an outcome when we observe a risk factor at a given value is equal to the probability when we actively set the risk factor to that same value. (This equation does not specify how one sets the variable.)
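This identity holds only when observing X = x and setting X = x lead to the same outcome distribution; confounding breaks it. A minimal simulation sketch (all parameters hypothetical, chosen only for illustration) of an unmeasured confounder U that raises both the chance of exposure and the chance of the outcome:

```python
import random

random.seed(0)
N = 200_000

def trial(intervene=None):
    """One subject. U is an unmeasured confounder of exposure X and outcome Y."""
    u = random.random() < 0.5                      # confounder present half the time
    if intervene is None:
        x = random.random() < (0.8 if u else 0.2)  # observed exposure tracks U
    else:
        x = intervene                              # Set(X=x): the U -> X link is cut
    y = random.random() < 0.1 + 0.3 * u + 0.1 * x  # outcome depends on both U and X
    return x, y

# Observational: condition on subjects we happen to see with X = 1
obs = [y for x, y in (trial() for _ in range(N)) if x]
p_obs = sum(obs) / len(obs)

# Interventional: actively set X = 1 for everyone
p_do = sum(y for _, y in (trial(intervene=True) for _ in range(N))) / N

print(f"Pr(Outcome | X=1)      ~ {p_obs:.3f}")
print(f"Pr(Outcome | Set(X=1)) ~ {p_do:.3f}")
```

With these assumed parameters, the observed risk among the exposed overstates the risk under intervention, because U inflates both exposure and outcome; the two sides of the equation differ even though X's causal effect is the same in both runs.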

Committee charge
1. What are the ethical and informed consent issues that must be considered when designing RCTs to evaluate safety risks?
2. What are the strengths and weaknesses of various approaches, including observational studies (including patient registries), meta-analyses (including patient-level data meta-analyses), and randomized controlled trials, to generate evidence about safety questions?
3. Considering the speed, cost, and value of studies, what types of follow-up studies are appropriate to investigate different kinds of signals (detected pre-approval or post-approval), and in what temporal order?
4. Under what circumstances should head-to-head randomized clinical trials for safety be required?
5. How should FDA factor in different kinds of safety evidence in considering different kinds of regulatory actions?

How to decide between an OS and an RCT
- Size and nature of the signal
- Size of effect needed to justify a policy change
- Time urgency
- Other potential causes of the outcome; studying intended or unintended effects?
- Quality and availability of data
- Transportability
- Consider the analytic approach, not just the design.

What should it do? Part 1: FUNCTIONS. "The methodology committee shall work to develop and improve the science and methods of comparative clinical effectiveness research by developing and periodically updating the following: (i) Methodological standards for research. Such methodological standards shall provide specific criteria for internal validity, generalizability, feasibility, and timeliness of research and for health outcomes measures, risk adjustment ... The process for developing and updating such standards shall include input from relevant experts, stakeholders, and decisionmakers, and shall provide opportunities for public comment. Such standards shall also include methods by which patient subpopulations can be accounted for and evaluated in different types of research."

What should it do? Part 2 (ii) A translation table that is designed to provide guidance and act as a reference for the Board to determine research methods that are most likely to address each specific research question.

Translation Framework

Emerging principles
1. Keep the research question and the design separate.
2. Focus on clarifying tradeoffs.
3. Expect that several versions of a translation tool might be needed for different decision-makers and different research categories.
4. Place individual research studies in the context of a research program.
5. The choice of study design must take into account the state of the art of research methodology.

The role of observational research In both of these reports, the role and place of observational research in the world of therapeutics is central. There seems to be a strong need to apply the same, somewhat formulaic rules of evidence/design familiar from the EBM world to observational data. This comes in through creating rules or at least a social consensus about what constitutes legitimate designs for various questions.

RCT-Observational canards

Trait                                            RCTs        OSs
Bias                                             Low         High
Resources (time, $)                              High        Low
Causal inference                                 High        Low
Transportability                                 Low/Medium  High
Reporting biases                                 Low/Medium  High
Real-world interventions, outcomes, comparators  Low         High
Patient centered                                 Low         High
Data quality                                     High        Low
Ethicality                                       Low         High
Sample size                                      Low/Medium  High

Methods canards
- New statistical methods solve the inferential problems of OSs.
- Large amounts of data can overcome the problems of poor data.
- Electronic health records make high-quality observational research much easier.
- The more personal the data, the better the prediction.
- Better prediction means better outcomes.
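The second canard can be checked directly: a larger sample shrinks random error but not systematic error, so a bigger biased study just becomes confidently wrong. A hypothetical sketch (the bias and noise scale are assumed, not from any real study) in which 95% confidence intervals tighten around the biased value as n grows, and coverage of the true effect collapses:

```python
import math
import random

random.seed(2)

TRUE_EFFECT = 0.0   # the true mean effect (hypothetical)
BIAS = 0.2          # systematic error from poor data or design (assumed)

def ci_covers_truth(n):
    """Run one biased study of size n; does its 95% CI contain the true effect?"""
    xs = [TRUE_EFFECT + BIAS + random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    half_width = 1.96 / math.sqrt(n)   # known unit sd, for simplicity
    return abs(mean - TRUE_EFFECT) <= half_width

# Coverage of the nominal 95% interval falls toward zero as n grows
for n in (25, 100, 400, 1600):
    coverage = sum(ci_covers_truth(n) for _ in range(2_000)) / 2_000
    print(f"n = {n:5d}  coverage of the true effect: {coverage:.2f}")
```

More data makes the interval narrower, not more honest: the same 0.2 bias dominates at every sample size, and the nominal 95% interval covers the truth ever less often.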

What is true
- The distinctions between OSs and RCTs can be subtle.
- Interventions are changing, with more need to evaluate complex interventions.
- We have not just more data, but different data types and more complex data.
- We do have new, improved methods, though many are unproven.
- We have new outcomes we should or can measure.
- We have different decisions to make about how to use new decision tools (e.g., diagnostics) and interventions.
- We have more emphasis on transparency, reproducibility, and wider engagement in the scientific process.

Basis for meeting plan [diagram]: effect in practice; true effect in the study setting; average effect measured by the study design; effect observed in the actual study. Links labeled: generalizability / external validity, bias, random error, subgroup effects.

Central questions Does the method address the question we are really interested in? Does it estimate the effect correctly for those to whom the results are applied? If not, how wrong could it be? Does it estimate the uncertainty in the effect estimate correctly? If not, how wrong could it be?

Central questions Which of the approaches discussed today and tomorrow would be sound enough to guide a treatment decision? If not, could they get there and what would it take?

PCORI Decisions
- Which methodologies should be used, and how?
- Investment in the development of which new methodologies will yield the greatest benefit? (Measurement; design; analysis; data generation and linkage; decision making)
- What methods should the next generation of PCOR researchers be taught?