Methodological requirements for real-world cost-effectiveness assessment


Methodological requirements for real-world cost-effectiveness assessment. Ismo Linnosmaa, Department of Health and Social Management, UEF; Centre for Health and Social Economics, THL

Outline
1. Basic concepts in cost-effectiveness research
2. How to assess cost-effectiveness: data and methods
The concepts treatment, intervention and service are used interchangeably. 29/08/2018

Basic concepts

Framework to assess costs and effectiveness
- Inputs: costs
- Outputs: services
- Effectiveness: change in well-being or health
Productivity is assessed using information on outputs and inputs; cost-effectiveness is assessed using information on costs and effectiveness.

Effectiveness
[Figure: well-being/health over time. Effectiveness E is the difference between the observed well-being/health with the service and the counterfactual well-being/health without the service, measured from the point where service use begins.]

Resource use and costs
Steps in measuring the costs of interventions or services:
- Choice of perspective for the evaluation; the societal perspective is recommended
- Identification and measurement of resource use
- Valuing the use of resources: the opportunity cost of resource use
- Role of non-medical and future costs
Remark: costs or cost reductions can also be outcomes of interest.
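The costing steps above (measure resource use, then value it at unit costs) can be sketched as a small calculation. The resource categories and unit prices below are hypothetical, chosen purely for illustration.

```python
# Minimal costing sketch: hypothetical resource-use quantities valued at
# hypothetical unit (opportunity) costs per resource category.
resource_use = {"nurse_hours": 12.0, "physician_hours": 2.5, "drug_doses": 30}
unit_cost = {"nurse_hours": 40.0, "physician_hours": 120.0, "drug_doses": 3.5}

# Total cost = sum over categories of quantity used x unit cost.
total_cost = sum(q * unit_cost[r] for r, q in resource_use.items())
print(total_cost)  # 12*40 + 2.5*120 + 30*3.5 = 885.0
```

In a real evaluation the categories included would follow from the chosen perspective (e.g. a societal perspective would also count non-medical costs such as patients' time).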

Incremental cost-effectiveness ratio
Consider old (0) and new (1) technologies with costs C_0 and C_1 and outcomes E_0 and E_1. Economic analysis is often interested in the incremental cost-effectiveness ratio

  ICER_{0,1} = (C_1 - C_0) / (E_1 - E_0),

measuring the cost of one unit of outcome (e.g. health, QALY or well-being) gained when the decision-maker decides to replace the current technology with the new technology.
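The ICER definition above is a simple ratio of cost and outcome differences; the numbers below are hypothetical.

```python
def icer(c0, c1, e0, e1):
    """Incremental cost-effectiveness ratio of technology 1 over technology 0:
    (C1 - C0) / (E1 - E0)."""
    return (c1 - c0) / (e1 - e0)

# Hypothetical example: the new technology costs 12,000 vs 10,000
# and yields 2.5 vs 2.0 QALYs.
print(icer(c0=10_000, c1=12_000, e0=2.0, e1=2.5))  # 2000 / 0.5 = 4000.0
```

Here the new technology costs 4,000 per additional QALY gained; whether that is acceptable depends on the decision-maker's willingness to pay per QALY.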

How to assess cost-effectiveness: data and methods

Data sources
Data from controlled experiments (henceforth experimental data):
- Data collection and measurements are controlled by experimenters or researchers
- Sample size is often relatively small
- Focus is often on specific populations, e.g. multimorbid patients older than 75 years waiting for a hip replacement
Real-world data:
- Is observational (henceforth observational data): records what goes on in the real world
- Sample size can be large
- Examples: data from administrative records, survey data, population surveys

Randomized controlled trials (RCTs)
RCTs are often considered the best research designs for measuring causal effects of treatments on outcomes, i.e. treatment effects. In RCTs:
- Assignment of individuals to treatment and control groups is controlled by researchers
- Randomization ensures unbiased estimation of treatment effects: random assignment ensures that, on average, study subjects in the treatment and control groups are similar in all respects other than the treatment; hence any differences in outcomes are caused by the treatment
Examples: clinical trials, the RAND health insurance experiment (Manning et al. 1987)

Drawbacks of RCTs
RCTs:
- may be costly to carry out
- are sometimes considered unethical because of the random allocation of subjects; usual practice in health and social care is that services are allocated on the basis of need
- may not always work as desired:
  - drop-outs after randomization, e.g. more old people die in the treatment group than in the control group (King et al. 2011)
  - non-compliance with the experimental protocol

Observational data and methods
In observational data, selection into treatment may not be purely random but affected by other factors, e.g. need of care, so treatment effects may be confounded by factors influencing selection. Methodological approaches to control for systematic selection when estimating causal effects (Jones and Rice, 2011):
I Selection on observables:
- Regression techniques
- Matching techniques (e.g. propensity score matching)
II Selection on unobservables:
- Instrumental variables estimation
- Regression discontinuity
- Difference-in-differences
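As a minimal illustration of the selection-on-observables idea, the sketch below matches each treated unit to the control with the closest value of a single observable x (a stand-in for a propensity score) and averages the outcome differences. All numbers are hypothetical and this one-covariate version only illustrates the logic, not a full matching estimator.

```python
def matched_att(treated, controls):
    """Average treatment effect on the treated: match each treated unit
    (x, y) to the control unit with the closest covariate value x,
    then average the outcome differences y_treated - y_matched_control."""
    diffs = []
    for x_t, y_t in treated:
        x_c, y_c = min(controls, key=lambda c: abs(c[0] - x_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# Hypothetical (covariate, outcome) pairs.
treated = [(0.6, 10.0), (0.8, 12.0)]
controls = [(0.5, 7.0), (0.9, 9.0), (0.2, 5.0)]
print(matched_att(treated, controls))  # matches 0.6->0.5 and 0.8->0.9: 3.0
```

The estimate is only credible under the assumptions discussed on the next slides: conditional independence given x and overlap of x between the groups.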

Methodological requirements
All methodological approaches come with modelling assumptions that need to be satisfied for causal estimation.
I Selection on observables:
- Regression techniques assume conditional independence (CI): conditional on the observables x in the regression, outcome and treatment are independent (i.e. selection is controlled by the observables x)
- Matching assumes CI and weak overlap of observables in the treatment and control groups: overlap creates the possibility to match

Methodological requirements
II Selection on unobservables:
- Instrumental variables methods assume the existence of strong and valid instruments
- Regression discontinuity assumes the existence of a forcing variable causing a discontinuous change in the assignment to treatment; e.g. eligibility for a disability benefit ends at a certain age
- Difference-in-differences assumes longitudinal data and parallel trends: without treatment, the trends in the treatment and control groups are the same
A review by Kreif et al. (2013) shows that in the majority of published cost-effectiveness studies using observational data (N=81), the modelling assumptions needed to estimate causal effects were not properly assessed.
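The difference-in-differences logic above reduces, in its simplest form, to an arithmetic contrast of four group means. The numbers below are hypothetical; under parallel trends, the control group's change identifies what the treatment group's change would have been without treatment.

```python
def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate from four group means:
    (change in treatment group) - (change in control group)."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical outcome means before and after the intervention.
print(did(treat_pre=50.0, treat_post=58.0, ctrl_pre=48.0, ctrl_post=53.0))
# (58 - 50) - (53 - 48) = 8 - 5 = 3.0
```

If trends are not parallel, e.g. the treatment group was already improving faster before the intervention, this contrast misattributes the differential trend to the treatment.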

Conclusions
Improved availability of real-world data increases the possibilities to evaluate treatment effects in cost-effectiveness studies. When using real-world data, it is important to:
- evaluate the possibility of selection bias, and
- apply proper methods to estimate treatment effects

Literature
Jones AM, Rice N (2011) Econometric evaluation of health policies. In: Glied S, Smith PC (eds.) The Oxford Handbook of Health Economics, Oxford University Press.
Kreif N, Grieve R, Sadique MZ (2013) Statistical methods for cost-effectiveness analysis that use observational data: A critical appraisal tool and review of current practice. Health Economics 22: 486-500.
King G, Nielsen R, Coberley C, Pope JE, Wells A (2011) Avoiding randomization failure in program evaluation, with application to the Medicare Health Support Program. Population Health Management 14(1): 11-22.
Manning WG, Newhouse JP, Duan N, Keeler EB, Leibowitz A, Marquis MS (1987) Health Insurance and the Demand for Medical Care: Evidence from a Randomized Experiment. The American Economic Review 77(3): 251-277.

More information: ismo.linnosmaa@uef.fi