Using natural experiments to evaluate population health interventions


Using natural experiments to evaluate population health interventions Peter Craig MRC Population Health Sciences Research Network Public Health Intervention Research Symposium Toronto, 29-30 November 2010

MRC Population Health Sciences Research Network
A group of MRC Units and Centres, established in 2005 to:
- Add value to the MRC's existing core investment in population health sciences
- Provide a co-ordinated voice on research and policy issues relating to population health
- Develop capacity in the population health sciences
Main aims 2010-15: collaborate with other groups to deliver a programme of methodological knowledge transfer, i.e. to identify, refine and translate best practice in the population health sciences.
www.populationhealthsciences.org

Methodological knowledge transfer

Natural experiments: the early days
"It was obvious that no experiment could have been devised which would more thoroughly test the effect of water supply on the progress of cholera than this."
John Snow, On the mode of communication of cholera. London: Churchill, 1855

High expectations: can natural experiments solve all our evaluation problems in public health?
The major constraint to further progress on the implementation of public health interventions is the weakness of the evidence base regarding their effectiveness and cost-effectiveness. Current public health policy and practice, which includes a multitude of promising initiatives, should be evaluated as a series of natural experiments.

The Foresight obesity systems map provides a tool for research funders to identify priority areas. Broadly, areas requiring increased support include:
- Long-term interventions
- Studies focused on prevention
- Research targeting large population groups
- Evaluation of natural experiments (rather than highly controlled experimental paradigms)
Tackling obesities: future choices. Challenges for research and research management. London: Government Office for Science, 2007

Happiness index to gauge Britain's national mood
"If you want to know things (Should I live in Exeter rather than London? What will it do to my quality of life?) you need a large enough sample size, and if you have a big sample, and have more than one a year, then people can make proper analysis on what to do with their life. And next time we have a comprehensive spending review, let's not just guess what effect various policies will have on people's wellbeing. Let's actually know."
Government source, quoted in the Guardian, 15 November 2010.

What constitutes a natural experiment?
- An act of nature or other unplanned perturbation of an existing system?
- A situation in which naturally occurring variation in exposure can be exploited for the purposes of causal inference?
- A situation in which a naturally occurring control group is identified for comparison in a study of the effect of a policy measure?
- A situation in which people or groups are allocated to treatment groups as if they had been randomised?

A working definition
A natural experiment is a characteristic of an intervention and the way it is implemented, not a type of study design. The key characteristics are:
- the allocation of people or groups to treatment condition is outside the control of the researcher
- the introduction of the intervention is not motivated by research purposes
A variety of (observational or quasi-experimental) study designs may be used to evaluate such interventions.

How are they used?
- To identify causes of disease and of changes in population health: famine, migration, economic crisis
- To evaluate the impact of public health interventions: air pollution control, smoking bans, regulation of poisons
- To evaluate the health impact of non-health interventions: changes in domestic gas supply, control of alcohol imports
- And in many other ways
[Fig 1: Suicide rate per 100,000 in Sri Lanka, 1880-2005, annotated with the first reported case of pesticide poisoning (1954) and the bans on parathion/methyl parathion (1984), all class I pesticides (1995) and endosulfan (1998). Gunnell et al, Int J Epidemiol 2007]
[Fig 2: Sex-specific suicide rates per 100,000 by mode of death (all methods, carbon monoxide, all other methods), England and Wales, 1955-71. Kreitman N, Br J Prev Soc Med 1976, 30:86-93]
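
Studies like the pesticide-ban and coal-gas examples in the figures above are often analysed as interrupted time series. The sketch below is a minimal segmented regression of that kind, written for illustration only: the data are simulated, and the variable names (year, rate, post, time_after) and the assumed 1995 intervention year are mine, not the published analyses.

```python
# Minimal interrupted time series (segmented regression) sketch.
# All data here are simulated; this is not the Sri Lanka or
# England and Wales series shown in the figures.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
years = np.arange(1980, 2006)
ban_year = 1995  # hypothetical year the intervention (e.g. a ban) takes effect

df = pd.DataFrame({"year": years})
df["time"] = df["year"] - df["year"].min()               # underlying time trend
df["post"] = (df["year"] >= ban_year).astype(int)        # 1 after the intervention
df["time_after"] = np.maximum(0, df["year"] - ban_year)  # slope-change term

# Simulated rate per 100,000: rising trend, then a drop after the intervention
df["rate"] = (20 + 0.8 * df["time"] - 6 * df["post"] - 0.5 * df["time_after"]
              + rng.normal(0, 1.5, len(df)))

# 'post' estimates the step change at the intervention;
# 'time_after' estimates the change in trend afterwards.
model = smf.ols("rate ~ time + post + time_after", data=df).fit()
print(model.summary().tables[1])
```

With real annual rates one would also need to deal with serial correlation in the residuals, for example with robust (Newey-West) standard errors, before treating the step and trend changes as evidence of an intervention effect.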

Does a mention in the New York Times increase citations of a research paper?
Yes: papers mentioned in the NYT were cited 73% more than comparable papers.
But is this just because they are better?
No: papers mentioned in articles prepared during a strike, and included only in facsimile editions that were not published, were cited no more often than control papers.
The strike provides an almost perfect natural experiment, breaking the link between the quality of a paper and being mentioned in the NYT.
Phillips DP et al., N Engl J Med 1991, 325:1180-3

When should we use natural experiments?
- Whenever experimental manipulation of exposure to an intervention is unethical or impractical?
- Whenever an intervention is implemented in such a way that a planned experiment is not feasible?
- Whenever the (expected) effect size is so large that no credible alternative causal hypothesis is plausible?
- Whenever a planned experiment is impossible but there is an intervention and a source of data available for analysis?

When can we use them to best effect?
When there is scientific uncertainty about the size or nature of the effects of the type of intervention, and:
- it is impractical, unethical or politically unwelcome for the intervention to be introduced as a true experiment, and
- it is possible to obtain the relevant data from an appropriate study population, and
- the intervention, or the principles behind it, has the potential for replication, scalability or generalisability.

Identifying candidates for natural experimental studies
Size and nature of effects:
- How large are the effects?
- How rapidly do they follow a change in exposure?
Source of variation in exposure:
- How large is the change? Is it abrupt or gradual?
- How large is the population affected? Does it affect a whole population or a subset?
- How readily can individuals manipulate their own exposure?
Rapid, large effects are more readily detectable, but natural experiments can be used to detect more subtle effects so long as there is a suitable source of variation in exposure.
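
To make the detectability point concrete, a quick power calculation can show how rapidly the required study size grows as effects become more subtle. The sketch below is illustrative only and is not part of the original slides: the standardised effect sizes are assumptions, and a real natural experimental study would need a design-specific calculation.

```python
# Illustrative power calculation: required group size versus effect size.
# The effect sizes below are assumptions chosen purely for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for label, effect in [("large effect", 0.8), ("moderate effect", 0.5), ("subtle effect", 0.1)]:
    n = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"{label} (d={effect}): ~{n:.0f} observations per group for 80% power")
```

Subtle effects need far larger samples, which is why a suitable source of variation in exposure, and good routine data covering a large population, matter so much.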

Improving the design of natural experimental studies
"Calling a source of variation a natural experiment does not make that variation exogenous. But the natural experiment approach emphasises the importance of understanding the source of variation used to estimate key parameters. This is the primary lesson of recent work in the natural experiment mould. If one cannot experimentally control the variation one is using, one should understand its source." (Meyer B, J Business & Economic Statistics 1995, 13:151-61)
For large and/or rapid effects, simple approaches may be adequate. Natural experiments are not restricted to such cases, but more complicated designs will usually be needed.

Using natural experiments to detect small effects
Does fundholding by family doctors influence referral rates?*
- Source of variation: withdrawal of fundholding
- Identification strategy: difference in differences
- Testing: non-equivalent dependent variables
*Dusheiko et al., J Health Econ 2006, 25:449-78.
Did Head Start reduce childhood mortality?**
- Source of variation: targeting of help by poverty rates
- Identification strategy: regression discontinuity
- Testing: non-equivalent dependent variables
**Ludwig and Miller, Quarterly J Econ 2007:159-208.
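
As a rough illustration of the difference-in-differences strategy used in the fundholding example, the sketch below recovers a treatment effect from a two-group, two-period comparison. The data and variable names (treated, post, outcome) are invented for illustration; this is not the Dusheiko et al. analysis.

```python
# Difference-in-differences sketch on simulated data (not the fundholding study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

df = pd.DataFrame({
    "treated": np.repeat([0, 1], n // 2),  # exposed vs comparison group
    "post": np.tile([0, 1], n // 2),       # before vs after the policy change
})
# Simulated outcome: group difference, common time trend, and a true effect of -2.0
df["outcome"] = (5 + 3 * df["treated"] + 1.5 * df["post"]
                 - 2.0 * df["treated"] * df["post"]
                 + rng.normal(0, 1, n))

# The coefficient on treated:post is the difference-in-differences estimate.
did = smf.ols("outcome ~ treated * post", data=df).fit()
print(did.params["treated:post"])
```

In a real study one would use several pre- and post-intervention periods, adjust the standard errors for clustering, and check non-equivalent dependent variables that should not respond to the intervention, as the slide's "Testing" row suggests.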

Choosing the right methods
Size and type of effects; availability of routine data
Minimising bias:
- By design: multiple pre-post measures; multiple exposed/unexposed groups
- In analysis:
  - Selection on observables: matching, propensity scores (see the sketch below)
  - Selection on unobservables: difference in differences, instrumental variables, regression discontinuity
Testing: mediators of change, non-equivalent dependent variables, sensitivity analysis
Combining methods and comparing results
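
For the "selection on observables" option, a minimal propensity-score sketch follows. The covariates, the exposure model and the choice of nearest-neighbour matching on the estimated score are illustrative assumptions added here, not methods prescribed by the slides.

```python
# Propensity-score matching sketch on simulated data (illustrative only).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 1000

# Simulated covariates and an exposure whose probability depends on them
X = pd.DataFrame({"age": rng.normal(50, 10, n), "deprivation": rng.uniform(0, 1, n)})
p_exposed = 1 / (1 + np.exp(-(-3 + 0.04 * X["age"] + 2 * X["deprivation"])))
exposed = rng.binomial(1, p_exposed)
# Outcome with confounding by the covariates and a true exposure effect of +1.0
outcome = (0.05 * X["age"] + 3 * X["deprivation"]).to_numpy() \
          + 1.0 * exposed + rng.normal(0, 1, n)

# 1. Estimate propensity scores from the observed covariates
ps = LogisticRegression().fit(X, exposed).predict_proba(X)[:, 1]

# 2. Match each exposed unit to its nearest unexposed unit on the score
treated_idx = np.where(exposed == 1)[0]
control_idx = np.where(exposed == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matches = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matches.ravel()]

# 3. Compare mean outcomes in the matched sample
print(outcome[treated_idx].mean() - outcome[matched_controls].mean())
```

Matching on the propensity score only removes bias from covariates that are observed and modelled; the "selection on unobservables" designs in the list (difference in differences, instrumental variables, regression discontinuity) are needed when unmeasured confounding is the concern.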

Recommendations
- Natural experiments are valuable, but are not a panacea for all evaluation problems
- Natural experiments can be used to study subtle as well as rapid/large effects, but a rigorous approach to dealing with bias is essential
- Good research infrastructure is needed to exploit available opportunities: links with policy, flexible funding, good routine data, etc.
- Single natural experimental studies are unlikely to be conclusive; replication is vital
- Transparent reporting is also essential; a prospective register would help

The new MRC guidance
- Great interest among researchers and policy-makers
- Lack of general guidance for producers and users of natural experimental evidence
- Experience suggests that new guidance would be useful, covering:
  - When to use natural experiments
  - Natural and planned experiments
  - Economic considerations
  - Choice of methods
  - Reporting and evidence synthesis
Expected publication: Spring 2011

Acknowledgements
Funding: MRC Population Health Sciences Research Network; MRC Methodology Research Panel
Authors:
- Cyrus Cooper, Director, MRC Lifecourse Epidemiology Unit
- Peter Craig, Programme Manager, MRC Population Health Sciences Research Network
- David Gunnell, Professor of Epidemiology, Department of Social Medicine, University of Bristol
- Sally Haw, Principal Public Health Adviser and Associate Director, Scottish Collaboration for Public Health Research and Policy
- Kenny Lawson, Research Fellow in Health Economics, Centre for Population and Health Sciences, University of Glasgow
- Sally Macintyre, Director, MRC/CSO Social and Public Health Sciences Unit
- David Ogilvie, Clinical Investigator Scientist and Honorary Consultant in Public Health Medicine, MRC Epidemiology Unit, University of Cambridge
- Barney Reeves, Professorial Fellow in Health Services Research and co-head, Clinical Trials and Evaluation Unit, University of Bristol
- Matt Sutton, Professor of Health Economics, Health Methodology Research Group, School of Community-based Medicine, University of Manchester
- Simon Thompson, Director, MRC Biostatistics Unit