Critical Thinking A tour through the science of neuroscience


Critical Thinking: A tour through the science of neuroscience. NEBM 10032/5: Publication Bias. Emily S Sena, Centre for Clinical Brain Sciences, University of Edinburgh. Slides at @CAMARADES_

Bias. Biases in research and reporting are presumably exacerbated by systems of publication and career evaluation that reward impact and productivity over quality and the ability to replicate studies. Systematic reviews can only include data made available; if unpublished data are systematically different from the published literature, then the summarised data, opinions and understanding will be biased.

Growing Research Area. Increased efforts in the study of conscious and unconscious biases in research, which threaten human health, waste economic resources and threaten scientific progress. Fanelli and Ioannidis, 2013

Reporting bias. Neutral and negative studies are affected by: publication bias, time lag bias, language bias, vibration effects, selective analysis reporting, selective outcome reporting.

Publication Bias. Neutral and negative studies remain unpublished, so they are less likely to be identified in a systematic review, which leads to the overstatement of efficacy in meta-analysis. A hot topic for clinical trials: summary results of clinical trials conducted in Europe are publicly available.

How to assess for it? All methods use each study's effect size and corresponding standard error. To assess for its presence: funnel plot, Egger regression. To estimate efficacy in the absence of publication bias: Trim and Fill.

Funnel Plots. Work on the basis that small studies are likely to be more spread around the mean, and that small studies that are significant are more likely to be published (hence the inverted funnel). Funnel plots are visually assessed and are most useful with a large number of studies. Relative measures plotted on a log scale ensure that effects of the same magnitude but opposite directions are equidistant from 1.0. Used to examine whether smaller studies report larger treatment effects.

Funnel Plot [figure: precision (0.0 to 0.5) plotted against effect size (-150 to 150)]
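A minimal sketch of the quantities behind a funnel plot, using illustrative numbers rather than the data on the slide; a real plot would feed the (effect, precision) pairs into a plotting library:

```python
# Illustrative per-study effect sizes and standard errors (not Sena's data).
effects = [-20.0, 5.0, 12.0, 18.0, 25.0, 40.0, 75.0]
ses     = [30.0, 12.0, 8.0, 5.0, 10.0, 20.0, 35.0]

# Precision (1/SE) goes on the vertical axis: large, precise studies sit
# near the top of the funnel and should cluster around the pooled estimate.
precision = [1.0 / se for se in ses]

# Fixed-effect (inverse-variance) pooled estimate: the funnel's centre line.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)

for y, p in zip(effects, precision):
    print(f"effect={y:7.1f}  precision={p:.3f}")
print(f"pooled effect (funnel centre line) = {pooled:.2f}")
```

With publication bias, the bottom-left of this scatter (small, imprecise, neutral or negative studies) tends to be empty.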

Egger Regression. Egger regression statistically assesses asymmetry of the funnel plot: the standardised effect is regressed on the precision, i.e. a weighted regression of the effect size on its standard error with the weight equal to the precision. With no asymmetry, the regression line and its 95% CI will pass through the origin. Again, asymmetry is not proof of bias, but it does raise questions regarding the interpretation of results.

Egger Regression [figure: precision (-0.1 to 0.5) plotted against standardised effect (effect size/standard error, -10 to 25)]
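A small sketch of the Egger test on illustrative data: the standardised effect (effect/SE) is regressed on precision (1/SE), and an intercept far from zero suggests funnel-plot asymmetry. Plain least squares is used here for transparency; statistical packages report the same intercept with a t-test against zero.

```python
import math

# Illustrative data (not the slide's dataset).
effects = [-20.0, 5.0, 12.0, 18.0, 25.0, 40.0, 75.0]
ses     = [30.0, 12.0, 8.0, 5.0, 10.0, 20.0, 35.0]

x = [1.0 / se for se in ses]                  # precision
y = [e / se for e, se in zip(effects, ses)]   # standardised effect

# Ordinary least squares for y = intercept + slope * x.
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
intercept = ybar - slope * xbar

# Standard error of the intercept, for the usual test of asymmetry.
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s2 = sum(r ** 2 for r in resid) / (n - 2)
se_int = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))

print(f"intercept = {intercept:.2f} (SE {se_int:.2f}), "
      f"|t| = {abs(intercept) / se_int:.2f}")
```

If the intercept's confidence interval excludes zero, the funnel is asymmetric; the slope estimates the underlying effect.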

Trim and Fill. Missing studies are iteratively identified ("trimming") and replaced ("filling") in order to calculate an adjusted estimate of effect size. Studies with the largest deviation from the mean are trimmed, and the remaining symmetrical plot is used to recalculate the summary effect. The trimmed studies are then replaced and their mirror-image counterparts filled to correct the variance, with the mirror axis placed along the adjusted estimate.

Trim and Fill. Overall efficacy was reduced from 32% (95% CI 30 to 34%) to 26% (95% CI 24 to 28%).
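A deliberately simplified sketch of the fill-and-recompute step, on made-up numbers, assuming the trimming stage has already flagged k0 one-sided studies (the full Duval and Tweedie procedure estimates k0 iteratively from rank statistics):

```python
def pooled_fixed(effects, ses):
    # Inverse-variance fixed-effect pooled estimate.
    w = [1.0 / se ** 2 for se in ses]
    return sum(wi * y for wi, y in zip(w, effects)) / sum(w)

# Illustrative asymmetric funnel: small studies report larger effects.
effects = [10.0, 20.0, 28.0, 34.0, 45.0]
ses     = [4.0, 6.0, 9.0, 12.0, 15.0]

centre = pooled_fixed(effects, ses)

# Assume trimming identified the two studies deviating most from the mean.
k0 = 2
extreme = sorted(range(len(effects)),
                 key=lambda i: abs(effects[i] - centre), reverse=True)[:k0]

# "Fill": mirror each flagged study about the centre line, keeping its
# standard error, then recompute the summary effect over the augmented set.
filled_effects = effects + [2 * centre - effects[i] for i in extreme]
filled_ses     = ses + [ses[i] for i in extreme]

adjusted = pooled_fixed(filled_effects, filled_ses)
print(f"unadjusted = {centre:.1f}, adjusted = {adjusted:.1f}")
```

This mirrors the logic behind the slide's numbers: filling the missing counterparts pulls the summary effect back towards a less biased estimate.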

Publication bias in experimental stroke. Trim and Fill suggested 16% of experiments remain unpublished. Best estimate of the magnitude of the problem: overstatement of efficacy of 31%. Only 2% of publications reported no significant treatment effects.

Cancer Induced Bone Pain (animals). 103 filled studies; effect reduced from 2.79 (2.6 to 3.0; n=257) to 1.6 (1.4 to 1.8; n=360): a 42.3% relative overstatement.

Publication bias 20% to 32%

Model                   n expts   Estimated unpublished   Reported efficacy   Corrected efficacy
Stroke infarct volume   1359      214                     31.3%               27.5%
EAE neurobehaviour      1892      505                     33.1%               15.0%
EAE inflammation        818       14                      38.2%               37.5%
EAE demyelination       290       74                      45.1%               30.5%
EAE axon loss           170       46                      54.8%               41.7%
AD Water Maze           80        15                      0.688 sd            0.498 sd
AD plaque burden        632       154                     0.999 sd            0.610 sd

Few Studies. 16 studies, OR 6.7 (3.7 to 12.2)

Few Studies. 7 filled trials; OR reduced from 6.7 to 2.7: a 150% relative overstatement in effect?

More Data. 165 comparisons, OR 1.8 (1.7 to 1.9)

More Data. 34 filled trials; OR reduced from 1.8 to 1.6: a 10% relative overstatement in effect

Limitations. Too few studies: ideally you want >25, with a reasonable dispersion of sample sizes. For sparse data the Mantel-Haenszel methods are probably more appropriate for weighting, but the Trim and Fill function only has the inverse-variance method enabled. Small study effects may have other causes. ORs rather than NNT: a measure of variance is difficult to derive for the NNT, and ORs are more extreme than RRs when the event rate is high, so asymmetry may be observed with ORs.

Note of caution. Funnel plot asymmetry has a number of potential sources: selection bias (publication bias, reporting bias, biased inclusion criteria); true heterogeneity (study effect differs according to study size, differences in underlying risk); data irregularities (poor methodological design, inadequate analysis, fraud); artefact (poor choice of effect measure); chance.

Is the effect an artefact of bias? We checked for its presence and impact, but not whether the overall effect is robust. Rosenthal's fail-safe N: how many studies do we need to nullify the observed effect? It is statistical rather than substantive, and assumes missing studies have an effect of zero rather than negative or small positive effects. Orwin's fail-safe N attempts to address both issues.
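Both fail-safe N variants are simple enough to compute directly; a sketch with illustrative inputs:

```python
import math

def rosenthal_failsafe_n(z_scores, z_alpha=1.645):
    # Rosenthal (1979): number of unpublished zero-effect studies needed
    # to drag the combined one-tailed Z below the significance threshold.
    k = len(z_scores)
    total = sum(z_scores)
    return max(0, math.floor((total / z_alpha) ** 2 - k))

def orwin_failsafe_n(mean_d, k, d_criterion, d_missing=0.0):
    # Orwin (1983): number of missing studies (with effect d_missing,
    # not necessarily zero) needed to reduce the mean effect size to a
    # substantively trivial criterion value d_criterion.
    return math.ceil(k * (mean_d - d_criterion) / (d_criterion - d_missing))

# Illustrative numbers only (not from the lecture's datasets).
print(rosenthal_failsafe_n([2.1, 2.5, 1.9, 3.0, 2.2]))
print(orwin_failsafe_n(mean_d=0.6, k=10, d_criterion=0.2))
```

Orwin's version addresses both criticisms on the slide: it targets a substantive rather than purely statistical threshold, and it lets the assumed effect of the missing studies be non-zero.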

Other options: cumulative meta-analysis by size. This allows you to see whether adding smaller studies shifts the stable effect size given by the larger studies. Smaller studies are generally given less weight, but are they influencing the pooled estimate? Uses a fixed-effect model (Mantel-Haenszel), where less weight is given to smaller studies, so the estimate will not be thrown by a few aberrant studies.

Cumulative MA by size

Cumulative MA by size
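The idea behind the two plots above can be sketched in a few lines. This toy version (made-up data) pools studies from largest to smallest with inverse-variance fixed-effect weights rather than Mantel-Haenszel, as a simplification; a running estimate that drifts once the small studies enter suggests small-study effects:

```python
def cumulative_by_size(effects, ses, ns):
    # Sort studies largest-first, then pool cumulatively with
    # fixed-effect inverse-variance weights, recording the running
    # pooled estimate after each study is added.
    order = sorted(range(len(ns)), key=lambda i: ns[i], reverse=True)
    w_sum = wy_sum = 0.0
    running = []
    for i in order:
        w = 1.0 / ses[i] ** 2
        w_sum += w
        wy_sum += w * effects[i]
        running.append(wy_sum / w_sum)
    return running

# Illustrative data: three large neutral studies, two small positive ones.
effects = [2.0, 1.0, 3.0, 20.0, 25.0]
ses     = [2.0, 2.5, 3.0, 8.0, 10.0]
ns      = [400, 350, 300, 30, 20]
print(cumulative_by_size(effects, ses, ns))
```

Here the estimate is stable across the three large studies and then shifts upward as the small positive studies are added, which is the pattern the plots are designed to expose.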

Other options - Excess Significance To identify whether there is an excess of significant studies within previously conducted systematic reviews of neurological disorders

Excess Significance Test. Examines whether too many individual studies in a meta-analysis report statistically significant results, driven by selective reporting of analyses or outcomes. It has low power to detect bias in a single meta-analysis with few studies, but is applicable across many meta-analyses in a field.

Methods: Dataset. 6 diseases: Alzheimer's disease, EAE, focal ischaemia, intracerebral haemorrhage, Parkinson's disease, and spinal cord injury. 60 meta-analyses, 4,445 experiments. Median sample size = 16 (IQR 11 to 20).

Methods: Data extraction. Year of publication; intervention; effect size + standard error; sample size; quality score items (peer review, randomisation, allocation concealment, blinded assessment of outcome, sample size calculation, conflict of interest); funnel plot asymmetry.

Methods: Excess Significance Test (O > E). Observed: nominally positive studies (p<0.05). Binomial test. The expected number of positive studies is given by the power of each study, with the true effect size taken as the fixed effect size of the most precise study. Wilcoxon rank sum test.
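A simplified sketch of that comparison on illustrative numbers: per-study power is computed from an assumed true effect (here, the effect of the most precise study), summed to give the expected count of significant studies, and a binomial tail probability is used for the observed count. The real test has refinements beyond this, but the O-versus-E logic is the same:

```python
import math

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sided(theta, se, alpha_z=1.96):
    # Power of a two-sided z-test when the true effect is theta.
    return (1.0 - phi(alpha_z - theta / se)) + phi(-alpha_z - theta / se)

def excess_significance_p(observed, powers):
    # One-sided binomial tail P(X >= observed) with the mean per-study
    # power as the success probability (a simplification).
    n = len(powers)
    p = sum(powers) / n
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(observed, n + 1))

# Illustrative data: theta from the most precise study, eight small studies.
theta = 0.3
ses = [0.35, 0.40, 0.30, 0.45, 0.38, 0.50, 0.42, 0.33]
powers = [power_two_sided(theta, se) for se in ses]
observed = 6  # studies reporting p < 0.05
print(f"expected = {sum(powers):.1f}, "
      f"p = {excess_significance_p(observed, powers):.4f}")
```

When the observed count of significant studies far exceeds the expected count, as in the aggregate result on the next slide, the tail probability is small and excess significance is inferred.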

Assessed excess significance in aggregate across the six diseases, within each disease, and by subgroup: amount of heterogeneity, significant summary fixed effect, Egger regression, and reporting of measures to minimise bias (quality score).

Expected significant results = 919 (21%) Observed significant results = 1719 (39%) Excess significance was present across all six neurological disorders in all subgroups defined by methodological or reporting characteristics The strongest excess significance was observed in meta-analyses where small study effects were also observed with the least precise studies

Some research areas may be more susceptible to bias. Softer sciences: the proportion of studies reporting positive results increases moving from the physical sciences to medical to social sciences, where research choices are more influenced by a scientist's own beliefs, expectations and wishes (e.g. behavioural studies). Regional: "publish or perish" has long characterised US-based research, and research from Asian and developing countries tends to be rejected unless it reports extraordinary results. Fanelli and Ioannidis, 2013

Further analysis of research bias. 82 meta-analyses, 1,174 primary outcomes, sampled from health-related biological and behavioural research. Measured how individual results deviated from the overall summary estimate of effect within their respective meta-analysis, stratified by research type and country of origin. Fanelli and Ioannidis, 2013

Hypotheses In the life sciences there are perverse incentives (publication, funding, promotion) to produce positive results with little attention paid to their validity Leads to a body of evidence with an inflated proportion of published studies with statistically significant results This potentially compromises the utility of animal models and contributes to translational failure

What happens. Small (underpowered), poorly conducted studies reach spurious (falsely positive) conclusions but are published because they are seen to be interesting. Small (perhaps), poorly conducted (sometimes) studies not reaching the same conclusions are not published. Investigators become conditioned by the apparent success that comes from conducting small underpowered studies, and keep trying to replicate the positive studies. This is exacerbated by the combination of pressures to publish and a winner-takes-all system of rewards.

What can we do? Problem: not all outcomes are reported, and a priori analyses are not always reported. What we need: to know whether scientists report what they set out to report. Solution: published protocols and registries.

Thanks to... Edinburgh: Malcolm Macleod, Kieren Egan, Hanna Vesterinen, Gill Currie, Rustam Al-Shahi Salman, Joseph Frantzis, project students. Melbourne: David Howells, Ana Antonic, Peter Batchelor, Taryn Wills, Sarah McCann. Stanford/Ioannina: John Ioannidis, Kostad Tsilidis, Orestis Panagiotou, Eleni Aretouli, Vangelis Evangelou. Utrecht: Bart van der Worp. Nottingham: Philip Bath. Translators, NHS R&D methodology program, Chief Scientist Office. http://www.ted.com/talks/ben_goldacre_what_doctors_don_t_know_about_the_drugs_they_prescribe.html