EBP STEP 2. APPRAISING THE EVIDENCE: So how do I know that this article is any good? (Quantitative Articles) Alison Hoens


EBP STEP 2. APPRAISING THE EVIDENCE: So how do I know that this article is any good? (Quantitative Articles) Alison Hoens, Clinical Assistant Prof, UBC; Clinical Coordinator, PHC. Maggie McIlwaine, Clinical Instructor, UBC; Clinical Coordinator, PT, BC Children's Hospital

EBP - THE PROCESS Clinical Problem → Ask → Acquire → Appraise → Apply → Act

EBP - WHAT IS IT? Evidence-Based Practice: The conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual pts (Sackett, 1996. BMJ 312, 71-72). The integration of best research evidence with clinical expertise and patient values (Sackett, 2000. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd Ed. Churchill Livingstone)

BARRIERS TO EBP Stevenson, T et al. (2005). Influences on Treatment Choices in Stroke Rehabilitation: Survey of Canadian Physiotherapists. Physiotherapy Canada. Ranking of importance of factors influencing current practice: 1. Experience 2. Continuing education (practical) 3. Colleague influence 4. Continuing education (theory) 5. Professional literature (*secondary sources) 6. Entry-level training

BARRIERS TO EBP Lack of time, computing resources, not enough evidence, lack of access; lack of skills for searching, appraising, and interpreting; lack of incentives; relevant literature not compiled all in one place (Bennett S. et al, 2003. Australian OT Journal, 50, 13-22; Closs & Lewin, 1998. Br J of Therapy & Rehab, 5, 151-155). Publication bias, indexing issues, language issues, assessing internal validity, access to electronic databases, access to full text, assessing applicability, drawing conclusions (Maher C. et al. Phys Ther, 84: 645-654)

ASK Foreground Questions PICO P - Patient/Problem I - Intervention C - Comparison O - Outcome

ACQUIRE Building the search strategy Identify concepts from PICO Decide on search terms - 2 methods: keywords or subject classification Boolean operators Truncation Limits: study design; age; gender; language; year of publication
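
As a sketch, the steps above can be combined into a database search string: synonyms within each PICO concept are joined with OR, the concepts are joined with AND, and "*" marks truncation. The terms below are hypothetical examples, not a validated strategy.

```python
# Sketch: build a Boolean search string from PICO concepts.
# The concept terms are hypothetical examples, not a validated strategy.
pico_concepts = {
    "P": ["stroke", "cerebrovascular accident"],
    "I": ["physiotherap*", "physical therap*"],   # * = truncation
    "O": ["gait", "walking speed"],
}

# OR together synonyms within a concept, AND across concepts
blocks = ["(" + " OR ".join(terms) + ")" for terms in pico_concepts.values()]
query = " AND ".join(blocks)
print(query)
```

This prints `(stroke OR cerebrovascular accident) AND (physiotherap* OR physical therap*) AND (gait OR walking speed)`, the kind of string that would then be pasted into a database search box with any limits applied.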

EVALUATING QUANTITATIVE ARTICLES Critical Review Form, Quantitative Studies, developed by McMaster OT Evidence-Based Practice Research Group (Law et al, 1998) User's Guide To The Medical Literature. JAMA. (Guyatt & Rennie, 2002). BMJ (1997) series July-Sept (Greenhalgh, T) Evidence-based Rehabilitation: A Guide to Practice. M. Law (Ed.). SLACK Inc. NJ. (2002) A checklist to evaluate a report of a nonpharmacological trial (Boutron et al, 2005)

EVALUATING QUANTITATIVE ARTICLES Akobeng, AK (2005). Evidence in practice. Parts 1-4. Akobeng, AK (2005). Understanding systematic reviews and meta-analysis. Clancy, MJ (2002). Overview of research designs. Sim, J & Reid, N (1999). Statistical Inference by Confidence Intervals. Herbert, RD (2000). How to estimate treatment effects of clinical trials. Parts 1-2. PEDro

Purpose Determine the purpose of the study. Clearly stated? Phrased as a research question or hypothesis?

Review of Literature Provides a synthesis of appropriate previous research & the clinical importance of the topic? Includes primary > secondary sources Interprets results of previous work Clearly demonstrates the gaps which need to be filled by this particular study and thus justifies the need for this study

STUDY DESIGN RCT Randomization to intervention & control grps Prospective Control (? ethical to withhold Rx) Cohort Grp exposed to similar situation (program or diagnosis); observed over time *prospective May have a comparison or control grp Not random allocation; must find another grp of similar age, gender & other important factors * caution re confounding factors

STUDY DESIGN Single subject Single or multiple single subjects Repeated measures before, during and after intervention Subject serves as own control Before-After Single subjects or groups No control group Useful when can t withhold Rx

STUDY DESIGN Case-control Retrospective Explores what makes grps different Compared to a comparable control grp Often problems with confounding variables Cross-sectional One grp evaluated all at the same time Useful when little is known Eg. Surveys, questionnaires & interviews Impossible to know if all factors included thus can t draw cause-effect conclusions

STUDY DESIGN Case-study Descriptive data abt the relationship btwn an exposure and an outcome of interest No control grp Provides info for further studies Appropriateness of Design Depends on: Knowledge of topic/issue Outcomes Ethical issues Study-purpose/question

LEVEL OF EVIDENCE Determine the level of evidence

LEVEL OF EVIDENCE I a) Systematic review (with homogeneity) of RCTs b) Individual RCT (with narrow confidence interval) c) All or none II a) Systematic review (with homogeneity) of cohort studies b) Individual cohort study, including low-quality RCTs eg. < 80% follow-up c) Outcomes research III a) Systematic review (with homogeneity) of case-control studies b) Individual case-control studies IV Case series (and poor-quality cohort & case-control studies) V Expert opinion without critical appraisal or based on physiology, bench research or first principles Oxford Centre for Evidence-Based Medicine Levels of Evidence, May 2001 *for Therapy

LEVEL OF EVIDENCE Many hierarchies are available 1. Inconsistent nomenclature introduces a wide range of possibilities - aim of EBP is to reduce unnecessary inconsistencies! 2. The hierarchies give differential weight to consensus and evidence Upshur, R (2003). Are all evidence-based practices alike? Problems in the ranking of evidence. JAMC 167(7), 672-3. Suggestion: use what makes sense - in a clinical setting: KISS

HIERARCHY OF EVIDENCE Meta-analysis Systematic reviews Narrative reviews Clinical Practice Guidelines Randomized Controlled Trials Cohort Studies Case-controlled Studies Case studies *Expert opinion Modified from Cormack (2002)

LEVEL OF EVIDENCE META-ANALYSIS: Locate clinical trials. Criteria: RCT. Rank: methodological score. Similar outcome measure. Variable Rx protocols. Pool results. Statistical analysis - ?significant effect (Houghton, 2004). SYSTEMATIC REVIEWS: Locate clinical trials. Criteria: RCT. Rank: methodological score. Variable outcomes. Organized based on Rx protocols. Blinded reviewers (+/- or ?). No statistics (#/+/-)

LEVEL OF EVIDENCE NARRATIVE: Question broad. Sources/search may not be specified; potentially biased. Selection of articles may not be specified; potentially biased. Appraisal variable. Synthesis often qualitative. Inferences may be evidence-based. SYSTEMATIC: Question focused. Sources/search must be comprehensive and search strategy detailed. Selection of articles based on strict criteria. Appraisal rigorous. Quantitative summary (meta-analysis). Inferences evidence-based. Table 7-1 (Law et al, 2002) p. 111

STRENGTH OF EVIDENCE Quality of methodology If you are deciding whether a paper is worth reading, you should do so on the design of the methods section and not on the interest of the hypothesis, the nature or impact of the results, or the speculation in the discussion Greenhalgh, T. (1997). How to read a paper. BMJ, 315, 243-6

STRENGTH OF EVIDENCE Many scales for strength of evidence *Boutron et al, (2005) Downs & Black (1998) Van der Heijden et al. (1996) Sindhu et al, (1997) De Vet et al, (1997) PEDro scale with explanations http://www.pedro.fhs.usyd.edu.au/scale_item.html

STRENGTH OF EVIDENCE Eg. Medlicott & Harris 2006: 1. Randomization 2. Inclusion & exclusion criteria 3. Similarity of grps at baseline 4. Protocol described sufficiently for replicability 5. Reliability of data with outcome measures investigated 6. Validity of data addressed 7. Blinding of patient, Rx provider & assessor 8. Long-term (6 mo) results assessed 9. Adherence to home program assessed Yes vs No: Strong (8-10 Yes); Mod (6-7 Yes); Weak (</=5 Yes)

METHODOLOGY 5.1 The Sample Was there a detailed description eg. age, gender, duration of disease/disability? *mean, SD, range! Were the numbers reported? Were the groups equal & similar? Should they be stratified? Was there a description of how subjects were sampled/recruited?

METHODOLOGY 5.1 The Sample Cont d Were there appropriate inclusion/exclusion criteria? Was there justification of sample size? If there was more than 1 grp, were subjects randomized? Was allocation concealed? Were the ethics procedures reported?

METHODOLOGY 5.1 Cont d: Sample selection biases Volunteer or referral bias Seasonal bias Attention bias

INTERPRETING THE RESULTS OF CLINICAL TRIALS Phase I - small numbers, assesses tolerance, safety issues Phase II - estimates the efficacy, small scale, descriptive, pilot study Phase III - large numbers, adequately powered comparative trials Phase IV - seeks to determine negative side effects

ABSENCE OF EVIDENCE IS NOT EVIDENCE OF ABSENCE

DANGER TREATMENT A TREATMENT B OUTCOME VARIABLE FEV 1 SHOWS NO SIGNIFICANT DIFFERENCE CONCLUSION: TREATMENT A = TREATMENT B

TYPE II ERROR Infers two treatments are of equal efficacy when they are not Need to do a power analysis prior to the study Usually need to increase the sample size to detect true differences
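
The power-analysis step above can be sketched with the standard normal-approximation sample-size formula for comparing two group means. The α = 0.05, 80% power, effect size, and SD values below are illustrative assumptions, not from the slides.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * ((z_{1-a/2} + z_{1-b}) * sd / delta)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Detecting a half-SD difference (illustrative) takes ~63 patients per group;
# too small a sample risks a Type II error (missing a true difference).
print(n_per_group(delta=0.5, sd=1.0))
```

Note how the required n grows with the square of sd/delta: halving the detectable difference quadruples the sample size, which is why underpowered "negative" trials are so common.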

TYPE II ERRORS Negative Findings in Clinical Studies Moher et al. JAMA 1994;272:122 Reviewed 383 studies 70 showed negative findings (no sig. diff.) Only 36% (21 studies) had a sample size large enough (sufficient power) to detect a true difference

METHODOLOGY 5.2 The Outcomes Were the outcomes clearly described? Were they detailed sufficiently for replication? Was the frequency of outcome measures described? Were the measures relevant to clinical outcome? Was reliability examined/reported & confirmed with these investigators? Was validity examined & reported (all relevant components - content validity; in relationship to gold standards - criterion validity)?

METHODOLOGY 5.2 Cont d: Measurement/detection biases No. of outcome measures - only 1 (eg. If excludes a critical measure) vs too many for the sample size Lack of masked / blinded evaluation Recall or memory bias

SHORT-TERM STUDIES Cannot imply long-term efficacy from short-term studies Many outcome measures need a long duration to narrow the confidence interval

METHODOLOGY 5.3 The Intervention Was the intervention described in detail (to replicate)? Was the intervention relevant? Who delivered it; were they trained? Were they blinded? Was the frequency appropriate? Was the setting appropriate? Was contamination/co-intervention avoided?

METHODOLOGY 5.3 Cont d: Intervention/Performance Biases Contamination - some of the control group receives the intervention Co-intervention - intervention group receives other care that may impact outcome Timing of intervention Site of treatment Different therapists

RESULTS Results Were the statistical methods provided in detail? Were the statistical methods appropriate? Were drop-outs reported?

RESULTS Drop-outs: Important to determine what was done with their data: Excluded Assumptions made for missing values Analyzed separately Intention-to-treat analysis: To reduce bias when some pts drop out All pts analyzed in the grp to which they were allocated, whether or not they received the Rx or completed the study

RESULTS COMMON STATS TERMS Descriptive statistics Frequency distributions Measures of central tendency Mean - affected by extreme scores esp when total number of scores is small Median - 50th percentile; less affected by extreme scores Mode - most frequent score Measures of variance/variability Range - highest and lowest scores; affected by extremes Variance - spread of scores around the mean Standard deviation - square root of the variance
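
The measures above can be checked with Python's standard library. The data are made up to show how a single extreme score drags the mean but leaves the median and mode untouched, and that the SD is the square root of the variance.

```python
import math
import statistics

scores = [2, 3, 3, 4, 100]                 # hypothetical data, one extreme score

mean = statistics.mean(scores)             # 22.4 - pulled up by the outlier
median = statistics.median(scores)         # 3    - 50th percentile, robust
mode = statistics.mode(scores)             # 3    - most frequent score
variance = statistics.variance(scores)     # sample variance (spread round mean)
sd = statistics.stdev(scores)              # standard deviation

assert math.isclose(sd, math.sqrt(variance))   # SD = square root of variance
print(mean, median, mode)
```

With a small number of scores like this, one extreme value moves the mean from about 3 to 22.4, which is exactly why the slide flags the mean as sensitive to outliers when n is small.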

RESULTS Parametric vs nonparametric tests Parametric (*for normal distributions) T-test - compares means of two groups ANOVA - compares means of groups in terms of variance Nonparametric (*less powerful) Wilcoxon signed-rank test Mann-Whitney U (Wilcoxon rank-sum) test Kruskal-Wallis test
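
As a sketch of what the two-group t-test computes, the pooled-variance t statistic can be worked out from first principles. The data are hypothetical; in practice a statistics library would also return the p value from the t distribution with these degrees of freedom.

```python
import math
from statistics import mean, variance

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance (equal-variance t-test).
    Returns (t, degrees of freedom)."""
    na, nb = len(a), len(b)
    # Pool the two sample variances, weighted by degrees of freedom
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    # Difference in means over its standard error
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

group_a = [5, 6, 7, 8, 9]   # hypothetical treatment-group scores
group_b = [1, 2, 3, 4, 5]   # hypothetical control-group scores
t, df = pooled_t(group_a, group_b)
print(t, df)                # t = 4.0 with 8 degrees of freedom
```

The nonparametric alternatives in the slide (Mann-Whitney, Wilcoxon) replace the raw scores with ranks, which is what makes them valid without the normality assumption, at the cost of some power.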

RESULTS P value - the probability that the observed diff btwn the 2 grps might have occurred by chance. Not an indication of accuracy. Confidence interval - the range in which we are 95% certain that the true population Rx effect will lie. A measure of accuracy. Better than p value. If the CI does not cross the null value (zero for a difference; 1 for a ratio) it is statistically significant. Odds ratio - a way of comparing whether the probability of a certain event is the same for two groups. 1 = the event is equally likely in both groups >1 = the event is more likely in the first group <1 = the event is less likely in the first group.
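
To make the odds ratio and its CI concrete, here is a sketch with a hypothetical 2x2 table, using the usual log-scale 95% confidence interval. Because the OR is a ratio, the null value is 1: if the CI excludes 1, the result is statistically significant.

```python
import math

# Hypothetical 2x2 table:           event   no event
a, b = 10, 40   # treatment grp:      10       40
c, d = 20, 30   # control grp:        20       30

odds_ratio = (a * d) / (b * c)                 # 0.375: event less likely
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)

# 95% CI: exponentiate ln(OR) +/- 1.96 * SE back to the ratio scale
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(odds_ratio)
print(round(lo, 2), round(hi, 2))   # CI excludes 1 -> significant
```

Note the asymmetry of the interval around 0.375: ratio-scale CIs are symmetric on the log scale, not the raw scale.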

CONFIDENCE INTERVALS Narrow or Wide

RESULTS Relative Risk (RR) - the risk of the stated end point in one grp (no. of pts who achieve it divided by the total no. of pts in the grp) divided by the risk in the other grp. 1 = the event is equally likely in both groups >1 = the event is more likely in the first group <1 = the event is less likely in the first group. Number needed to treat (NNT) - the number of people who would need to be treated to prevent one additional event of interest; NNT = 1 / absolute risk reduction. *smaller the better eg. 10 vs 100

RESULTS Fail-safe number - an estimate of the number of unpublished studies demonstrating the opposite finding that would be needed to refute the findings of the current meta-analysis. Regression equation - statistical technique used to explain or predict the behavior of a dependent variable: Y = a + bX + e. Eg. creating a mathematical equation to describe the line of best fit through scattered data points. Forest plot - graph that visually shows the info from all the studies in the meta-analysis to indicate the overall effect of Rx. Funnel plot - graphical method used to assess whether publication bias is likely.
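
A minimal sketch of what a regression equation does: fit Y = a + bX by ordinary least squares. The data are made up and lie exactly on a line so the fitted intercept and slope are easy to verify by eye.

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for Y = a + bX.
    Slope b = cov(X, Y) / var(X); intercept a = mean(Y) - b * mean(X)."""
    mx, my = mean(xs), mean(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical data: each unit of X adds 2 to Y, starting from 1
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)    # intercept 1.0, slope 2.0
```

Real clinical data will scatter around the fitted line; the residual term e in Y = a + bX + e captures exactly that scatter.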

FOREST PLOT

FUNNEL PLOT

CONCLUSIONS Conclusions Were clinical implications explored? Were conclusions restricted to a reasonable interpretation of the results? Were limitations of the study reported?

APPLICABILITY Ask yourself - how can I apply the results to patient care? Were the study patients similar to my patient? (demographics, severity, comorbidity) Were all clinically important outcomes considered? Are the likely treatment benefits worth the potential harm and costs?

USEFUL RESOURCES Center for Evidence-Based Medicine http://www.cebm.net PEDro scale with explanations http://www.pedro.fhs.usyd.edu.au/scale_item.html

FINAL SUGGESTION Start/join a journal club!