Evaluating Scientific Journal Articles. Tufts CTSI's Mission & Purpose. Tufts Clinical and Translational Science Institute

Transcription:

Tufts Clinical and Translational Science Institute
Lori Lyn Price, MAS
Biostatistics, Epidemiology, and Research Design (BERD)
Tufts Clinical and Translational Science Institute (CTSI)

Tufts CTSI's Mission & Purpose
Tufts CTSI is based on the conviction that authentic involvement of the entire spectrum of clinical and translational research is critical to meeting the promise of biomedical science and the public's needs. Our mission is to identify, stimulate, and expedite innovative clinical and translational research, with the goal of improving the public's health.
39 Tufts CTSI Partners

http://ilearn.tuftsctsi.org/
Live seminars are recorded for our I LEARN site. Seminar videos can be viewed at any time, and are free!

How to Request Tufts CTSI Services
Visit www.tuftsctsi.org and submit a request.

Evaluating Scientific Journal Articles
Lori Lyn Price, MAS
Biostatistics, Epidemiology, and Research Design (BERD)
Tufts Clinical and Translational Science Institute (CTSI)

Learning Objectives
- List the questions you should ask yourself when evaluating a scientific journal article
- Identify the specific, testable hypothesis of the paper
- Identify what type of study design was used
- Evaluate whether the results of the study were affected by bias
- Explain why this study was important, what it added to the literature, or how it changed health practice
- Appraise the compatibility of the conclusions of the study with the study objectives

Evaluation of a Scientific Article
Cherkin DC, Sherman KJ, Balderson BH, Cook AJ, Anderson ML, Hawkes RJ, Hansen KE, Turner JA. Effect of Mindfulness-Based Stress Reduction vs Cognitive Behavioral Therapy or Usual Care on Back Pain and Functional Limitations in Adults with Chronic Low Back Pain: A Randomized Clinical Trial. JAMA 315(12): 1240-1249, 2016.

Key Sections of a Journal Article
1. Abstract
2. Introduction/Background
3. Methods
4. Results
5. Discussion
6. Conclusions
7. References

Introduction to Evaluating Articles
In most articles, the authors tell a story based on data that they have collected, analyzed, and interpreted. The reader should evaluate each of these phases to decide whether to trust the story. It's also important to understand why the study was done.
Research Question → Data Collection → Analysis → Interpretation

Overall Issues in Evaluation
Big picture:
- Strength of the body of literature
- Plausibility of biological/health mechanism
- Effect size and number of people
- Quality of study
Quality of methodological details:
- Appropriate hypothesis
- Study design
- Data quality
- Plausible effect estimate or concern about biases
- Generalizability

Where Studies Fail
- Inadequate study design
- Biased sample and results
- Uncontrolled confounding
- Study sample too small

Where Studies Fail
- Lack of generalizability
  - Single center
  - Subject recruitment & retention
  - Inclusion/exclusion criteria
- Misinterpretation of results
  - Conclusions don't match results

Following the Story, Part 1: Context / Motivation (Introduction/Background)
What was the motivation for doing this study? Did the authors conduct this study to:
- Generate descriptive or pilot data or new hypotheses?
- Test a formulated hypothesis?
- Replicate or validate previous findings?

Motivation
Evaluate 2 specific hypotheses:
- Adults with chronic lower back pain (CLBP) treated with Mindfulness-Based Stress Reduction (MBSR) would show greater short- and long-term improvement than adults randomized to usual care
- Adults with CLBP treated with MBSR would show greater short- and long-term improvement than adults randomized to Cognitive Behavioral Therapy (CBT)

Why Was the Study Important?
What information already exists about this topic?
- Functional status of people with CLBP has decreased over time, despite numerous treatment options and resources
- Psychosocial factors are a component of pain
- CBT is known to be effective for a variety of types of chronic pain, but access is limited
- MBSR, another mind-body approach, is becoming increasingly available
- MBSR is helpful for chronic pain

Why Was the Study Important?
What were the gaps in the literature that this study sought to help fill?
- Is MBSR effective in treating CLBP?
- Is MBSR more effective than CBT?
What other factors make this an important study?

Does the Paper Present a Clear Research Question or Objective and a Specific, Testable Hypothesis?
Study objective: To evaluate the effectiveness for chronic low back pain of MBSR vs cognitive behavioral therapy or usual care.
Testable hypothesis?

Examples of Hypotheses
1) MBSR is more effective than CBT in treating CLBP.
2) MBSR is more effective than CBT in reducing back pain.
3) MBSR is more effective than CBT in reducing back pain over {pre-specified time frame} using {pre-specified instrument}.

Following the Story, Part 2: How Was the Study Conducted? (Methods)
- Inclusion and exclusion criteria
- Recruitment of participants
- Study design
- Definition of outcomes
- Administration of intervention

Following the Story, Part 2: Subjects
- What was the source population for recruiting subjects and were study subjects representative of this population?
- Explanation of subject selection process
- Generalizability

Inclusion and Exclusion
- 20-70 year olds
- Non-specific CLBP for at least 3 months
- No compensation or litigation issues
- English speaking
- Able to attend classes
- Adequate pain, based on bothersomeness and pain interference questionnaires

Recruitment
- Group Health members
  - Eligibility based on medical record
  - Invited to participate via mail
- Community
  - Random sample of participants
  - Invited to participate via mail

Who Were the Study Subjects?
- What was the source population from which study subjects were recruited?
- Was the subject selection process clearly explained?
- How representative was the sample?

Following the Story, Part 2: Study Designs (Methods)
- Randomized Controlled Trial (RCT)
- Observational Studies:
  - Cohort: selection based on exposure (e.g. smoking status)
  - Case-Control: selection based on disease/outcome (e.g. lung disease)
  - Cross-sectional: one snapshot in time
  - Retrospective: exposure collected after disease
  - Prospective: exposure collected before disease

What type of study design was used? Randomized Controlled Trial
- Is this design susceptible to any types of bias?
- Were potential sources of bias identified and addressed when designing the study?

Following the Story, Part 2: Data Collection (Variables)
Were independent and dependent variables clearly defined and accurately measured?
- Potential for misclassification
- Validation of exposure/outcome status
- Properties of measurement methods

Definition of Outcomes
- Functional limitation related to CLBP
  - Roland Disability Questionnaire (validated)
  - One item removed
  - Asked about past week rather than only today
- Back pain bothersomeness (0-10 scale)

Definition of Outcomes
Primary analysis: % of people with clinically meaningful improvement (≥30% improvement from baseline); see the sketch below for how such a responder outcome is derived.

Intervention: CBT
- 2-hour weekly group session for 8 weeks
- Chronic pain education
- Changing dysfunctional thoughts
- Workbooks & CDs
- Instructions for home practice: relaxation and imagery
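The primary analysis above dichotomizes a continuous score into a responder outcome. As a minimal sketch of that derivation, the snippet below computes a ≥30% improvement indicator from baseline and follow-up Roland Disability Questionnaire scores; the column names and values are hypothetical, not taken from the trial's data.

```python
# Minimal sketch: deriving a binary "responder" outcome (>=30% improvement from
# baseline) from hypothetical RDQ scores. Higher RDQ scores mean more disability,
# so improvement corresponds to a lower follow-up score.
import pandas as pd

scores = pd.DataFrame({
    "rdq_baseline": [14, 10, 20, 8],   # hypothetical baseline scores
    "rdq_week26":   [ 7,  9, 12, 8],   # hypothetical 26-week scores
})

pct_improvement = (scores["rdq_baseline"] - scores["rdq_week26"]) / scores["rdq_baseline"]
scores["responder"] = pct_improvement >= 0.30  # clinically meaningful improvement

print(scores)
```

The primary comparison is then the proportion of responders in each randomized group.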

Intervention: MBSR
- 2-hour weekly group session for 8 weeks
- Didactic content
- Mindfulness practice
- Workbooks & CDs
- Optional 6-hour retreat
- Instructions for home practice: mindfulness, meditation and yoga

Following the Story, Part 2: Subjects
Were a significant number of subjects lost to follow-up?
- Differential between groups (26 weeks): 5% usual care, 18% MBSR, 19% CBT
- 13 participants each in MBSR & CBT did not attend any classes
- How missing data were handled: imputation
- Intent to treat

Following the Story, Part 3: Statistical Methods (Analysis)

Was a Sample Size/Power Calculation Performed Before Beginning the Study? (Methods)
Are all calculation parameters reported so that the calculation could be duplicated?
- Outcome: proportion of participants experiencing meaningful improvement
- 90% power to detect a 25% difference in MBSR (55%) vs. usual care (30%)
- 80% power to detect a 21% difference in MBSR (76%) vs. CBT (55%)
Was the sample adequately powered?
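Because the slide reports the parameters of the power calculation, a reader can check roughly what sample size they imply. The sketch below does that with statsmodels, assuming a two-sided alpha of 0.05 and equal allocation per group (neither is stated on the slide).

```python
# Minimal sketch: back-of-the-envelope check of the reported power calculation.
# Assumes two-sided alpha = 0.05 and 1:1 allocation (not stated on the slide).
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

analysis = NormalIndPower()

# MBSR (55%) vs. usual care (30%), 90% power
effect_uc = proportion_effectsize(0.55, 0.30)
n_uc = analysis.solve_power(effect_size=effect_uc, alpha=0.05, power=0.90, ratio=1.0)

# MBSR (76%) vs. CBT (55%), 80% power
effect_cbt = proportion_effectsize(0.76, 0.55)
n_cbt = analysis.solve_power(effect_size=effect_cbt, alpha=0.05, power=0.80, ratio=1.0)

print(f"approx. n per group, MBSR vs. usual care: {n_uc:.0f}")
print(f"approx. n per group, MBSR vs. CBT: {n_cbt:.0f}")
```

Comparing numbers like these to the enrolled sample size, and to the attrition reported above, is one way to judge whether the study was adequately powered.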

Following the Story, Part 4: Reporting and Interpretation of Data (Results / Conclusion / Discussion)
Did the authors present and compare the characteristics of the 3 study groups?
- This information is provided in Table 1
Are there any clinically meaningful differences in Table 1?
- More women in usual care (77% vs ~60%)
- Fewer college grads in MBSR (52% vs ~61%)
- Should these affect our analysis?

How Strong Were the Study's Results?
Did the investigators find any statistically significant results?
- Roland Disability: at least 2 of the 3 groups differed significantly (p=0.04)
- Pain Bothersomeness: at least 2 of the 3 groups differed significantly (p=0.01)
- At 26 weeks, adjusted for age, sex, education, baseline score and pain duration
How likely is it that results were due to chance or bias?
[Figure: Primary Outcome (%)]

Chance & Bias
- More than 40 p-values presented in the tables
- Additional tests performed when one of these p-values < 0.05
- Are results due to chance? (A sketch of one common multiplicity adjustment follows below.)
- Only 50-60% of participants randomized to MBSR & CBT completed at least 6 classes
- Are results due to bias?

Limitations and Generalizability
Do the authors adequately address the study's limitations and their implications?
- Highly educated & enrolled in a single health care system
- ~20% loss to follow-up
Who do the results of this study apply to?
- "the generalizability of findings to other settings and populations is unknown"

Are the Conclusions Reasonable Based on the Study's Aims and Results?
"Among adults with chronic low back pain, treatment with MBSR or CBT, compared with usual care, resulted in greater improvement in back pain and functional limitations at 26 weeks, with no significant differences in outcomes between MBSR and CBT."
- Is this conclusion compatible with the original study objective?
- Do the results of the study justify the conclusions?
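The "more than 40 p-values" point above is about multiplicity: with that many tests, some p-values below 0.05 are expected by chance alone. As a minimal sketch of one common remedy (not what the trial's authors did), the snippet below applies a Holm adjustment to a set of made-up p-values.

```python
# Minimal sketch: Holm adjustment for multiple comparisons. The p-values below
# are made-up placeholders, not values reported in the paper.
from statsmodels.stats.multitest import multipletests

p_values = [0.04, 0.01, 0.20, 0.03, 0.65, 0.008, 0.12, 0.049]

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for p_raw, p_adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}   adjusted p = {p_adj:.3f}   significant after adjustment: {sig}")
```

After adjustment, fewer comparisons clear the 0.05 threshold, which is the sense in which many unadjusted p-values raise a concern about chance findings.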

Are the Conclusions Reasonable Based on the Study's Aims and Results?
"These findings suggest that MBSR may be an effective treatment option for patients with chronic low back pain."
What do you think about this?

Superiority vs Non-Inferiority
How are these hypotheses different? What is each testing? (A sketch contrasting the two follows at the end of these slides.)
- MBSR is more effective than CBT in reducing back pain.
- MBSR is non-inferior to CBT in reducing back pain.

Learning from What's Not There
- If an article doesn't mention a particular issue (e.g. blinding, randomization), it's usually safe to assume that the study did not address that issue
- All studies have limitations. If none are mentioned, it probably means that issues with the study were not adequately addressed.

Overall Issues in Evaluation
Big picture:
- Quality of study
  - Appropriate hypothesis
  - Study design
  - Data quality
  - Plausible effect estimate or concern about biases
  - Generalizability
- Effect size and number of people
- Plausibility of biological/health mechanism
- Strength of the body of literature (on specific and larger related research questions)

Questions to Ask Yourself
What is the big picture?
- Why is the study important?
- Plausible effect?
- Is the story believable?
- Any concerns about quality?
All the important details:
- Appropriate & specific hypothesis?
- Design, subject selection, choice of variables, data quality, appropriate analysis, biases, etc.

Questions?
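As promised on the superiority vs non-inferiority slide, here is a minimal sketch of how the two claims differ when comparing two proportions with a confidence interval. The responder counts and the 10-percentage-point non-inferiority margin are made up for illustration; neither comes from the trial.

```python
# Minimal sketch: superiority vs. non-inferiority for two proportions, judged
# from a Wald confidence interval for the difference. Counts and the margin
# are hypothetical, not taken from the paper.
import numpy as np
from scipy.stats import norm

def diff_ci(x1, n1, x2, n2, alpha=0.05):
    """Wald confidence interval for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = norm.ppf(1 - alpha / 2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# Hypothetical responder counts: MBSR 60/100 vs. CBT 58/100
lo, hi = diff_ci(60, 100, 58, 100)
margin = 0.10  # assumed non-inferiority margin

print("superiority supported (CI entirely above 0):", lo > 0)
print("non-inferiority supported (CI entirely above -margin):", lo > -margin)
```

Superiority asks whether MBSR beats CBT outright; non-inferiority only asks whether MBSR is not worse than CBT by more than the pre-specified margin, which is a weaker and differently framed claim.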

Thank you!