GRADE, Summary of Findings and ConQual Workshop


To discuss:
- Introduction
- New JBI Levels of Evidence and Grades of Recommendation
- Moving towards GRADE
- Summary of Findings tables
- Qualitative levels
- Conclusion

JBI Methodology Groups
- Formed to provide guidance on emerging methods of evidence synthesis: mixed methods, GRADE, qualitative, umbrella, economic, effects, prevalence, correlation
- A group was formed to review the JBI Levels of Evidence

Levels of evidence vs grades of recommendation
- Levels of evidence designate the study type; better study designs, with greater methodological quality, are ranked higher.
- Grades of recommendation assist in applying research to practice; recommendations are assigned a grade.

History

Old levels of evidence

Old grades of recommendation

How are they used?

Justification for change
- Clarity (the old levels were not specific)
- Quasi-experimental studies
- Experimental studies
- Not all types of evidence covered
- No supporting document
- Difficulties creating recommendations
- Don't say "systematic review"
- Adopt GRADE

Working Party Recommendations
1. JBI adopts the GRADE approach to summary of findings tables for all reviews addressing questions of effect.
2. JBI adopts a modified GRADE approach, based on FAME, for forming recommendations for practice.
3. New levels of evidence under the following headings: intervention/therapy/harms, diagnostic accuracy, prognosis, economic analysis, qualitative research.
4. JBI adopts an approach based on GRADE, but sensitive to the nature of qualitative research, for qualitative systematic reviews.
5. JBI contacts the GRADE working group to discuss the use of GRADE by JBI, and to offer a partnership for developing GRADE for qualitative research.

Why GRADE?
- Grading of Recommendations Assessment, Development and Evaluation (GRADE)
- International working group
- Endorsed by many EBHC organisations (WHO, Cochrane, SIGN, etc.)
(Kerwin et al. 2012)

Forming recommendations with GRADE
- Balance between benefits, harms and burdens
- Quality of evidence (how do we determine the quality of the evidence?)
- Patients' values and preferences
- Resource use

Example meta-analysis discussion
From the examples provided, what information would increase or decrease your confidence in these results?

Discussion results (Decrease / Increase)

GRADE
Decrease:
- Methodological quality (risk of bias)
- Indirectness (i.e. applicability, generalisability, transferability, etc.)
- Inconsistency (heterogeneity)
- Imprecision (uncertainty)
- Publication bias
Increase:
- Large, consistent, precise effect
- All plausible biases underestimate the effect
- Dose-response effect

Quality of evidence: beyond risk of bias
Definition: the extent to which our confidence in an estimate of the treatment effect is adequate to support a particular recommendation.
Domains: methodological limitations, inconsistency of results, indirectness of evidence, imprecision of results, publication bias.
Risk of bias considerations: allocation concealment, blinding, intention-to-treat, follow-up, stopped early.
Sources of indirectness: indirect comparisons, patients, interventions, comparators, outcomes.

GRADEing the evidence
- Pre-ranking: RCTs start as high, observational studies as low
- Quality of evidence ranges from HIGH to MODERATE to LOW to VERY LOW
- RCTs start with a high quality rating and can be downgraded 1 or 2 points for each area of concern, with a maximum downgrade of 3 points overall (see the sketch below)
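To make the arithmetic of this slide concrete, here is a minimal sketch in Python of the rating rules as stated above (pre-rank by design, downgrade per concern, cap the total downgrade at 3). It is not official GRADE software, and the function and parameter names are illustrative only.

```python
# Sketch of the slide's rating rules: RCTs start HIGH, observational studies
# start LOW; each concern downgrades 1 or 2 points; total downgrade capped at 3.

LEVELS = ["VERY LOW", "LOW", "MODERATE", "HIGH"]  # index 0..3

def grade_quality(study_design, downgrades, upgrades=None):
    """Return an overall quality level.

    downgrades: points (0-2) per domain, e.g. {"risk_of_bias": 1, "imprecision": 1}
    upgrades:   points per domain, e.g. {"large_effect": 1} (mainly for observational studies)
    """
    start = 3 if study_design.upper() == "RCT" else 1   # HIGH vs LOW
    total_down = min(sum(downgrades.values()), 3)       # cap per the slide
    total_up = sum((upgrades or {}).values())
    score = max(0, min(3, start - total_down + total_up))
    return LEVELS[score]

# Example: an RCT downgraded once for risk of bias and once for imprecision.
print(grade_quality("RCT", {"risk_of_bias": 1, "imprecision": 1}))  # LOW
```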

GRADE quality of evidence
In the context of making recommendations, the quality of evidence reflects the extent of our confidence that the estimates of an effect are adequate to support a particular decision or recommendation.

Likelihood of and confidence in an outcome

Interpretation of grades of evidence
- A / High: further research is very unlikely to change confidence in the estimate of effect.
- B / Moderate: further research is likely to have an important impact on confidence in the estimate of effect and may change the estimate.
- C / Low: further research is very likely to have an important impact on confidence in the estimate of effect and is likely to change the estimate.
- D / Very low: we have very little confidence in the effect estimate; any estimate of effect is very uncertain.

Summary of Findings tables
- Standard table format, one for each comparison
- Focus on outcomes
- Includes: context, results, GRADE rating, and the reasons behind decisions

Summary of findings table
- Improves understanding
- Improves accessibility
- Created with GRADEpro: http://ims.cochrane.org/revman/gradepro

Presented in a summary of findings table

Forming recommendations with GRADE
- Balance between benefits, harms and burdens
- Two types of recommendation: strong or weak, for or against
- Quality of evidence
- Patients' values and preferences
- Resource use

Goldet et al. 2013

The GRADE process (flow diagram):
- Start from a systematic review of a PICO question, with each outcome rated by importance (critical, important, low).
- Pre-rank: RCTs start high, observational data start low.
- Produce a summary of findings and an estimate of effect for each outcome, rated High, Moderate, Low or Very Low.
- Grade down for: 1. risk of bias, 2. inconsistency, 3. indirectness, 4. imprecision, 5. publication bias.
- Grade up for: 1. large effect, 2. dose response, 3. confounders.
- Rate the overall quality of evidence across outcomes based on the lowest quality of the critical outcomes (see the sketch after this list).
- Guideline development: formulate recommendations for or against (direction), strong or weak (strength), by considering the quality of evidence, the balance of benefits and harms, and values and preferences; revise if necessary by considering resource use (cost).
- Wording: "We recommend using...", "We suggest using...", "We recommend against using...", "We suggest against using...".
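One step in that flow is purely mechanical: the overall quality across outcomes is the lowest quality among the critical outcomes. A minimal sketch in Python; the field names and data layout are illustrative assumptions, not from any GRADE tool.

```python
# Sketch of the "lowest quality of the critical outcomes" rule from the flow above.

QUALITY_ORDER = {"VERY LOW": 0, "LOW": 1, "MODERATE": 2, "HIGH": 3}

def overall_quality(outcomes):
    """outcomes: list of dicts like
    {"name": ..., "importance": "critical" | "important" | "low", "quality": "HIGH" | ...}.
    Returns the lowest quality rating among the critical outcomes."""
    critical = [o for o in outcomes if o["importance"] == "critical"]
    if not critical:
        raise ValueError("at least one critical outcome is required")
    return min(critical, key=lambda o: QUALITY_ORDER[o["quality"]])["quality"]

outcomes = [
    {"name": "survival", "importance": "critical", "quality": "HIGH"},
    {"name": "quality of life", "importance": "critical", "quality": "MODERATE"},
    {"name": "length of stay", "importance": "important", "quality": "LOW"},
]
# Important (non-critical) outcomes do not drag the overall rating down.
print(overall_quality(outcomes))  # MODERATE
```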

Why still have levels?
- Other JBI resources (not just systematic reviews)
- Assist in pre-ranking
- Address evidence of other types

JBI Levels of Evidence
Old levels of evidence: feasibility, appropriateness, meaningfulness, effectiveness.
New proposed levels of evidence: therapy/harm, diagnostic/screening studies, prognosis, economic evaluations, meaningfulness.

Levels of evidence: Diagnostic/screening studies
Level 1 Studies of test accuracy among consecutive patients
  1.a Systematic reviews of studies of test accuracy among consecutive patients
  1.b Studies of test accuracy among consecutive patients
Level 2 Studies of test accuracy among non-consecutive patients
  2.a Systematic reviews of studies of test accuracy among non-consecutive patients
  2.b Studies of test accuracy among non-consecutive patients
Level 3 Diagnostic case-control studies
  3.a Systematic reviews of diagnostic case-control studies
  3.b Diagnostic case-control study
Level 4 Diagnostic yield studies
  4.a Systematic reviews of diagnostic yield studies
  4.b Individual diagnostic yield study
Level 5 Expert opinion and bench research
  5.a Systematic reviews of expert opinion
  5.b Expert consensus
  5.c Bench research / single expert opinion

Levels of evidence: Prognosis
Level 1 Inception cohort studies
  1.a Systematic reviews of inception cohort studies
  1.b Inception cohort studies
Level 2 Studies of all or none
  2.a Systematic reviews of all-or-none studies
  2.b All-or-none studies
Level 3 Cohort studies
  3.a Systematic reviews of cohort studies (or the control arm of an RCT)
  3.b Cohort studies (or the control arm of an RCT)
Level 4 Case series / case-controlled / historically controlled studies
  4.a Systematic reviews of case series / case-controlled / historically controlled studies
  4.b Individual case series / case-controlled / historically controlled studies
Level 5 Expert opinion and bench research
  5.a Systematic reviews of expert opinion
  5.b Expert consensus
  5.c Bench research / single expert opinion

Levels of evidence: Economic evaluations
Level 1: Systematic review of economic evaluations
Level 2: Single economic evaluation
Level 3: Systematic review of expert opinion
Level 4: Expert opinion

Levels of evidence: Meaningfulness
1. Qualitative or mixed-methods systematic review
2. Qualitative or mixed-methods synthesis
3. Single qualitative study
4. Systematic review of expert opinion
5. Expert opinion

Summary of findings table: Meaningfulness
Can a GRADE-like rating of quality be incorporated for studies related to meaningfulness? We need to consider what increases or decreases confidence in the results.

Discussion activity: example meta-synthesis
From the examples provided, what information would increase or decrease your confidence in these results?

Synthesised finding 1: People undergoing imaging often expect a health issue to be found during their scan, which can then lead to anxiety and worry.
(Munn & Jordan. The patient experience of high technology medical imaging: a systematic review of the qualitative evidence. JBI Library of Systematic Reviews 2011; 9(19): 631-678)

Synthesised finding 2: Adjust themselves: caregivers need to adjust themselves to the caregiving role when living with and taking care of persons with schizophrenia. The ultimate goal is to integrate the caregiving role into their lives.
(Tungpunkom, Napa, Chaniang & Srikhachin. Caregiving experiences of families living with persons with schizophrenia: a systematic review. 2013; 11(8): 415-564)

Discussion activity: example meta-synthesis
What information will increase or decrease your confidence in the results?

Discussion activity (some answers): example meta-synthesis
What information will increase or decrease your confidence in the results?
- Type of data
- Dependability
- Confidence

Meaningfulness Summary of Findings table (template)
Header fields: Systematic review title; Population; Phenomena of interest; Context.
Table columns: Synthesised finding | Type of data | Dependability | Confidence | Score | Comments

Type of data
- The ranking scale consists of 4 levels: High, Moderate, Low, Very Low
- Begin by pre-ranking papers based on the type of data: High for qualitative studies, Low for expert opinion (see the sketch below)
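A minimal sketch of this pre-ranking step in Python; only the two type-of-data labels and their starting levels come from the slide, everything else is illustrative.

```python
# Pre-ranking by type of data: qualitative research starts High, expert opinion starts Low.

RANKS = ["Very Low", "Low", "Moderate", "High"]  # index 0..3

def pre_rank(type_of_data):
    """Return the starting position on the 4-level scale."""
    if type_of_data.lower() == "qualitative":
        return 3          # High
    if type_of_data.lower() == "expert opinion":
        return 1          # Low
    raise ValueError("type_of_data must be 'qualitative' or 'expert opinion'")

print(RANKS[pre_rank("qualitative")])     # High
print(RANKS[pre_rank("expert opinion")])  # Low
```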

Score of dependability: qualitative research
Measured by asking the following questions:
1. Is there congruity between the research methodology and the research question or objectives?
2. Is there congruity between the research methodology and the methods used to collect data?
3. Is there congruity between the research methodology and the representation and analysis of data?
4. Is there a statement locating the researcher culturally or theoretically?
5. Is the influence of the researcher on the research, and vice versa, addressed?
Ranking system:
- 4-5 yes responses: the finding remains unchanged
- 2-3 yes responses: move down 1 level
- 0-1 yes responses: move down 2 levels

Score of dependability: expert opinion
Measured by asking questions related to the appropriateness of the conduct of the research against its aims and purpose:
1. Does the source of the opinion have standing in the field of expertise?
2. Is the opinion's basis in logic/experience clearly argued?
3. Is the argument developed analytically?
4. Is there reference to the extant literature/evidence, and is any incongruence with it logically defended?
5. Is the opinion supported by peers?
Ranking system:
- 4-5 yes responses: the paper remains unchanged (0)
- 2-3 yes responses: move down 1 level (-1)
- 0-1 yes responses: move down 2 levels (-2)
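The dependability rules on these two slides reduce to the same mapping from the number of "yes" answers to a downgrade. A minimal Python sketch, assuming the five answers have already been recorded as booleans:

```python
# Dependability downgrade, per the slides: count "yes" answers to the five
# dependability questions (qualitative research or expert opinion), then
# 4-5 "yes" -> no change, 2-3 "yes" -> down 1 level, 0-1 "yes" -> down 2 levels.

def dependability_downgrade(answers):
    """Return how many levels to move down (0, 1 or 2) given five yes/no answers."""
    if len(answers) != 5:
        raise ValueError("expected answers to the five dependability questions")
    yes = sum(answers)
    if yes >= 4:
        return 0
    if yes >= 2:
        return 1
    return 2

# Example: the three congruity questions answered yes, but no statement locating
# the researcher and no reflection on the researcher's influence -> 3 "yes" -> down 1 level.
print(dependability_downgrade([True, True, True, False, False]))  # 1
```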

Score of confidence (credibility)
Assign a level of credibility to each finding:
- Unequivocal: findings accompanied by an illustration that is beyond reasonable doubt, and therefore not open to challenge
- Equivocal: findings accompanied by an illustration lacking a clear association with it, and therefore open to challenge
- Unsupported: findings not supported by the data, or with no illustration
Ranking of the synthesised finding:
- Contains only unequivocal findings: remains unchanged
- Mix of unequivocal and equivocal findings: downgrade one level (-1)
- All equivocal findings: downgrade two levels (-2)
- Mix of equivocal (plausible) and unsupported findings: downgrade three levels (-3)
- Unsupported findings only: downgrade four levels (-4)
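Putting the three steps together (pre-rank by type of data, dependability downgrade, credibility downgrade), here is a self-contained sketch of how a final score could be computed. It is not official JBI/ConQual software; clamping the result at "Very Low" when the downgrades exceed the scale is my assumption, since the slides do not say what happens in that case.

```python
# Sketch of an overall ConQual-style score for one synthesised finding,
# combining the three steps from the preceding slides.

RANKS = ["Very Low", "Low", "Moderate", "High"]

CREDIBILITY_DOWNGRADE = {
    "all unequivocal": 0,
    "mix unequivocal/equivocal": 1,
    "all equivocal": 2,
    "mix equivocal/unsupported": 3,
    "all unsupported": 4,
}

def conqual_score(type_of_data, dependability_yes, credibility):
    start = 3 if type_of_data == "qualitative" else 1   # High for qualitative, Low for expert opinion
    dep_down = 0 if dependability_yes >= 4 else (1 if dependability_yes >= 2 else 2)
    cred_down = CREDIBILITY_DOWNGRADE[credibility]
    return RANKS[max(0, start - dep_down - cred_down)]  # clamp at Very Low (assumption)

# Worked example in the spirit of the imaging review on the next slide:
# qualitative data, 3 "yes" dependability answers, a mix of unequivocal and equivocal findings.
print(conqual_score("qualitative", 3, "mix unequivocal/equivocal"))  # Low
```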

Meaningfulness Summary of Findings table (worked example)
Systematic review title: The patient experience of high technology medical imaging: a systematic review of the qualitative evidence
Population: Persons who had undergone high technology medical imaging
Phenomena of interest: The meaningfulness of a patient's experience of undergoing diagnostic imaging using high technology
Context: Male and female adult patients presenting to a medical imaging department

Synthesised finding: People undergoing imaging often expect a health issue to be found during their scan, which can then lead to anxiety and worry.
Type of research: Qualitative (HIGH)
Dependability: (MODERATE)
Confidence: (LOW)
Score: LOW
Comments:

A new spin on FAME
The FAME (Feasibility, Appropriateness, Meaningfulness and Effectiveness) scale informs the recommendation.
F - Feasibility: What is the cost effectiveness of the practice? Is the resource/practice available? Is there sufficient experience/level of competency available?
A - Appropriateness: Is it culturally acceptable? Is it transferable to the majority of the population? Is it easily adaptable to a variety of circumstances?
M - Meaningfulness: Is it associated with positive experiences? Is it not associated with negative experiences?
E - Effectiveness: Was there a beneficial effect? Is it safe (i.e. is there a lack of harm associated with the practice)?

JBI Grades of Recommendation
Grade A: A strong recommendation for a certain health management strategy where: it is clear that desirable effects outweigh undesirable effects of the strategy; there is evidence of adequate quality supporting its use; there is a benefit or no impact on resource use; and values, preferences and the patient experience have been taken into account.
Grade B: A weak recommendation for a certain health management strategy where: desirable effects appear to outweigh undesirable effects of the strategy, although this is not as clear; there is evidence supporting its use, although this may not be of high quality; there is a benefit, no impact or minimal impact on resource use; and values, preferences and the patient experience may or may not have been taken into account.

Recommendations
People undergoing scanning are aware that it has the power to detect significant health issues, which can lead to anxious waits and uncertainty after imaging. Once imaging is completed, patients should be given information regarding when they will receive their results and what will happen now that the images have been taken. (Grade B)
(Munn & Jordan. The patient experience of high technology medical imaging: a systematic review of the qualitative evidence. JBI Library of Systematic Reviews 2011; 9(19): 631-678)

Conclusions
- Improved clarity in levels of evidence
- In line with international organisations
- Changes made to fit with JBI's broader view of what constitutes evidence
- Guidance provided