Checklist for Diagnostic Test Accuracy Studies. The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews


The Joanna Briggs Institute Critical Appraisal tools for use in JBI Systematic Reviews Checklist for Diagnostic Test Accuracy Studies http://joannabriggs.org/research/critical-appraisal-tools.html www.joannabriggs.org

The Joanna Briggs Institute

Introduction
The Joanna Briggs Institute (JBI) is an international, membership-based research and development organization within the Faculty of Health Sciences at the University of Adelaide. The Institute specializes in promoting and supporting evidence-based healthcare by providing access to resources for professionals in nursing, midwifery, medicine, and allied health. With over 80 collaborating centres and entities servicing over 90 countries, the Institute is a recognized global leader in evidence-based healthcare.

JBI Systematic Reviews
The core of evidence synthesis is the systematic review of the literature on a particular intervention, condition or issue. The systematic review is essentially an analysis of the available literature (that is, evidence) and a judgment of the effectiveness or otherwise of a practice, involving a series of complex steps. JBI takes a particular view on what counts as evidence and the methods utilized to synthesize those different types of evidence. In line with this broader view of evidence, the Institute has developed theories, methodologies and rigorous processes for the critical appraisal and synthesis of these diverse forms of evidence in order to aid clinical decision-making in health care. There now exists JBI guidance for conducting reviews of effectiveness research, qualitative research, prevalence/incidence, etiology/risk, economic evaluations, text/opinion, diagnostic test accuracy, mixed methods, umbrella reviews and scoping reviews. Further information regarding JBI systematic reviews can be found in the JBI Reviewer's Manual on our website.

JBI Critical Appraisal Tools
All systematic reviews incorporate a process of critique or appraisal of the research evidence. The purpose of this appraisal is to assess the methodological quality of a study and to determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis.
All papers selected for inclusion in the systematic review (that is, those that meet the inclusion criteria described in the protocol) need to be subjected to rigorous appraisal by two critical appraisers. The results of this appraisal can then be used to inform the synthesis and interpretation of the results of the study. JBI critical appraisal tools have been developed by the JBI and collaborators and approved by the JBI Scientific Committee following extensive peer review. Although designed for use in systematic reviews, JBI critical appraisal tools can also be used when creating Critically Appraised Topics (CATs), in journal clubs and as an educational tool.

JBI Critical Appraisal Checklist

Reviewer:          Date:
Author:            Year:          Record Number:

Answer each question: Yes / No / Unclear / Not applicable

1. Was a consecutive or random sample of patients enrolled?
2. Was a case control design avoided?
3. Did the study avoid inappropriate exclusions?
4. Were the index test results interpreted without knowledge of the results of the reference standard?
5. If a threshold was used, was it pre-specified?
6. Is the reference standard likely to correctly classify the target condition?
7. Were the reference standard results interpreted without knowledge of the results of the index test?
8. Was there an appropriate interval between index test and reference standard?
9. Did all patients receive the same reference standard?
10. Were all patients included in the analysis?

Overall appraisal: Include / Exclude / Seek further info
Comments (including reason for exclusion):

Diagnostic Test Accuracy Studies Critical Appraisal Tool

How to cite:
Whiting PF, Rutjes AW, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, Leeflang MM, Sterne JA, Bossuyt PM, QUADAS-2 Group. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529-36.
Campbell JM, Klugar M, Ding S, Carmody DP, Hakonsen SJ, Jadotte YT, White S, Munn Z. Diagnostic test accuracy: methods for systematic review and meta-analysis. Int J Evid Based Healthc. 2015;13(3):154-62.

Answers: Yes, No, Unclear or Not Applicable

Patient selection

1. Was a consecutive or random sample of patients enrolled?
Studies should state or describe their method of enrolment. If it is claimed that a random sample was chosen, the method of randomization should be stated (and should be appropriate). It is acceptable if studies do not use the word "consecutive" but instead describe consecutive enrolment, i.e. "all patients from … till … were included".

2. Was a case control design avoided?
Case control studies are described in detail in the reviewer's manual. In essence, if a study design involves recruiting participants who are already known by other means to have the diagnosis of interest and investigating whether the test of interest correctly identifies them as such, the answer is No.

3. Did the study avoid inappropriate exclusions?
If patients are excluded for reasons that would likely influence the conduct, interpretation or results of the test, this may bias the results. Examples include: excluding patients on whom the test is difficult to conduct, excluding patients with borderline results, and excluding patients with clear clinical indicators of the diagnosis of interest.

Index test

4. Were the index test results interpreted without knowledge of the results of the reference standard?
The results of the index test should be interpreted by someone who is blind to the results of the reference test. The reference test may not have been conducted at the point the index test is carried out; if so, the answer to this question will be Yes. If the person who interprets the index test also interpreted the reference test, it is assumed that this question will be answered No unless there are other factors in play (for instance, the interpretation of the results may be separate from their collection, in which case the interpreter may be blinded to patient identity and past reference test results).

5. If a threshold was used, was it pre-specified?
Diagnostic thresholds may be chosen based on what gives the optimum accuracy from the data, or they may be pre-specified. When no diagnostic threshold is applied (i.e. the result of a test is based on the observation of a specific characteristic which is either present or not), this question will be answered NA.

6. Is the reference standard likely to correctly classify the target condition?
The reference test should be the gold standard for the diagnosis of the condition of interest. Additionally, the reporting of the study should describe its conduct in sufficient detail that the reviewers can be confident it has been correctly and competently implemented.

7. Were the reference standard results interpreted without knowledge of the results of the index test?
The points made for criterion 4 apply equally here. The results of the reference test should be interpreted by someone who is blind to the results of the index test. The index test may not have been conducted at the point the reference test is carried out; if so, the answer to this question will be Yes. If the person who interprets the reference test also interpreted the index test, it is assumed that this question will be answered No unless there are other factors in play (for instance, the interpretation of the results may be separate from their collection, in which case the interpreter may be blinded to patient identity and past index test results).

8. Was there an appropriate interval between index test and reference standard?
The index test and the reference test should be carried out close enough together that the status of the patient could not have meaningfully changed. The maximum acceptable time will vary based on characteristics of the population and condition of interest.

9. Did all patients receive the same reference standard?
The reference standard by which patients are classed as having or not having the condition of interest should be the same for all patients. If the results of the index test influence how or whether the reference test is used (e.g. where an apparent false negative is detected, the study design may call for a double check), this may result in biased estimates of sensitivity and specificity. Additionally, in some studies two parallel reference tests may be used (on different patients) and the results then pooled. In either case the answer should be No.
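The bias described under criterion 9 can be illustrated numerically. The sketch below uses entirely hypothetical counts (not drawn from any real study) to show how verifying only index-positive patients against the reference standard inflates apparent sensitivity:

```python
# Hypothetical 2x2 counts for 200 patients, all verified by the same
# reference standard:
tp, fn = 80, 20   # diseased patients: true positives, false negatives
fp, tn = 10, 90   # healthy patients:  false positives, true negatives

# Sensitivity when every patient receives the reference standard:
full_sensitivity = tp / (tp + fn)      # 80/100 = 0.80

# If only index-positive patients are sent for the reference standard,
# the 20 false negatives are never identified as diseased, so the
# observed sensitivity is computed from verified patients only:
observed_sensitivity = tp / (tp + 0)   # appears to be 1.00

print(full_sensitivity, observed_sensitivity)
```

A study with this design would report a sensitivity of 1.00 rather than the true 0.80, which is why differential use of the reference standard warrants a No.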

10. Were all patients included in the analysis?
Losses to follow-up should be explained, and their cause and frequency should be considered when judging whether they are likely to have had an effect on the results. (Some subjectivity exists here; overall, a low tolerance should be applied in deciding to answer No to this question, but a single withdrawal from a large cohort should not necessarily force a negative response.) However, if patients whose results are difficult to interpret have their data excluded from the analysis, this will exaggerate the estimate of diagnostic test accuracy (DTA), and this question should definitely be answered No.
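As a worked illustration of criterion 10, the sketch below uses hypothetical numbers and assumes that indeterminate index test results in diseased patients would, if retained, be counted as test-negative:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of diseased patients correctly identified by the index test."""
    return tp / (tp + fn)

# Hypothetical cohort: 100 diseased patients, of whom 10 have
# difficult-to-interpret (indeterminate) index test results.
tp, fn_clear, indeterminate = 80, 10, 10

# Counting the indeterminate results as test-negative:
sens_all = sensitivity(tp, fn_clear + indeterminate)   # 80/100 = 0.80

# Excluding the indeterminate patients from the analysis:
sens_excluded = sensitivity(tp, fn_clear)              # 80/90 ≈ 0.89

print(round(sens_all, 2), round(sens_excluded, 2))
```

Dropping the 10 hard-to-interpret patients raises the reported sensitivity from 0.80 to about 0.89 without any change in test performance, which is the exaggeration of DTA the criterion warns against.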