GATE CAT Intervention RCT/Cohort Studies

GATE: a Graphic Approach To Evidence-based practice (updates from the previous version in red)

Critically Appraised Topic (CAT): Applying the 5 steps of Evidence-Based Practice, using evidence about interventions from randomised controlled trials (RCTs) & non-randomised cohort studies.

Assessed by: Date:

Problem: Describe the problem that led you to seek an answer from the literature about the effectiveness of interventions.

Step 1: Ask a focused 5-part question using the PECOT framework (EITHER your question OR the study's question)

Population / Participants: Describe the relevant patient/client/population group (be specific about medical condition, age group, sex, patient/client type, etc.).
Exposure (Intervention): Describe the intervention(s) you want to find out about. Be reasonably specific: e.g. how much? when? how administered? for how long?
Comparison (Control): Describe the alternative intervention (e.g. nothing, or usual care) you want to compare it with. Be reasonably specific.
Outcomes: List the relevant health/disease-related outcomes you would like to prevent, reduce, etc.
Time: Enter a realistic time period within which you would expect to observe a change in these outcomes.

Step 2: Access (search) the best evidence using the PECOT framework

For each PECOT item, enter a primary search term and one or two synonyms; combine the synonyms within an item with OR, and combine the items with AND. Enter key search terms for at least P, E & O; C & T may not be so useful for searching. Use MeSH terms (from PubMed) if available, then text words.

Limits & Filters: PubMed has Limits (e.g. age, English language, years) & PubMed Clinical Queries has Filters (e.g. study type) to help focus your search. List those used.
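The Step 2 search logic (synonyms ORed within each PECOT item, items combined with AND) can be sketched as a small helper. The function name `build_query` and the example search terms are illustrative only; they are not part of the GATE form:

```python
# Sketch of the Step 2 search logic: synonyms within a PECOT item are
# combined with OR, and the items themselves are combined with AND.
# `build_query` and the example terms below are illustrative only.

def build_query(pecot_items):
    """pecot_items: one list of synonyms per PECOT item used (e.g. P, E, O)."""
    ored = ["(" + " OR ".join(terms) + ")" for terms in pecot_items]
    return " AND ".join(ored)

query = build_query([
    ["stroke", "cerebrovascular accident"],            # P: population
    ["vocational rehabilitation", "job retraining"],   # E: intervention
    ["return to work"],                                # O: outcome
])
print(query)
# (stroke OR cerebrovascular accident) AND (vocational rehabilitation OR job retraining) AND (return to work)
```

The resulting string can be pasted into PubMed or a similar database, then narrowed with the Limits and Filters listed above.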
Databases searched:
Cochrane SRs: enter the number of hits from the Cochrane database search for systematic reviews (SRs).
Other secondary sources: enter the number of hits from other secondary sources (specify the source).
PubMed / Ovid Medline: enter the number of hits from PubMed/Ovid/etc. (specify the database).
Other: enter the number of hits from other sources (e.g. Google Scholar, Google).

Evidence selected: Enter the full citation of the publication you have selected to evaluate.

Justification for selection: State the main objectives of the study and explain why you chose this publication for evaluation.

V8: 2014: Please contribute your comments and suggestions on this form to: rt.jackson@auckland.ac.nz

Step 3: Appraise the study
3a. Describe the study by hanging it on the GATE frame (also enter the study numbers into the separate Excel GATE calculator).

Study setting: Describe when & from where participants were recruited (e.g. what year(s), which country, urban/rural, hospital/community).

Eligible population (P): Define the eligible population and the main eligibility (inclusion and exclusion) criteria.

Recruitment process (Participants): Describe the recruitment process (e.g. were eligibles recruited from an electoral/birth/hospital-admission register, or a media advert, etc.) and how they were recruited (e.g. random sample, consecutive eligibles). What percentage of the invited eligibles participated? What reasons were given for non-participation among those otherwise eligible?

Allocation methods (Exposure Group (EG) & Comparison Group (CG)):
For RCTs, describe the method used to generate the random allocation sequence and the method used to ensure that the allocation outcome could not be changed by the participants or by those assigning interventions.
For non-randomised studies, describe the methods/measures used to allocate participants to the EG & CG.

Exposure (E): Describe the main intervention: what, how much, how, when, for how long & by whom administered.

Comparison (C): Describe the comparison intervention (given to the CG): as above.

Outcomes (O): enter the main reported results.
Primary outcome: Describe the primary outcome. How was it defined? How & by whom was it measured? Is it categorical (the variable is grouped into categories, e.g. dead/alive) or numerical (the variable has a numerical value, e.g. weight, days in hospital)?
Secondary outcomes: Describe any secondary outcomes. How & by whom were they measured?
Adverse outcomes: Describe any adverse outcomes measured. How & by whom were they measured?

Time (T): If outcomes were measured cross-sectionally (e.g. diabetes, blood pressure), state when they were measured in relation to when the intervention(s) began. If outcomes were measured over a period of time (e.g. deaths), state the length of follow-up after initiation of the intervention(s).

Effect estimate & confidence interval: Enter the main reported results, including the type of effect measure (e.g. RR, HR).
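Once the study numbers are hung on the GATE frame, the occurrences and effect estimates follow by simple arithmetic from four numbers: events and totals in each group. A minimal sketch (this is not the official Excel GATE calculator; the function name and the numbers are illustrative):

```python
# Occurrences and effect estimates from the four GATE-frame numbers:
# events and totals in the exposure group (EG) and comparison group (CG).
# A hedged sketch, not the official GATE Excel calculator.

def gate_effects(eg_events, eg_total, cg_events, cg_total):
    ego = eg_events / eg_total   # EGO: occurrence in the exposure group
    cgo = cg_events / cg_total   # CGO: occurrence in the comparison group
    rr = ego / cgo               # relative risk (RR)
    rd = ego - cgo               # risk difference (RD)
    nnt = 1 / abs(rd)            # number needed to treat (NNT)
    return ego, cgo, rr, rd, nnt

# Illustrative numbers: 15/100 events in the EG vs 30/100 in the CG.
ego, cgo, rr, rd, nnt = gate_effects(15, 100, 30, 100)
print(rr, rd, nnt)
```

With these illustrative numbers the RR is 0.5, the RD is -0.15, and the NNT is about 7: roughly seven people would need the intervention to prevent one event.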

3b. Assess the risk of errors using RAMboMAN (complete the numbers on the separate GATE calculator for intervention studies).

Score each appraisal question for risk of error and add notes: + = low; x = of concern; ? = unclear; na = not applicable. On the original form the questions are colour-coded: recruitment/applicability errors (risks to applying the results in practice) in blue boxes; internal study design errors (design & conduct) in pink boxes; analysis errors in orange boxes; and random error (chance) in the green box.

Recruitment: are the findings based on these recruited participants applicable in practice?
Study setting relevant to practice? Is the study setting (e.g. what year(s), which country, urban/rural, hospital/community) likely to influence the applicability of the study results?
Eligible population relevant to practice? Was the eligible population from which participants were identified relevant to the study objective and to practice? Were the inclusion & exclusion criteria well defined & applied similarly to all potential eligibles?
Participants similar to all eligibles? Did the recruitment process identify participants likely to be similar to all eligibles? Was sufficient information given about eligibles who did not participate?
Key personal (risk/prognostic) characteristics of participants that would influence applicability in practice reported? Was there sufficient information about the baseline characteristics of participants to determine the applicability of the study results? Was any important information missing?

Allocation: was allocation to the EG & CG done well?
Were E & C randomised? Were the exposure/comparison interventions reported to be allocated randomly?
If an RCT, was the method of random sequence generation adequate? Was it likely to produce similar groups (EG & CG)?
Allocation process concealed? Could the person(s) determining allocation &/or implementing the interventions have changed the allocation outcome before or during enrolment? If yes, was this sufficient to cause important bias?
Allocation process successful? Were the EG & CG similar at baseline? If not, was this sufficient to cause important bias without adjustment in the analyses (see the Analyses section below)?
E & C interventions sufficiently well described? Were the E & C interventions described in sufficient detail for the study to be replicated, or for the interventions to be replicated in practice?
E & C interventions applicable in practice? Is the E intervention available, implementable & affordable? Was the C intervention a relevant alternative?

Maintenance: was maintenance in the allocated groups, and on the allocated interventions, sufficient throughout the study?

Completeness of follow-up sufficiently high? Was the proportion of participants lost to follow-up or dropped before, during or after the intervention acceptably low? Did the proportion followed up differ between the EG & CG? Was this sufficient to cause important bias?
Compliance with the EG & CG interventions sufficiently high? Did most participants in the EG & CG remain on their allocated interventions throughout the study? Was compliance sufficient to demonstrate the effect of the interventions?
Contamination sufficiently low? Did any of the CG receive the EG intervention, or vice versa? If so, was this sufficient to cause important bias?
Co-interventions: were all other interventions similar in both groups? Were the groups treated, and did they behave, similarly apart from the EG & CG interventions? Did either group receive additional interventions, have services provided in a different manner, or change their behaviour? Was this sufficient to cause important bias?
Participants / study staff blind to the interventions? If participants/staff were aware of the interventions received, were the EG & CG treated differently, or did they behave differently, in ways that influenced follow-up, compliance, contamination or co-interventions differentially in the EG & CG? Was this sufficient to cause important bias?

Measurement (blind or objective): were the outcome measurements done accurately?
Outcomes measured blind to EG & CG status? Were outcome assessors (or participants) aware of whether participants were in the EG or CG? If yes, was this likely to lead to biased outcome measurement?
Outcomes measured objectively? How objective were the outcome measures (e.g. death, an automatic test, strict diagnostic criteria)? Where significant judgement was required, were independent adjudicators used? Was the reliability of the measures relevant (inter-rater & intra-rater) and, if so, reported?
All important outcomes assessed? Were both benefits and harms assessed? Was it possible to determine the overall balance of benefits and harms of the exposure/comparison?
Follow-up time similar in the EG & CG? If not, was this sufficient to cause important bias?
Follow-up time sufficient to be meaningful? Or was it either too short for the risk/prognostic factors to have influenced the outcome(s), or too long (e.g. the effect may have worn off)?

ANalyses: were they done appropriately?
Intention-to-treat analyses done? Were all participants analysed in the groups (EG & CG) to which they were originally allocated?
If the EG & CG were not similar at baseline, was this adjusted for in the analyses (e.g. using multivariate analyses or stratification)?
Estimates of intervention effects given or calculable, and calculated correctly? Were measures of occurrence (EGO & CGO) & effect estimates (e.g. RRs, RDs, NNTs) given or possible to calculate? If entered into the GATE calculator, were the GATE results similar to the reported results?
Measures of the amount of random error in the estimates of intervention effects given or calculable, and calculated correctly? Were confidence intervals &/or p-values for the effect estimates given or possible to calculate? If they could be entered into the GATE calculator, were the GATE results similar to the reported results? If the effect estimates were not statistically significant, were power calculations given or possible to calculate?

Summary of study appraisal:
Study design & conduct: was the risk of error low (i.e. are the results reasonably unbiased)? Use the responses to the questions in the pink boxes above.
Study analyses: was the risk of error low (i.e. are the results reasonably unbiased)? Use the responses from the orange boxes above.
Random error in the estimates of intervention effects: were the CIs sufficiently narrow for the results to be meaningful? Use the responses to the questions in the green box above. Would you make a different decision if the true effect was close to the upper confidence limit rather than close to the lower confidence limit?
Applicability: are these findings applicable in practice? Use the responses to the questions in the blue boxes above.
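The random-error questions involve two standard calculations: a confidence interval for the effect estimate and, when a result is not statistically significant, a power calculation. A hedged sketch using textbook normal-approximation formulas (the function names and the numbers are illustrative, and this is not the GATE calculator):

```python
# Two standard "random error" calculations, sketched with textbook
# normal-approximation formulas. Function names and numbers are
# illustrative; this is not the official GATE calculator.
import math

def rr_ci(a, n1, c, n0, z=1.96):
    """95% CI for the RR, given a/n1 events in the EG and c/n0 in the CG."""
    rr = (a / n1) / (c / n0)
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n0)   # SE of ln(RR), log method
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

def power_two_proportions(p1, p0, n1, n0, z_alpha=1.96):
    """Approximate power to detect occurrences p1 vs p0 with these group sizes."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    z = abs(p1 - p0) / se - z_alpha
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Illustrative numbers: 15/100 events in the EG vs 30/100 in the CG.
rr, lo, hi = rr_ci(15, 100, 30, 100)
print(f"RR {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
print(f"power {power_two_proportions(0.15, 0.30, 100, 100):.2f}")
```

A confidence interval that approaches 1.0 at one end, or a power well below the conventional 0.8, would flag exactly the random-error concerns the green-box questions ask about.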

Step 4: Apply. Consider and weigh up all the factors below and make a (shared) decision to act.

In the GATE decision diagram the decision sits at the centre, surrounded by: the epidemiological evidence; the patient's & family's values & preferences; the practitioner and the case circumstances; system features (economic, community, legal, political); and the X-factor.

Epidemiological evidence: summarise the quality of the study appraised, the magnitude and precision of the measure(s) estimated, and the applicability of the evidence. Also summarise its consistency with other studies (ideally systematic reviews) relevant to the decision.
Case circumstances: what circumstances specifically related to the problem you are investigating (e.g. the disease process / co-morbidities [mechanistic evidence], the social situation) may impact on the decision?
System features: were there any system constraints or enablers that may impact on the decision?
Values & preferences: what values & preferences may need to be considered in making the decision?
Decision: taking into account all the factors above, what is the best decision in this case?

Step 5: Audit usual practice (for quality improvement). Is there likely to be a gap between your usual practice and best practice for this problem?