GATE CAT Case Control Studies

GATE: a Graphic Approach To Evidence-based practice (updates from the previous version in red)

Critically Appraised Topic (CAT): applying the 5 steps of Evidence-Based Practice, using evidence about aetiology/risk/interventions from case control studies.

Assessed by:                Date:

Problem: Describe the problem that led you to seek an answer from the literature about aetiology/risk/interventions.

Step 1: Ask a focused 5-part question using the PECOT framework (EITHER your question OR the study's question)

Population: Describe the relevant patient/client/population group (be specific about medical condition, age group, sex, etc.).
Exposure (Intervention): Describe the risk/intervention factor(s) you want to find out about. Be reasonably specific: e.g. how defined? When? By whom?
Comparison (Control): Describe an appropriate comparison group; be reasonably specific.
Outcomes: List the relevant health/disease-related outcomes you wish to investigate.
Time: Enter a realistic time period within which you would expect to observe these outcomes.

Step 2: Access (search) the best evidence using the PECOT framework

Enter your key search terms for at least P, E & O (C & T may not be so useful for searching). Use MeSH terms (from PubMed) if available, then text words. Synonyms within each row are combined with OR, and the rows are combined with AND; a minimal sketch of this structure follows at the end of this step.

PECOT item                  | Primary search term | Synonym 1              | Synonym 2
Population / participants   | (your term) OR      | (relevant synonym) OR  | (relevant synonym), then AND
Exposure (Intervention)     | as above OR         | as above OR            | as above, then AND
Comparison (Control)        | as above OR         | as above OR            | as above, then AND
Outcomes                    | as above OR         | as above OR            | as above

Limits & filters: PubMed has Limits (e.g. age, English language, years) and PubMed Clinical Queries has Filters (e.g. study type) to help focus your search. List those used.

Databases searched (number of publications/hits):
- Cochrane: enter the number of hits from the Cochrane search.
- Other secondary sources: enter the number of hits from other secondary sources.
- PubMed / Ovid MEDLINE: enter the number of hits from PubMed/Ovid/etc. (specify the database).
- Other: enter the number of hits from other sources (e.g. Google Scholar, Google).

Evidence selected: Enter the full citation of the publication you have selected to evaluate.

Justification for selection: State the main objectives of the study and explain why you chose this publication for evaluation.
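The Step 2 grid maps directly onto a Boolean search string: synonyms within each PECOT row are combined with OR, and the rows are combined with AND. The following is a minimal sketch of that mapping only; the search terms shown are hypothetical placeholders, not recommended terms.

```python
# Sketch of the Step 2 logic: OR within each PECOT row, AND across rows.
# All terms below are hypothetical placeholders.
pecot_terms = {
    "Population": ["adults", "middle aged"],
    "Exposure": ["smoking", "tobacco use"],
    "Outcomes": ["lung neoplasms", "lung cancer"],
}

query = " AND ".join(
    "(" + " OR ".join(terms) + ")" for terms in pecot_terms.values()
)
print(query)
# (adults OR middle aged) AND (smoking OR tobacco use) AND (lung neoplasms OR lung cancer)
```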

Case control studies about aetiology/risk/interventions

Step 3: Appraise the study
3a. Describe the study by hanging it on the GATE frame (also enter the study numbers into the separate Excel GATE calculator).

Study setting: Describe when and from where participants were recruited (e.g. what year(s), which country, urban/rural, hospital/community).

Cases, eligible population & recruitment: Define the eligible population (if possible) from which the cases were recruited (e.g. by age / gender / geographic / administrative region). Describe the case recruitment process (e.g. recruited from an electoral / birth / hospital admission register, media advert, etc.) and how cases were recruited (e.g. consecutive eligible cases). What percentage of invited eligible cases participated? What reasons were given for non-participation?

Controls, eligible population & recruitment: Define the eligible population (if possible) from which the controls were recruited (as above). Describe the control recruitment process (as above for cases). What percentage of invited eligible controls participated? What reasons were given for non-participation?

Allocation method: Cases and controls are allocated to the exposure group (EG) or comparison group (CG) by measurement of the risk/intervention factors.

Exposure: Describe the risk/intervention factor(s): what, how defined, how measured, when, and by whom, for cases and for controls.

Comparison: Describe the comparison risk/intervention factor(s) as above.

Outcome (case definition): Describe the outcome that made a person a case. How was it defined? How, when and by whom were cases identified?

Time (T): State the relevant time between when participants were exposed to the risk factor/intervention and the outcome.

Reported results: Enter the main reported results.
Outcome | Risk estimate (including the measure, e.g. OR) | Confidence interval

Complete the numbers on the separate GATE calculator for case control studies; the core calculation is sketched below.
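The GATE calculator itself is a separate Excel file; as a rough cross-check of its core case-control arithmetic, a minimal sketch follows. The counts are hypothetical, and the confidence interval uses the standard Woolf (log odds) approximation rather than whatever method the calculator implements.

```python
import math

# Hypothetical 2x2 counts (not from any real study):
# a = exposed cases, b = exposed controls,
# c = unexposed cases, d = unexposed controls.
a, b, c, d = 40, 20, 60, 80

# Odds of exposure in cases (a/c) divided by odds of exposure in controls (b/d).
odds_ratio = (a * d) / (b * c)

# Woolf (log) method: approximate 95% CI from the standard error of ln(OR).
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
# OR = 2.67, 95% CI 1.42 to 5.02
```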

Step 3: Appraise the study
3b. Assess the risk of errors using RAMboMAN.

The appraisal questions are grouped by the type of error they address:
- Recruitment/Applicability errors: application of the results in practice, and the risk of errors due to differences in the recruitment of cases and controls (blue boxes).
- Internal study design errors: the risk of errors within the study's design and conduct (pink boxes).
- Analysis errors: errors in the analyses (orange boxes).
- Random error: the risk of errors due to chance (green box).

Key for scoring the risk of errors: + = low; x = of concern; ? = unclear; na = not applicable.

Recruitment: are the findings based on these recruited participants applicable in practice?

- Study setting relevant to practice? Is the study setting (e.g. what year(s), which country, urban/rural, hospital/community) likely to influence the applicability of the study results?
- Eligible population for cases relevant to practice? Was the eligible population from which cases were identified relevant to the study objective and to practice? Were inclusion and exclusion criteria explicit and applied similarly to all eligible cases?
- Eligible population for controls relevant to practice? Was the population from which controls were identified relevant to the study objective? Were inclusion and exclusion criteria explicit and applied similarly to all eligible controls?
- Cases and controls recruited from the same population? For example, were all cases and controls drawn from the same electoral roll or geographic area?
- Recruited cases and controls similar to all eligible cases and controls? Was sufficient information given about eligible people who did not participate? Were response rates similar in cases and controls? The control group provides the background proportion of exposure within the eligible population (and therefore the proportion expected in the case group if there were no association), so recruitment of controls must be independent of the main exposure(s) being investigated (made concrete in the sketch at the end of this page).
- Key personal (risk/prognostic) characteristics of cases and controls that would influence applicability in practice reported? Was there sufficient information about the characteristics of cases and controls to determine the applicability of the study results? Was any important information missing?

Allocation to EG & CG done well?

- E & C (risk/intervention) factors sufficiently well defined and measured that cases and controls were allocated to the correct exposure status? Were the E & C definitions described in sufficient detail for the measurements to be replicated? Were the measurements done accurately and similarly in cases and controls? Were the criteria / cut-off levels of categories well justified?
- E & C (risk/intervention) factors measured prior to outcomes occurring in cases? If E or C status was assessed retrospectively in cases: (i) was it likely to have been affected by the study outcomes (e.g. angina, the outcome, can influence level of physical activity, the E or C)? (ii) were cases and controls likely to have different recall of exposure information?
- Are the E & C factors measurable, relevant and affordable in usual practice?
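To make the point about the control group concrete: with hypothetical, made-up counts, the sketch below shows how the exposure proportion among controls supplies the proportion you would expect among cases if there were no association.

```python
# Hypothetical counts: the controls estimate the background exposure proportion.
exposed_controls, total_controls = 20, 100
background_proportion = exposed_controls / total_controls    # 0.20

# Under no association, ~20% of cases would also be expected to be exposed.
total_cases = 100
expected_exposed_cases = background_proportion * total_cases  # 20.0

observed_exposed_cases = 40  # hypothetical observed count
print(expected_exposed_cases, observed_exposed_cases)
# 20.0 vs 40: the excess exposure among cases is what suggests an association.
```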

Maintenance in allocated groups and throughout the study sufficient?

- Response rates of eligible cases and controls sufficiently high and similar? Were the proportions of eligible cases and controls who were identified but did not participate acceptably low? Did this differ between cases and controls? Was it likely to differ depending on E or C status?
- E/C (risk/intervention) definitions accurately classified exposures throughout the exposure period of interest (the virtual follow-up period)? Did the E/C definitions include the length of time cases and controls had been exposed to E or C?
- E & C cases/controls treated similarly? Had E/C cases and E/C controls been treated, and behaved, similarly other than in regard to the E & C factors?
- Cases and controls blind to their risk/intervention status? If cases and controls were aware of their risk/intervention status, were E & C cases or E & C controls treated differently, or did they behave differently, in ways that influenced response rates or exposure status differentially?

Measurement of outcomes: were they blind or objective, and done accurately?

- Outcomes (case status) measured blind to E or C status? Were outcome assessors aware of the risk/intervention status of the cases before case status was determined? If yes, could this have caused errors in outcome diagnosis or classification?
- Outcomes (case status) measured objectively? How objective were the outcome measures (e.g. death, an automatic test, strict diagnostic criteria)? Where significant judgement was required, were independent adjudicators used? Was the reliability of the measures relevant (inter-rater and intra-rater) and, if so, reported?
- Outcome meaningful/relevant in usual practice?
- Exposure period of interest (virtual follow-up time) sufficient to be meaningful? Was the period of exposure to E or C before cases and controls were identified sufficient to demonstrate an association between the factor(s) and the outcome(s)? Or was it either too short for the risk/intervention factors to have influenced the outcome, or too long (e.g. the effect may have worn off)?

ANalyses: were they done appropriately?

- If E/C cases and controls were not similar at baseline, was this adjusted for in the analyses (e.g. using multivariate analyses or stratification)? Were there likely to be residual differences causing confounding?
- Estimates of the associations between E or C and the outcome(s) given or calculable, and calculated correctly? Were ORs or RRs given or possible to calculate? If the numbers were entered into the GATE calculator, were the GATE results similar to the reported results?
- Is the odds ratio (if calculated) likely to approximate a relative risk? ORs and RRs are likely to be similar when the outcome (cases) is relatively uncommon: if less than 10-15% of the eligible population are cases, the OR will approximate the equivalent RR (see the sketch following the summary below).
- Measures of the amount of random error in the estimates of association given or calculable, and calculated correctly? Were confidence intervals and/or p-values for the estimates of association given or possible to calculate? If they could be entered into the GATE calculator, were the GATE results similar to the reported results? If the estimates were not statistically significant, were power calculations given or possible to calculate?

Summary of the study appraisal

- Study design & conduct: was the risk of error low (i.e. were the results reasonably unbiased)? Use the responses to the questions in the pink boxes above.
- Study analyses: was the risk of error low (i.e. were the results reasonably unbiased)? Use the responses from the orange boxes above.
- Random error in the estimates of intervention effects: were the confidence intervals sufficiently narrow for the results to be meaningful? Use the responses to the questions in the green box above. Would you make a different decision if the true effect was close to the upper confidence limit rather than close to the lower confidence limit?
- Applicability: are these findings applicable in practice? Use the responses to the questions in the blue boxes above.
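To illustrate the 10-15% rule of thumb above: in a case-control study the RR cannot be estimated directly, but the hypothetical sketch below applies both formulas to the same underlying 2x2 counts to show why the OR approximates the RR when the outcome is uncommon.

```python
def or_and_rr(a, b, c, d):
    """a, b = exposed with/without the outcome; c, d = unexposed with/without it."""
    rr = (a / (a + b)) / (c / (c + d))
    odds_ratio = (a * d) / (b * c)
    return odds_ratio, rr

# Common outcome (40% of the exposed are cases): the OR overstates the RR.
print(or_and_rr(40, 60, 20, 80))   # approx (2.67, 2.0)

# Rare outcome (4% of the exposed are cases): the OR is close to the RR.
print(or_and_rr(4, 96, 2, 98))     # approx (2.04, 2.0)
```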

Step 4: Apply. Consider and weigh up all the factors and make a (shared) decision to act.

The form's decision diagram places the patient & family at the centre, surrounded by the epidemiological evidence, the case circumstances, system features, values & preferences, and the X-factors (economic, community, legal, political and practitioner factors).

- Epidemiological evidence: summarise the quality of the study appraised, the magnitude and precision of the measure(s) estimated, and the applicability of the evidence. Also summarise its consistency with other studies (ideally systematic reviews) relevant to the decision.
- Case circumstances: what circumstances specifically related to the problem you are investigating (e.g. disease process / co-morbidities [mechanistic evidence], social situation) may impact on the decision?
- System features: were there any system constraints or enablers that may impact on the decision?
- Values & preferences: what values and preferences may need to be considered in making the decision?

Decision: taking into account all the factors above, what is the best decision in this case?

Step 5: Audit usual practice (for quality improvement). Is there likely to be a gap between your usual practice and best practice for this problem?

V3: 2014. Please contribute your comments and suggestions on this form to: rt.jackson@auckland.ac.nz