CRITICAL APPRAISAL SKILLS PROGRAMME
Making sense of evidence about clinical effectiveness
11 questions to help you make sense of a case control study

General comments

Three broad issues need to be considered when appraising a case control study:
- Are the results of the study valid?
- What are the results?
- Will the results help locally?

The 11 questions on the following pages are designed to help you think about these issues systematically. The first two questions are screening questions and can be answered quickly. If the answer to those two is "yes", it is worth proceeding with the remaining questions. There is a fair degree of overlap between several of the questions.

You are asked to record a "yes", "no" or "can't tell" answer to most of the questions. A number of italicised prompts are given after each question. These are designed to remind you why the question is important. Record your reasons for your answers in the spaces provided.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/3.0/

Critical Appraisal Skills Programme (CASP), Case Control study checklist_14.10.10

A/ Are the results of the study valid?

Screening questions

1. Did the study address a clearly focused issue?
HINT: A question can be focused in terms of:
- the population studied
- the risk factors studied
- whether the study tried to detect a beneficial or harmful effect

2. Did the authors use an appropriate method to answer their question?
HINT: Consider:
- Is a case control study an appropriate way of answering the question under the circumstances (is the outcome rare or harmful)?
- Did it address the study question?

Is it worth continuing?

Detailed questions

3. Were the cases recruited in an acceptable way?
HINT: We are looking for selection bias which might compromise the validity of the findings:
- Are the cases defined precisely?
- Were the cases representative of a defined population (geographically and/or temporally)?
- Was there an established, reliable system for selecting all the cases?
- Are they incident or prevalent?
- Is there something special about the cases?
- Is the time frame of the study relevant to the disease/exposure?
- Was a sufficient number of cases selected?
- Was there a power calculation? (A worked sketch follows this list.)
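
For question 3, where a power calculation is reported you can sanity-check the numbers yourself. The sketch below is a minimal, illustrative calculation (it is not part of the CASP checklist) of the cases needed in an unmatched 1:1 case control study, using the standard normal-approximation formula for comparing two proportions; the control exposure prevalence and target odds ratio are assumed values chosen only for illustration.

```python
from math import ceil
from scipy.stats import norm

def cases_needed(p0, odds_ratio, alpha=0.05, power=0.80):
    """Cases needed (with an equal number of controls) to detect a
    given odds ratio when a proportion p0 of controls are exposed."""
    # Exposure prevalence among cases implied by the target odds ratio
    p1 = odds_ratio * p0 / (1 + p0 * (odds_ratio - 1))
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_beta = norm.ppf(power)           # desired power
    numerator = (z_alpha + z_beta) ** 2 * (p0 * (1 - p0) + p1 * (1 - p1))
    return ceil(numerator / (p1 - p0) ** 2)

# Assumed scenario: 20% of controls exposed, target OR of 2.0
print(cases_needed(p0=0.20, odds_ratio=2.0))  # about 169 cases
```

If the study recruited far fewer cases than such a calculation suggests, a "negative" finding may simply reflect inadequate power rather than absence of an association.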

4. Were the controls selected in an acceptable way?
HINT: We are looking for selection bias which might compromise the generalisability of the findings:
- Were the controls representative of a defined population (geographically and/or temporally)?
- Was there something special about the controls?
- Was non-response high? Could non-respondents differ in any way?
- Are they matched, population based or randomly selected?
- Was a sufficient number of controls selected?

5. Was the exposure accurately measured to minimise bias?
HINT: We are looking for measurement, recall or classification bias:
- Was the exposure clearly defined and accurately measured?
- Did the authors use subjective or objective measurements?
- Do the measures truly reflect what they are supposed to measure (have they been validated)?
- Were the measurement methods similar in the cases and controls?
- Did the study incorporate blinding where feasible?
- Is the temporal relation correct (does the exposure of interest precede the outcome)?

6. A. What confounding factors have the authors accounted for?
HINT: List the ones you think might be important that the authors missed (genetic, environmental and socio-economic).
B. Have the authors taken account of the potential confounding factors in the design and/or in their analysis?
HINT: Look for restriction in the design, and for techniques such as modelling, stratified, regression or sensitivity analysis to correct, control or adjust for confounding factors. (An illustrative sketch follows below.)
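
For question 6B, one common analysis technique is multivariable logistic regression. The sketch below is purely illustrative and not part of the CASP checklist: it fits a crude and an age-adjusted model to synthetic data using the statsmodels library; all variable names and numbers are assumptions chosen to show how adjusting for a confounder can change the odds ratio.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data for illustration only: age influences both the
# chance of exposure and the chance of being a case (a confounder).
rng = np.random.default_rng(0)
n = 400
age = rng.normal(50, 10, n)
exposure = rng.binomial(1, 1 / (1 + np.exp(-(age - 50) / 10)))
case = rng.binomial(
    1, 1 / (1 + np.exp(-(-1 + 0.7 * exposure + 0.05 * (age - 50))))
)
df = pd.DataFrame({"case": case, "exposure": exposure, "age": age})

# Crude model versus a model adjusted for the confounder
crude = smf.logit("case ~ exposure", data=df).fit(disp=False)
adjusted = smf.logit("case ~ exposure + age", data=df).fit(disp=False)

# Exponentiating a coefficient gives the corresponding odds ratio
print("crude OR:   ", np.exp(crude.params["exposure"]))
print("adjusted OR:", np.exp(adjusted.params["exposure"]))
```

Comparing the crude and adjusted odds ratios in this way is exactly the check asked for under question 7 below ("has adjustment made a big difference to the OR?").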

B/ What are the results?

7. What are the results of this study?
HINT: Consider:
- What are the bottom line results?
- Is the analysis appropriate to the design?
- How strong is the association between exposure and outcome (look at the odds ratio)?
- Are the results adjusted for confounding, and might confounding still explain the association?
- Has adjustment made a big difference to the OR?

8. How precise are the results? How precise is the estimate of risk?
HINT: Consider:
- the size of the P-value
- the size of the confidence intervals (a worked sketch follows this section)
- Have the authors considered all the important variables?
- How was the effect of subjects refusing to participate evaluated?

9. Do you believe the results?
HINT: Consider:
- A big effect is hard to ignore!
- Can it be due to chance, bias or confounding?
- Are the design and methods of this study sufficiently flawed to make the results unreliable?
- the Bradford Hill criteria (e.g. time sequence, dose-response gradient, strength, biological plausibility)
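
For questions 7 and 8, when the paper reports a 2x2 table you can reproduce the crude odds ratio and its confidence interval yourself. The sketch below is an illustrative calculation with assumed cell counts: OR = (a*d)/(b*c), with a 95% confidence interval from Woolf's method, exp(ln(OR) ± 1.96 * sqrt(1/a + 1/b + 1/c + 1/d)).

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI (Woolf's method) from a 2x2 table:
       a = exposed cases,    b = unexposed cases,
       c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of ln(OR)
    return or_, (exp(log(or_) - z * se), exp(log(or_) + z * se))

# Assumed counts: 40 of 100 cases exposed, 20 of 100 controls exposed
print(odds_ratio_ci(40, 60, 20, 80))  # OR ≈ 2.67, 95% CI ≈ (1.42, 5.02)
```

A 95% confidence interval that excludes 1 corresponds to a two-sided P-value below 0.05; a wide interval signals an imprecise estimate, whatever the P-value.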

C/ Will the results help me locally?

10. Can the results be applied to the local population?
HINT: Consider whether:
- the subjects covered in the study could be sufficiently different from your population to cause concern
- your local setting is likely to differ much from that of the study
- you can quantify the local benefits and harms

11. Do the results of this study fit with other available evidence?
HINT: Consider all the available evidence from RCTs, systematic reviews, cohort studies and case control studies for consistency.

One observational study rarely provides sufficiently robust evidence to recommend changes to clinical practice or health policy decisions. However, for certain questions observational studies provide the only evidence. Recommendations from observational studies are always stronger when supported by other evidence.