The Effective Public Health Practice Project Tool

Transcription:

The Effective Public Health Practice Project Tool
International Workshop on Evidence-Based Public Health: Concepts and Methods
Munich, 17th and 18th November 2010
Dr. Eva Rehfuess, Institute for Medical Informatics, Biometry and Epidemiology, University of Munich
Email: rehfuess@ibe.med.uni-muenchen.de

Potential bias in an epidemiological study

[Flow diagram: the stages of a study, with the bias that can arise at each stage]
- Recruit participants: SELECTION BIAS
- Allocate to groups: ALLOCATION BIAS; differences between intervention and control groups: CONFOUNDING
- Implement intervention: INTEGRITY OF INTERVENTION; INTENTION-TO-TREAT
- Follow up participants: ATTRITION BIAS
- Measure outcomes: DETECTION BIAS / RECALL BIAS; RELIABILITY AND VALIDITY OF DATA COLLECTION METHODS
- Analyse data: STATISTICAL ANALYSIS

Components of the EPHPP tool

A. Selection bias
B. Study design
C. Confounders
D. Blinding
E. Data collection methods
F. Withdrawals and drop-outs
G. Intervention integrity
H. Analyses

Components A-F are assessed and given a component rating of strong, moderate or weak; components G and H are assessed but receive no component rating. An overall study rating of strong, moderate or weak is then derived from the component ratings.

A. Selection bias
Systematic differences in characteristics between those who are selected for the study and those who are not.
Example: Medical students without any prior skin disease are asked to volunteer for a randomised controlled trial on the effectiveness of an information package about UV radiation and sun protection.

EPHPP Tool: Selection bias
- Are the individuals selected to participate in the study likely to be representative of the target population?
- What percentage of selected individuals agreed to participate?

B. Allocation bias
Systematic differences in the characteristics of those assigned to intervention versus control groups.
Methods of allocation:
- Randomisation (e.g. coin toss, computer-generated random number tables)
- Quasi-randomisation (e.g. alternation, allocation by date of birth)
- Selection by researcher, self-identification, etc.

EPHPP Tool: Study design
- Indicate the study design.
- Was the study described as randomised? If yes, was the method of randomisation described? If yes, was the method appropriate?

C. Confounding
Factors that are associated with the intervention and causally related to the outcome of interest (both known and unknown confounders).
Example: Fair skin is associated with the likelihood of using sunscreen (intervention) and with the likelihood of developing skin cancer (outcome).

EPHPP Tool: Confounders
- Were there important differences between groups prior to the intervention?
- If yes, indicate the percentage of relevant confounders that were controlled for (either in the design or in the analysis).

D. Blinding
Detection bias: systematic differences between groups in how outcomes are determined -> addressed by blinding outcome assessors to the intervention/control status of participants.
Recall (outcome reporting) bias: systematic differences in the way groups answer questions -> addressed by blinding participants to the research question.

EPHPP Tool: Blinding
- Was (were) the outcome assessor(s) aware of the intervention or exposure status of participants?
- Were the study participants aware of the research question?

E. Data collection methods
Validity: the degree to which a measurement accurately reflects or assesses the specific phenomenon that the researcher is attempting to measure.
Reliability: repeatability of measurement, i.e. the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects.
Example: testing questionnaire items on behaviour change against observed behaviour change (validity); repeated pre-testing of the questionnaire with the same group of individuals (reliability).

EPHPP Tool: Data collection methods
- Were data collection tools shown to be valid?
- Were data collection tools shown to be reliable?

F. Attrition bias
Systematic differences between groups in withdrawals from a study.
Example: Completed follow-up responses were obtained from 87% of those in the intervention group and 69% of those in the control group. There were no significant differences between respondents and non-respondents in age, sex, skin type or socio-economic status.

EPHPP Tool: Withdrawals and drop-outs
- Were withdrawals and drop-outs reported in terms of numbers and/or reasons per group?
- Indicate the percentage of participants completing the study.

Applying the EPHPP tool

Process:
- Independent rating of study quality by two persons
- Comparison of the individual ratings to reach consensus on each component
- Involvement of a third person if consensus cannot be reached

Overall study quality is rated based on the combination of component ratings:
- STRONG: four strong ratings with no weak ratings
- MODERATE: fewer than four strong ratings and one weak rating
- WEAK: two or more weak ratings

Components of the EPHPP tool: form and accompanying dictionary, available at http://www.ephpp.ca/tools.html
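The overall-rating rule above is a simple decision procedure. As an illustrative sketch (not official EPHPP software), it can be expressed as follows; where the slide's wording leaves a combination unspecified, this sketch defaults to a moderate rating:

```python
def overall_rating(component_ratings):
    """Derive an overall EPHPP study rating from component ratings.

    component_ratings: a list of "strong" / "moderate" / "weak" strings,
    one per rated component (A-F).
    Rule from the slides: STRONG = four strong ratings with no weak
    ratings; MODERATE = fewer than four strong ratings and one weak
    rating; WEAK = two or more weak ratings. Combinations not covered
    by that wording are treated as moderate here (an assumption).
    """
    strong = component_ratings.count("strong")
    weak = component_ratings.count("weak")
    if weak >= 2:
        return "weak"
    if strong >= 4 and weak == 0:
        return "strong"
    return "moderate"

# A single weak component drags an otherwise strong study to moderate:
print(overall_rating(["strong"] * 5 + ["weak"]))                  # moderate
print(overall_rating(["strong"] * 4 + ["moderate", "moderate"]))  # strong
print(overall_rating(["weak", "weak"] + ["strong"] * 4))          # weak
```

Checking the weak count first mirrors the tool's intent: two or more weak components always yield a weak overall rating, regardless of how many components are strong.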