Mapping the Informed Health Choices (IHC) Key Concepts (KC) to core concepts for the main steps of Evidence-Based Health Care (EBHC).

KC No | KC Short Title | KC Statement | EBHC Concept | EBHC Sub-concept | EBHC Statement | EBHC Step Reported

1.01 Treatments can harm
KC statement: Treatments may be harmful.
EBHC concept: Explain the use of harm/aetiology studies for (rare) adverse effects of interventions.
EBHC statement: Indicate that common treatment harms can usually be observed in controlled trials, but some rare or late harms will only be seen in observational studies. While critical appraisal of such studies is not a core skill, the learner needs to indicate when and why they are needed. Learners also need to recognise that a treatment may be harmful and that increasing the amount of an effective treatment does not necessarily increase its benefits and may cause harm.

1.02 Anecdotes are unreliable evidence
KC statement: Personal experiences or anecdotes are an unreliable basis for assessing the effects of most treatments.
EBHC concept: Know the rationale and origin of EBP. Recognise the disparity between our diagnostic skills and clinical judgment, which increase with experience, and our up-to-date knowledge and clinical performance, which decline.
EBHC statement: The disparity between our diagnostic skills and clinical judgment, which increase with experience, and our up-to-date knowledge and clinical performance, which decline.

1.03 Association is not the same as causation
KC statement: An outcome may be associated with a treatment, but not caused by the treatment.
EBHC concept: ...and applicability of health-related research. Recognise that association does not imply causation, and explain why.
EBHC statement: Understanding that association does not imply causation, and why (e.g. reverse causation, confounding), is also important.

1.04 Common practice is not always evidence-based
KC statement: Widely used treatments, or treatments that have been used for a long time, are not necessarily beneficial or safe.
EBHC concept: Know the rationale and origin of EBP. Recognise the gaps between evidence and practice leading to variations in practice and quality of care.
EBHC statement: The gaps between evidence and practice (including overuse and underuse of evidence) leading to variations in practice and quality of care.

1.05 Newer is not necessarily better
KC statement: New, brand-named, or more expensive treatments may not be better than available alternatives.
EBHC concept: Know the rationale and origin of EBP.
EBHC statement: In this concept, the learner needs to know the rationale and origin of EBP, including the daily clinical need for valid and quantitative information about diagnosis (e.g. knowing that earlier diagnosis doesn't mean better), prognosis, and therapy (e.g. new interventions aren't necessarily better than current alternatives).

1.06 Expert opinion is not always right
KC statement: Opinions of experts or authorities do not alone provide a reliable basis for deciding on the benefits and harms of treatments.
EBHC concept: Know the rationale and origin of EBP. Be aware of the inadequacy of traditional sources for this information.
EBHC statement: The inadequacy of traditional sources for this information, because they are out of date (traditional textbooks) or frequently biased (experts).

1.07 Beware of conflicting interests
KC statement: Conflicting interests may result in misleading claims about the effects of treatments.
EBHC concept: ...and applicability of health-related research. Know the importance of considering conflict of interest/funding sources when appraising articles.
EBHC statement: Know the importance of considering conflict of interest/funding sources when appraising articles.

1.08 More is not necessarily better
KC statement: Increasing the amount of a treatment does not necessarily increase the benefits of a treatment and may cause harm.
EBHC concept: Explain the use of harm/aetiology studies for (rare) adverse effects of interventions.
EBHC statement: Common treatment harms can usually be observed in controlled trials, but some rare or late harms will only be seen in observational studies. While critical appraisal of such studies is not a core skill, the learner needs to indicate when and why they are needed. Learners also need to recognise that a treatment may be harmful and that increasing the amount of an effective treatment does not necessarily increase its benefits and may cause harm.

1.09 Earlier is not necessarily better
KC statement: Earlier detection of disease is not necessarily better.
EBHC concept: Know the rationale and origin of EBP.
EBHC statement: In this concept, the learner needs to know the rationale and origin of EBP, including the daily clinical need for valid and quantitative information about diagnosis (e.g. knowing that earlier diagnosis doesn't mean better), prognosis, and therapy (e.g. new interventions aren't necessarily better than current alternatives).

1.10 Hope may lead to unrealistic expectations
KC statement: Hope or fear can lead to unrealistic expectations about the effects of treatments.
EBHC concept: Engage patients in the decision-making process, using shared decision making, including discussing the evidence and their preferences.
EBHC statement: In this concept, the learner needs to be able to engage patients in the decision-making process, to communicate benefits and harms to patients, and to be aware of the role of decision support tools such as patient decision aids in shared decision making.

1.10 Hope may lead to unrealistic expectations
KC statement: Hope or fear can lead to unrealistic expectations about the effects of treatments.
EBHC concept: Understand evidence-based practice (EBP), defined as the integration of the best research evidence with our clinical expertise and our patient's unique values and circumstances.
EBHC statement: Patient values and circumstances (i.e. the unique preferences, concerns and expectations each patient brings to a clinical encounter, and which must be integrated into shared clinical decisions if they are to serve the patient), and their individual clinical state and the clinical setting.

1.11 Explanations about how treatments work can be wrong
KC statement: Beliefs about how treatments work are not reliable predictors of the actual effects of treatments.
EBHC concept: Know the rationale and origin of EBP. Recognise the distinction between the pathophysiological and empirical approach of dealing with what is effective.
EBHC statement: The distinction between the pathophysiological vs. empirical approach of dealing with what is effective (e.g. Dr Spock's advice to put infants to sleep on their fronts to avoid choking on vomit [pathophysiological], while this led to avoidable cot deaths [empirical]).

1.12 Dramatic treatment effects are rare
KC statement: Large, dramatic effects of treatments are rare.
EBHC concept: Identify the preferred order of designs for each type of clinical question, including the pros and cons of the major designs.
EBHC statement: In this competence, the learner needs to identify the preferred order of designs for each type of clinical question (e.g. a treatment question is best answered by a systematic review of randomised controlled trials), including the pros and cons of the major designs, importantly those at the highest level of evidence (e.g. systematic reviews, RCTs). The learner also needs to recognise when RCTs are unnecessary, such as in the case of large, dramatic ("all or none") effects of an intervention.

2.01 Evaluating the effects of treatments requires appropriate comparisons
KC statement: Evaluating the effects of treatments requires appropriate comparisons.
EBHC concept: Understand the preferred order of designs for each type of clinical question, including the pros and cons of the major designs. Understand and be able to identify the major designs for each type of research question.
EBHC statement: In this concept, the learner needs to understand the preferred order of designs for each type of clinical question (e.g. a treatment question is best answered by a systematic review of randomised controlled trials), including the pros and cons of the major designs, importantly those at the highest level of evidence (e.g. systematic reviews, RCTs).

2.02 Comparison groups should be similar
KC statement: Apart from the treatments being compared, the comparison groups need to be similar (i.e. 'like needs to be compared with like').
EBHC concept: ...and applicability of health-related research. Understand the different types of bias.
EBHC statement: ...research, which requires an understanding of the different sources and types of bias (such as allocation bias).

2.02 Comparison groups should be similar
KC statement: Apart from the treatments being compared, the comparison groups need to be similar (i.e. 'like needs to be compared with like').
EBHC statement: ...randomisation).

2.03 People's outcomes should be analysed in their original groups
KC statement: People's outcomes should be counted in the group to which they were allocated.
EBHC statement: ...controlled trial), which requires being able to identify intention-to-treat analysis vs. per-protocol analysis.

2.04 Comparison groups should be treated equally
KC statement: People in the groups being compared need to be cared for similarly (apart from the treatments being compared).
EBHC statement: ...performance bias).

2.05 People should not know which treatment they get
KC statement: If possible, people should not know which of the treatments being compared they are receiving.
EBHC statement: ...allocation concealment and blinding).

2.06 People's outcomes should be assessed similarly
KC statement: Outcomes should be measured in the same way (fairly) in the treatment groups being compared.
EBHC concept: ...and applicability of health-related research. Understand the different types of bias.
EBHC statement: ...research, which requires an understanding of the different sources and types of bias (such as measurement and detection bias).

2.06 People's outcomes should be assessed similarly
KC statement: Outcomes should be measured in the same way (fairly) in the treatment groups being compared.
EBHC statement: ...randomisation and allocation concealment, blinding, loss to follow-up/attrition bias, and intention-to-treat analysis vs. per-protocol analysis (a worked sketch of the intention-to-treat vs. per-protocol distinction follows this block).

2.07 All should be followed up
KC statement: It is important to measure outcomes in everyone who was included in the treatment groups.
EBHC statement: ...loss to follow-up/attrition bias).

2.08 Consider all of the relevant fair comparisons
KC statement: The results of single comparisons of treatments...
EBHC concept: Critically appraise and interpret a systematic review. Know the difference between systematic reviews, meta-analyses, and non-systematic reviews.

2.08 Consider all of the relevant fair comparisons
KC statement: The results of single comparisons of treatments...
EBHC concept: Critically appraise and interpret a systematic review. Be able to identify and critically evaluate the key elements of a systematic review.
EBHC statement: ...appraise a systematic review, which requires being able to identify and assess the key elements of a systematic review, such as the search strategy, the appraisal and selection of studies, and the synthesis and summary of findings (including a Summary of Findings table), and how these elements differ from a traditional review.

2.09 Reviews of fair comparisons should be systematic
KC statement: Reviews of treatment comparisons that do not use systematic methods...
EBHC concept: Critically appraise and interpret a systematic review. Be able to identify and critically evaluate the key elements of a systematic review.
EBHC statement: ...appraise a systematic review, which requires being able to identify and assess the key elements of a systematic review, such as the search strategy, the appraisal and selection of studies, and the synthesis and summary of findings (including a Summary of Findings table), and how these elements differ from a traditional review.
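The intention-to-treat vs. per-protocol distinction referred to in entries 2.03 and 2.06 can be made concrete with a small numerical sketch. The patient counts and event rates below are invented purely for illustration; they are not taken from the mapping or from any study.

```python
# Hypothetical illustration of intention-to-treat (ITT) vs. per-protocol (PP) analysis.
# Each record is (allocated_group, adhered_to_allocated_treatment, had_outcome_event).
patients = (
    [("treatment", True, False)] * 70 + [("treatment", True, True)] * 10 +
    [("treatment", False, True)] * 15 + [("treatment", False, False)] * 5 +
    [("control", True, True)] * 20 + [("control", True, False)] * 80
)

def event_rate(records):
    """Proportion of records in which the outcome event occurred."""
    return sum(1 for _, _, event in records if event) / len(records)

itt_treated = [p for p in patients if p[0] == "treatment"]          # analysed as allocated
pp_treated = [p for p in patients if p[0] == "treatment" and p[1]]  # non-adherers excluded
control = [p for p in patients if p[0] == "control"]

print("ITT treatment event rate:", event_rate(itt_treated))  # 25/100 = 0.25
print("PP treatment event rate: ", event_rate(pp_treated))   # 10/80  = 0.125
print("Control event rate:      ", event_rate(control))      # 20/100 = 0.20
```

In this made-up example the per-protocol analysis makes the treatment look beneficial only because the (often sicker) non-adherers have been dropped; the intention-to-treat analysis keeps people in the groups to which they were allocated, which is the point of concept 2.03.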

2.10 All fair comparisons and outcomes should be reported
KC statement: Unpublished results of fair comparisons may result in biased estimates of treatment effects.
EBHC concept: ...and applicability of health-related research. Understand the different types of bias.
EBHC statement: ...research, which requires an understanding of the different sources and types of bias (such as selection bias, confounding, allocation bias, measurement and detection bias, and reporting and publication bias), and the impact of these biases and of uncertainty (random error) on estimates from studies.

2.11 Subgroup analyses may be misleading
KC statement: Results for a selected group of people within a systematic review of fair comparisons of treatments can be misleading.
EBHC concept: Interpret different types of measures of association and effect, including key graphical presentations. Recognise the use and limitations of subgroup and sensitivity analyses, and how they help the interpretation of results.
EBHC statement: Since subgroup and sensitivity analyses are commonly reported, their meaning and limitations should be known.

2.12 Relative measures of effects
KC statement: Relative effects of treatments alone...
EBHC concept: Explain the importance of the baseline risk of individual patients when estimating individual expected benefit.
EBHC statement: In this concept, the learner needs to understand the importance of the baseline risk of individual patients when estimating individual expected benefit, and its role in engaging patients in the decision-making process (e.g. balancing the benefits and harms of a treatment).

2.12 Relative measures of effects
KC statement: Relative effects of treatments alone...
EBHC concept: Interpret the results, including measures of effect.
EBHC statement: Interpreting the results requires being able to interpret the common measures of effect (such as odds ratio, relative risk reduction/increase, absolute risk difference, relative risk/risk ratio, hazard ratio, and NNT/NNH).

2.13 Average measures of effects
KC statement: Average differences between treatments...
EBHC concept: Explain the importance of the baseline risk of individual patients when estimating individual expected benefit.
EBHC statement: In this concept, the learner needs to understand the importance of the baseline risk of individual patients when estimating individual expected benefit (average measures of effect), and its role in engaging patients in the decision-making process (e.g. balancing the benefits and harms of a treatment).
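Entries 2.12 and 2.13 turn on the arithmetic relationship between relative effects, baseline risk, and absolute benefit. The sketch below works through that relationship with invented numbers; the function name and figures are illustrative only and are not part of the mapping.

```python
# Hypothetical worked example: the same relative effect gives very different
# absolute benefits depending on the patient's baseline risk.

def effect_measures(baseline_risk, relative_risk):
    """Common effect measures for a treatment vs. control comparison."""
    treated_risk = baseline_risk * relative_risk
    rrr = 1 - relative_risk                    # relative risk reduction
    arr = baseline_risk - treated_risk         # absolute risk reduction
    nnt = round(1 / arr) if arr > 0 else None  # number needed to treat
    return {"RR": relative_risk, "RRR": rrr, "ARR": round(arr, 4), "NNT": nnt}

# Same relative risk (0.75, i.e. a 25% relative risk reduction) in two patients:
print(effect_measures(baseline_risk=0.20, relative_risk=0.75))  # ARR 0.05   -> NNT 20
print(effect_measures(baseline_risk=0.01, relative_risk=0.75))  # ARR 0.0025 -> NNT 400
```

Reporting only the 25% relative risk reduction hides the fact that the expected absolute benefit is twenty times smaller for the low-risk patient, which is why relative effects alone, and average effects applied without regard to baseline risk, can mislead.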

2.14 Fair comparisons with few people or outcome events
KC statement: Small studies in which few outcome events occur are usually not informative and the results may be misleading.
EBHC concept: ...and applicability of health-related research. Interpret commonly used measures of uncertainty, such as confidence intervals and p-values.

2.14 Fair comparisons with few people or outcome events
KC statement: Small studies in which few outcome events occur are usually not informative and the results may be misleading.
EBHC concept: ...and applicability of health-related research. Understand the difference between random error and systematic error (bias).

2.15 Confidence intervals should be reported
KC statement: The use of p-values to indicate the probability of something having occurred by chance may be misleading; confidence intervals are more informative.
EBHC concept: Interpret the results, including measures of effect.
EBHC statement: Interpreting the results requires being able to interpret the common measures of effect (such as odds ratio, relative risk reduction/increase, absolute risk difference, relative risk/risk ratio, hazard ratio, and NNT/NNH) and measures of uncertainty (confidence intervals and p-values).

2.15 Confidence intervals should be reported
KC statement: The use of p-values to indicate the probability of something having occurred by chance may be misleading; confidence intervals are more informative.
EBHC concept: ...and applicability of health-related research. Interpret commonly used measures of uncertainty, such as confidence intervals and p-values.
EBHC statement: Knowledge of statistical calculations is not required, but the ability to interpret statistical results, such as confidence intervals and p-values, is essential.

2.16 Don't confuse "statistical significance" with "importance"
KC statement: Saying that a difference is statistically significant or that it is not statistically significant...
EBHC concept: Interpret different types of measures of association and effect, including key graphical presentations. Identify the difference between "statistical significance" and "importance", and between a lack of evidence of an effect and evidence of no effect.
EBHC statement: Understand the difference between statistical and clinical significance.
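Entries 2.14 and 2.15 concern random error and confidence intervals rather than any particular formula, but a small numerical sketch may help. It uses a simple normal-approximation (Wald) confidence interval for a risk difference; the trial sizes and event counts are invented for illustration.

```python
import math

def risk_difference_ci(events_treated, n_treated, events_control, n_control, z=1.96):
    """Risk difference (control minus treated) with an approximate 95% Wald CI."""
    p_t = events_treated / n_treated
    p_c = events_control / n_control
    rd = p_c - p_t
    se = math.sqrt(p_t * (1 - p_t) / n_treated + p_c * (1 - p_c) / n_control)
    return round(rd, 3), (round(rd - z * se, 3), round(rd + z * se, 3))

# Same observed risks (10% vs. 15%), very different sample sizes:
print(risk_difference_ci(4, 40, 6, 40))          # small trial: wide CI that crosses zero
print(risk_difference_ci(400, 4000, 600, 4000))  # large trial: narrow CI that excludes zero
```

The point estimate is identical in both cases; only the width of the confidence interval shows how much random error is in play, and whether the interval spans effects too small to matter clinically is a separate judgement from whether it excludes zero (entry 2.16).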

2.17 Don't confuse "no evidence" with "no effect"
KC statement: A lack of evidence is not the same as evidence of no difference.
EBHC concept: Interpret different types of measures of association and effect, including key graphical presentations. Identify the difference between "statistical significance" and "importance", and between a lack of evidence of an effect and evidence of no effect.

3.01 Do the outcomes measured matter to you?
KC statement: A systematic review of fair comparisons of treatments should measure outcomes that are important.
EBHC concept: Explain the importance of the baseline risk of individual patients when estimating individual expected benefit. Recognise the different types of outcome measures (surrogate vs. composite endpoint measures).
EBHC statement: In addition, learners need to know the different types of outcome measures and to identify those most important to patients (e.g. patient-related outcomes are more relevant to patients than surrogate outcomes).

3.02 Are you very different from the people studied?
KC statement: A systematic review of fair comparisons of treatments in animals or highly selected groups of people may not be relevant.
EBHC concept: ...and applicability of health-related research. Understand the different types of bias.
EBHC statement: ...research, which requires an understanding of the different sources and types of bias (such as selection bias).

3.02 Are you very different from the people studied?
KC statement: A systematic review of fair comparisons of treatments in animals or highly selected groups of people may not be relevant.
EBHC concept: Practice the 5 steps of EBP: Ask, Acquire, Appraise and Interpret, Apply, and Reflect.
EBHC statement: Critically appraising that evidence for its validity, impact, and applicability.

3.03 Are the treatments practical in your setting?
KC statement: The treatments evaluated in fair comparisons may not be relevant or applicable.
EBHC concept: Understand evidence-based practice (EBP), defined as the integration of the best research evidence with our clinical expertise and our patient's unique values and circumstances.
EBHC statement: Patient values and circumstances (i.e. the unique preferences, concerns and expectations each patient brings to a clinical encounter, and which must be integrated into shared clinical decisions if they are to serve the patient), and their individual clinical state and the clinical setting.

3.03 Are the treatments practical in your setting?
KC statement: The treatments evaluated in fair comparisons may not be relevant or applicable.
EBHC concept: ...and applicability of health-related research.
EBHC statement: ...research.

3.04 How certain is the evidence?
KC statement: Well-conducted systematic reviews often reveal a lack of relevant evidence, but they provide the best basis for making judgements about the certainty of the evidence.
EBHC concept: Critically appraise and interpret a systematic review.
EBHC statement: Such appraisal skills should be able to conclude with the level of evidence resulting from a systematic review, e.g. using Grading of Recommendations Assessment, Development and Evaluation (GRADE).

3.04 How certain is the evidence?
KC statement: Well-conducted systematic reviews often reveal a lack of relevant evidence, but they provide the best basis for making judgements about the certainty of the evidence.
EBHC concept: Interpret the grading of the certainty of evidence and the strength of recommendations in health care.
EBHC statement: In this concept, the learner needs to understand and consider the key factors that drive the direction and strength of recommendations, and their role in shared decision making (e.g. weak recommendations are usually sensitive to patients' values and preferences).

3.05 Do the advantages outweigh the disadvantages?
KC statement: Decisions about treatments should not be based on considering only their benefits.
EBHC concept: Explain the importance of the baseline risk of individual patients when estimating individual expected benefit.
EBHC statement: In this concept, the learner needs to understand the importance of the baseline risk of individual patients when estimating individual expected benefit, and its role in engaging patients in the decision-making process (e.g. balancing the benefits and harms of a treatment).

3.05 Do the advantages outweigh the disadvantages?
KC statement: Decisions about treatments should not be based on considering only their benefits.
EBHC concept: Interpret the grading of the certainty of evidence and the strength of recommendations in health care.
EBHC statement: In this concept, the learner needs to understand and consider the key factors that drive the direction and strength of recommendations, and their role in shared decision making (e.g. weak-evidence recommendations are usually sensitive to patients' values and preferences).
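Entry 3.05, together with the baseline-risk competency it maps to, amounts to a simple piece of arithmetic: the expected absolute benefit for an individual patient depends on their baseline risk and has to be set against the expected harm. The numbers below are hypothetical and only meant to show the shape of that trade-off.

```python
# Hypothetical benefit-harm trade-off for an individual patient.
def net_benefit(baseline_risk, relative_risk_reduction, harm_risk):
    """Expected absolute benefit minus expected absolute harm for one patient."""
    absolute_benefit = baseline_risk * relative_risk_reduction
    return {
        "expected_benefit": round(absolute_benefit, 4),
        "expected_harm": harm_risk,
        "net": round(absolute_benefit - harm_risk, 4),
    }

# Same treatment (25% relative risk reduction, 2% risk of a treatment harm),
# applied to a high-risk and a low-risk patient:
print(net_benefit(baseline_risk=0.20, relative_risk_reduction=0.25, harm_risk=0.02))  # net +0.03
print(net_benefit(baseline_risk=0.02, relative_risk_reduction=0.25, harm_risk=0.02))  # net -0.015
```

For the high-risk patient the advantages plausibly outweigh the disadvantages; for the low-risk patient, with the same relative effect, they do not, which is exactly the judgement the concept asks learners to make together with the patient.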