Evidence-Informed Practice Online Learning Module Glossary

Abstract
An abstract is a summary of a research article. It usually includes the purpose, methods, results, and conclusions. The abstract provides information to help you decide whether the entire document is worth taking the time to read.

Association
An association indicates a relationship between two variables. Note that association is not the same as causality: just because two variables are associated does not mean that one is a direct consequence of the other.

Attrition Bias
Attrition bias occurs when participants drop out of a study for a systematic reason. This can affect the results in two ways. First, the remaining sample may no longer represent the larger population, which makes the results less generalizable. Second, in an experimental trial, if the drop-out rate is higher in one treatment group than another, the results of the study may no longer be due to the treatment patients received, but rather to the reason they dropped out.

Background and Significance
Background and significance is generally the first section of a research article. It tells you why the study was important to do and provides background information about the condition and treatments being studied. It is sometimes called the Introduction.

Baseline
A baseline documents the clinical characteristics of participants before the study intervention (for example, their typical pain rating or disability).

Basic Science
Basic science research provides foundational understanding of organisms and diseases. It can provide information necessary to develop new tests and treatments. Usually, results of basic science research cannot be immediately applied to patients; the results can suggest that tests and treatments may work, or are plausible.

Bias
Bias is a tendency or preference towards a particular perspective or result. It interferes with the ability to be impartial or objective.

Blinding
Blinding is also known as masking. With blinding, one or more parties in the study are kept unaware of the experimental condition or intervention. These parties can be the person measuring patient outcomes, the patient, or the treatment provider. If two of these parties are blinded, it is a double-blinded trial; if three, it is a triple-blinded trial. Generally speaking, the more blinding that occurs in a study, the greater the chance that bias has been minimized and that the observed results are true.

Case-Control Study
In case-control studies, researchers identify individuals (cases) who already have the outcome of interest. Researchers then choose "outcome-free" individuals (controls) who are similar to the cases in terms of characteristics that may affect the outcome (for example, gender and age). Cases are then compared to controls to see if there are differences in exposure.

Case Report
A case report is a detailed description of a unique case in clinical practice.

Case Series
A case series (also known as a clinical series) is a type of observational study that measures exposure and effects in a series of individuals presenting to a clinic or other setting. These studies usually do not include a control group. They can be retrospective or prospective.

Clinical Equipoise
Clinical equipoise provides the ethical basis for assigning patients to different treatments or diagnostic tests. It means the researcher must have genuine uncertainty as to whether a treatment or test is beneficial.

Clinical Experience
Clinical experience represents the collective experience gained from caring for all of the patients clinicians see. It is an ongoing, iterative process involving professional collaboration, assessment and treatment of patients, and critical reflection. It includes the wisdom gained from correct decisions and mistakes, successes and failures.

Clinical Importance
Clinical importance indicates the significance of a result to a stakeholder.

Clinical Jazz
Clinical jazz is the art of making clinical decisions based on all types of "evidence." This includes clinical experience, patient preferences, and research.

Clinical Research
Clinical research is scientific investigation aimed at generating new knowledge to help diagnose, treat, and prevent disease. It involves the systematic study of tests and treatments to determine their safety and effectiveness. Clinical research is performed on human subjects.

Clinical Series
See Case Series.

Cohort Study
Cohort studies start with a group of people who share common, defining characteristics (for example, gender, year of birth, and geographic location). In contrast to a case-control study, participants have NOT yet developed the outcome of interest. Researchers follow the cohort for long periods of time and then categorize participants into different levels of exposure, which may be associated with a disease outcome.

Concealed Allocation
Concealed allocation is the measure taken to prevent scientists and their staff from predicting which treatment a study participant will get. It helps prevent bias in a study because it helps ensure participants in all groups are similar, except for the treatment they will receive.

Confidence Interval
A confidence interval is a range of values around a study result that indicates how certain we can be about the true value. For example, a 95% confidence interval indicates that we can be 95% confident that the true value falls within the range of values given. (A worked sketch follows this group of entries.)

Cross-Sectional Study
A cross-sectional study is a snapshot of a specific population's health and behaviors at one point in time. Surveys are a type of cross-sectional study.

Demographic Data
Demographic data are the characteristics of a population (such as age, race, and gender).

Disability
Disability is the degree of functional impairment experienced by a person. It can affect how a person participates in society, including work, family, and social life.

Discussion and Conclusions
The discussion and conclusion sections of a research article help put the results of the study into clinical perspective. Information about the strengths and limitations of the study is usually found in the discussion.

Double Blinding (Double-Blinded)
Double blinding means two parties in the study are unaware of the patient's treatment assignment. Usually, this is the person performing the outcomes assessment plus either the treatment provider or the patient.

DOEs
DOE stands for "disease-oriented evidence." DOEs measure physiological outcomes (such as blood chemistry and x-rays). They do not directly measure how a patient feels or experiences disease.

Effect
An effect is also known as an outcome. Effects include events such as the presence of disease, mortality, pain levels, lab results, and so on.
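
To make the confidence interval entry concrete, here is a minimal Python sketch, with invented pain-rating data, of how a 95% confidence interval for a sample mean might be computed using the common normal (z) approximation. It is an illustration only, not part of the module.

    # Hypothetical data: pain ratings from eight participants.
    import math

    pain_scores = [4.0, 5.5, 3.0, 6.0, 4.5, 5.0, 3.5, 4.0]
    n = len(pain_scores)
    mean = sum(pain_scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in pain_scores) / (n - 1))
    se = sd / math.sqrt(n)                          # standard error of the mean
    low, high = mean - 1.96 * se, mean + 1.96 * se  # 95% CI (z approximation)
    print(f"mean = {mean:.2f}, 95% CI = ({low:.2f}, {high:.2f})")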

Empirical
A central concept in research, empirical evidence is evidence that is observable by the senses; it can be measured.

Epidemiology
Epidemiology is the study of health and disease in human populations.

Evidence House
The evidence house views evidence more broadly than the evidence pyramid. The evidence house model acknowledges that different types of research methods are needed to answer different types of questions.

Evidence Pyramid
The evidence pyramid helps guide clinicians to the highest-quality research evidence for clinical practice. According to this model, the higher up you go in the pyramid, the better, or more trustworthy, the evidence.

Experimental Research
In experimental research, researchers manipulate or control what happens. Experimental studies are distinguished from observational studies, in which researchers do not alter or control what happens to subjects.

Experimenter Bias
Experimenter bias introduces subjectivity into a research study and can influence the study results. Two of the most common types of experimenter bias are observation bias and measurement bias.

Exposure
Exposure is anything one might consider to be a potentially important cause of an effect. This includes a wide range of things, such as exposure to chemicals, viruses, occupational hazards, treatments, and so on.

Fatal Flaws
Fatal flaws are important deficiencies in a study that cause us to question its credibility (or validity).

Filtered Resource
In a filtered research resource, the authors have critically appraised the original research. When this appraisal is done well, filtered resources are a valid form of evidence.

Generalizability
Generalizability is the ability to apply a study's results to other patients and clinical settings.

Guidelines
Guidelines are a type of summary research. Guidelines usually attempt to find all of the available studies on a broad topic (such as low back pain). They can address a range of information, including condition prevalence, diagnostic tests and procedures, and treatments. They usually critically appraise the studies included.

Health Services Research
Health services research studies how social factors, financing systems, organizational processes, and health technologies affect the access, quality, and cost of healthcare. It looks at impacts at the individual, organizational, community, and population levels.

Hierarchy of Evidence
See Evidence Pyramid.

Hypothesis
A hypothesis is a possible explanation for an observation that still requires further testing. Research studies are designed to test a hypothesis.

Inclusion and Exclusion Criteria
Inclusion and exclusion criteria are the conditions that must be met for a person to participate in a study.

Information Mastery
Information mastery is a practical way to efficiently find current, high-quality answers to clinical questions and keep up to date on relevant research. It focuses on relevant and valid resources that require as little work as possible for the clinician.

Improvement
Improvement is the degree to which a patient feels their health or condition has changed for the better.

Internal Validity
Internal validity is the indicator of a study's ability to measure what is intended. In other words, was the study design rigorous enough to have confidence that the researchers actually found what they think they found? Internal validity answers the questions: How well was the study done? Do the results reflect the truth?

Longitudinal Study
Longitudinal studies observe individuals at regular intervals over a relatively long period of time.

Masked
See Blinding.

Measurement Bias
Measurement bias is a type of systematic error that typically favors a particular result. A measurement process is biased if it consistently overstates or understates the true value of the measurement.

Meta-Analysis
A meta-analysis is a type of systematic review that uses statistical methods to combine the results of several original studies. Like all systematic reviews, meta-analyses attempt to find all available studies on a specific topic and critically appraise them. They commonly answer one question, such as "Is treatment X effective for condition Y?"

Methods
The methods section of a research article typically tells you what the researchers did, how they did it, who did it to whom, and when. It includes detailed descriptions of the patient population, treatment and comparison groups, outcome measures, sample size, and statistical analysis.

Narrative Reviews
Narrative reviews generally address specific questions or broad topics. Unlike other types of summary research (such as systematic reviews), narrative reviews do not include all available studies. They also tend to be unfiltered; that is, they do not critically appraise the studies they include. This makes them susceptible to bias.

Numbers Needed to Treat
Numbers needed to treat (NNT) is a useful statistic for clinicians. The NNT tells us how many people we need to treat with a therapy for one additional person to benefit. (A worked example follows this group of entries.)

Observation Bias
Observation bias occurs when the researcher's expectations affect study measurements.

Observational Research
Observational research observes and measures people in real-world settings. It is distinguished from experimental research, where researchers control or manipulate what is done to people and then examine the effects. Observational research answers important questions about the causes of health and disease in the real world.

Original Research
Original research refers to individual research studies. These include types of clinical research (e.g., randomized clinical trials) and observational research (e.g., cohort studies and case-control studies).

Outcome Measure
An outcome measure is an instrument or tool that allows you to assess an aspect of a patient's health state.

Peer-Reviewed
Peer-reviewed articles are ones that have been subjected to evaluation by experts, who are supposed to be qualified and able to provide an unbiased review. The process is intended to ensure that published articles are of good quality. However, just because an article has been peer reviewed does not guarantee that the results are valid.

Placebo (Sham)
A placebo is a treatment approach that is designed to have no positive physiological effects.
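
A worked illustration of numbers needed to treat (the percentages are invented for this sketch): NNT is the reciprocal of the absolute risk reduction (ARR), the difference in event rates between groups.

    If 20% of control patients and 15% of treated patients have the bad outcome:
    ARR = 0.20 - 0.15 = 0.05
    NNT = 1 / ARR = 1 / 0.05 = 20

That is, about twenty people would need to be treated for one additional person to benefit.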

POEMs
POEM stands for "patient-oriented evidence that matters." POEMs are outcomes that are meaningful to patients and affect how they feel and experience disease. Examples include symptoms (such as pain), disability, and quality of life.

Power
Power is the ability of a study to find a difference between groups if a difference really exists. Power depends on the number of patients and the magnitude of the difference.

Prospective
In research, prospective means starting with a group of individuals and collecting data as events occur. Prospective research is less prone to measurement bias than retrospective research, which makes prospective studies more believable than retrospective studies.

P-Value
A p-value is the probability that a particular result might have happened by chance. For example, a p-value of less than 0.05 means there is less than a 5% likelihood that a study's results are due to chance.

Qualitative Research
Qualitative research aims to gain an in-depth understanding of people's perceptions, emotions, and behaviors. Data collection methods include interviews, focus groups, and observations. This is in contrast to quantitative research, which uses more structured methods, such as surveys and instruments that produce a numerical value.

Quality of Life
Quality of life is the overall sense of well-being experienced by a patient, both physically and mentally.

Quantitative Research
Quantitative research uses structured methods to describe an observation or relationship in numerical terms. It is based on the assumption that reality can and should be measured. Quantitative research uses standardized data collection methods, including documentation of observations, questionnaires, laboratory tests, imaging, and others that can be coded and numerated. Most basic science and clinical research is quantitative.

Quantitative Data
Quantitative data use numbers to describe relationships.

Random Assignment
Random assignment is the same as randomization: participants are assigned to treatment groups according to chance (much like flipping a coin).

Randomization
Randomization is the process by which study participants are assigned by chance to treatment groups (much like flipping a coin). Today, random allocation is typically generated by a computer. Randomization is a very effective way of preventing bias in research. (A minimal sketch follows this group of entries.)
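
To illustrate randomization, here is a minimal Python sketch, with invented participant IDs, of how a computer might randomly assign participants 1:1 to two groups. It is a simplified illustration, not the allocation method of any particular trial.

    # Hypothetical participant IDs; random.shuffle produces a chance ordering,
    # much like flipping a coin for each participant.
    import random

    participants = ["P01", "P02", "P03", "P04", "P05", "P06"]
    random.shuffle(participants)
    half = len(participants) // 2
    treatment_group = participants[:half]
    control_group = participants[half:]
    print("Treatment:", treatment_group)
    print("Control:", control_group)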

Randomized Clinical Trial
Randomized clinical trials are designed to evaluate the effectiveness of different treatments in humans. They are characterized by the method used (randomization) to assign participants to treatment groups.

Randomized Controlled Trials
Randomized controlled trials usually have a placebo or sham treatment as a comparison group.

Relative Risk
Relative risk compares the probability of either harm or benefit of one treatment as compared with another. For example, a relative risk of 0.7 tells us that the likelihood of something happening is 30% lower in one group versus the other. (A worked example follows this group of entries.)

Relevance
The term "relevance" in Information Mastery refers to the degree to which research information is important to you and your patients. It helps you assess whether a research article is worth taking the time to read. It is part of the usefulness equation (usefulness = relevance x validity / work): the most useful information is highly relevant and valid, and requires little work.

Research
Research is the systematic investigation of a subject for the purpose of discovering or revising facts, theories, or applications. It relies on gathering observable, empirical, and measurable information, which is then analyzed to draw conclusions.

Results
The results section of a research article describes the outcomes of the study, usually in numerical terms. There are often tables that show the number of people who completed the study (or were "followed up"), patient characteristics (e.g., age and gender), outcome measurements, and p-values.

Retrospective
Retrospective research collects data on individuals about events that have already occurred. It is more prone to measurement bias than prospective research.

Reverse Gullibility
Reverse gullibility is the inability to believe something even when the evidence is strong. Clinicians sometimes find that their training and experience bias their assumptions about new information and lead to reverse gullibility.

Sampling Bias
Sampling bias occurs when there are systematic differences between the groups being compared. This can occur in studies that allow providers or participants to select which treatment is received.
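
A worked illustration of relative risk, using invented event rates chosen to match the 0.7 example above:

    RR = event rate in one group / event rate in the comparison group
    If 7% of treated patients and 10% of comparison patients have the outcome:
    RR = 0.07 / 0.10 = 0.7

In other words, the outcome is 30% less likely in the treated group than in the comparison group.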

Statistically Significant
In healthcare research, studies that have a p-value of 0.05 or less are considered statistically significant. In other words, because there is a 5% or lower risk that the results were due to chance, the results are accepted as true and significant.

Single Blinding
Single blinding means that one party in the study is masked to the patient's treatment status. Usually, in a single-blinded study, it is the person performing the outcomes assessment who is blinded. If it is not possible to blind this person, it is important that steps be taken to minimize any influence on the measurement of outcomes.

Summary Research
Summary research synthesizes all of the studies on a given topic and critically appraises them. Examples include systematic reviews, meta-analyses, and guidelines. When done well, summary research is the most useful type of research evidence for clinicians to use.

Systematic Review
A systematic review is a summary and filtered resource. It locates all the available studies about treatment for a particular condition and critically appraises them. This provides a summary of the existing research evidence regarding a treatment's effectiveness.

Systematic Search Strategies
Systematic search strategies are the search methods used to locate the original articles in scientific databases; they should be reproducible by others.

Triple Blinding
Triple blinding means that three parties in the study are kept unaware of the patients' treatment assignment. These parties are usually the person performing the outcomes assessment, the treatment provider, and the patient.

Unfiltered Resources
Unfiltered resources have NOT been critically appraised. This means they may have limitations or flaws that make the results less valid.

Unsystematic
Unsystematic describes an unstructured approach, which usually cannot be reproduced by others (that is, if someone else attempted to use the same methods, they would likely get different results). This type of approach is susceptible to bias.

Usefulness Equation
The usefulness equation is an efficient way to decide which research information to use. The best or most useful research information can be found by applying this equation: usefulness = relevance x validity / work. The most useful information is highly relevant and valid, and requires little work. (A hypothetical illustration follows this group of entries.)
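
A hypothetical illustration of the usefulness equation (the scores are invented for this sketch). Rating each factor from 0 to 1:

    usefulness = relevance x validity / work
    Pre-appraised synopsis:      0.9 x 0.9 / 0.1 = 8.1
    Full original trial report:  0.9 x 0.9 / 0.9 = 0.9

Even though relevance and validity are identical, the synopsis is far more useful because it requires much less work.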

Validity
Validity refers to whether or not information is credible or trustworthy and can be believed.

Work
The term "work" in Information Mastery refers to the amount of effort it takes for a clinician to find and use research information. It is part of the usefulness equation (usefulness = relevance x validity / work): the most useful information is highly relevant and valid, and requires little work.

YODA
YODA stands for "your own data analyzer."

YUCK
YUCK stands for "your unsubstantiated know-it-all."