Experimental design. Basic principles


This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike License. Your use of this material constitutes acceptance of that license and the conditions of use of materials on this site. Copyright 2006, The Johns Hopkins University and Karl Broman. All rights reserved. Use of these materials permitted only in accordance with license rights granted. Materials provided "AS IS"; no representations or warranties provided. User assumes all responsibility for use, and all liability related thereto, and must independently review all materials for accuracy and efficacy. May contain materials owned by others. User is responsible for obtaining permissions for use from third parties as needed.

Experimental design

Note: You can see this plus a bit more online at:
http://oac.med.jhmi.edu/humane_site/topics/12a.html

Basic principles
1. Formulate question/goal in advance
2. Comparison/control
3. Replication
4. Randomization
5. Stratification (aka blocking)
6. Factorial experiments

Example
Question: Does salted drinking water affect blood pressure (BP) in mice?
Experiment:
1. Provide a mouse with water containing 1% NaCl.
2. Wait 14 days.
3. Measure BP.

Comparison/control
Good experiments are comparative:
- Compare BP in mice fed salt water to BP in mice fed plain water.
- Compare BP in strain A mice fed salt water to BP in strain B mice fed salt water.
Ideally, the experimental group is compared to concurrent controls (rather than to historical controls).
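To make the comparative principle concrete, here is a minimal sketch in Python; the BP values, group sizes, and variable names are invented for illustration and are not from the slides.

    # Minimal sketch: why a concurrent control group matters.
    # All BP values (mmHg) are made up for illustration.
    salt_water = [118, 125, 121, 130, 127, 123]   # mice given 1% NaCl water
    plain_water = [111, 115, 109, 117, 113, 112]  # concurrent controls on plain water

    def mean(xs):
        return sum(xs) / len(xs)

    # Single-group design: a mean of 124 mmHg by itself is hard to interpret,
    # because there is nothing to compare it to.
    print("Salt-water group mean BP:", round(mean(salt_water), 1))

    # Comparative design: the treatment effect is estimated as a difference
    # between concurrently run groups.
    effect = mean(salt_water) - mean(plain_water)
    print("Estimated effect of salt water:", round(effect, 1), "mmHg")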

Replication

Why replicate?
- Reduce the effect of uncontrolled variation (i.e., increase precision).
- Quantify uncertainty.
A related point: an estimate is of no value without some statement of the uncertainty in the estimate.
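Replication is what allows that statement of uncertainty. A minimal sketch (Python, with invented replicate measurements) of attaching a standard error to a group mean:

    import math

    # Invented replicate BP measurements (mmHg) for one treatment group.
    bp = [118, 125, 121, 130, 127, 123]

    n = len(bp)
    mean_bp = sum(bp) / n
    # Sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((x - mean_bp) ** 2 for x in bp) / (n - 1))
    # The standard error of the mean shrinks as the number of replicates grows.
    se = sd / math.sqrt(n)

    print(f"mean = {mean_bp:.1f} mmHg, SE = {se:.1f} mmHg (n = {n})")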

Randomization
Experimental subjects ("units") should be assigned to treatment groups at random.
"At random" does not mean haphazardly. One needs to explicitly randomize, using
- a computer, or
- coins, dice, or cards.

Why randomize?
- Avoid bias. For example: the first six mice you grab may have intrinsically higher BP.
- Control the role of chance. Randomization allows the later use of probability theory, and so gives a solid foundation for statistical analysis.
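In practice, explicit randomization with a computer can be as simple as the following sketch (Python's standard random module; the mouse IDs and seed are placeholders):

    import random

    # Hypothetical IDs for 12 mice; in practice, use your actual animal IDs.
    mice = [f"mouse_{i:02d}" for i in range(1, 13)]

    random.seed(20240101)   # fix the seed so the assignment is reproducible
    random.shuffle(mice)    # explicit randomization, not "grab the first six"

    # First half to salt water, second half to plain water.
    salt_group = mice[:6]
    plain_group = mice[6:]

    print("Salt water: ", salt_group)
    print("Plain water:", plain_group)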

Stratification
Suppose that some BP measurements will be made in the morning and some in the afternoon.
If you anticipate a difference between morning and afternoon measurements:
- Ensure that, within each period, there are equal numbers of subjects in each treatment group.
- Take account of the difference between periods in your analysis.
This is sometimes called blocking.

Example
- 20 male mice and 20 female mice.
- Half to be treated; the other half left untreated.
- Can only work with 4 mice per day.
Question: How to assign individuals to treatment groups and to days?

An extremely bad design
(figure not shown)

Randomized
(figure not shown)

A stratified design
(figure not shown)

Randomization and stratification
- If you can (and want to), fix a variable.
  e.g., use only 8-week-old male mice from a single strain.
- If you don't fix a variable, stratify it.
  e.g., use both 8-week-old and 12-week-old male mice, and stratify with respect to age.
- If you can neither fix nor stratify a variable, randomize it.
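One concrete stratified assignment for the 20-mice-per-sex example above can be generated in code. This sketch (Python, hypothetical animal IDs) randomizes treatment separately within each sex and spreads both sexes and both treatments evenly over the 10 working days, so that neither sex nor day is confounded with treatment:

    import random

    random.seed(1)  # reproducible assignment

    # Hypothetical animal IDs: 20 males and 20 females.
    males = [f"M{i:02d}" for i in range(1, 21)]
    females = [f"F{i:02d}" for i in range(1, 21)]

    assignment = []  # (mouse, sex, treatment, day)
    for sex, mice in [("male", males), ("female", females)]:
        random.shuffle(mice)
        # Within each sex (stratum): half treated, half untreated.
        treated, untreated = mice[:10], mice[10:]
        # One treated and one untreated mouse of each sex per day,
        # i.e., 4 mice per day for 10 days.
        for day in range(1, 11):
            assignment.append((treated[day - 1], sex, "treated", day))
            assignment.append((untreated[day - 1], sex, "untreated", day))

    for mouse, sex, trt, day in sorted(assignment, key=lambda r: r[3]):
        print(f"day {day:2d}: {mouse} ({sex}, {trt})")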

Factorial experiments
Suppose we are interested in the effect of both salt water and a high-fat diet on blood pressure.
Ideally: look at all 4 treatments in one experiment.
- plain water + normal diet
- plain water + high-fat diet
- salt water + normal diet
- salt water + high-fat diet
Why?
- We can learn more.
- More efficient than doing all single-factor experiments.

Interactions
(figure not shown)
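The sketch below (Python; treatment labels from the slide, but the mouse IDs and cell means are invented) lays out the 2 x 2 factorial assignment and shows how an interaction appears as a difference of differences between cell means:

    import itertools
    import random

    random.seed(2)

    # The two factors and their levels.
    water = ["plain water", "salt water"]
    diet = ["normal diet", "high-fat diet"]

    # All 4 treatment combinations of the 2 x 2 factorial design.
    treatments = list(itertools.product(water, diet))

    # Hypothetical pool of 16 mice, randomized 4 per treatment combination.
    mice = [f"mouse_{i:02d}" for i in range(1, 17)]
    random.shuffle(mice)
    groups = {trt: mice[4 * k:4 * (k + 1)] for k, trt in enumerate(treatments)}
    for trt, members in groups.items():
        print(trt, members)

    # Invented cell means (mmHg) illustrating an interaction:
    # the salt-water effect is larger under the high-fat diet.
    cell_mean = {
        ("plain water", "normal diet"): 110,
        ("salt water", "normal diet"): 118,    # salt effect = +8 on normal diet
        ("plain water", "high-fat diet"): 115,
        ("salt water", "high-fat diet"): 131,  # salt effect = +16 on high-fat diet
    }
    interaction = ((cell_mean[("salt water", "high-fat diet")]
                    - cell_mean[("plain water", "high-fat diet")])
                   - (cell_mean[("salt water", "normal diet")]
                      - cell_mean[("plain water", "normal diet")]))
    print("Interaction (difference of salt-water effects across diets):", interaction, "mmHg")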

Other points

Blinding
- Measurements made by people can be influenced by unconscious biases.
- Ideally, dissections and measurements should be made without knowledge of the treatment applied.

Internal controls
- It can be useful to use the subjects themselves as their own controls (e.g., consider the response after vs. before treatment).
- Why? Increased precision (see the sketch below).

Representativeness
- Are the subjects/tissues you are studying really representative of the population you want to study?
- Ideally, your study material is a random sample from the population of interest.
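The precision gain from internal controls can be seen numerically. In this sketch (Python, invented before/after readings on the same mice), the paired differences remove mouse-to-mouse variation, so their standard error is much smaller than that of an unpaired comparison of the two sets of readings:

    import math

    # Invented BP readings (mmHg) on the same 6 mice, before and after treatment.
    before = [108, 131, 115, 124, 119, 140]
    after = [114, 139, 120, 131, 124, 149]

    def mean(xs):
        return sum(xs) / len(xs)

    def sd(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    n = len(before)

    # Unpaired view: treat 'before' and 'after' as two independent groups.
    se_unpaired = math.sqrt(sd(before) ** 2 / n + sd(after) ** 2 / n)

    # Internal-control (paired) view: each mouse serves as its own control.
    diffs = [a - b for a, b in zip(after, before)]
    se_paired = sd(diffs) / math.sqrt(n)

    print(f"Estimated effect: {mean(diffs):.1f} mmHg")
    print(f"SE, analyzed as two independent groups: {se_unpaired:.1f} mmHg")
    print(f"SE, each mouse as its own control:      {se_paired:.1f} mmHg")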

Summary
Characteristics of good experiments:

Unbiased
- Randomization
- Blinding

High precision
- Uniform material
- Replication
- Stratification

Simple
- Protect against mistakes

Wide range of applicability
- Deliberate variation
- Factorial designs

Able to estimate uncertainty
- Replication
- Randomization

Salk vaccine trial
- 1916: first polio epidemic in the US
- Next 40 years: hundreds of thousands of victims
- By the 1950s: several vaccines developed; that by Jonas Salk appears most promising
- 1954: the Public Health Service and the National Foundation for Infantile Paralysis (NFIP) ready to test the Salk vaccine in a field trial
See Freedman, Pisani, Purves (1998) Statistics, 3rd ed., Ch 1-2.

Possible designs for the vaccine trial
1. Give the vaccine to many children and look at the rate vs. the previous year.
2. Compare those vaccinated to those whose parents refused vaccination.
3. Vaccinate grade 2 children (with consent) and compare to grades 1 and 3. [This is what the NFIP chose to do.]
4. Vaccinate some portion (chosen at random) of those whose parents consent.
Best study: a double-blind, randomized, placebo-controlled trial.

Results of the 1954 Salk vaccine trial

The randomized controlled double-blind experiment:
                          Size      Rate
  Treatment               200,000   28
  Control                 200,000   71
  No consent              350,000   46

The NFIP study:
                          Size      Rate
  Grade 2 (vaccine)       225,000   25
  Grades 1 & 3 (control)  725,000   54
  Grade 2 (no consent)    125,000   44

Note: rates are cases per 100,000.

Points
- NFIP study: the vaccine appears to lower the rate, 54 vs. 25 (compared with 71 vs. 28 in the randomized trial).
- The NFIP control group included children whose parents would not have consented.
- Might the vaccine have no effect? (Could the observed differences be simply chance variation?)
- In the randomized controlled trial, it is relatively simple to answer this question, because the role of chance was determined by our design.
- In the NFIP study, it is impossible to tell, because chance was not under our control.
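To see why the randomized design makes the chance question tractable, here is a minimal sketch (Python) that converts the reported rates back to approximate case counts and computes a two-proportion z statistic for treatment vs. control; the choice of test is mine, for illustration, and is not from the slides:

    import math

    # Rates are cases per 100,000 (from the table above).
    n_treat, rate_treat = 200_000, 28
    n_ctrl, rate_ctrl = 200_000, 71

    # Approximate case counts implied by the rates.
    cases_treat = rate_treat * n_treat // 100_000   # 56
    cases_ctrl = rate_ctrl * n_ctrl // 100_000      # 142

    p1 = cases_treat / n_treat
    p2 = cases_ctrl / n_ctrl
    p_pooled = (cases_treat + cases_ctrl) / (n_treat + n_ctrl)

    # Two-proportion z statistic (normal approximation).
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_treat + 1 / n_ctrl))
    z = (p2 - p1) / se

    print(f"Rate ratio (control / treatment): {rate_ctrl / rate_treat:.1f}")
    print(f"z = {z:.1f}")  # far beyond ~2, so very unlikely to be chance alone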

The portacaval shunt
A long, hazardous surgery to treat cirrhosis of the liver. Do the benefits outweigh the risks? Over 50 studies have considered this.

                                  Degree of enthusiasm
  Design                          Marked   Moderate   None
  No controls                     24       7          1
  Controls, but not randomized    10       3          2
  Randomized controlled           0        1          3

In the studies where the controls were not chosen at random, sicker patients were chosen as controls.

Historical controls
Historical controls: patients treated the old way in the past.
Problem: the treatment group and the historical control group may differ in important ways besides the treatment.

                            Randomized controlled    Historically controlled
                            +         -              +         -
  Coronary bypass surgery   1         7              16        5
  5-FU                      1         7              2         0
  BCG                       2         2              4         0
  DES                       0         3              5         0