Power Analysis: A Crucial Step in any Social Science Study


Kuba Glazek, Ph.D., Methodology Expert
National Center for Academic and Dissertation Excellence, Los Angeles

Outline
- The four elements of a meaningful study: alpha (p value), power, effect size, and adequate sample size
- Techniques for obtaining an adequate sample size
- Design implications

Qualitative Studies
- "Power" corresponds to saturation: collect data until consistent, repetitive themes emerge
- Usually N = 12 is an upper limit (Baker & Edwards, 2012)

Alpha and Type I Error
- Corresponds to the p value (p < .05): the probability of a Type I error
- A Type I error is incorrectly rejecting a true null hypothesis, AKA a "false alarm"
- Observing an effect that isn't really there, due to sampling error
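The claim that alpha caps the false-alarm rate can be checked by simulation. The sketch below (standard-library Python only; the sample sizes and seed are arbitrary choices, not from the talk) draws both groups from the same population, so every "significant" result is a Type I error; with alpha = .05, roughly 5% of simulated studies should reject the null.

```python
import random
import statistics
from statistics import NormalDist

# Both groups come from the SAME population (sigma known = 1), so any
# "significant" difference is a false alarm. A two-sample z test is used
# so the example stays stdlib-only.
random.seed(42)
nd = NormalDist()
n_studies, n = 10_000, 30
alpha = 0.05

false_alarms = 0
for _ in range(n_studies):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]  # no real effect
    z = (statistics.fmean(a) - statistics.fmean(b)) / (2 / n) ** 0.5
    p = 2 * (1 - nd.cdf(abs(z)))  # two-tailed p value
    if p < alpha:
        false_alarms += 1

print(f"Observed Type I error rate: {false_alarms / n_studies:.3f}")  # close to .05
```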

Power and Type II Error
- Power = 1 − β, where β is the probability of committing a Type II error
- A Type II error is missing an effect that really is there (i.e., observing p > .05), AKA a "miss"
- Caused by a lack of power to detect the effect
- The less data collected, the less precise the estimate of any potential effect

Underpowered Study Consequences
- Cannot know whether the effect does not exist, OR the effect exists but the study was underpowered (i.e., not enough data to detect it)
- If the study is conducted anyway, participants are put in harm's way without hope of a valid conclusion
- A potentially effective treatment may be overlooked

Overview

                               State of the World
  Study Finding          Effect present          Effect absent
  Reject H0              Correct (power)         Type I error (false alarm)
  Retain H0              Type II error (miss)    Correct

Desired outcomes:
- Probability of a Type I error less than 5% (p < .05)
- Probability of a Type II error less than 20% (i.e., power = 1 − β > .80)
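The .80 power target in the table can be computed directly. A minimal sketch, assuming a two-tailed, two-sample design and using the common normal approximation to power (Phi(d·sqrt(n/2) − z_crit)); the numbers are illustrative, not from the talk:

```python
from statistics import NormalDist

def power_two_sample(d: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-tailed, two-sample test via the
    normal approximation: Phi(d * sqrt(n/2) - z_crit)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)        # 1.96 for alpha = .05
    noncentrality = d * (n_per_group / 2) ** 0.5
    return nd.cdf(noncentrality - z_crit)

# A medium effect (d = .50) with 64 per group lands right around the
# conventional .80 power target; doubling n raises power further.
print(round(power_two_sample(0.5, 64), 2))
print(round(power_two_sample(0.5, 128), 2))
```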

Oversampling
- The more data collected, the more precise the estimate of the effect, and the more power to detect it
- But a statistically significant effect may have no clinical significance
- May expose too many participants to potential danger

Effect Size
- Alpha only states whether an effect is significant; you also need to know the magnitude of the effect
- With enough data, a 1% improvement may be significant; with too little data, a 50% improvement may be non-significant
- Even if p is greater than .05, the effect size is still informative

Required Sample Size
Determined by three inputs:
- Chosen alpha
- Chosen power
- Expected effect size
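Given those three inputs, the required n per group follows from the standard normal-approximation formula n = 2·((z_{1−α/2} + z_{power}) / d)². A stdlib-only sketch (the loop values are illustrative; the approximation runs a participant or two below exact t-based tables such as Cohen's):

```python
import math
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Required sample size PER GROUP for a two-tailed, two-sample test,
    via n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2, rounded up."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Smaller expected effects demand much larger samples:
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: n = {n_per_group(d)} per group")
```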

Expected Effect Size?
- Review previous literature studying similar constructs
- Note the sample sizes used and the reported effect sizes:
  - Cohen's d or Hedges's g
  - Eta (η, η², η_p²)
  - Omega (ω, ω²)
  - Pearson's r
- The statistical tests used in previous literature do not have to be identical to those appropriate to your study's design

Expected Effect Size? (continued)
- If the effect size is not reported, it can be calculated from other reported values
- Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology. Covers:
  - Cohen's d to Pearson's r and vice versa
  - Cohen's d and Hedges's g from means and SDs
  - Pearson's r from Student's t and vice versa
- Article available at http://jpepsy.oxfordjournals.org/content/early/2009/02/16/jpepsy.jsp004.full.pdf+html
- d-to-r calculator (David Wilson): http://mason.gmu.edu/~dwilsonb/downloads/es_calculator.xls
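Two of those conversions are simple enough to compute by hand: d from group means and pooled SD, and the d ↔ r conversion for equal group sizes (r = d / √(d² + 4)). A sketch with made-up illustrative numbers (means 105 vs. 100, SD 10, n = 30 per group), not data from any study cited here:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d from group means and SDs (pooled-SD denominator)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def d_to_r(d):
    """Convert Cohen's d to Pearson's r (assumes equal group sizes)."""
    return d / math.sqrt(d**2 + 4)

def r_to_d(r):
    """Convert Pearson's r back to Cohen's d."""
    return 2 * r / math.sqrt(1 - r**2)

d = cohens_d(105, 100, 10, 10, 30, 30)
print(round(d, 2), round(d_to_r(d), 3))  # 0.5 0.243
```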

Expected Effect Size? (continued)
- If all else fails, obtain a range of sample sizes necessary for a range of effect sizes
- Only consider small to medium effects: large effects are obvious, and further scientific inquiry into them would be more redundant than useful
- Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159. Available via the EBSCO database.

Cohen (1992)
[Two slides showing tables from Cohen (1992); table content not transcribed]

G*Power
- Cohen (1992) gives generic examples; unique effect size estimates may require a unique power analysis
- http://www.gpower.hhu.de/en.html
- Allows determining effect sizes from various inputs (e.g., correlations, means, SDs)
- Can get technical; start with Cohen

Thank You! Questions?
- Presentation available on YouTube: https://www.youtube.com/channel/uchcvwpnugsxf85vbqfseljq
- http://bit.ly/1ge1xsd, or just search for NCADE
- Email: ncade.me@thechicagoschool.edu
- Office: Los Angeles campus, 825-B
- Phone: (213) 615-7290