Power Analysis: A Crucial Step in Any Social Science Study
Kuba Glazek, Ph.D.
Methodology Expert, National Center for Academic and Dissertation Excellence, Los Angeles
Outline
- The four elements of a meaningful study: alpha (p value), power, effect size, adequate sample size
- Techniques for obtaining an adequate sample size
- Design implications
Qualitative Studies
- "Power" here means saturation: collect data until consistent/repetitive themes emerge
- Usually N = 12 is an upper limit (Baker & Edwards, 2012)
Alpha and Type I Error
- Corresponds to the p value (p < .05)
- The probability of a type I error: incorrectly rejecting a true null hypothesis, AKA a "false alarm"
- Observing an effect that isn't really there, due to sampling error
Power and Type II Error
- Power = 1 − β, where β is the probability of committing a type II error
- A type II error means missing an effect that really is there (i.e., observing p > .05), AKA a "miss"
- Caused by a lack of power to detect the effect
- The less data collected, the less precise the estimate of any potential effect
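The relationship between sample size and power can be sketched numerically. The function below is a minimal illustration using the common normal approximation for a two-sided, two-sample t-test (the function name and example values are mine, not from the presentation; exact answers from G*Power or statsmodels will differ slightly):

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample t-test,
    via the normal approximation: Phi(d*sqrt(n/2) - z_crit)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # two-sided critical value (~1.96)
    noncentrality = d * sqrt(n_per_group / 2)  # expected shift of the test statistic
    return z.cdf(noncentrality - z_crit)

# Power to detect a medium effect (d = 0.5) grows with per-group n:
for n in (20, 40, 64, 100):
    print(n, round(approx_power(0.5, n), 2))
```

With d = 0.5, power is well under .50 at n = 20 per group but reaches roughly .80 near n = 64, illustrating the slide's point: less data, less power.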
Underpowered Study: Consequences
- Cannot know whether the effect does not exist, OR the effect exists but the study was underpowered (i.e., there was not enough data to detect it)
- If the study is conducted anyway, participants are put in harm's way without hope of a valid conclusion
- A potentially effective treatment may be overlooked
Overview

                          State of the World
  Study Finding       Effect present          Effect absent
  Reject H0           Correct (hit)           Type I error (false alarm)
  Retain H0           Type II error (miss)    Correct retention

Desired outcomes:
- Probability of a type I error less than 5% (α < .05)
- Probability of a type II error less than 20% (β < .20, i.e., power 1 − β > .80)
Oversampling
- The more data collected, the more precise the estimate of the effect, and the more power to detect it
- But a statistically significant effect may have no clinical significance
- May expose too many participants to potential danger
Effect Size
- Alpha only states whether an effect is significant; we need to know the magnitude of the effect as well
- With enough data, a 1% improvement may be significant; with too little data, a 50% improvement may be non-significant
- Even if p is greater than .05, the effect size is still informative
Required Sample Size
Determined by three inputs:
- Chosen alpha
- Chosen power
- Expected effect size
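These three inputs can be turned into a required per-group n with a short calculation. This sketch uses the standard normal-approximation formula n = 2·((z_{α/2} + z_{power}) / d)² for a two-sided, two-sample t-test; it runs a unit or two below the exact t-test values in Cohen's (1992) tables or G*Power (the function name is mine):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    t-test: n = 2 * ((z_alpha/2 + z_power) / d) ** 2 (normal approx.)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # e.g. ~1.96 for alpha = .05
    z_power = z.inv_cdf(power)          # e.g. ~0.84 for power = .80
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

print(n_per_group(0.5))  # medium effect, alpha = .05, power = .80
```

For a medium effect (d = 0.5) this gives about 63 per group, close to the 64 that Cohen's exact tables report for the same inputs.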
Expected Effect Size?
- Review previous literature studying similar constructs; note the sample sizes used and the reported effect sizes
- Common effect-size measures: Cohen's d or Hedges's g; eta (η, η², ηp²); omega (ω, ω²); Pearson's r
- Statistical tests used in previous literature do not have to be identical to those appropriate to your study's design
Expected Effect Size?
- If the effect size is not reported, it can be calculated from other reported values:
  - Cohen's d to Pearson's r, and vice versa
  - Cohen's d and Hedges's g from means and SDs
  - Pearson's r from Student's t, and vice versa
- Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology. Available at http://jpepsy.oxfordjournals.org/content/early/2009/02/16/jpepsy.jsp004.full.pdf+html
- d-to-r calculator (David Wilson): http://mason.gmu.edu/~dwilsonb/downloads/es_calculator.xls
Expected Effect Size?
- If all else fails, obtain a range of required sample sizes for a range of plausible effect sizes
- Only consider small to medium effects: large effects are obvious, and further scientific inquiry into them would be more redundant than useful
- Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159. Available via the EBSCO database
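The "range of sample sizes for a range of effect sizes" idea can be sketched in a few lines, using Cohen's (1992) benchmarks for d (small = 0.2, medium = 0.5, large = 0.8). This uses the same normal approximation as before, so the numbers sit slightly below Cohen's exact t-test tables:

```python
from math import ceil
from statistics import NormalDist

z = NormalDist()
z_alpha = z.inv_cdf(0.975)  # alpha = .05, two-sided
z_power = z.inv_cdf(0.80)   # target power = .80

# Cohen's (1992) benchmarks for d, two-sample t-test
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    n = ceil(2 * ((z_alpha + z_power) / d) ** 2)  # normal approximation
    print(f"{label} (d = {d}): about {n} per group")
```

The spread is dramatic: detecting a small effect requires roughly six times the sample of a medium effect, which is exactly why planning only for small-to-medium effects matters.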
Cohen (1992) [slides showing the paper's sample-size tables; tables not reproduced in this transcript]
G*Power
- Cohen (1992) gives generic examples; unique effect size estimates may require a unique power analysis
- http://www.gpower.hhu.de/en.html
- Allows determining effect sizes from various inputs (e.g., correlations, means, SDs)
- Can get technical; start with Cohen
Thank You! Questions?
- Presentation available on YouTube: https://www.youtube.com/channel/uchcvwpnugsxf85vbqfseljq or http://bit.ly/1ge1xsd, or just search for NCADE
- Email: ncade.me@thechicagoschool.edu
- Office: Los Angeles campus, 825-B
- Phone: (213) 615-7290