Developing and Testing Hypotheses
Kuba Glazek, Ph.D.
Methodology Expert
National Center for Academic and Dissertation Excellence, Los Angeles
Overview
- Literature review
- Identification of gap(s)
- Research question development
- Hypothesis and design
- Data collection
- Data analysis
- Reporting
Study methods diverge once the research question(s) are determined:
- Little previous research: exploratory study
- A lot of previous research: confirmatory study
Literature Review: Gap Identification
- Navigate to the TCS library site
- Tutorial: http://enews.thechicagoschool.edu/library/navigate_to_library_homepage/library%20homepage%20Navigation.htm
- Once at the site, search our databases: EBSCO, PsycARTICLES
Literature Review: Gap Identification
- Enter keywords, authors, etc.
- Review titles and abstracts
- Download and read pertinent full-text articles
- Review related literature:
  - Studies cited by pertinent articles
  - Studies citing pertinent articles
You are here
Or is it here?
Look for Road Signs
- Note frequently cited authors
- Search databases by author; visit authors' websites
- Seek highly regarded literature:
  - "In a seminal study, Smith (1982) discovered …"
  - "Our results are consistent with the now-classic theory of …"
- Seek review articles and meta-analyses
"The more I learn, the more I realize how much I don't know." – Albert Einstein
- The results of one literature review lead to the beginnings of the next
- Revise the search according to clues: keywords, instruments, authors
- The original research question evolves through iteration
Typical Results of Review
- Lack of evidence: your turn to pave the way
- Converging evidence: do not conduct a redundant study; restate the research question in light of what is already known
- Conflicting evidence: investigate differences between studies, e.g., measure(s), population(s)
Research Question → Hypothesis
- When there is little scientific knowledge about a topic:
  - No hypothesis per se; develop an open-ended, exploratory research question
  - Typically leads to a qualitative study design
- When there is robust converging evidence: posit a precise hypothesis
- When there is conflicting evidence: posit a hypothesis about the potential source of variability
Hypothesis Testing
- Null hypothesis: "There is no significant relationship/difference between ___ and ___"
  - Shorthand: H₀
- Alternative hypothesis: "There is a significant relationship/difference between ___ and ___"
  - Can be non-directional or directional
  - Shorthand: H₁ or Hₐ
Quantitative Research Designs
- Descriptive: observe rates of behavior
- Correlational: measure the strength of relationships between variables
- Quasi-experimental: compare different groups on a measure of interest
- Experimental: manipulate variable(s) and randomly assign participants to conditions
Quantitative Criteria for a Valid Study
- Valid and reliable instrument(s)
- High-integrity data
- Sufficient power to detect effect(s)
- Statistical significance of effect(s)
- Effect size(s)
Choosing a Valid Instrument
- Internal validity is the degree to which an instrument actually measures the concept or construct it is intended to measure (Slavin, 2007)
- External validity is the degree to which internally valid results can be generalized to other contexts (Ray, 2009)
Choosing a Reliable Instrument
- Reliability is the degree to which an instrument produces a consistent, stable indication of the level of a variable (Slavin, 2007)
- Test-retest reliability: consistency across time
- Inter-rater reliability: consistency across researchers/teams
- Cronbach's α, factor analysis, and meta-analysis are used to quantify a measure's reliability
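As a concrete illustration of internal consistency, Cronbach's α can be computed directly from an item-score matrix. This is a minimal sketch in Python with NumPy; the function name and example data are my own, and it implements the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative data: 4 respondents x 3 items
scores = np.array([[3., 3., 4.],
                   [4., 4., 4.],
                   [2., 2., 3.],
                   [5., 4., 5.]])
alpha = cronbach_alpha(scores)
```

A commonly cited rule of thumb is that α ≥ .70 indicates acceptable internal consistency, though the appropriate threshold depends on the stakes of the measurement.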
Integrity of Data
- Transcription errors are (statistically) inevitable
- Must check for missing and erroneous values
- Test statistical assumptions:
  - Data missing at random: Missing Value Analysis
  - Normality of distributions: visual inspection of histograms; Shapiro-Wilk test
  - Homogeneity of variance: Levene's test
  - Influential outliers: Mahalanobis distance; standardized DFBeta
Integrity of Data
- Address violations of assumptions:
  - Transform variables
  - Remove outliers
  - Use non-parametric tests
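The assumption checks above can be sketched with SciPy. Assuming a simple two-group comparison (the variable names and simulated data are illustrative), this tests normality (Shapiro-Wilk) and homogeneity of variance (Levene), then falls back to a non-parametric test when an assumption is violated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=40)  # simulated scores, group A
group_b = rng.normal(loc=0.5, scale=1.0, size=40)  # simulated scores, group B

# Shapiro-Wilk: H0 = the sample comes from a normal distribution
_, p_a = stats.shapiro(group_a)
_, p_b = stats.shapiro(group_b)

# Levene: H0 = the groups have equal variances
_, p_levene = stats.levene(group_a, group_b)

if p_a > .05 and p_b > .05 and p_levene > .05:
    stat, p = stats.ttest_ind(group_a, group_b)     # parametric test
else:
    stat, p = stats.mannwhitneyu(group_a, group_b)  # non-parametric fallback
```

In practice these tests complement, rather than replace, visual inspection of histograms.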
Power Analysis
- Determine the necessary sample size
- As sample size increases, so does the chance of detecting effects in inherently noisy data
- Review existing studies related to the construct of interest:
  - Significant effects (p < .05) obtained using N = ___
  - Obtained effect sizes (e.g., Cohen's d, Pearson's r, η²)
- Conduct an a priori power analysis:
  - Calculate the necessary sample size given the expected effect size and number of variables
  - Use the G*Power software: www.gpower.hhu.de/en.html
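G*Power is a standalone application, but the same a priori calculation can be sketched in Python with statsmodels. Assuming an independent-samples t-test and a medium expected effect (d = 0.5, values chosen for illustration):

```python
from statsmodels.stats.power import TTestIndPower

# A priori power analysis: solve for the per-group sample size needed
# to detect d = 0.5 at alpha = .05 with 80% power (two-tailed).
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative='two-sided')
# n_per_group comes back as a float; round up to a whole participant count
```

For these inputs the answer is roughly 64 participants per group, matching the standard G*Power result for this design.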
Logic of Hypothesis Testing
- A difference is assumed to be non-existent unless shown otherwise
- H₀ is always the assumption to be rejected
- Assuming there is no effect, what is the probability of obtaining the given difference?
- If that probability is less than 5% (i.e., p < .05), we conclude that the assumption of a null effect is false (i.e., reject the null hypothesis)
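This decision rule can be made concrete with a two-sample t-test in SciPy; the group names and simulated data below are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=100, scale=15, size=200)    # simulated control scores
treatment = rng.normal(loc=110, scale=15, size=200)  # simulated treatment scores

# p_value = probability of a difference at least this large, assuming H0 is true
t_stat, p_value = stats.ttest_ind(treatment, control)

reject_h0 = p_value < .05  # the conventional 5% decision criterion
```

Note that the p-value is a statement about the data given H₀, not the probability that H₀ is true.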
Effect Size
- Significant effects do not necessarily imply important effects
- Effect size indicates either:
  - The amount of variability in the outcome accounted for by the predictor (e.g., η², Pearson's r)
  - The distance between distributions (e.g., Cohen's d)
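As one example, Cohen's d expresses the distance between two group means in pooled standard deviation units. A minimal sketch (the function name is my own):

```python
import numpy as np

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's d for two independent samples, using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)
```

Common benchmarks (Cohen, 1988) treat d ≈ 0.2 as small, 0.5 as medium, and 0.8 as large, though these are rough conventions rather than fixed cutoffs.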
Power Analysis (part 2)
- Post hoc power analysis: given the obtained effect size, how likely is it that the result is trustworthy?
- Small effects are difficult to detect
- If power is low (i.e., < .8), the result cannot be taken at face value
- If an a priori power analysis was conducted, this is not an issue
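The post hoc calculation reverses the a priori question: given the observed effect size and the sample actually collected, how much power did the study have? A sketch with statsmodels, using illustrative numbers (observed d = 0.3, 30 participants per group):

```python
from statsmodels.stats.power import TTestIndPower

# Post hoc power: probability of detecting d = 0.3 with 30 per group at alpha = .05
achieved_power = TTestIndPower().power(effect_size=0.3, nobs1=30, alpha=0.05)
```

With these numbers the achieved power is well below the conventional .8 threshold, so a non-significant result from such a study could not be taken as evidence of no effect.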
Thank You! Questions?
- Web: https://my.thechicagoschool.edu/students/pages/ncade.aspx
- Email: ncade@thechicagoschool.edu
- Phone: (312) 488-6054