Hypothesis Testing. Richard S. Balkin, Ph.D., LPC-S, NCC



Overview When we have questions about the effect of a treatment or intervention, or wish to compare groups, we use hypothesis testing. Parametric statistics are used in hypothesis testing when the groups being studied are normally distributed. The parametric statistics discussed in this course are univariate; that is, they all focus on the measurement of one variable.

Overview (cont.) Hypothesis testing is used to determine whether or not any fluctuation in the dependent variable is due to change in the independent variable or simply due to chance. The statistics employed in a study are generally inferential, making a judgment about a population parameter based on sample data.

A Review: Types of Hypotheses A research/scientific hypothesis includes the following: a declarative statement about what we think should happen in our study and an examination of the variables that determine the study. Statistical hypotheses: you have a null and an alternative; you accept one and reject the other.

Central Limit Theorem Hypothesis testing is governed by the Central Limit Theorem: the distribution of sample means from random samples will be normally distributed. The more samples you have, the closer your values get to the mean.

Central Limit Theorem If your sample is large enough, outliers will not affect the normalizing of the sample distribution. Sample size is not as important as representativeness. With a representative sample, the Central Limit Theorem allows you to generalize about the population.
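The short simulation sketch below (illustrative only; the exponential population, sample size of 30, and number of samples are arbitrary choices, not values from the slides) shows the Central Limit Theorem at work: the population is skewed, but the distribution of sample means is roughly normal and centered on the population mean.

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)

# A clearly non-normal (right-skewed) population
population = rng.exponential(scale=2.0, size=100_000)

# Draw many random samples and record each sample's mean
sample_means = np.array([
    rng.choice(population, size=30, replace=False).mean()
    for _ in range(5_000)
])

print("population mean:", round(float(population.mean()), 3))
print("mean of sample means:", round(float(sample_means.mean()), 3))
print("skewness of population:", round(float(skew(population)), 2))      # clearly skewed
print("skewness of sample means:", round(float(skew(sample_means)), 2))  # close to 0 (normal-like)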

A word about error When we compare groups in hypothesis testing, we use the means of each group to make those comparisons. But does every member within a group have a score represented by the mean?

A word about error Of course not. So, we have error in our tests. Accepting or rejecting the null hypothesis is directly related to the amount of error the researcher chooses to allow in the study. The allowance of error is, in part, a subjective decision by the researcher(s).

A word about error There are two types of error that we will look at in hypothesis testing: type I error and type II error. Other types of error exist as well, such as in the experimental design of the research.

Type I error A type I error occurs when the null hypothesis is rejected when it should have been retained. In other words, the researcher determined that statistically significant differences existed between the groups when they actually did not. The amount of type I error in a study is identified as α (alpha). The researcher establishes the amount of type I error allowed in the study.

Type II error Type II error is inversely related to statistical power (1 − β), the likelihood of rejecting the null hypothesis given that the null hypothesis should be rejected. A type II error occurs when the null hypothesis is retained when it should have been rejected. In other words, the researcher determined that statistically significant differences did not exist between the groups when they actually did. The amount of type II error is identified as β (beta).

Type II error Type II error is inversely related to statistical power (1 − β), the likelihood of rejecting the null hypothesis given that the null hypothesis should be rejected. Put another way, statistical power is the likelihood of finding statistically significant differences, given that these differences actually exist. So, if the likelihood of making a type II error is 20%, then statistical power is at 80%. It is important to note that a given study should have at least .80 for statistical power (80% power, or 20% type II error). Studies that lack sufficient power are more likely to make a type II error. The best way to avoid this problem is to have a sufficient sample size. Usually, a sample size of 25 for each group suffices nicely (Kirk, 1995).
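Because power depends on sample size and effect size, it can be estimated by simulation. A minimal sketch, assuming 25 participants per group (the group size mentioned above) and a one-standard-deviation difference between groups (an arbitrary assumption for illustration): generate many pairs of samples, run an independent t test on each, and count how often the null hypothesis is rejected.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, effect_size, alpha, n_sims = 25, 1.0, 0.05, 5_000

rejections = 0
for _ in range(n_sims):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=effect_size, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < alpha:              # null hypothesis rejected
        rejections += 1

power = rejections / n_sims          # estimated 1 - beta
print(f"estimated power: {power:.2f}; estimated type II error: {1 - power:.2f}")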

Scenario A counselor working with adolescents with anger management problems designs an anger management strategy to be used in group work. The counselor wishes to know whether the strategy is effective. So, participants are randomly assigned to groups, half of the groups receiving the new strategy and the other half using the old method. Assuming reliable and valid methods were used to gather the data, the type of group comparison may fit under parametric univariate statistics.

Error Type I and type II errors may occur for several reasons. The participants in the sample may not be representative of the population. If, for example, participants for the anger management groups were all patients in a psychiatric hospital, the results of the study may not be representative of the adolescent population as a whole. Or, the sampling procedure could be biased, such as the researcher unintentionally placing adolescents with more severe anger issues in the control group.

Error While procedures exist to control for type I error, type II error is largely dependent on sample size. So, which is worse, a type I or a type II error? Looking back at the anger management example, consider the following two scenarios:

Error Scenario 1: The researcher determined that the group with the new anger management strategy had a statistically significant increase in their ability to manage their anger. However, the researcher made a type I error, and the group with the new strategy actually did worse. Because the researcher was unaware of the error, the researcher publicized the findings and made a new policy of implementing the new strategy into all anger management groups.

Error Scenario 2: The researcher determined that the group with the new anger management strategy showed no statistically significant difference in their ability to manage their anger. However, the researcher made a type II error, and the group with the new strategy actually did much better. Because the researcher was unaware of the error, the researcher discarded the new method, and future clients were never able to experience the benefit of the new method.

Error The possible outcomes of a hypothesis test: retain the null hypothesis when the null is true: correct decision; retain the null hypothesis when the null is false: type II error (β); reject the null hypothesis when the null is true: type I error (α); reject the null hypothesis when the null is false: correct decision (power, 1 − β).

Steps to hypothesis testing There is a logical series of steps that are employed in hypothesis testing that are listed below and then explained in depth. 1. Identify the null and alternative hypotheses. 2. Establish a level of significance. 3. Select an appropriate statistic. 4. Calculate the test statistic and probability values. 5. Evaluate the outcome for statistical significance. 6. Evaluate practical significance. 7. Write the results.

Identify the null and alternative hypotheses The null hypothesis is the hypothesis that is rejected or retained with inferential statistics and is often the opposite of what the researcher believes to be true. The alternative hypothesis is generally the research hypothesis and is a statement of what occurs if the null hypothesis is rejected. Both the null and alternative hypotheses are written in statistical notation.

Identify the null and alternative hypotheses Because inferential statistics are about a population, the hypotheses are written in terms of population parameters. Null: H0: μ1 = μ2. Alternative: H1: μ1 ≠ μ2. Evidence supports the existence or lack of differences; it does not prove them.

Identify the null and alternative hypotheses In hypothesis testing, the null hypothesis is always tested. The null hypothesis indicates that there are no differences between the groups. If the null hypothesis is accepted (retained), then no statistical group differences were found. If the null hypothesis is rejected, then the alternative hypothesis holds true: there are statistically significant differences between the groups.

Establish a level of significance The level of significance in a study is directly related to the amount of type I error allowed in a study. The researcher identifies an alpha level, signified by α, the amount of type I error the researcher is willing to allow in the study.

Establish a level of significance In social science research, typical alpha levels are around .05 to .01. Alpha levels can be larger or smaller; an alpha level of .10 or .001 is not uncommon. The decision to allow for more or less type I error is subjective. Often, researchers review the literature and choose a level of significance based on previous research.

Establish a level of significance Higher alpha levels, such as .05 or .10, allow for a greater chance of type I error (5% or 10%, respectively) and are therefore considered more liberal tests of statistical significance. Lower alpha levels, such as .01 or .001, allow for a reduced chance of type I error (1% or .1%, respectively), and are therefore considered more conservative tests of statistical significance.

Establish a level of significance After a level of significance is established, a critical value can be determined and a test statistic can be calculated. Scores that are beyond the stated level of significance and critical value are considered to be statistically significant. The critical value changes because it is based on the type of test statistic used, the number of groups being compared, and the sample size.

Establish a level of significance This figure displays an alpha level of .05. On the normal curve, this amount of type I error can be divided into two halves, known as a non-directional test; statistical significance can be found on either side of the normal curve. If α = .05, then α/2 = .025 on either side of the normal curve.

Establish a level of significance Scores that fall beyond standard scores of −1.96 or +1.96 would be statistically significant. For right now, understand that scores that fall in the middle 95% of the normal curve are not statistically significant because the researcher set the alpha level at .05, indicating a 5% chance of type I error. Scores beyond that point, designated by α, are statistically significant. Another way of thinking about it is that if the researcher has established a 5% type I error rate, we can be 95% confident in the results.

Establish a level of significance This figure displays an alpha level of .05 for a directional test. In a directional test, the researcher hypothesizes that a given score will be either higher or lower than the level of significance.

Establish a level of significance For example, the researcher decides to implement a new anger management strategy and is using a standardized instrument to measure progress. If the researcher hypothesized that a group receiving the new anger management intervention is going to demonstrate more progress than the population on which the measure was standardized, a directional test would be used. This means that the level of significance is placed to one side and not divided between both sides of the normal curve, as shown in the figure on the previous slide.

Establish a level of significance The entire 5% type I error rate is placed on one side of the normal curve, not divided between both sides. For directional hypotheses, the null hypothesis is written the same, but the alternative hypothesis identifies the direction. From the previous example, the researcher hypothesized that the group receiving the new intervention would score statistically significantly higher on a standardized instrument. So, the null and alternative hypotheses would be written with the following statistical notation: H0: μ1 = μ2; H1: μ1 > μ2.

Select an appropriate statistic Synopsis of Univariate Parametric Tests. z-test: tests hypotheses between a sample mean and a population mean. t-test: tests hypotheses between two sample means only. ANOVA: tests hypotheses between two or more sample means. ANCOVA: tests hypotheses between two or more sample means while controlling for another variable that affects the outcome of the dependent variable.
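For readers working in Python, a rough mapping from the tests in this synopsis to commonly used SciPy routines is sketched below; this is an editorial aid, not part of the original slides. The z-test and ANCOVA have no single SciPy call: the z-test can be computed directly from the normal distribution (as in the worked example later), and ANCOVA is usually fit as a linear model (for example, with statsmodels).

from scipy import stats

# t-test (two independent sample means):  stats.ttest_ind(group1, group2)
# t-test (two paired sample means):       stats.ttest_rel(before, after)
# ANOVA (two or more sample means):       stats.f_oneway(group1, group2, group3)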

Calculate the test statistic and probability values Each test statistic is computed as a fraction. The numerator represents a computation for mean differences, such as comparing two groups by subtracting one mean from the other. The denominator is an error term, computed by taking into account the standard deviation or variance and the sample size. The numerator is an expression of differences between groups, often referred to as between-group differences; the denominator looks at error, or differences that exist within each group, often referred to as within-group differences. In generic form: test statistic = mean differences / error = between-group differences / within-group error, for example (μ1 − μ2) / s_error.

Calculate the test statistic and probability values So why does this computation work? Simply put, the numerator value expresses differences between groups. In the previous example, it was hypothesized that the experimental group had a higher performance than the population group. Thus, the mean score from the population was subtracted from the mean score of the experimental group. However, the mean score in either group is simply a representative score for many participants in each group. Each participant in either group may have scored above the mean or below the mean.

Calculate the test statistic and probability values Consider the following scores: 10, 8, 11, 8, 13, 10. The mean for this distribution is 10. Some participants scored at the mean, and some participants scored above or below it. The error term, in this case the standard deviation, is approximately 1.90, representing the average amount of error within the group. While the mean is a score representing group outcome, the standard deviation is an error term signifying that some participants did not score at the mean. Thus, a test statistic is computed by taking a score that represents the average divided by a score that represents error.
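A quick check of this small example (assuming the reconstructed scores 10, 8, 11, 8, 13, 10) with Python's statistics module: the mean is the representative score, and the sample standard deviation is the error term of roughly 1.90.

from statistics import mean, stdev

scores = [10, 8, 11, 8, 13, 10]
print(mean(scores))              # 10
print(round(stdev(scores), 2))   # sample standard deviation, about 1.90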

The Z-test Used to test whether or not significant differences exist between a sample mean and a population mean. The standard error of the mean represents the error when estimating the population mean. Larger sample size = less error. z = (X̄ − μ) / σ_X̄, where σ_X̄ = σ / √n.

The Z-test Notice that the formula for the Z-test fits the generic formula mentioned for all statistical tests: z = (X̄ − μ) / σ_X̄ = mean differences / error.

Z-test You are administering the Balkin Assessment for Deviant Behavior Of Youth scale (BADBOY). It has a mean of 60 and a standard deviation of 5. Your class (n = 16) of emotionally disordered children has a mean score of 66. Is this a significant difference from the population?

State the Null and Alternative Hypotheses Null: H0: μ = μ0. Alternative: H1: μ ≠ μ0.

Identify Level of Significance Directional or non-directional? Test at the .05 level of significance: 95% confidence, 5% chance of type I error. Non-directional.

Perform Z-test Compute the standard error of the mean: σ_X̄ = σ / √n.

Perform Z-test Compute the standard error of the mean: σ_X̄ = σ / √n = 5 / √16 = 1.25.

Perform Z-test: Computation z = (X̄ − μ) / σ_X̄.

Perform Z-test: Computation z = (X̄ − μ) / σ_X̄ = (66 − 60) / 1.25 = 4.8.

Accept or Reject Null: Compute z-crit z-critical values for rejection of the null hypothesis: non-directional test: 1.645 (.10 level), 1.96 (.05 level), 2.576 (.01 level); directional test: 1.28 (.10 level), 1.645 (.05 level), 2.326 (.01 level).
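The critical values in this table can be regenerated from the standard normal distribution; a minimal sketch with scipy.stats.norm:

from scipy.stats import norm

for alpha in (0.10, 0.05, 0.01):
    two_tailed = norm.ppf(1 - alpha / 2)   # non-directional (two-tailed) test
    one_tailed = norm.ppf(1 - alpha)       # directional (one-tailed) test
    print(f"alpha = {alpha:.2f}: non-directional = {two_tailed:.3f}, directional = {one_tailed:.3f}")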

Evaluating the z-test for statistical significance The observed value, z obs = 4.80, is greater than the critical value, z crit = 1.96. Therefore, the null hypothesis is rejected. The scores from your class are statistically significantly higher than the scores from the population. In research articles, statistics are computed with the aid of statistical software (e.g., SPSS, SAS). When results are reported, they are compared to the alpha level.
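A minimal sketch of this z-test in Python, using the values from the BADBOY example (population mean 60, population SD 5, class of n = 16 with a mean of 66); the two-tailed p-value is added here for completeness and is not reported on the slides.

from math import sqrt
from scipy.stats import norm

pop_mean, pop_sd, n, sample_mean = 60, 5, 16, 66

se = pop_sd / sqrt(n)                    # standard error of the mean = 1.25
z = (sample_mean - pop_mean) / se        # observed z = 4.80
p = 2 * (1 - norm.cdf(abs(z)))           # two-tailed (non-directional) p-value

print(f"z = {z:.2f}, p = {p:.6f}")
print("reject H0" if abs(z) > norm.ppf(0.975) else "retain H0")   # critical value 1.96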

Evaluating the z-test for statistical significance For example: "There was a statistically significant difference between classroom scores and the population, z = 4.80, p < .05." The p refers to the probability of making a type I error. If p is less than the predetermined alpha level the researcher established, then statistically significant differences exist. When a statistically significant difference exists, it means that the results are probably not due to mere chance. In this case, there is a 95% chance (α = .05) that the results would be the same if the study were replicated with a similar sample.

Comparing Two or More Sample Means: Student's t-test and the F-test Often in social science research, information regarding populations is not readily available for comparisons. The number of variables to consider when investigating various phenomena is vast and arbitrary.

Comparing Two or More Sample Means: Student's t-test and the F-test Unlike the z-test, which compares the sample mean to the population mean, Student's t-test compares two sample means, and the F-test (also known as ANOVA or Analysis of Variance) compares two or more sample means.

Comparing Two or More Sample Means: Student's t-test and the F-test When comparing samples, the researcher needs to account for sample size and the number of groups being compared. As sample size and group comparisons can differ from study to study, so do the distributions. Thus, probability values under the normal curve change (see below). Hence degrees of freedom are calculated to provide an estimate of the normal curve given the number of groups and the sample size of each group. There are various formulae for computing the degrees of freedom, dependent upon the number of comparison groups, sample size, and type of statistical test.
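The dependence of the critical value on degrees of freedom can be seen directly from the t distribution; a brief sketch with scipy.stats.t for a non-directional test at α = .05 (the degrees of freedom below are arbitrary illustrative values, although 24 and 53 reappear in the worked examples that follow):

from scipy.stats import t

for df in (5, 10, 24, 53, 120, 1000):
    print(f"df = {df:4d}: critical t = {t.ppf(0.975, df):.3f}")
# As df grows, the critical value approaches the normal-curve value of 1.96.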

Independent t-test Used to test whether or not significant differences exist between two samples. The standard error is a bit more complex, computed by taking into account the variances of both samples (the pooled standard error). However, it still follows the same generic formula, mean differences / error: t = (X̄1 − X̄2) / s(X̄1 − X̄2), where s(X̄1 − X̄2) = √[ ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2) × (1/n1 + 1/n2) ].

t-test So, assume that 25 graduates from a counseling program in University A and 30 graduates from a counseling program in University B take the NCE, and the graduate programs wish to know which of the programs is doing a better job in preparing their students for the NCE. The mean score for the 25 students from University A was 120 with a standard deviation of 10. The mean score for the 30 students from University B was 125 with a standard deviation of 15. So, is there a statistically significant difference between the two groups? Can it be stated with 95% confidence that the difference in these scores is outside the realm of chance? An independent t-test is conducted to answer this question using the means, standard deviations, and sample sizes from the two groups.

State the Null and Alternative Hypotheses Null: H0: μ1 = μ2. Alternative: H1: μ1 ≠ μ2.

Identify Level of Significance Directional or non-directional? Test at the .05 level of significance: 95% confidence, 5% chance of type I error. Non-directional.

Perform t-test Compute the pooled standard error: s(X̄1 − X̄2) = √[ ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2) × (1/n1 + 1/n2) ].

Perform t-test Compute the pooled standard error: s(X̄1 − X̄2) = √[ (100(24) + 225(29)) / 53 × (1/25 + 1/30) ] = √[ (2400 + 6525) / 53 × (.04 + .03) ] = √(168.40 × .07) = √11.79 = 3.43.

Perform t-test: Computation t = (X̄1 − X̄2) / s(X̄1 − X̄2).

Perform t-test: Computation t = (X̄1 − X̄2) / s(X̄1 − X̄2) = (120 − 125) / 3.43 = −1.46.

Accept or Reject Null: Compute t-crit We are performing a non-directional test at the .05 level of significance (α = .05). Using Appendix B, when α = .05 (non-directional), t(53) = 2.008.

Evaluating the independent t-test for statistical significance Similarly to the z-test, to determine whether or not the observed score, t obs = −1.46, is statistically significant at the .05 level of significance, the observed score must be greater in magnitude than the critical value, t crit. In this example, |−1.46| < 2.008, so there is no statistically significant difference between the two groups.
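The same comparison can be run from the summary statistics alone with scipy.stats.ttest_ind_from_stats (University A: n = 25, M = 120, SD = 10; University B: n = 30, M = 125, SD = 15). SciPy carries full precision, so it gives t of about -1.42 rather than the -1.46 obtained above with the rounded error term; the conclusion (retain the null) is the same.

from scipy import stats

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=120, std1=10, nobs1=25,
    mean2=125, std2=15, nobs2=30,
    equal_var=True,              # pooled-variance (classic Student's) t test
)
print(f"t({25 + 30 - 2}) = {t_stat:.2f}, p = {p_value:.3f}")
print("critical value:", round(stats.t.ppf(0.975, 53), 3))   # about 2.006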

Evaluating the t-test for statistical significance There was not a statistically significant difference between the two programs, t(53) = −1.46, p > .05. The p refers to the probability of making a type I error. If p is greater than the predetermined alpha level the researcher established, then statistically significant differences do not exist. When statistically significant differences do not exist, it means that any observed differences are probably due to mere chance. In this case, there is a 95% chance (α = .05) that the results would be the same if the study were replicated with a similar sample.

Dependent t-test Used to test whether or not significant differences exist between two paired samples. The standard error takes into account the variances of both samples and the correlation between the two groups: t = (X̄1 − X̄2) / s(X̄1 − X̄2), where s(X̄1 − X̄2) = √[ s1²/n + s2²/n − 2r(s1/√n)(s2/√n) ].

Dependent t-test So, assume that 25 clients from a counseling center at a university are administered the Beck Depression Inventory-II (BDI-II) upon initial screening and three months later. The counseling center wishes to know whether or not clients with depression are doing better after three months of counseling. The mean score for the 25 clients at the initial screening is 35 with a standard deviation of 12. The mean score for the 25 clients after three months is 26 with a standard deviation of 10. The correlation between the two administrations is r = .93. So, is there a statistically significant difference between the two scores over time? Can it be stated with 95% confidence that the difference in these scores is outside the realm of chance? A dependent t-test is conducted to answer this question.

State the Null and Alternative Hypotheses Null: H0: μ1 = μ2. Alternative: H1: μ1 ≠ μ2.

Identify Level of Significance Directional or non-directional? Test at the .05 level of significance: 95% confidence, 5% chance of type I error. Non-directional.

Perform Dependent t-test Compute the pooled standard error: s(X̄1 − X̄2) = √[ s1²/n + s2²/n − 2r(s1/√n)(s2/√n) ].

Perform t-test Compute the pooled standard error: s(X̄1 − X̄2) = √[ 144/25 + 100/25 − 2(.93)(12/5)(10/5) ] = √[ 5.76 + 4 − 2(.93)(2.4)(2) ] = √.83 = .91.

Perform Dependent t-test: Computation t = (X̄1 − X̄2) / s(X̄1 − X̄2).

Perform Dependent t-test: Computation t = (X̄1 − X̄2) / s(X̄1 − X̄2) = (35 − 26) / .91 = 9.89.

Accept or Reject Null: Compute t-crit We are performing a non-directional test at the .05 level of significance (α = .05). Using the table in Appendix B, when α = .05 (non-directional), t(24) = 2.064.

Evaluating the dependent t-test for statistical significance To determine whether or not the observed score, t obs = 9.89, is statistically significant at the .05 level of significance, the observed score must be greater than the critical value, t crit. In this example, 9.89 > 2.064, so there is a statistically significant difference between the two groups.
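A sketch of the same dependent t test computed from the summary statistics (n = 25; pre M = 35, SD = 12; post M = 26, SD = 10; r = .93), following the formula above. Exact arithmetic gives t of about 9.87, close to the 9.89 obtained with the rounded .91 error term.

from math import sqrt
from scipy.stats import t

n, m1, s1, m2, s2, r = 25, 35, 12, 26, 10, 0.93

se_diff = sqrt(s1**2 / n + s2**2 / n
               - 2 * r * (s1 / sqrt(n)) * (s2 / sqrt(n)))    # about 0.91
t_obs = (m1 - m2) / se_diff                                  # about 9.87
df = n - 1

print(f"t({df}) = {t_obs:.2f}")
print("critical value:", round(t.ppf(0.975, df), 3))         # 2.064
print(f"two-tailed p = {2 * (1 - t.cdf(abs(t_obs), df)):.2e}")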

ANOVA An Analysis of Variance uses an F-test to identify significant differences between two or more groups. Why not conduct repeated t-tests? Each statistical test is conducted with a specified chance of making a type I error: the alpha level.

ANOVA For example, a researcher wished to use a self-efficacy screening measure on students in a math class. Research has shown that higher levels of self-efficacy are related to better performance. Students are randomly placed in four groups. Group 1 receives a lecture on enhancing self-efficacy. Group 2 receives a lecture and experiential exercise on self-efficacy. Group 3 receives an experiential exercise only. Group 4 is a control group. So a self-efficacy measure is administered to four different math classes. If t-tests were used to conduct the analysis and a level of significance was set at .05, then six separate t-tests would need to be conducted: (a) Group 1 to Group 2, (b) Group 1 to Group 3, (c) Group 1 to Group 4, (d) Group 2 to Group 3, (e) Group 2 to Group 4, and (f) Group 3 to Group 4.

ANOVA The problem with conducting multiple t-tests is that type I error is multiplied by the number of tests being conducted. Hence, if a researcher conducts six t-tests with an α of .05 on the same data, the chance of having a type I error among the six tests is 30%, a rather large likelihood of identifying statistically significant differences when none actually exist.
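A quick check on this inflation: the slide multiplies 6 × .05 = .30; under the common assumption of independent tests, the chance of at least one type I error is 1 − (1 − α)^k, which comes out close to that figure.

alpha, k = 0.05, 6
familywise = 1 - (1 - alpha) ** k
print(f"chance of at least one type I error across {k} tests: {familywise:.3f}")   # about 0.265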

ANOVA When considering that research can help in identifying what models may be helpful to clients, one would need to be extremely cautious in implementing best practices when there is a 30% chance of being wrong. Rather than conducting several t-tests, an ANOVA could be conducted to identify statistically significant differences among all the groups at the .05 level of significance: only a 5% chance of type I error!
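A minimal sketch of a one-way ANOVA for the four self-efficacy groups using scipy.stats.f_oneway; the scores below are made up purely for illustration and are not data from the slides.

from scipy.stats import f_oneway

lecture            = [42, 45, 39, 47, 44, 41]
lecture_experience = [50, 52, 48, 55, 51, 49]
experience_only    = [46, 44, 47, 43, 48, 45]
control            = [40, 38, 42, 37, 41, 39]

f_stat, p_value = f_oneway(lecture, lecture_experience, experience_only, control)
print(f"F(3, {4 * 6 - 4}) = {f_stat:.2f}, p = {p_value:.4f}")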

ANOVA ANOVA may be used when more than one independent variable is examined, such as wanting to know the difference in depression scores based on sex and ethnicity. This is known as a factorial ANOVA.

ANOVA ANOVA may also be used when more than one dependent variable is used in a study. This is known as MANOVA (multivariate analysis of variance). Correlation designs also use F-tests, such as when we want to know the relationship between variables (e.g., regression).

ANOVA Results for F-tests are reported similarly to t-tests. F(3, 5) = 4.86, p < .05, would indicate a statistically significant difference among four groups.