Statistics for EES Factorial analysis of variance


1 Statistics for EES: Factorial analysis of variance. Dirk Metzler, 1 July 2013

2-3 Contents
1 ANOVA and F-Test
2 Pairwise comparisons and multiple testing
3 Non-parametric: The Kruskal-Wallis Test

4-8 ANOVA and F-Test Are the group means significantly different? Or does this look like random deviations? [Figures: observed values in groups 1-3; one panel with large variability within the groups, one with small variability within the groups.] The answer depends on the ratio of the variability of the group means and the variability within the groups. The analysis of variance (ANOVA) quantifies this ratio and its significance.

9-18 ANOVA and F-Test Example: blood-clotting times in rats under 4 different treatments.

gr.  \bar{x}_i  observations (squared residuals)
1    61    (62-61)^2 (60-61)^2 (63-61)^2 (59-61)^2
2    66    (63-66)^2 (67-66)^2 (71-66)^2 (64-66)^2 (65-66)^2 (66-66)^2
3    68    (68-68)^2 (66-68)^2 (71-68)^2 (67-68)^2 (68-68)^2 (68-68)^2
4    61    (56-61)^2 (62-61)^2 (60-61)^2 (61-61)^2 (63-61)^2 (64-61)^2 (63-61)^2 (59-61)^2

Global mean \bar{x} = 64; group means \bar{x}_1 = 61, \bar{x}_2 = 66, \bar{x}_3 = 68, \bar{x}_4 = 61.

Caution: In this simple example the global mean equals the mean of the group means. This is not always the case!

The (unsquared) differences are the residuals: they are the residual variability which is not explained by the model.

Sum of squares within groups: ss_within = 112, with 20 degrees of freedom (df).
Sum of squares between groups: ss_betw = 4*(61-64)^2 + 6*(66-64)^2 + 6*(68-64)^2 + 8*(61-64)^2 = 228, with 3 degrees of freedom (df).

F = (ss_betw/3) / (ss_within/20) = 76/5.6 = 13.57
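The numbers above can be reproduced directly from these formulas in R; a minimal sketch, using the clotting times read off the table (the variable names are ours):

# clotting times of the four treatment groups, taken from the table above
groups <- list(A = c(62, 60, 63, 59),
               B = c(63, 67, 71, 64, 65, 66),
               C = c(68, 66, 71, 67, 68, 68),
               D = c(56, 62, 60, 61, 63, 64, 63, 59))

x           <- unlist(groups)            # all 24 observations
global.mean <- mean(x)                   # 64
group.means <- sapply(groups, mean)      # 61 66 68 61
n.i         <- sapply(groups, length)    # group sizes 4 6 6 8

ss.within <- sum(sapply(groups, function(g) sum((g - mean(g))^2)))   # 112
ss.betw   <- sum(n.i * (group.means - global.mean)^2)                # 228
F.value   <- (ss.betw / 3) / (ss.within / 20)                        # 13.57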

19-21 ANOVA and F-Test Example: blood-clotting times in rats under 4 different treatments. ANOVA table (ANalysis Of VAriance):

            df   sum of squares (ss)   mean sum of squares (ss/df)   F value
groups       3        228                     76                      13.57
residuals   20        112                      5.6

Under the null hypothesis H_0 that the group means are equal (and assuming independent, normally distributed observations), F is Fisher-distributed with 3 and 20 degrees of freedom, and p = Fisher_{3,20}([13.57, infinity)). Thus, we can reject H_0.

22 ANOVA and F-Test Sir Ronald Aylmer Fisher, 1890-1962.

23-29 ANOVA and F-Test The F-Test. n = n_1 + n_2 + ... + n_I observations in I groups; X_{ij} = j-th observation in the i-th group, j = 1, ..., n_i.

Model assumption: X_{ij} = \mu_i + \epsilon_{ij}, with independent, normally distributed \epsilon_{ij}, E[\epsilon_{ij}] = 0, Var[\epsilon_{ij}] = \sigma^2. (\mu_i is the true mean within group i.)

\bar{X} = (1/n) \sum_{i=1}^{I} \sum_{j=1}^{n_i} X_{ij}   (empirical) global mean
\bar{X}_i = (1/n_i) \sum_{j=1}^{n_i} X_{ij}   (empirical) mean of group i

SS_within = \sum_{i=1}^{I} \sum_{j=1}^{n_i} (X_{ij} - \bar{X}_i)^2   sum of squares within the groups, n - I degrees of freedom
SS_betw = \sum_{i=1}^{I} n_i (\bar{X}_i - \bar{X})^2   sum of squares between the groups, I - 1 degrees of freedom

F = ( SS_betw / (I - 1) ) / ( SS_within / (n - I) )

Under the hypothesis H_0: \mu_1 = ... = \mu_I ("all \mu_i are equal"), F is Fisher-distributed with I - 1 and n - I degrees of freedom (no matter what the common value of the \mu_i is). F-Test: we reject H_0 at significance level \alpha if F >= q_\alpha, where q_\alpha is the (1 - \alpha)-quantile of the Fisher distribution with I - 1 and n - I degrees of freedom.
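The formulas translate directly into R; a small sketch (our own helper function, not from the slides) that computes F and the p-value for a numeric vector x and a grouping factor g and can be compared with aov():

# one-way F-test computed from the formulas above (illustrative sketch)
f.test.by.hand <- function(x, g) {
  g <- as.factor(g)
  n <- length(x); I <- nlevels(g)
  group.means <- tapply(x, g, mean)
  n.i         <- tapply(x, g, length)
  ss.within <- sum((x - ave(x, g))^2)                 # deviations from the group means
  ss.betw   <- sum(n.i * (group.means - mean(x))^2)   # deviations of group means from global mean
  F.value <- (ss.betw / (I - 1)) / (ss.within / (n - I))
  p <- pf(F.value, df1 = I - 1, df2 = n - I, lower.tail = FALSE)
  c(F = F.value, p = p)
}

# f.test.by.hand(rat$time, rat$treat) should match summary(aov(time ~ treat, data = rat))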

30-31 ANOVA and F-Test Computing the p-value with R. How do we choose q such that Pr(F >= q) = 0.95 for a Fisher(6,63)-distributed F?
> qf(0.95, df1=6, df2=63)
[1]
Computation of the p-value: how probable is it that a Fisher(3,20)-distributed random variable takes a value >= 13.57?
> pf(13.57, df1=3, df2=20, lower.tail=FALSE)
[1] e-05

32 ANOVA and F-Test Table of 95%-quantiles of the F distribution. This table shows the (rounded) 95%-quantiles of the Fisher distribution with k_1 and k_2 degrees of freedom (k_1: numerator, k_2: denominator). [Table of quantile values indexed by k_1 and k_2.]

33 ANOVA and F-Test ANOVA completely in R. The text file clotting.txt contains a column time with the clotting times and a column treat with the treatments A, B, C, D.
> rat <- read.table("clotting.txt", header=TRUE)
> rat.aov <- aov(time~treat, data=rat)
> summary(rat.aov)
            Df Sum Sq Mean Sq F value  Pr(>F)
treat        3    228    76.0   13.57   e-05 ***
Residuals   20    112     5.6
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

34-35 ANOVA and F-Test Seven different labs each performed 10 measurements of the chlorpheniramine maleate content of samples of a medication. [Figures: chlorpheniramine maleate content [mg] per lab; the second panel additionally shows the lab means +/- standard errors.]

36 ANOVA and F-Test Caution: The labs are labeled with numbers. To tell R that these numbers are just names and have no numeric meaning, we have to convert them into a variable of type factor:
> chlor <- read.table("chlorpheniraminmaleat.txt")
> str(chlor)
'data.frame': 70 obs. of 2 variables:
 $ content: num
 $ Lab    : int
> chlor$Lab <- as.factor(chlor$Lab)
> str(chlor)
'data.frame': 70 obs. of 2 variables:
 $ content: num
 $ Lab    : Factor w/ 7 levels "1","2","3","4",..:

37 ANOVA and F-Test Now we can perform the anova:
> chlor.aov <- aov(content~Lab, data=chlor)
> summary(chlor.aov)
            Df Sum Sq Mean Sq F value  Pr(>F)
Lab          6                          e-05 ***
Residuals   63
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

38 Contents
1 ANOVA and F-Test
2 Pairwise comparisons and multiple testing
3 Non-parametric: The Kruskal-Wallis Test

39-41 Pairwise comparisons and multiple testing The anova shows that there is a significant difference between the labs. But which labs are significantly different? p-values from pairwise comparisons with t-tests: [matrix of p-values, rows Lab1-Lab6, columns Lab2-Lab7]
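Such a matrix of unadjusted p-values can be produced in R, for example with pairwise.t.test; a sketch (assuming the chlor data frame from above):

# all pairwise t-tests between the 7 labs, without any multiple-testing correction
# (the correction is discussed below)
pairwise.t.test(chlor$content, chlor$Lab, p.adjust.method = "none", pool.sd = FALSE)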

42-46 Pairwise comparisons and multiple testing We see 21 pairwise comparisons. At the 5% level, some of them are significant. Problem of multiple testing: if the null hypothesis ("only random deviations") is true, we falsely reject it with a probability of 5% in each test. If we then apply 20 or more tests, there will on average be one or more tests in which we falsely reject the null hypothesis. Therefore, we should apply a correction to the p-values for multiple testing. One possibility in the anova case: Tukey's Honest Significant Differences (HSD).

47-48 Pairwise comparisons and multiple testing
> TukeyHSD(chlor.aov)
  Tukey multiple comparisons of means
    95% family-wise confidence level
Fit: aov(formula = content ~ Lab, data = chlor)
$Lab
          diff   lwr   upr   p adj
  [one row per pair of labs]

We get confidence intervals [lwr, upr] for the differences between the labs and p-values for the null hypotheses that these differences are 0. Both are already corrected for multiple testing.

49 Pairwise comparisons and multiple testing Visualization of the HSD confidence intervals with plot(TukeyHSD(chlor.aov)): [Figure: 95% family-wise confidence level; differences in mean levels of Lab.]

50-51 Pairwise comparisons and multiple testing Restriction: Tukey's HSD method is only valid for balanced designs, i.e. the sample sizes must be equal across the groups. (And the variances must be equal, which is already required for the anova.) This is the case for the lab comparisons but not for the rat clotting-times dataset.
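A quick way to check whether a design is balanced is to tabulate the group sizes; a minimal sketch (assuming the chlor and rat data frames from above):

table(chlor$Lab)     # 10 observations per lab: balanced
table(rat$treat)     # group sizes 4, 6, 6, 8: not balanced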

52-53 Pairwise comparisons and multiple testing What can we use when Tukey's HSD is not applicable? A very general correction for multiple testing is the Bonferroni method: multiply each p-value by the number n of tests performed.

54-57 Pairwise comparisons and multiple testing Example: pairwise comparisons (with t-tests) for the clotting times, first without correction: [matrix of p-values, rows A-C, columns B-D]. Now with Bonferroni correction (multiply each value by 6): [the same matrix with every p-value multiplied by 6]. Significant differences after Bonferroni correction: A/C, B/D, C/D. (A corrected value of 6.0 is not a proper p-value and should be replaced by 1.0.)
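In R, the Bonferroni-corrected matrix can be obtained directly; a sketch (assuming the rat data frame from above):

# pairwise t-tests for the clotting times with Bonferroni correction
# (p.adjust caps corrected values at 1, so no value of 6.0 appears here)
pairwise.t.test(rat$time, rat$treat, p.adjust.method = "bonferroni", pool.sd = FALSE)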

58-60 Pairwise comparisons and multiple testing The Bonferroni method is very conservative, i.e. to be on the safe side, it tends to lose some of the significances. An improvement is the Bonferroni-Holm method: let k be the number of tests. Multiply the smallest p-value by k, the second smallest by k-1, the third smallest by k-2, etc. The R command p.adjust corrects p-values for multiple testing, using Bonferroni-Holm by default:
> pv <- c( , , , , , )
> p.adjust(pv)
[1]
> p.adjust(pv, method="bonferroni")
[1]

61 Pairwise comparisons and multiple testing There is a special R function for pairwise t-tests that also uses the Bonferroni-Holm correction by default:
> pairwise.t.test(rat$time, rat$treat, pool.sd=FALSE)
  Pairwise comparisons using t tests with non-pooled SD
data:  rat$time and rat$treat
   A  B  C
B
C
D
P value adjustment method: holm

62 Contents
1 ANOVA and F-Test
2 Pairwise comparisons and multiple testing
3 Non-parametric: The Kruskal-Wallis Test

63-65 Non-parametric: The Kruskal-Wallis Test The one-factor anova is based on the assumption that all values are independent and normally distributed. The group means \mu_1, \mu_2, ..., \mu_m may be different. (It is the aim of the test to find this out.) The variances within the groups must be equal. In formulae: if Y_{ij} is the j-th value in group i, it is assumed that Y_{ij} = \mu_i + \epsilon_{ij}, where all \epsilon_{ij} are independently N(0, \sigma^2) distributed, with the same \sigma^2 for all groups! The null hypothesis is \mu_1 = \mu_2 = ... = \mu_m.
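To make the model assumption concrete, here is a small simulation sketch (the group means, sigma and sample sizes are hypothetical values of our choosing) that generates data according to Y_ij = mu_i + eps_ij and runs the anova on them:

set.seed(1)                                   # for reproducibility
mu    <- c(10, 12, 11)                        # hypothetical true group means
sigma <- 2                                    # common standard deviation in all groups
n.per.group <- 8

g <- factor(rep(c("A", "B", "C"), each = n.per.group))
y <- mu[as.integer(g)] + rnorm(length(g), mean = 0, sd = sigma)   # Y_ij = mu_i + eps_ij

summary(aov(y ~ g))                           # one-factor anova on the simulated data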

66-70 Non-parametric: The Kruskal-Wallis Test Not all deviations from normality cause problems. However, anovas are not robust against outliers or distributions that may sometimes produce exceptionally huge values. In such cases we can apply the Kruskal-Wallis test. Like the Wilcoxon test, it uses the ranks instead of the actual values. It is a non-parametric test: no particular probability distribution is assumed. Null hypothesis of the Kruskal-Wallis test: all values Y_{ij} come from the same distribution, independent of their group. As in the anova, we also have to assume for the Kruskal-Wallis test that the data are independent of each other.

71-74 Non-parametric: The Kruskal-Wallis Test Let R_{ij} be the rank of Y_{ij} within the total sample. Let
\bar{R}_{i.} = (1/J_i) \sum_{j=1}^{J_i} R_{ij}
be the average rank in group i, with J_i the sample size in group i. The mean rank in the total sample is
\bar{R}_{..} = (1/N) \sum_{i=1}^{I} \sum_{j=1}^{J_i} R_{ij} = (N+1)/2,
with I being the number of groups and N being the total sample size. Under the null hypothesis, all ranks have expectation value \bar{R}_{..}.

75-77 Non-parametric: The Kruskal-Wallis Test We measure the deviation from this expectation by
S = \sum_{i=1}^{I} J_i (\bar{R}_{i.} - \bar{R}_{..})^2.
To get a p-value from S we have to know the probability distribution of S under the null hypothesis. For small values of I and the J_i, the latter can be found in tables. For I = 3 and J_i >= 5, or I > 3 and J_i >= 4, we can use that the following scaling K of S is approximately chi^2-distributed with I - 1 degrees of freedom:
K = 12/(N(N+1)) * S = 12/(N(N+1)) * \sum_{i=1}^{I} J_i \bar{R}_{i.}^2 - 3(N+1)
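The statistic K can also be computed directly from the ranks; a minimal sketch (our own code, ignoring the tie correction that kruskal.test applies when values are tied), which can be checked against the kruskal.test output on the next slide:

# Kruskal-Wallis statistic K computed from the rank formulas above
kw.by.hand <- function(x, g) {
  g <- as.factor(g)
  N <- length(x)
  r <- rank(x)                      # ranks within the total sample
  R.i <- tapply(r, g, mean)         # average rank per group
  J.i <- tapply(r, g, length)       # group sizes
  K <- 12 / (N * (N + 1)) * sum(J.i * R.i^2) - 3 * (N + 1)
  p <- pchisq(K, df = nlevels(g) - 1, lower.tail = FALSE)
  c(K = K, p = p)
}

# kw.by.hand(rat$time, rat$treat)   # compare with kruskal.test(time~treat, data=rat)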

78 Non-parametric: The Kruskal-Wallis Test The Kruskal-Wallis test with R:
> kruskal.test(time~treat, data=rat)
        Kruskal-Wallis rank sum test
data:  time by treat
Kruskal-Wallis chi-squared = , df = 3, p-value =
> kruskal.test(content~Lab, data=chlor)
        Kruskal-Wallis rank sum test
data:  content by Lab
Kruskal-Wallis chi-squared = , df = 6, p-value = 4.67e-05
