
 

Chapter 7: Analysing the Data
Part IV : Analysis of Variance

 

Scenario and Data Set #2
SPSS Output 7.1
Compare means - One-way ANOVA

Comments on SPSS output

Descriptives

Here the DV is named (RECALL), and the group codes are given. These can be given more informative labels if you wish (in the Define Variables box). In more complex analyses with many DVs (e.g., 50 dependent variables; here we have only one), it is worth the extra time to label your variables carefully in the first place. The output then becomes clearer. In this simple example, there was no need to.

The group sample sizes, means, SDs, standard errors, 95% confidence intervals, and minima and maxima are also given. You should check that the right number of groups is showing up and that the Ns and means are what you would expect. The main thing we are interested in here is the mean for each group.
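As a sketch of how SPSS arrives at these descriptives, the same quantities can be computed by hand in Python with SciPy. The scores below are made up (five groups of n = 10), not the chapter's data set:

```python
import numpy as np
from scipy import stats

# Hypothetical recall scores for five groups of n = 10 (not the chapter's data).
rng = np.random.default_rng(0)
groups = {g: rng.normal(loc=10 + g, scale=2.0, size=10) for g in range(1, 6)}

for g, scores in groups.items():
    n = len(scores)
    mean = scores.mean()
    sd = scores.std(ddof=1)                # sample standard deviation
    se = sd / np.sqrt(n)                   # standard error of the mean
    t_crit = stats.t.ppf(0.975, df=n - 1)  # two-tailed 95% critical t
    lo, hi = mean - t_crit * se, mean + t_crit * se  # 95% CI for the mean
    print(f"Group {g}: n={n} mean={mean:.2f} sd={sd:.2f} "
          f"se={se:.2f} 95% CI=({lo:.2f}, {hi:.2f})")
```

The 95% confidence interval SPSS prints is the mean plus or minus the critical t (on n - 1 df) times the standard error, as above.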

Test of Homogeneity of Variance

Levene's statistic is calculated for the variances in this ANOVA. If this is significant, we have evidence that the homogeneity assumption has been violated. If it is a problem, you can re-run the analysis selecting the "Games-Howell" option for "Equal Variances Not Assumed". Here the assumption is not violated.
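The same check is available outside SPSS; a sketch with made-up data using `scipy.stats.levene`. One detail worth knowing: SciPy's default centres each group on its median (the Brown-Forsythe variant), whereas the classic Levene statistic SPSS reports centres on the mean, so pass `center='mean'` to match:

```python
import numpy as np
from scipy import stats

# Hypothetical recall scores for five groups of n = 10 (not the chapter's data).
rng = np.random.default_rng(7)
groups = [rng.normal(loc=10, scale=2.0, size=10) for _ in range(5)]

# center='mean' matches the classic Levene statistic SPSS reports;
# SciPy's default, center='median', is the Brown-Forsythe variant.
stat, p = stats.levene(*groups, center='mean')

if p < .05:
    print("Homogeneity assumption in doubt; consider Games-Howell.")
else:
    print(f"Assumption not violated: W = {stat:.2f}, p = {p:.3f}")
```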

ANOVA

The answer! i.e., the Summary Table. Note that the DFs are correct. The Between Groups DF is k - 1 (i.e., the number of groups minus one) and the Total DF is 49 (i.e., one less than the total number of observations). The Summary Table contains the main information we need to answer our research question. Here we can deduce that a significant result has been found: F(4,45) = 9.09, p < .001. This result is (highly) significant.
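The same F-test can be sketched with `scipy.stats.f_oneway` (again with made-up scores, not the chapter's data); note how the degrees of freedom work out to 4, 45, and 49 for five groups of ten:

```python
import numpy as np
from scipy import stats

# Hypothetical recall scores: five groups of n = 10 (not the chapter's data).
rng = np.random.default_rng(42)
groups = [rng.normal(loc=10 + 2 * (g // 2), scale=2.0, size=10) for g in range(5)]

F, p = stats.f_oneway(*groups)

k = len(groups)                   # number of groups
N = sum(len(g) for g in groups)   # total observations
df_between = k - 1                # 4
df_within = N - k                 # 45
df_total = N - 1                  # 49

print(f"F({df_between},{df_within}) = {F:.2f}, p = {p:.3f}")
```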

Note the significance is given as ".000". Normally I recommend quoting the probability exactly, but in the case of all zeros it doesn't make sense to say p = .000. A probability of zero means that the result is impossible! What is really meant, of course, is that the probability rounded to three decimal places is zero. In reality, the probability is something like .000257 (say). The most accurate way to report this is p < .001. That is, use the same number of decimal places, change the last digit to 1, and use the < sign.
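This reporting rule is mechanical enough to capture in a small (hypothetical) helper function, which quotes p exactly unless it would round to all zeros:

```python
def report_p(p, decimals=3):
    """Format a p-value in APA-like style: exact, or 'p < .001' when
    it would round to .000 at the given number of decimal places."""
    threshold = 10 ** -decimals
    if p < threshold:
        return f"p < {threshold:.{decimals}f}".replace("0.", ".")
    return f"p = {p:.{decimals}f}".replace("0.", ".")

print(report_p(0.000257))  # p < .001
print(report_p(0.046))     # p = .046
```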

Because we have a significant F-value, we now know that not all the means are equal (i.e., reject Ho in favour of H1). However, we do not yet know exactly which means are significantly different to which other means. For this we need Tukey's HSD.

Multiple Comparisons

Here are the results of all pairwise comparisons using Tukey's HSD. I find the "Matrix of Ordered Means" described earlier an easier way of working out which means are significantly different to which other ones, but all the information is here (plus extra that you don't really need). From the table we can see that Group 1 differed from Group 2 by .1 (notice that .1 = 1.000E-01) and that this difference was not significant.

However, Group 1 was significantly different to Group 3 (only just) and to Groups 4 and 5. Group 2 is significantly different to Groups 3, 4, and 5. Group 3 is significantly different to Groups 1 and 2 only. Group 4 is significantly different to Groups 1 and 2 only. Group 5 is significantly different to Groups 1 and 2 only.

Note that the actual value for Tukey's HSD is not printed anywhere. This is annoying, because once you know it, you do not need the full table: you can just look at any two means and say whether they differ significantly by comparing their difference to the HSD value. Here it is 3.97, so any means differing by more than this are significantly different to each other. Therefore, for example, we find that Groups 1 and 3 are significantly different to each other (only just, i.e., p = .046).
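If you do want the HSD value itself, it can be computed from the studentized range distribution: HSD = q(.95; k, df-within) times the square root of MS-within over n. A sketch with made-up scores (not the chapter's data), using the `tukey_hsd` and `studentized_range` routines in recent versions of SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical recall scores: five groups of n = 10 (not the chapter's data).
rng = np.random.default_rng(11)
groups = [rng.normal(loc=10 + 3 * (g >= 2), scale=2.0, size=10) for g in range(5)]

# All pairwise comparisons, as in the SPSS "Multiple Comparisons" table.
res = stats.tukey_hsd(*groups)          # res.pvalue is a k x k matrix

# The HSD critical difference itself, which SPSS does not print:
k, n = len(groups), len(groups[0])
df_within = k * (n - 1)                 # 45 here
ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / df_within
q_crit = stats.studentized_range.ppf(0.95, k, df_within)
hsd = q_crit * np.sqrt(ms_within / n)

print(f"Any two means differing by more than HSD = {hsd:.2f} differ significantly.")
```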

Homogeneous subsets

Not very useful. Basically this reflects the same information as the previous table. Here Groups 1 and 2 are grouped together because they do not differ from each other. Groups 3, 4, and 5 are also grouped together because they do not differ from each other, but they do differ from Groups 1 and 2.

Note that at the bottom of the table, SPSS prints "Uses Harmonic Mean Sample Size = 10.00". If we have different sample sizes, SPSS automatically uses the harmonic mean in the calculations.
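You can check that figure by hand with Python's standard library: with equal ns the harmonic mean simply equals n, which is why SPSS prints 10.00 here, while with unequal ns it is pulled toward the smaller groups:

```python
from statistics import harmonic_mean

# With equal ns the harmonic mean equals the ordinary mean, hence SPSS's 10.00:
print(harmonic_mean([10, 10, 10, 10, 10]))

# With unequal group sizes it is pulled toward the smaller groups:
print(round(harmonic_mean([5, 10, 15]), 2))  # 8.18
```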


© Copyright 2000 University of New England, Armidale, NSW, 2351. All rights reserved

Maintained by Dr Ian Price
Email: iprice@turing.une.edu.au