
 

Chapter 7: Analysing the Data
Part IV : Analysis of Variance

 

One-Way ANOVA
General comments

Although ANOVA is an extension of the two-group comparison embodied in the t-test, understanding ANOVA requires some shift in logic. In the t-test, if we wanted to know whether there was a significant difference between two groups, we simply subtracted one mean from the other and divided the difference by a measure of random error (the standard error of the difference). But when it comes to comparing three or more means, it is not clear which means should be subtracted from which other means.
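For comparison, here is a minimal sketch (in Python) of that t-test logic: subtract one mean from the other and divide by the standard error of the difference. The two small groups of scores are purely hypothetical and chosen only for illustration.

import math

group_a = [5.0, 6.0, 7.0, 6.5, 5.5]   # hypothetical scores, group A
group_b = [8.0, 7.5, 9.0, 8.5, 7.0]   # hypothetical scores, group B

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

# Pooled (error) variance from the two groups.
ss_a = sum((x - mean_a) ** 2 for x in group_a)
ss_b = sum((x - mean_b) ** 2 for x in group_b)
pooled_var = (ss_a + ss_b) / (len(group_a) + len(group_b) - 2)

# Standard error of the difference between the two means.
se_diff = math.sqrt(pooled_var * (1 / len(group_a) + 1 / len(group_b)))

# t = (difference between the means) / (measure of random error)
t = (mean_a - mean_b) / se_diff
print(round(t, 3))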

For example, with five means,

Mean 1    Mean 2    Mean 3    Mean 4    Mean 5
 7.0       6.9      11.0      13.4      12.0

we could compare Mean 1 against Mean 2, Mean 3, Mean 4, or Mean 5; Mean 2 against Mean 3, Mean 4, or Mean 5; Mean 3 against Mean 4 or Mean 5; and finally Mean 4 against Mean 5. This gives a total of 10 possible two-group comparisons (in general, k means allow k(k - 1)/2 pairwise comparisons). Obviously, the logic used for the t-test cannot be transferred directly to ANOVA.
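A quick way to see where the figure of 10 comes from is simply to list every pair of the five means; the short Python snippet below (purely illustrative) does exactly that.

from itertools import combinations

# All distinct pairs that can be formed from the five means.
pairs = list(combinations(["Mean 1", "Mean 2", "Mean 3", "Mean 4", "Mean 5"], 2))
print(len(pairs))   # 10 possible two-group comparisons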

Instead, ANOVA uses the simple logic of comparing variances (hence the name 'Analysis of Variance'). If the variance amongst the five means is significantly greater than our measure of random error variance, then our means must be more spread out than we would expect due to chance alone.

If the variance amongst our sample means is the same as the error variance, we would expect F = 1.00. If the variance amongst our sample means is greater than the error variance, we would get F > 1.00. What we need, therefore, is a way of deciding when this F-ratio is significantly greater than 1.00. (An F < 1.00 is of little interest; F is always > 0 because variances are always positive.)

The answer to this question lies in the distribution of the F-ratio. An F-ratio is simply the ratio of any two variances. In the case of the between-groups ANOVA, the two variances we are interested in are the ones just described: the variance amongst the sample means and the error variance.
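The sketch below shows those two variances being formed and divided to give F: the between-groups Mean Square in the numerator and the within-groups (error) Mean Square in the denominator. The three small groups of raw scores, and the equal group size, are hypothetical assumptions for illustration only.

# Three small groups of hypothetical scores.
groups = [
    [4.0, 5.0, 6.0, 5.0],
    [7.0, 8.0, 6.0, 7.0],
    [9.0, 10.0, 8.0, 9.0],
]

k = len(groups)                 # number of groups
n = len(groups[0])              # scores per group (equal n assumed)
grand_mean = sum(sum(g) for g in groups) / (k * n)
group_means = [sum(g) / n for g in groups]

# Between-groups Mean Square: how spread out the group means are.
ss_between = n * sum((m - grand_mean) ** 2 for m in group_means)
ms_between = ss_between / (k - 1)

# Within-groups (error) Mean Square: pooled variance inside the groups.
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)
ms_within = ss_within / (k * (n - 1))

F = ms_between / ms_within
print(round(F, 2))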

F distributions depend on the degrees of freedom associated with the numerator in the ratio and the degrees of freedom associated with the denominator. Figure 7.1 shows three different F distributions corresponding to three different combinations of numerator df and denominator df.

Figure 7.1. Different F distributions for different combinations of numerator and denominator degrees of freedom. Notice "variance expected from sampling error" is sometimes called "WITHIN" variance or "within-subjects" variance, which indicates where it comes from.

You will see that each distribution is not symmetrical and has its peak at about F = 1.00. With degrees of freedom of 3 and 12, a calculated F-value greater than 3.49 is a significant result (p < .05). If the calculated F-value is greater than 5.95, the result is significant at the alpha = .01 level. With 2 and 9 df, the corresponding values are 4.26 and 8.02. (You will be pleased to know that there are no one-tailed tests in ANOVA.)
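If SciPy happens to be available, the critical values quoted above can be checked directly from the F distribution; the 0.95 and 0.99 quantiles correspond to alpha = .05 and alpha = .01.

from scipy import stats

print(round(stats.f.ppf(0.95, 3, 12), 2))   # 3.49, alpha = .05 with df = 3, 12
print(round(stats.f.ppf(0.99, 3, 12), 2))   # 5.95, alpha = .01 with df = 3, 12
print(round(stats.f.ppf(0.95, 2, 9), 2))    # 4.26, alpha = .05 with df = 2, 9
print(round(stats.f.ppf(0.99, 2, 9), 2))    # 8.02, alpha = .01 with df = 2, 9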

Variance

Variance was covered earlier but as a reminder . . .

variance = (standard deviation)²

In ANOVA terminology, variance is often called Mean Square. This is because

variance = Sums of Squares / (N - 1)

That is, variance is equal to the Sums of Squares divided by N - 1. Since N - 1 is approximately the number of observations, variance is in effect an average Sum of Squares, or Mean Square for short.
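As a small numerical illustration (reusing the five values from the table above as if they were raw observations), the Mean Square is just the Sums of Squares divided by N - 1:

scores = [7.0, 6.9, 11.0, 13.4, 12.0]        # treated here as raw observations

n = len(scores)
mean = sum(scores) / n
ss = sum((x - mean) ** 2 for x in scores)    # Sums of Squares
mean_square = ss / (n - 1)                   # variance, i.e. Mean Square
sd = mean_square ** 0.5                      # standard deviation

print(round(ss, 2), round(mean_square, 2), round(sd, 2))   # approx. 35.15 8.79 2.96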

 

 

 
