Chapter 5: Analysing the Data
Part II: Inferential Statistics

 

Type I and Type II errors

When we set alpha at a specified level (say, 0.05), we automatically specify how much confidence (0.95) we will have in a decision to "fail to reject Ho" if Ho really is the true state of affairs. To put a more concrete meaning on these numbers, consider doing the exact same experiment 100 times, each time using a different random sample. [Recall the discussion of the nature of a sampling distribution, which this sort of repetition gives rise to.] If we set alpha = 0.05 (and consequently 1 - alpha = 0.95), then if Ho is really true we should expect to make an incorrect decision in 0.05 x 100 = 5 of these 100 experiments (a 5% chance of error), and a correct one 95% of the time. Thus, alpha states what chance of making an error (by falsely concluding that the null hypothesis should be rejected) we, as researchers, are willing to tolerate in the particular research context.
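To see this long-run interpretation in action, here is a minimal Python sketch (ours, not part of these notes); the population, sample size, and choice of a one-sample t-test are illustrative assumptions only.

    # Illustrative settings: when Ho is true, a test at alpha = 0.05
    # should reject it in about 5% of repeated experiments; every such
    # rejection is a Type I error.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    alpha = 0.05
    n_experiments = 10_000      # many repetitions of the "same" experiment
    n_per_sample = 30

    false_rejections = 0
    for _ in range(n_experiments):
        # Ho really is true here: each sample comes from a population
        # with mean 0, so any rejection of Ho is an error.
        sample = rng.normal(loc=0.0, scale=1.0, size=n_per_sample)
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        if p_value < alpha:
            false_rejections += 1

    print(f"Proportion of Type I errors: {false_rejections / n_experiments:.3f}")
    # Prints a value close to 0.05, matching the long-run meaning of alpha.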

That is, our basis for deciding whether our sample is a "reasonable estimate" of a population value is whether the sample event is particularly unlikely or not. Now it happens, of course, that unusual, rare, and unlikely events do occur now and again just by chance, and do not necessarily imply that something meaningful has occurred or that something has caused the event. In our example sampling distribution, just by chance you might have drawn 0, 0, 0, and 0. According to our decision-making rules, this outcome is so unlikely to have occurred by chance that we should reject Ho in favour of the alternative. You must have been peeking when you drew out the squares! But if it was a truly random selection, we would be making a Type I error: we would claim to have evidence of a selection rule being used (i.e., peeking) but would in fact be wrong in concluding this.
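The details of the drawing example are in an earlier chapter, but the arithmetic is easy to sketch. Assuming, purely for illustration, ten squares numbered 0 to 9 drawn four times with replacement:

    # Hypothetical numbers (the original example's details are in an
    # earlier chapter): how likely is drawing 0, 0, 0, 0 purely by chance?
    p_single_zero = 1 / 10
    p_all_four_zero = p_single_zero ** 4
    print(p_all_four_zero)  # 0.0001, far below alpha = 0.05

Under these assumptions the decision rule would reject Ho, yet the draw really did happen by chance, which is exactly a Type I error.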

When we consider the case where Ho is not the true state of affairs in the population (i.e., Ho is false), we move into an area of statistics concerned with the power of a statistical test. If Ho is false, we want to have a reasonable chance of detecting it using our sample information. Of course, there is always the chance that we would fail to detect a false Ho, which yields the Type II or beta error. However, a beta error is generally considered less severe or costly than an alpha error. We must be aware of the power of a statistical test (the test's ability to detect a false null hypothesis) because we want to reject Ho if it really should be rejected in favour of H1. Hence we focus on power, 1 - beta, which is the probability of correctly rejecting a false Ho. See Ray and Howell for more discussion of these types of errors.
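Power can be estimated by the same kind of repetition, only now with Ho false. In the sketch below (again ours, with illustrative settings), the true population mean is 0.5 while Ho claims it is 0, so each failure to reject is a Type II error and the rejection rate estimates 1 - beta.

    # Illustrative settings: Ho (population mean = 0) is false here, so
    # the proportion of rejections estimates power = 1 - beta.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    alpha = 0.05
    n_experiments = 10_000
    n_per_sample = 30
    true_mean = 0.5             # the real population mean; Ho says it is 0

    rejections = 0
    for _ in range(n_experiments):
        sample = rng.normal(loc=true_mean, scale=1.0, size=n_per_sample)
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        if p_value < alpha:
            rejections += 1

    power = rejections / n_experiments
    print(f"Estimated power (1 - beta): {power:.3f}")
    print(f"Estimated beta (Type II error rate): {1 - power:.3f}")

Increasing the sample size, or a larger true departure from Ho, raises the estimated power, which is why power considerations belong in the design stage of a study.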

© Copyright 2000 University of New England, Armidale, NSW, 2351. All rights reserved

Maintained by Dr Ian Price
Email: iprice@turing.une.edu.au