
 

Chapter 5: Analysing the Data
Part II: Inferential Statistics

 

Bonferroni

The problem of multiple tests of significance. In many kinds of research, particularly surveys and studies using batteries of tests, and with certain statistical methods (such as multiple regression and analysis of variance), many statistical hypotheses (more than, say, 10) are frequently tested, perhaps on different combinations of dependent and independent variables. In such studies, setting alpha = 0.05 for each test does not provide sufficient protection against Type I error. As the number of separate hypothesis tests within a single study increases, the true alpha-level for the entire study (regardless of where you think you have set it) becomes inflated. An approximate formula for how much alpha increases as a function of the number of hypotheses you test is

actual alpha-level = 1 - (1 - alpha you think you are setting)^j

where j is the number of significance tests being done or contemplated. Thus, if we think we are setting alpha at 0.05 in our study, but we are contemplating testing 20 statistical hypotheses (say, 20 t-tests), our actual chance of claiming a significant result when there should not be one is 1 - (1 - 0.05)^20 = 0.64. That is, we really have a 64% chance of making at least one Type I error. This is quite a substantial chance! Moreover, we do not know which of the significant results are errors and which are not.
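
As a quick illustration, here is a minimal sketch in Python of the inflation formula above, using the figures from this example (the function name familywise_alpha is purely illustrative):

def familywise_alpha(nominal_alpha, j):
    """Approximate probability of at least one Type I error across j independent tests."""
    return 1 - (1 - nominal_alpha) ** j

print(familywise_alpha(0.05, 20))  # prints roughly 0.64
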

There are a couple of ways of coping with this problem in multivariate studies. The best way is to use special multivariate statistical procedures that are designed to help protect against this alpha inflation problem either by conducting a single test on all dependent variables simultaneously or by providing ways to statistically reduce the number of variables being dealt with.

Another approach is attributed to a gentleman named Bonferroni, who devised a simple way to help control the alpha-inflation problem. To have a study with an overall alpha-level equal to some pre-specified level (say 0.05), merely divide this level equally among the j tests of significance being contemplated. Thus, for each significance test, we use

new alpha-level = (desired alpha for the entire study) / j

In the previous example, with 20 significance tests, we would use a new alpha-level of 0.05/20 = 0.0025 for each separate significance test. In this way, we gain better protection against alpha-level inflation due to doing multiple tests of significance. We can check this by computing the inflated-alpha formula with the new alpha-level: 1 - (1 - 0.0025)^20 = 0.0488, which is very close to an overall alpha of 0.05. We'll refer to this adjustment of the alpha-level as the Bonferroni adjustment procedure. Of course, we are now in greater danger of making a Type II error, that is, finding no significant differences when in fact they are there. In the present case, a p-value less than 0.0025 might be hard to obtain, and hence we might overlook an important finding.
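
A companion sketch in Python (again purely illustrative, using the same figures as above) shows the Bonferroni adjustment and the check described in this paragraph:

# Bonferroni adjustment: divide the desired overall alpha equally among the j tests.
desired_overall_alpha = 0.05
j = 20
per_test_alpha = desired_overall_alpha / j
print(per_test_alpha)                 # 0.0025

# Check: the inflated alpha using the adjusted per-test level stays near 0.05.
print(1 - (1 - per_test_alpha) ** j)  # about 0.0488
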

 

 

 
