

Chapter 5: Analysing the Data
Part II : Inferential Statistics



There are several interrelated issues to consider when we set up a research design to evaluate a statistical hypothesis. Let's depict a hypothesis testing situation graphically using two normal distributions: one to represent the sampling distribution for the null hypothesis and the other to represent the sampling distribution for the alternative hypothesis. We use normal distributions because of the central limit theorem, which states that the sampling distribution of the mean from any population looks more and more normal as the sample size, N, is increased. Thus, if we assume a reasonable sample size, we are justified in using normal distributions to represent the situation. Remember that we are using sampling distributions (in this case, for the sample mean) as standards against which to compare our computed sample statistic.
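The central limit theorem can be illustrated with a short simulation (a minimal sketch using only Python's standard library; the population choice and variable names are ours). We draw repeated samples from a deliberately skewed (exponential) population with mean 1 and standard deviation 1, and look at the distribution of the sample means, which the theorem predicts will be roughly normal with mean 1 and standard error 1/sqrt(N):

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)   # fixed seed so the simulation is reproducible
N = 50            # size of each sample
REPS = 2000       # number of samples drawn

# Each entry is the mean of one sample of N exponential(1) observations.
sample_means = [mean(random.expovariate(1.0) for _ in range(N))
                for _ in range(REPS)]

print(round(mean(sample_means), 3))   # close to the population mean, 1.0
print(round(stdev(sample_means), 3))  # close to 1/sqrt(50), about 0.141
```

Even though each individual observation comes from a strongly skewed population, the collection of sample means is tightly and symmetrically clustered around the population mean.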

In general terms, suppose we evaluate the specific hypothesis that µ has one value (µHo) or another value (µH1) using a sample of size N:

Ho: µ = µHo versus H1: µ = µH1

[Figure: two overlapping normal sampling distributions, one centred at µHo and one at µH1.]

There are four factors that interact when we consider setting significance levels and power:

1. Power: 1 - beta (the probability of correctly concluding "Reject Ho")

2. Significance level: alpha (probability of falsely concluding "Reject Ho")

3. Sample size: N

4. Effect size: e (the separation between the null hypothesis value and a particular value specified for the alternative hypothesis). Think of this as X bar - µ (i.e., the numerator in the t-test).

Once we know any three of these quantities, the fourth one is automatically determined. Let's examine the ways these factors can be varied to increase or decrease the power of a statistical test.
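The interplay of alpha, N, and e can be made concrete with a small power calculation for a one-tailed z-test (a minimal sketch; the helper name `power_z` and the use of Python's standard-library `NormalDist` are our own illustration, not part of the chapter):

```python
from math import sqrt
from statistics import NormalDist

def power_z(alpha, n, effect, sigma=1.0):
    """Power of a one-tailed z-test of Ho: mu = muHo against
    H1: mu = muHo + effect, with known population sd sigma."""
    z_crit = NormalDist().inv_cdf(1 - alpha)   # cut-off under Ho
    shift = effect * sqrt(n) / sigma           # separation in standard-error units
    return NormalDist().cdf(shift - z_crit)    # area beyond the cut-off under H1

# With alpha = .05, N = 25 and an effect of half a standard deviation,
# power comes out at roughly .80 -- a conventional target.
print(round(power_z(0.05, 25, 0.5), 3))
```

Fixing any three of the four quantities pins down the fourth: here alpha, N, and e are set, and power follows.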

1. One way to increase power is to relax the significance level, alpha [if e and N remain constant].

Note how the size of the power area is expanded when alpha is relaxed.
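The same point can be checked numerically (a sketch reusing the one-tailed z-test power function from above; the function name is our own):

```python
from math import sqrt
from statistics import NormalDist

def power_z(alpha, n, effect, sigma=1.0):
    # power of a one-tailed z-test, as a minimal illustration
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(effect * sqrt(n) / sigma - z_crit)

# Hold N = 25 and e = 0.5 fixed; relax alpha and watch power climb.
for a in (0.01, 0.05, 0.10):
    print(a, round(power_z(a, 25, 0.5), 3))
```

Relaxing alpha moves the rejection cut-off toward the null mean, so a larger slice of the alternative distribution falls in the rejection region.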

2. Another way to increase power is to increase the sample size, N [if alpha and e remain constant].

This happens because the variance of a sampling distribution gets smaller when sample size is increased (recall our example on sampling distributions earlier in this Topic), so the tails get pulled in toward the mean. An interesting thing to note here is that, with a large enough N, almost anything, no matter how trivial, can be found to be statistically significant!
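Both points can be demonstrated numerically (again a sketch with our illustrative one-tailed z-test power function): power rises steadily with N because the standard error shrinks as 1/sqrt(N), and with an enormous N even a trivial effect is almost certain to reach significance.

```python
from math import sqrt
from statistics import NormalDist

def power_z(alpha, n, effect, sigma=1.0):
    # power of a one-tailed z-test, as a minimal illustration
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(effect * sqrt(n) / sigma - z_crit)

# Hold alpha = .05 and e = 0.2 fixed; increase N.
for n in (25, 100, 400):
    print(n, round(power_z(0.05, n, 0.2), 3))

# A trivial effect (0.02 of a standard deviation) becomes almost
# certain to be declared significant once N is huge:
print(round(power_z(0.05, 100_000, 0.02), 3))
```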

3. Yet another way to increase power is to look only for a larger effect size [if alpha and N remain constant].

The larger effect size pulls the two distributions apart, thus lessening their overlap; consequently, differences are easier to detect. An interesting consideration here is that if you need to be able to detect a fairly small effect size, you would need to relax alpha, increase N, or both in order to keep from losing power. The most common way of increasing power in an experiment or survey is to increase the sample size: by doing so you are more likely to pick up a real difference if one is really there.
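The effect of e on power can also be tabulated (a sketch with our illustrative one-tailed z-test power function, holding alpha and N fixed):

```python
from math import sqrt
from statistics import NormalDist

def power_z(alpha, n, effect, sigma=1.0):
    # power of a one-tailed z-test, as a minimal illustration
    z_crit = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(effect * sqrt(n) / sigma - z_crit)

# alpha = .05 and N = 25 fixed; only the effect size to detect changes.
# (0.2, 0.5 and 0.8 are Cohen's conventional small/medium/large values.)
for e in (0.2, 0.5, 0.8):
    print(e, round(power_z(0.05, 25, e), 3))
```

Larger separations between µHo and µH1 leave less overlap between the two sampling distributions, so power rises sharply with e.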

4. Finally, power can be increased if a directional hypothesis can be stated (based on previous research findings or deductions from theory).
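The gain from a directional hypothesis shows up in the critical value: a one-tailed test at alpha = .05 rejects beyond z = 1.645, whereas a two-tailed test splits alpha across both tails and rejects only beyond z = 1.96. The sketch below (our own illustration; the two-tailed calculation ignores the negligible rejection area in the wrong tail, a standard approximation) compares the two:

```python
from math import sqrt
from statistics import NormalDist

def power_z(alpha, n, effect, sigma=1.0, tails=1):
    # approximate power of a z-test; tails=2 splits alpha across both tails
    z_crit = NormalDist().inv_cdf(1 - alpha / tails)
    return NormalDist().cdf(effect * sqrt(n) / sigma - z_crit)

one = power_z(0.05, 25, 0.5, tails=1)   # directional: critical z ~ 1.645
two = power_z(0.05, 25, 0.5, tails=2)   # non-directional: critical z ~ 1.96
print(round(one, 3), round(two, 3))
```

With alpha, N, and e all held constant, the directional test is the more powerful of the two.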




© Copyright 2000 University of New England, Armidale, NSW, 2351. All rights reserved

Maintained by Dr Ian Price
Email: iprice@turing.une.edu.au