
Chapter 5: Analysing the Data
Part II: Inferential Statistics

 

Constructing a sampling distribution

An example of how a sampling distribution is constructed is shown for a small population of five scores (0, 2, 4, 6, 8).

Population: [0, 2, 4, 6, 8]   µ = 4.0   σ = 2.828

Repeated sampling with replacement for different sample sizes produces different sampling distributions: a sampling distribution therefore depends very much on sample size. For example, with samples of size two, we would first draw a number, say a 6 (the chance of this is 1 in 5 = 0.2, or 20%). We then put the number back and draw another one, say an 8. The mean of our N = 2 sample is (6 + 8)/2 = 7. We would then put the drawn number back into the population and repeat the process.

σ is calculated using N in the denominator rather than N - 1 because we have a population, not a sample.

The table below gives the different values of the sample mean that are possible for each sample size, along with the probability of each mean value occurring. Also included is the number of times you would expect each mean on the basis of 30 samples. For example, with N = 2, the only way to get a mean of 0.0 is if both the first and the second draw are zeros. The probability is therefore 1/5 × 1/5 = 1/25 = 0.04; in 30 draws you would expect 0.04 × 30 = 1.2 such means (i.e., probably one and maybe two).
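If you would like to check these probabilities yourself, the short Python sketch below (an illustration, not part of the original notes) enumerates every equally likely ordered sample drawn with replacement and tallies the probability of each possible mean. Its output for N = 2 matches the first three columns of the table.

    # A sketch: enumerate all ordered samples of size n drawn with replacement
    # from the mini-population and tally the probability of each sample mean.
    from itertools import product
    from collections import Counter

    population = [0, 2, 4, 6, 8]

    def sampling_distribution(n):
        """Exact sampling distribution of the mean for samples of size n."""
        counts = Counter(sum(s) / n for s in product(population, repeat=n))
        total = len(population) ** n     # 5**n equally likely ordered samples
        return {xbar: c / total for xbar, c in sorted(counts.items())}

    for xbar, prob in sampling_distribution(2).items():
        print(f"x-bar = {xbar:4.1f}   prob = {prob:.4f}   expect in 30 = {prob * 30:.2f}")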

For a sample of size:

      N = 2                      N = 3                      N = 4
 x̄    Prob.   Expect    |  x̄     Prob.   Expect    |  x̄     Prob.    Expect
 0    0.04    1.2       | 0.00   0.008   0.24      | 0.00   0.0016   0.048
 1    0.08    2.4       | 0.67   0.024   0.72      | 0.50   0.0064   0.192
 2    0.12    3.6       | 1.33   0.048   1.44      | 1.00   0.0160   0.48
 3    0.16    4.8       | 2.00   0.080   2.4       | 1.50   0.0320   0.96
 4    0.20    6.0       | 2.67   0.120   3.6       | 2.00   0.0560   1.68
 5    0.16    4.8       | 3.33   0.144   4.32      | 2.50   0.0832   2.496
 6    0.12    3.6       | 4.00   0.152   4.56      | 3.00   0.1088   3.264
 7    0.08    2.4       | 4.67   0.144   4.32      | 3.50   0.1280   3.84
 8    0.04    1.2       | 5.33   0.120   3.6       | 4.00   0.1360   4.08
      1.00   30.0       | 6.00   0.080   2.4       | 4.50   0.1280   3.84
                        | 6.67   0.048   1.44      | 5.00   0.1088   3.264
                        | 7.33   0.024   0.72      | 5.50   0.0832   2.496
                        | 8.00   0.008   0.24      | 6.00   0.0560   1.68
                        |        1.000  30.0       | 6.50   0.0320   0.96
                        |                          | 7.00   0.0160   0.48
                        |                          | 7.50   0.0064   0.192
                        |                          | 8.00   0.0016   0.048
                        |                          |        1.0000  30.0

Now, we can plot each sampling distribution using the probability or proportion on the Y-axis to get a better feel for what is happening.
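For instance, the following sketch (assuming the matplotlib library is available, and reusing the sampling_distribution() helper from the earlier sketch) draws one bar chart of probability against sample mean for each sample size:

    import matplotlib.pyplot as plt

    fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
    for ax, n in zip(axes, (2, 3, 4)):
        dist = sampling_distribution(n)
        ax.bar(list(dist.keys()), list(dist.values()), width=0.3)
        ax.set_title(f"N = {n}")
        ax.set_xlabel("sample mean (x-bar)")
    axes[0].set_ylabel("probability")
    plt.show()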

Note that the population mean (µ) is known to be 4.0 and the population standard deviation (σ) is known to be 2.828 in this small example. For sample sizes of N = 2, 3, and 4, the most likely (most probable) value for x̄ in the sampling distribution is, in fact, 4.0, the value of µ. The fact that this happens for the statistic we call the sample mean gives rise to the idea that the sample mean is an unbiased estimator of the population mean. In other words, on average (or, equivalently, in the long run), we expect that the mean of a sample of a given size will equal the population value; it will not be biased to one side or the other of the population value.

For any particular sample, our estimate might miss the value of µ to some degree (either too high or too low), so our sampling distribution will have a variance and a standard deviation. We call this standard deviation of the distribution of possible sample means the standard error of the mean (S.E.M.). An important feature of the standard error of the mean is that as the sample size (N) increases, the value of the standard error decreases. The actual relationship is

Standard error = S.E.M. = σ/√N

You can verify for yourself that this is true using our three example sampling distributions (to within rounding errors, at least). Thus, with larger sample sizes, we will tend, in the long run, to get more accurate estimates. The larger the size of the samples you take each time, the "tighter" the distribution becomes. The sampling distribution becomes less spread out, or more focussed, around the central value as N increases.
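As a sketch of that check (again reusing population and sampling_distribution() from the first sketch), the following lines compare the standard deviation of each exact sampling distribution with σ/√N:

    from math import sqrt

    # Population parameters (mu = 4.0, sigma = 2.828...), computed with N in
    # the denominator because this is a population, not a sample.
    mu = sum(population) / len(population)
    sigma = sqrt(sum((x - mu) ** 2 for x in population) / len(population))

    for n in (2, 3, 4):
        dist = sampling_distribution(n)
        m = sum(xbar * p for xbar, p in dist.items())    # always 4.0: unbiased
        sem = sqrt(sum(p * (xbar - m) ** 2 for xbar, p in dist.items()))
        print(f"N = {n}: S.E.M. = {sem:.3f}   sigma/sqrt(N) = {sigma / sqrt(n):.3f}")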

You should also note that the larger the sample size, the more "normal-looking" the sampling distribution appears (these last two facts are a direct consequence of the Central Limit Theorem). Values of x̄ become less probable the further they are from the population value. See the accompanying disk with these notes for an interactive demonstration of the Central Limit Theorem.

From the sampling distributions given above we can answer a number of questions about sample means. For example, how likely is it that a sample of size N = 4 selected from this mini-population would have a mean of x̄ = 2.0 or less? From the theoretical distribution for N = 4, we simply add up the probabilities of the values of x̄ at or below 2.0: 0.0016 + 0.0064 + 0.0160 + 0.0320 + 0.0560 = 0.112 (this is a "one-tailed" calculation). Hence our estimate of the probability of a sample mean this size or smaller occurring is p = .112. From our previous discussion, this is not an unlikely event.
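The same one-tailed calculation can be done directly from the enumerated distribution (a sketch, reusing sampling_distribution() from above):

    # One-tailed probability: add up P(x-bar) for every mean at or below 2.0.
    p = sum(prob for xbar, prob in sampling_distribution(4).items() if xbar <= 2.0)
    print(f"P(x-bar <= 2.0 for N = 4) = {p:.3f}")    # 0.112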

Sampling distributions can be constructed for any statistic we like. For instance, for each of our small samples in the exercise, we could have recorded only the largest number, giving a sampling distribution of maximums for samples of a given size. We might even have been interested in adding the first and last numbers and taking the cube root! In each case we get a sampling distribution with a particular shape and particular properties. In the present case we have constructed a sampling distribution of the mean because it is usually the average value of a variable that is of most interest to us.
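To make the first of these concrete, the sketch below (an illustration, not part of the original notes) builds the exact sampling distribution of the sample maximum for N = 2 using the same enumeration idea:

    from itertools import product
    from collections import Counter

    population = [0, 2, 4, 6, 8]

    # Tally how often each value turns up as the larger of the two draws.
    counts = Counter(max(s) for s in product(population, repeat=2))
    total = len(population) ** 2
    for value, c in sorted(counts.items()):
        print(f"max = {value}   prob = {c / total:.2f}")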

The general pattern of statistical tests is:

Test value = (sample value - hypothesised value) / standard error

For example, the single-sample t test is a test of the mean of a distribution and it looks like this:

t = (x̄ - µ) / (s/√N)

where s is the sample standard deviation, so s/√N estimates the standard error of the mean.

Note that the t test is the ratio of (a) the difference between the sample statistic and the value it is hypothesised to equal if H0 is true, to (b) the standard error of that sample statistic. Virtually all parametric significance tests we encounter have a similar form: a statistic (after adjusting for the H0 value) divided by its standard error. The resulting ratio is a number in units of standard error (analogous to a regular z-score, which is expressed in units of standard deviation). Thus the test statistic reports how many units of standard error the sample statistic is away from the null-hypothesis value. If the ratio is large enough, that is, if the sample statistic is far enough away from the value hypothesised under H0, then we have the evidence we need to reject H0.
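As a sketch (with made-up data, purely for illustration), the t ratio can be computed directly from its definition:

    from math import sqrt
    from statistics import mean, stdev

    sample = [3.1, 4.2, 5.0, 3.8, 4.6, 4.9, 3.5, 4.4]   # hypothetical scores
    mu0 = 4.0                                           # hypothesised value of mu under H0

    x_bar = mean(sample)
    sem = stdev(sample) / sqrt(len(sample))             # estimated standard error of the mean
    t = (x_bar - mu0) / sem
    print(f"t = {t:.3f}: x-bar lies {abs(t):.3f} standard errors from the H0 value")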

 

 

 
