
Chapter 6: Analysing the Data
Part III: Common Statistical Tests

 

Independent Samples t-Test

In this section, I illustrate how to test hypotheses involving the mean difference between two independent groups. We'll be working with an example from a reward and learning study in which two groups of children were scored on a Learning variable. The data are reproduced below in Figure 6.10, in the format that would be used to enter them into SPSS. The condition to which a child was randomly assigned is coded in this table as the "Group" variable, with a value of either 1 or 2. If Group = 1, the child was assigned to the no-reward group; if Group = 2, the child was assigned to the reward group. The values under "Learning" are the number of letters the child correctly pronounced during the testing phase.

The research hypothesis would be along the lines of: "Children who receive nonverbal reinforcement improve in their learning of the alphabet".

Group   Learning
  1        3
  1        7
  1        6
  1        2
  1        9
  1       11
  1       13
  1        8
  1       10
  1        2
  2        5
  2        8
  2       12
  2       12
  2       10
  2       17
  2       12
  2       10
  2       13
  2        8

Figure 6.10 Data for the independent groups t-test.
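For readers who want to follow along outside SPSS, the short sketch below (in Python, which is not part of the original course materials) shows the Figure 6.10 data entered in the same long format, one row per child.

    # Figure 6.10 data in the same long format used for SPSS data entry:
    # one row per child, Group coded 1 = no reward, 2 = reward.
    group    = [1] * 10 + [2] * 10
    learning = [3, 7, 6, 2, 9, 11, 13, 8, 10, 2,      # Group 1 (no reward)
                5, 8, 12, 12, 10, 17, 12, 10, 13, 8]  # Group 2 (reward)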

The five steps of hypothesis testing as they apply to the independent samples t-test are as follows.

Step 1: Stating the hypotheses

H0: μ1 = μ2

H1: μ1 ≠ μ2

Again we have a two-tailed test and an alpha level of .05. Notice that the research hypothesis is framed in the direction you would expect: an improvement in learning. Nevertheless, we will assess the hypothesis using a two-tailed test (!!). See your texts and the earlier discussion for the (somewhat confusing) rationale here.

Step 2: Checking the assumptions

1. All observations must be independent of each other

2. The dependent variable must be measured on an interval or ratio scale

3. The dependent variable must be normally distributed in the population (for each group being compared). (NORMALITY ASSUMPTION)

4. The distribution of the dependent variable for one of the groups being compared must have the same variance as the distribution for the other group being compared. (HOMOGENEITY OF VARIANCE ASSUMPTION)

See Howell, pp. 302-303 and 197-202. Note that Howell does not specify Assumption 2; however, it is a necessary part of parametric analysis. The data need to be measured along some metric. If the data represent counts or ranks, then nonparametric analyses must be performed.

The first two assumptions are dealt with by design considerations (i.e., before the data are collected). To ensure that the observations are independent, the researcher needs to make sure that each participant contributes his or her own data, uncontaminated by other people's scores, views, or mere presence. The last two assumptions can only be dealt with once the data are in. Note that the assumptions are worded in terms of population normality and population variances. Once the data are in, we have only a sample from that population, not the entire population. All we can do is check these assumptions in the sample and then assume that the conclusions we reach with the sample reflect the true state of affairs in the whole population. The homogeneity of variance assumption is checked in SPSS by Levene's test.
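If you are checking these assumptions in Python rather than SPSS, a rough equivalent is sketched below (it assumes the scipy package and uses the Figure 6.10 scores split by group). The Shapiro-Wilk test is one common check of the normality assumption, and Levene's test with center='mean' is intended to mirror the mean-based version of the test that SPSS reports; treat this as an illustrative sketch rather than the course's prescribed procedure.

    # A sketch of the assumption checks outside SPSS (assumes scipy is installed).
    from scipy import stats

    no_reward = [3, 7, 6, 2, 9, 11, 13, 8, 10, 2]
    reward    = [5, 8, 12, 12, 10, 17, 12, 10, 13, 8]

    # Shapiro-Wilk test of the normality assumption, one test per group.
    # A non-significant p value gives no evidence against normality.
    print(stats.shapiro(no_reward))
    print(stats.shapiro(reward))

    # Levene's test of the homogeneity of variance assumption;
    # center='mean' gives the mean-based form of the test.
    print(stats.levene(no_reward, reward, center='mean'))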

Step 3: Calculate the test statistic

See Output 6.6 for the results of this analysis and the SPSS Screens and Output booklet for instructions.

Output 6.6 Compare means -> Independent samples t-test

T-Test

Group Statistics. Here the groups are identified, and their sample sizes (N), means, standard deviations, and standard errors of the mean (S.E.M.) are given. You should check that the right variables have been selected and that the right number of cases has been analysed.

Independent Samples Test Table 1. Here is Levene's Test for Equality of Variances for the two groups of scores. We focus on the "Sig." column, which tells us that the assumption is not violated (the "Sig." value is not significant). [*** Note: this table and the following one print by default as one long table, but I have split the table into two sections so that the print does not become too small.]

Independent Samples Test Table 2. Here are the main results that we want. Because equal variances can be assumed, we follow along the top line ("Equal variances assumed"). The crucial number is in the "Sig. (2-tailed)" column, and it tells us that the observed difference between the means (-3.6) is significant.

The confidence interval for the difference between the means is also given. Notice that the null hypothesis value (i.e., zero) does not fall within this interval.
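For comparison with the SPSS output, a minimal Python sketch of the same equal-variances t-test, together with a 95% confidence interval for the difference in means built from the pooled variance, might look like the following. This is an illustrative sketch assuming the scipy package, not the course's SPSS procedure.

    # Independent samples t-test assuming equal variances (the "top line"
    # of the SPSS table), plus a 95% CI for the difference in means.
    import math
    from scipy import stats

    no_reward = [3, 7, 6, 2, 9, 11, 13, 8, 10, 2]
    reward    = [5, 8, 12, 12, 10, 17, 12, 10, 13, 8]

    result = stats.ttest_ind(no_reward, reward, equal_var=True)
    print(result.statistic, result.pvalue)       # roughly t = -2.247, p = .037

    # 95% CI for the mean difference, using the pooled variance and the
    # critical t value with n1 + n2 - 2 = 18 degrees of freedom.
    n1, n2 = len(no_reward), len(reward)
    df = n1 + n2 - 2
    mean_diff = sum(no_reward) / n1 - sum(reward) / n2      # -3.6 here
    pooled_var = ((n1 - 1) * stats.tvar(no_reward) +
                  (n2 - 1) * stats.tvar(reward)) / df
    se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
    t_crit = stats.t.ppf(0.975, df)                         # two-tailed, alpha = .05
    print(mean_diff - t_crit * se_diff, mean_diff + t_crit * se_diff)

As in the SPSS output, the interval excludes zero, which is consistent with the significant two-tailed result.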

Step 4: Evaluate the result

The output indicates that the observed difference between the means is significant, t(18) = -2.247, p = .037. We reject H0 in favour of H1. Eta-squared is .219: 21.9% of the variability in the letters recalled was explained by the reward manipulation.
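The eta-squared value quoted here can be obtained by hand from the printed t value and its degrees of freedom, using the usual formula for eta-squared from a t-test:

    η² = t² / (t² + df) = (-2.247)² / ((-2.247)² + 18) = 5.05 / 23.05 ≈ .219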

Step 5: Interpret the result

A significant increase in letters recalled occurred in the nonverbal reinforcement (reward) group compared to the control group (t(18) = -2.247, p = .037, η² = .219).

The broader view might be to consider what the active ingredient in the new learning method is. What is it about the nonverbal reinforcement that appears to have worked? Perhaps the new method could also be applied to learning arithmetic tables? We are not given enough information about the study to comment much further on the broader implications.

 

 

 
