Chapter 5: Analysing the Data
Part II: Inferential Statistics

 

Our aim in statistical inference is to establish a logical decision-making process that allows us to say how valid or reliable a finding is, that is, how well it reflects the real or true state of affairs in the larger population. We can never be absolutely sure that the relationship we observe between two variables in a sample is truly representative of the larger population. All we can do is estimate how likely it is that what we observe in the sample reflects the true state of affairs. This is where probability and the concept of "significance" come in. Probability is used to estimate how likely it is that a particular finding is real or true, given a certain set of assumptions. The question then becomes "How do I decide whether to accept a finding as real or not?"

This decision-making process abounds with conventions: in particular, conventions about the most appropriate ways to handle probability, the correct terminology for a particular situation, and the correct way to present the results of your research. The process also involves a number of intermediate decisions, such as which test is most appropriate in a particular situation, what significance level to adopt, and what pitfalls to look out for.

In the previous chapter we considered the Normal Distribution (ND) as a special kind of distribution. It serves as our ideal distribution, and many things in nature approximate it. The ND has a number of desirable properties, checked numerically in the sketch following this list:

  • The distribution is symmetrical, and the mean, mode, and median are the same.
  • The shape of the normal distribution depends only on the mean and standard deviation.
  • The area under the normal distribution is 1.

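As a rough numerical check of these properties, the short sketch below (written in Python using the scipy library; the mean of 100 and standard deviation of 15 are arbitrary illustrative values) confirms the symmetry of the curve, the equality of the mean and median, and the total area of 1.

    # A minimal sketch, with illustrative values only, checking the listed
    # properties of the normal distribution numerically.
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    mu, sd = 100, 15   # arbitrary example mean and standard deviation

    # Symmetry: the curve has the same height equal distances above and below the mean.
    print(np.isclose(norm.pdf(mu - 10, mu, sd), norm.pdf(mu + 10, mu, sd)))  # True

    # The mean and the median coincide.
    print(norm.median(loc=mu, scale=sd))  # 100.0

    # The total area under the curve is 1 (numerical integration gives ~1).
    area, _ = quad(norm.pdf, -np.inf, np.inf, args=(mu, sd))
    print(round(area, 6))  # 1.0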
It is the last property that we deal with in this chapter. Probability is the likelihood of some future event actually happening. This construct is traditionally operationalised as a number between 0 and 1, where 0 implies a future event is impossible and 1 implies a future event is certain to occur. There are many grades of probability in between, so probability is a continuous construct. The probability of drawing an ace from a standard deck is 4/52 = 1/13 = .0769, and the probability of tossing a 3 on a standard die is 1/6 = .167. Tossing a 3 on a die is therefore more likely than drawing an ace from a deck of cards.
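The same arithmetic can be written out directly; the brief sketch below (again in Python, purely for illustration) reproduces these two probabilities and the comparison between them.

    # Probability as favourable outcomes divided by equally likely outcomes.
    p_ace = 4 / 52        # four aces in a standard 52-card deck
    p_three = 1 / 6       # one face showing 3 on a six-sided die

    print(round(p_ace, 4))    # 0.0769
    print(round(p_three, 3))  # 0.167
    print(p_three > p_ace)    # True: rolling a 3 is more likely than drawing an ace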

Probability comes into statistics by assigning a probability to a research event. Rather than the probability of drawing an ace from a finite deck of cards, we want the probability that a research event (usually a sample mean) has come from a population of interest by some random process. If the experiment were repeated, what would be the chance of getting the same result again? If the evidence says that a result as extreme as this is very unlikely under random sampling, we conclude the alternative: that the event has occurred as the result of some non-random process or some particular selection rule. The logic of experimental design then allows us to infer that this particular selection rule was in fact our manipulation of the independent variable, and hence to imply cause and effect.
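To make this logic concrete, the sketch below works through a hypothetical example (the population values and the sample result are invented for illustration): given a known population mean and standard deviation, it computes how probable a sample mean at least as extreme as the one observed would be if the sample really had been drawn at random from that population.

    # A sketch of the reasoning as a one-sample z-test, using hypothetical numbers.
    import math
    from scipy.stats import norm

    pop_mean, pop_sd = 100, 15    # assumed (known) population parameters
    sample_mean, n = 106, 36      # hypothetical sample mean and sample size

    # Standard error of the mean, and how many standard errors the observed
    # sample mean lies from the population mean.
    se = pop_sd / math.sqrt(n)
    z = (sample_mean - pop_mean) / se

    # Probability of a sample mean at least this extreme (in either direction)
    # if the sample really were a random draw from this population.
    p = 2 * norm.sf(abs(z))
    print(round(z, 2), round(p, 4))   # 2.4 0.0164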

 

 

 

© Copyright 2000 University of New England, Armidale, NSW, 2351. All rights reserved
