
 

Chapter 6: Analysing the Data
Part III: Common Statistical Tests

 

Testing the Significance of a Regression Line

To test whether one variable significantly predicts another, we need only test whether the correlation between the two variables is significantly different from zero (i.e., as above). In regression, a significant prediction means that a significant proportion of the variability in the predicted variable can be accounted for by (or "attributed to", or "explained by", or "associated with") the predictor variable.
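As a minimal sketch of this idea, the significance of a correlation can be tested by converting r to a t statistic with the standard formula t = r√(N − 2) / √(1 − r²), on N − 2 degrees of freedom. The data below are simulated for illustration; the variable names are hypothetical, not the REASON and creativity scores from the chapter.

```python
import numpy as np

# Simulated scores for two hypothetical variables
rng = np.random.default_rng(3)
n = 40
x = rng.normal(size=n)
y = 0.4 * x + rng.normal(size=n)

# Pearson correlation between the two variables
r = np.corrcoef(x, y)[0, 1]

# Convert r to t: t = r * sqrt(N - 2) / sqrt(1 - r^2), df = N - 2
t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
print(r, t)
```

A t this large (relative to the critical value for N − 2 df) would lead us to reject the hypothesis that the population correlation is zero.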

In a scatterplot, this amounts to whether or not the slope of the line of best fit is significantly different from horizontal. A horizontal line means there is no association between the two variables (r = 0). The slope of the line (b) is equal to

b = r (sY / sX)

Since the standard deviations of real variables are never zero, if the correlation r is zero, the slope of the line is also zero. If r is significantly negative, the line slopes significantly from high to low (reading left to right). If r is significantly positive, the line slopes significantly from low to high.
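The identity between the least-squares slope and r(sY/sX) can be checked numerically. This is a sketch with simulated data (the variables here are illustrative, not those from Output 6.2):

```python
import numpy as np

# Simulated predictor and criterion scores
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)

# Slope computed from the correlation and the two standard deviations
r = np.corrcoef(x, y)[0, 1]
slope_from_r = r * y.std(ddof=1) / x.std(ddof=1)

# Slope computed directly by least-squares fitting
slope_ols = np.polyfit(x, y, 1)[0]

print(slope_from_r, slope_ols)  # the two values agree
```

The two computations give the same number, which is why testing the slope against zero and testing r against zero are equivalent.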

The crucial part of the SPSS regression output is shown again below. There are two parts of the regression output to interpret. The first is whether the overall regression model is significant or not. This is found in the ANOVA table under "Sig.". Here it is ".000", which means the linear model significantly fits the data at the p < .001 level. Further details about the ANOVA table will be discussed in the next chapter.

The second part of the regression output to interpret is the "Sig." column of the Coefficients table. Here two values are given. One is the significance of the Constant ("a", or the Y-intercept) in the regression equation. In general this information is of very little use. It merely tells us that this value (5.231) is significantly different from zero: if somebody were to score zero on the logical reasoning task, we would predict a score of 5.231 for them on the creativity task.

The second "Sig." value gives the significance of each predictor of the dependent variable. In this simple case we are using only one predictor (logical reasoning), so because the overall model is significant, the "Sig." value next to REASON is the same as that for the overall model (and F = t²). In more advanced regression we might have several variables predicting the dependent variable, and even if the overall model is significant, not all of these predictors need be significant. In fact, the overall model could be significant while none of the individual predictors is (because the significance test for each predictor tests only its unique contribution to the variability, an important issue in multivariate statistics).
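The F = t² relationship for a single-predictor regression can be verified directly. The sketch below builds the ANOVA F from the regression and residual sums of squares, and the t from the slope and its standard error, using simulated data (not the scores behind Output 6.2):

```python
import numpy as np

# Simulated predictor and criterion scores
rng = np.random.default_rng(1)
n = 30
x = rng.normal(size=n)
y = 1.0 + 0.8 * x + rng.normal(size=n)

# Least-squares slope (b) and intercept (a)
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

# Sums of squares for the ANOVA table
ss_res = np.sum((y - y_hat) ** 2)          # residual SS, df = n - 2
ss_reg = np.sum((y_hat - y.mean()) ** 2)   # regression SS, df = 1

# ANOVA F for the overall model
F = (ss_reg / 1) / (ss_res / (n - 2))

# t for the single predictor: slope divided by its standard error
se_b = np.sqrt(ss_res / (n - 2)) / (x.std(ddof=1) * np.sqrt(n - 1))
t = b / se_b

print(F, t ** 2)  # identical for a one-predictor model
```

With only one predictor, the test of the model and the test of the predictor are the same test, which is why SPSS reports the same "Sig." value in both places.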

Several formulae (see Howell) follow, illustrating where some of the numbers in Output 6.2 come from.

The standard error of the estimate is found from

s(Y·X) = √[ Σ(Y − Ŷ)² / (N − 2) ]

The standard error for REASON is found from

SE(B) = s(Y·X) / [ sX √(N − 1) ]

The t statistic for REASON is found by comparing the value for the slope (B) with its standard error:

t = B / SE(B), with N − 2 degrees of freedom.
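The three formulae can be chained together in a short sketch. The data here are simulated stand-ins for the REASON and creativity scores, so the printed numbers will not match Output 6.2:

```python
import numpy as np

# Simulated stand-ins for the REASON (x) and creativity (y) scores
rng = np.random.default_rng(2)
n = 25
x = rng.normal(size=n)
y = 2.0 + 0.6 * x + rng.normal(size=n)

# Least-squares slope (B) and intercept (a)
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)

# Standard error of the estimate: sqrt(sum of squared residuals / (N - 2))
s_est = np.sqrt(np.sum(resid ** 2) / (n - 2))

# Standard error of the slope: s_est / (s_X * sqrt(N - 1))
se_b = s_est / (x.std(ddof=1) * np.sqrt(n - 1))

# t statistic: slope divided by its standard error, df = N - 2
t = b / se_b
print(s_est, se_b, t)
```

Note that sX√(N − 1) is simply √[Σ(X − X̄)²], so this matches the textbook form of the slope's standard error.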


Output 6.2: Linear Regression

[SPSS ANOVA and Coefficients tables]

© Copyright 2000 University of New England, Armidale, NSW, 2351. All rights reserved

Maintained by Dr Ian Price
Email: iprice@turing.une.edu.au