
Chapter 4: Analysing the Data
Part I: Descriptive Statistics


Linear Regression

I've said that if two variables are correlated, you can use values on one variable to predict values on the other. In human terms, for example, perhaps we know that Jim is different from the other boys on some dimension Y. He is also different from the other boys on some dimension X. If X and Y are correlated with each other, then we can understand or predict something about Jim on Y from knowing something about Jim on X. That is, if X and Y are related, some of what makes Jim different from the other boys on Y can be predicted from how he differs from them on X.

The classic application of linear regression is the generation of a mathematical formula that can be used to make those predictions. For example, an employer might ultimately use the data from this creativity study to develop a formula for predicting how a person would have performed on the creativity test, had they taken it, from that person's score on the logical reasoning test. From the value the equation predicts, the employer can decide whether or not to hire the person. A decision rule such as "if the predicted creativity score is above 15, then the person is probably pretty creative, so we will hire them" might be used. In this example, the prediction process is explicit: the goal is actually to generate a prediction.

Making predictions is useful in many contexts. Should we admit a person to university? We might want to predict how well a particular student is likely to do at university from information about that student's high school performance, university entrance exam, age, or work experience. Or a clinician might want to know if a client is likely to attempt suicide. Knowing something about that person on "predictors" of suicide would give the clinician some basis for anticipating how that person is likely to behave. To make such predictions, we need to compute what is called the "regression line" predicting Y (the dependent variable) from X (the independent variable).
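To make the idea concrete, here is a minimal sketch in Python of how such a prediction formula could be computed using least squares. The logical reasoning and creativity scores below are invented purely for illustration (they are not data from the study described above), and the cut-off of 15 simply echoes the example decision rule.

```python
def regression_line(x, y):
    """Return (intercept a, slope b) for the least-squares line y-hat = a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope: sum of cross-products of deviations divided by the
    # sum of squared deviations in X
    ss_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    ss_xx = sum((xi - mean_x) ** 2 for xi in x)
    b = ss_xy / ss_xx
    a = mean_y - b * mean_x
    return a, b

# Hypothetical scores for eight current employees
reasoning = [10, 12, 15, 11, 14, 18, 9, 16]    # X: logical reasoning test
creativity = [11, 15, 16, 14, 17, 20, 10, 19]  # Y: creativity test

a, b = regression_line(reasoning, creativity)

# Predict creativity for a new applicant who scored 13 on logical reasoning
predicted = a + b * 13
print(f"Predicted creativity score: {predicted:.1f}")

# Example decision rule from the text: hire if the predicted score is above 15
print("Hire" if predicted > 15 else "Do not hire")
```

In practice you would rely on statistical software to compute the intercept and slope, but the arithmetic is exactly this: the slope and intercept are chosen so that the line passes as close as possible (in the least-squares sense) to the observed X-Y pairs, and a prediction is then just the height of that line at the new person's X score.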


© Copyright 2000 University of New England, Armidale, NSW, 2351. All rights reserved

Maintained by Dr Ian Price
Email: iprice@turing.une.edu.au