Evaluate the Test Statistic
Having calculated your t-value, F-value, Z-value, r, or U-value (see next Chapter!), you now need to decide what sense to make of it. In this step we ask you to focus on (1) determining the level at which your test statistic is significant, (2) deciding whether or not to reject Ho, and (3) assessing the practical significance of the finding.
Determining the level at which your test statistic is significant. This is primarily a task of consulting the output in the correct manner. For each of the tests to be discussed, the SPSS output will print the appropriate p-value. This part of the output is often labelled "Sig." or "Two-tailed sig."; sometimes "Prob." is used. Significance is easily determined these days by comparing the calculated probability with the alpha-level that has been set (usually α = .05). In general, if the "Sig." or "Prob." value is less than or equal to .05 you have a significant result.
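The decision rule above can be sketched in a few lines of Python. The function name and example p-values here are ours, purely for illustration; in practice the p-value is simply read off the "Sig." column of the SPSS output.

```python
# Minimal sketch of the decision rule: compare the reported p-value
# (the "Sig." value in SPSS output) with the chosen alpha level.
def is_significant(p_value: float, alpha: float = 0.05) -> bool:
    """Return True when the result is significant at the given alpha."""
    return p_value <= alpha

print(is_significant(0.032))       # True: 0.032 <= 0.05
print(is_significant(0.051))       # False: 0.051 > 0.05
print(is_significant(0.08, 0.1))   # True at the looser alpha = 0.1
```

Note that the rule is "less than or equal to" alpha, so a p-value of exactly .05 still counts as significant.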
Another aspect, also discussed in the readings given above, is to determine whether the p-value associated with your finding is satisfactory for the type of research you are conducting. For some applied or exploratory purposes, p < 0.1 might be sufficient for you to decide you have a real difference or a real relationship. But in other situations, such as medical and pharmaceutical research, or when conducting multiple tests of significance, you need far more confidence that the result is genuine, as the consequences of a Type I error are considerably more serious. (Also, see the points raised under Step 1: What
Whether to reject Ho or not. Having decided that the p-value associated with your calculated test statistic is appropriate to allow you to make a claim of ‘significant’, the immediate consequence is that Ho is rejected in favour of the alternative hypothesis. You need to be clear about what it is that you are now accepting. If the alternative is two-tailed, you are accepting that the sample has come from some population other than the one specified by the null hypothesis, and you need to look at the actual numerical values of the means or the correlation to determine in which direction the difference or relationship lies. If you have specified a one-tailed alternative, you again need to check that the numerical values of the means lie in the predicted direction.
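The one-tailed direction check can be sketched as follows. This assumes the common convention of halving a two-tailed p-value when the sign of the test statistic matches the predicted direction; the function name is ours, not part of any package.

```python
# Sketch: one-tailed decision from a two-tailed p-value, assuming the
# sign of the t statistic indicates the direction of the difference.
def one_tailed_decision(t_stat: float, two_tailed_p: float,
                        predicted_positive: bool = True,
                        alpha: float = 0.05):
    """Return (one-tailed p, significant?) for a directional alternative."""
    direction_ok = (t_stat > 0) == predicted_positive
    # Halve p only when the observed difference lies in the predicted
    # direction; otherwise the one-tailed p is in the other tail.
    one_tailed_p = two_tailed_p / 2 if direction_ok else 1 - two_tailed_p / 2
    return one_tailed_p, one_tailed_p <= alpha

p, sig = one_tailed_decision(t_stat=2.1, two_tailed_p=0.06)
print(p, sig)  # 0.03 True: significant one-tailed though two-tailed p > .05
```

The example shows why the direction check matters: a result that misses a two-tailed criterion can meet a one-tailed one, but only if the means actually lie in the predicted direction.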
If the associated p-value is not sufficient to allow you to make a claim of significance, you make a decision to retain the null hypothesis. Note the comments made previously about the difference between "retaining" and "accepting" Ho.
Another aspect that should be considered when evaluating your test statistic is the extent to which you think the assumptions of the test were satisfied, and the number of statistical tests you have conducted on the same set of data. Both these factors can mean the calculated p-value is not an accurate reflection of the real Type I error rate, so you may have to treat your ‘significant’ result with some caution. If this is the case, it is best to acknowledge this ‘rubberiness’ in your Results section.
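One common way to allow for the multiple-testing problem mentioned above is the Bonferroni correction, which divides alpha by the number of tests. The text does not prescribe a particular correction, so this is offered only as an illustrative sketch with made-up p-values.

```python
# Sketch: Bonferroni correction, which keeps the overall (family-wise)
# Type I error rate near alpha when several tests are run on one data set.
def bonferroni_alpha(alpha: float, n_tests: int) -> float:
    """Per-test alpha after Bonferroni correction."""
    return alpha / n_tests

p_values = [0.01, 0.04, 0.03]                   # three tests, made-up values
adjusted = bonferroni_alpha(0.05, len(p_values))
print(round(adjusted, 4))                        # 0.0167
for p in p_values:
    # Only p = 0.01 survives the corrected criterion.
    print(p, p <= adjusted)
```

Bonferroni is deliberately conservative; its point here is simply that an uncorrected p-value from the third or fourth test on the same data no longer reflects the real Type I error rate.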
Perhaps allied somewhat to the concept of ‘rubberiness’ in the reporting of real-world results is the problem of what to do when your p-value is, say, 0.051. A value of p = 0.051 is technically non-significant, whereas p = 0.049 is significant! Yet, in real terms, the difference between these two probabilities is minimal and indefensible, considering that the assumptions underlying the test are probably not exactly satisfied. However, in the former case you do need to admit that your finding was non-significant. Having done that, you may cautiously talk about the "trends" in your data, wording your statements with terms such as "suggests", "might indicate", or "tends toward". If your p-value comes in under 0.05 you can claim "significance", but if p = 0.049 you should not get carried away!