Chapter 2: Research Design

 

Choosing an Operational Definition

As I illustrated above, there are often many different ways of operationalising a construct. A question then presents itself: Which operational definition should I use? There are three broad answers to this question.

First, you should use the definition or definitions that work for you, given the resources you have or can access. Some operational definitions, such as self-reports, produce data quite easily, and nearly anyone can use them. It is fairly easy to get someone to fill out a questionnaire or answer a series of questions, and unless you are studying animals or infants, nearly anyone can answer a set of questions orally or in writing. Other operationalisations might be more difficult for you to use. For example, you may not have the money to hire one or more clinical psychologists to evaluate every participant in your depression study, or you may not have convenient access to a clinical psychologist. This kind of data collection would also take considerable time (at a minimum, about one hour per participant just to obtain the measurement of depression). Similarly, not everyone has access to the medical equipment required to measure physiological markers of depression such as glucose metabolism.

Second, you should use the operational definition or definitions that will be convincing to critics, journal reviewers, or others who will be evaluating your research. Self-report measurement of constructs, for example, while quite common in psychological science, is fraught with difficulties. There is considerable debate about how much insight we have into our own mental processes, how good our memory is for past events, and whether we can and do answer questions about ourselves honestly and without bias (of course, if you are interested in studying such biases, that isn't a problem). So a critic might be able to challenge your study on the grounds that the self-report measure you used to operationalise your construct isn't a convincing one. As described below, you'd be criticised for using a measure low in validity. If a critic argues against the validity of your measurement, then he or she will inevitably evaluate your research as poor. Thus, it is often a good idea to use operationalisations that have been used by other people before. This way, you can always invoke precedent in your defence if someone criticises your methodology. (Of course, it is possible that what has been done in the past is less than ideal and subject to criticism as well!)

Third, if at all possible, you should use more than one operationalisation. Each operational definition is likely to be incomplete, capturing only one small part of the meaning of your construct. By using more than one operational definition, you will be able to better cover the richness and complexity of your construct. You will also be able to compute a measure of the reliability of your measurement, which is sometimes important (see below). During data analysis, you may aggregate your measurements on the different operationalisations (for example, by adding them up) or you may choose to analyse them separately. If you choose the latter, you will perhaps be in a good position to offer what is called "converging evidence" for your findings. Suppose you find, for example, that after exposure to the television program aimed at reducing aggression in children, the children in that group are perceived as less aggressive by their parents, make fewer disciplinary visits to the headmaster, are nominated less frequently by their classmates as "meanies," and actually engage in fewer aggressive acts in the classroom than the children who aren't exposed to the television program. These four findings provide converging evidence for any claims made about differences between our groups of children. A critic would be less able to argue that our results are due to poor measurement, because we got the same result on several reasonable operationalisations of our construct.
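To make the third point concrete, here is a minimal sketch in Python of one way to aggregate several operationalisations and check their reliability. This example is not part of the original chapter: the data and variable names are hypothetical, and it simply assumes the four measures of aggression described above (parent ratings, headmaster visits, peer "meanie" nominations, and observed aggressive acts), with one column per measure and one row per child.

    # Hypothetical sketch: aggregating four operationalisations of "aggression"
    # and computing Cronbach's alpha as a rough index of their consistency.
    import numpy as np

    # Each row is a child; each column is one operationalisation:
    # parent rating, headmaster visits, peer nominations, observed acts.
    scores = np.array([
        [3.0, 1, 2, 4],
        [4.5, 3, 5, 6],
        [2.0, 0, 1, 2],
        [5.0, 2, 4, 5],
        [1.5, 0, 0, 1],
    ], dtype=float)

    # Put the measures on a common scale (z-scores) before aggregating,
    # because they are recorded in very different units.
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)

    # One simple aggregate: the mean z-score across the four measures.
    composite = z.mean(axis=1)

    # Cronbach's alpha: how consistently do the four operationalisations
    # rank the same children?
    k = z.shape[1]
    item_variances = z.var(axis=0, ddof=1)
    total_variance = z.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    print("Composite aggression scores:", composite.round(2))
    print("Cronbach's alpha:", round(alpha, 2))

A high alpha (conventionally above about .7) would suggest that the four operationalisations are tapping much the same underlying construct and can reasonably be combined; a low alpha would suggest analysing them separately, as in the converging-evidence approach described above.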

Remember that one of the philosophies underlying psychological science is the "non-existence of proof." In particular, I noted that no single study ever proves anything. Instead, each study gives us one small piece in a large and complex puzzle. It is now easier to see why this is true. Because there are many ways of operationalising the constructs being studied, it is always possible that any result found is specific to that single operationalisation, and that had a different operationalisation been used, a different result might have been found. This is always a possibility. For this reason, we often engage in what is called "conceptual replication." This means that we sometimes repeat studies that others have done (or that we did ourselves earlier), but with slight changes to the methodology, such as how the constructs are operationalised. If there were any "truth" to the original finding, we'd hope to get the same result even though we've altered the methodology. Of course, it is possible that we'd get a different result. Such discrepancies could be due to a number of sources, including the complexity of the phenomenon under study. Regardless, through the conceptual replication of research results, we can get a sense of how generalisable a finding is, and whether the phenomenon should be trusted as "real" or dismissed as an artefact of the methodology used (such as how the constructs were operationalised). Conceptual replications are thus another way of generating converging evidence for claims made about human behaviour from research results.

 

 

 
