PRODUCING AN ACCOUNT 263
consistent with telling the time by the speaking clock. We are both referring to a common-sense view of time, not one which might be relevant psychologically or in astrophysics. Incidentally, if I were navigating at sea and dependent on satellite signals to determine time, I would be better relying on astrophysics than on the common-sense view (Hawking 1988:33). I am also assuming a common frame of reference, such as British Summer Time. In other words, in checking validity I have to consider whether my concept of time is consistent with that employed by other measures. If my concept proves inconsistent—for example, if I have forgotten to switch to British Summer Time—then I may again doubt the validity of my measuring instrument. Of course, I could instead doubt the validity of British Summer Time, but such immodesty would be unbecoming. We have to accept the authority of established concepts unless we have very good reasons to do otherwise. This fit (or lack of it) between the concepts we are using and previously established and authoritative concepts is called ‘construct’ validity.
Finally, I can check that the two measurements are consistent. If my measurement does prove consistent with the measurements obtained from other indicators, then I can be confident that my own instrument is a valid measure. This fit (or lack of it) between measures provided by different indicators is called ‘criterion’ validity. In the case of the speaking clock, I have such confidence in its efficacy as a measure of time that I would not seek any further confirmation from other sources. The situation in social science is usually less clear-cut, because we cannot obtain established indicators through a simple phone call, and where indicators do exist they are often less authoritative. Think only of the problems in measuring class, intelligence, power, status or job satisfaction. Nevertheless, where we can find a reasonable fit between our own measurements and those derived from established indicators, we can have more confidence that we have devised a valid measure of the concept we are interested in.
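The logic of a criterion-validity check can be sketched in a few lines of code: we compare our own measurement against one from an established indicator and ask whether the two agree within some tolerance. This is only a toy illustration; the function name, the values, and the choice of a one-minute tolerance are all invented for the example.

```python
def criterion_check(new_measure: float, established_measure: float,
                    tolerance: float) -> bool:
    """Return True if the new measure agrees with the established one
    to within the given tolerance (a minimal criterion-validity test)."""
    return abs(new_measure - established_measure) <= tolerance

# My watch reads 14:30:05; the speaking clock says 14:30:00.
# Both expressed as seconds since midnight for comparison.
my_watch = 14 * 3600 + 30 * 60 + 5
speaking_clock = 14 * 3600 + 30 * 60

print(criterion_check(my_watch, speaking_clock, tolerance=60))  # → True
```

In social science the analogue of `tolerance` is rarely so easy to fix in advance: how close must our measure of, say, job satisfaction come to an established scale before we count the two as consistent?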
In qualitative analysis, where we are often trying to break new ground and create new tools of analysis, we are more likely to be interested in the ‘face’ and ‘construct’ validity of our account. In the absence of satisfactory ‘measures’, achieving confidence through consistency in measurement is a less likely prospect. The ‘face’ validity of our account turns on the fit between our observations and our concepts, and this is something we can be more confident about. The whole thrust of qualitative analysis is to ground our account empirically in the data. By annotating, categorizing and linking data, we can provide a sound empirical base for identifying concepts and the connections between them. Other interpretations and explanations of the data may be possible, but at least we can be confident that the concepts and connections we have used are rooted in the data we have analysed. But how can we create a similar degree of confidence in others?