Qualitative Data Analysis
PRODUCING AN ACCOUNT
the vibrations of a quartz crystal. Reliability is not primarily an empirical issue at all, but a conceptual one. It has to be rooted in a conceptual framework which explains why in principle we can expect a measuring instrument to produce reliable results. The empirical aspect comes later, when we check repeatedly to see whether in practice these results are achieved. Here, in the absence of repeated observations, all we can do is check that our ‘apparatus’ is in good working order.
If we cannot expect others to replicate our account, the best we can do is explain how we arrived at our results. This gives our audience the chance to scrutinise our procedures and to decide whether, at least in principle, the results ought to be reliable. The crucial procedures will be those we have followed in categorizing or linking data, but we may also explain the procedures we have followed in summarizing data, splitting and splicing categories, making connections and using our maps and matrices. It may be useful to lay these out in algorithmic form (Figure 15.6).
Figure 15.7 shows how this can be done for the overall decision-making process we have followed throughout our analysis. Using a similar approach, we can also detail the substantive decision-making which has governed our conceptualizations of the data, and the connections we have made between concepts. We can improve internal reliability by ensuring our conceptualizations relate closely to our data, by testing them against a variety of data sources.
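As an illustrative sketch only (the names and data structures here are hypothetical, not taken from the text), this kind of multi-source check can be made systematic: a category coded in only one data source is a candidate for re-examination.

```python
def weakly_grounded(category_sources, min_sources=2):
    """Return categories supported by fewer than min_sources distinct
    data sources -- candidates for re-examination.

    category_sources: dict mapping category -> set of source ids
    (e.g. interviews, field notes, documents) in which it was coded.
    """
    return [c for c, sources in category_sources.items()
            if len(sources) < min_sources]

# Example: 'fear' appears in only one source and so is flagged.
coded = {"trust": {"interview1", "fieldnotes"}, "fear": {"interview1"}}
print(weakly_grounded(coded))
```

The threshold of two sources is arbitrary; the point is simply to make the grounding of each conceptualization inspectable rather than assumed.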
As well as outlining our procedures, we can try to identify possible sources of error. For example, what about ‘mistakes’ in categorization? As the assignment of categories involves judging in terms of a range of criteria, which may be vaguely articulated and which may also change over time, the opportunity for error is obvious. Some data which should be assigned to a particular category may have been overlooked; other data may have been assigned inappropriately. We could call these the sins of commission and omission. Similar sources of error can be found in linking data and looking at connections between categories. There are procedures for reducing such errors, for example through repeating the process of categorizing the data. The computer, through its search facilities, can help to locate data which may have been overlooked in our initial categorization. The converse error, of assigning a category (or a link) mistakenly to the data, can be identified through critical assessment of the results of category retrievals.
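A minimal sketch of how such a search facility might flag sins of omission, assuming hypothetical data structures (segment texts and their assigned categories), follows. Segments that mention the analyst's indicator terms for a category but were never assigned to it are surfaced for review:

```python
def find_overlooked(segments, assigned, category, keywords):
    """Flag segments that mention indicator keywords for a category
    but were never assigned to it (possible sins of omission).

    segments: dict of segment_id -> text
    assigned: dict of segment_id -> set of assigned categories
    keywords: terms the analyst associates with the category
    """
    overlooked = []
    for seg_id, text in segments.items():
        lowered = text.lower()
        if (any(k in lowered for k in keywords)
                and category not in assigned.get(seg_id, set())):
            overlooked.append(seg_id)
    return overlooked
```

The output is a list of candidates, not a verdict: each flagged segment still requires the analyst's judgement, just as the converse error requires critical reading of category retrievals.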
We may correct such errors, but if time does not permit a full-blooded reassessment, we may settle for an estimate of the degree of error, by taking a sample of categories or databits and checking the accuracy with which categories have been assigned.
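This sampling procedure can be sketched as follows; the function names and the `recheck` callback (standing in for the analyst's careful second pass over each sampled databit) are hypothetical illustrations, not part of the original text:

```python
import random

def estimate_error_rate(coded_databits, recheck, sample_size=50, seed=1):
    """Estimate categorization error by re-checking a random sample.

    coded_databits: list of (databit, assigned_category) pairs.
    recheck: function returning the category a careful second pass
             assigns to a databit (supplied by the analyst).
    Returns the proportion of sampled assignments that disagree.
    """
    random.seed(seed)
    n = min(sample_size, len(coded_databits))
    sample = random.sample(coded_databits, n)
    errors = sum(1 for bit, cat in sample if recheck(bit) != cat)
    return errors / n
```

The estimate is only as good as the sample size and the care taken in the second pass, but it gives a defensible figure where a full reassessment is impractical.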
In assessing the potential for error, we can also call upon the procedures for corroborating evidence which are relevant in identifying sources of error. In looking at the quality, distribution and weight of evidence underpinning our