RESEARCH METHOD COHEN
SYSTEMATIC APPROACHES TO DATA ANALYSIS

assembling and providing sufficient data that keeps separate raw data from analysis.

In qualitative data the analysis here is almost inevitably interpretive, hence the data analysis is less a completely accurate representation (as in the numerical, positivist tradition) and more a reflexive, reactive interaction between the researcher and the decontextualized data that are already interpretations of a social encounter. Indeed reflexivity is an important feature of qualitative data analysis, and we discuss this separately (Chapter 7). The issue here is that the researcher brings to the data his or her own preconceptions, interests, biases, preferences, biography, background and agenda. As Walford (2001: 98) writes: ‘all research is researching yourself’. In practical terms it means that the researcher may be selective in his or her focus, or that the research may be influenced by the subjective features of the researcher. Robson (1993: 374–5) and Lincoln and Guba (1985: 354–5) suggest that these can include:

- data overload (humans may be unable to handle large amounts of data)
- first impressions (early data analysis may affect later data collection and analysis)
- availability of people (e.g. how representative these are and how to know if missing people and data might be important)
- information availability (easily accessible information may receive greater attention than hard-to-obtain data)
- positive instances (researchers may overemphasize confirming data and underemphasize disconfirming data)
- internal consistency (the unusual, unexpected or novel may be under-treated)
- uneven reliability (the researcher may overlook the fact that some sources are more reliable or unreliable than others)
- missing data (issues for which there are incomplete data may be overlooked or neglected)
- revision of hypotheses (researchers may overreact or under-react to new data)
- confidence in judgement (researchers may have greater confidence in their final judgements than is tenable)
- co-occurrence may be mistaken for association
- inconsistency (subsequent analyses of the same data may yield different results); notable examples of this are Bennett (1976) and Aitkin et al. (1981).

The issue here is that great caution and self-awareness must be exercised by the researcher in conducting qualitative data analysis, for the analysis and the findings may say more about the researcher than about the data. For example, it is the researcher who sets the codes and categories for analysis, be they pre-ordinate or responsive (decided in advance of or in response to the data analysis respectively). It is the researcher’s agenda that drives the research and the researcher who chooses the methodology. As the researcher analyses data, he or she will have ideas, insights, comments and reflections to make on the data. These can be noted down in memos and, indeed, these can become data themselves in the process of reflexivity (though they should be kept separate from the primary data themselves). Glaser (1978) and Robson (1993: 387) argue that memos are not data in themselves but help the process of data analysis. This is debatable: if reflexivity is part of the data analysis process then memos may become legitimate secondary data in the process or journey of data analysis. Many computer packages for qualitative data analysis (discussed later) have a facility not only for the researcher to write a memo, but also to attach it to a particular piece of datum.
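The idea of writing a dated memo and attaching it to a particular piece of datum, while keeping memos separate from the primary data, can be sketched as a minimal data structure. This is a purely illustrative sketch, not the API of any actual qualitative analysis package; all names and the sample excerpt are invented:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Memo:
    """A dated analytic note, kept distinct from the primary data."""
    written_on: date
    text: str

@dataclass
class Datum:
    """A piece of primary data (e.g. an interview excerpt) with attached memos."""
    source: str
    content: str
    memos: list = field(default_factory=list)

    def attach_memo(self, text: str) -> None:
        # Memos are dated, so the development of the research can be traced.
        self.memos.append(Memo(written_on=date.today(), text=text))

# Invented example datum and memo
excerpt = Datum(source="Interview 3",
                content="I never felt the training helped me.")
excerpt.attach_memo("Possible link to the 'support' category; compare Interview 7.")
print(len(excerpt.memos))  # 1
```

Keeping the memo in its own object, rather than editing the datum's text, mirrors the point above that reflexive notes should stay separate from the primary data even if they later become secondary data.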
There is no single nature or format of a memo; it can include subjective thoughts about the data, with ideas, reflections, comments, opinions, personal responses, suggestions for future and new lines of research, reminders, observations, evaluations, critiques, judgements, conclusions, explanations, considerations, implications, speculations, predictions, hunches, theories, connections, relationships between codes and categories, insights and so on. Memos can be reflections on the past, present and the future, thereby beginning to examine the issue of causality. There is no required minimum or maximum length, though memos should be dated, not only for ease of reference but also to mark the development of the researcher as well as of the research. Memos are an important part of the self-conscious reflection on the data and have considerable potential to inform the data collection, analysis and theorizing processes. They should be written whenever they strike the researcher as important – during and after analysis. They can be written at any time; indeed some researchers deliberately carry a pen and paper with them wherever they go, so that ideas that occur can be written down before they are forgotten.

The great tension in data analysis is between maintaining a sense of the holism of the data – the text – and the tendency for analysis to atomize and fragment the data – to separate them into constituent elements, thereby losing the synergy of the whole; and often the whole is greater than the sum of the parts. There are several stages in analysis, for example:

- generating natural units of meaning
- classifying, categorizing and ordering these units of meaning
- structuring narratives to describe the contents
- interpreting the data.

These are comparatively generalized stages.
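The first three of these stages can be loosely mimicked in code. The sketch below is a purely illustrative toy, with an invented response and invented keyword rules standing in for the researcher's judgement: it generates units of meaning by splitting a response into sentences, classifies them under simple categories, and reports the contents of each:

```python
import re

# Invented open-ended questionnaire response
response = ("The course was well organized. I struggled with the statistics module. "
            "My tutor gave helpful feedback.")

# Stage 1: generate natural units of meaning (here, crude sentence units)
units = [u.strip() for u in re.split(r"(?<=[.!?])\s+", response) if u.strip()]

# Stage 2: classify and order the units under researcher-defined categories
# (keyword matching is an invented stand-in for interpretive judgement)
categories = {"organization": ["organized"],
              "difficulty": ["struggled"],
              "support": ["tutor", "feedback"]}
classified = {name: [u for u in units if any(k in u.lower() for k in kws)]
              for name, kws in categories.items()}

# Stage 3: structure a simple narrative describing the contents
for name, members in classified.items():
    print(f"{name}: {len(members)} unit(s)")
```

Stage 4, interpreting the data, resists automation: as the text stresses, it is the researcher's reflexive judgement, not the mechanics of segmentation, that carries the analysis.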
Miles and Huberman (1994) suggest twelve tactics for generating meaning from transcribed data:

- counting frequencies of occurrence (of ideas, themes, pieces of data, words)
- noting patterns and themes (Gestalts), which may stem from repeated themes and causes or explanations or constructs
- seeing plausibility: trying to make good sense of data, using informed intuition to reach a conclusion
- clustering: setting items into categories, types, behaviours and classifications
- making metaphors: using figurative and connotative language rather than literal and denotative language, bringing data to life, thereby reducing data, making patterns, decentring the data, and connecting data with theory
- splitting variables to elaborate, differentiate and ‘unpack’ ideas, i.e. to move away from the drive towards integration and the blurring of data
- subsuming particulars into the general (akin to Glaser’s (1978) notion of ‘constant comparison’: see Chapter 23 in this book) – a move towards clarifying key concepts
- factoring: bringing a large number of variables under a smaller number of (frequently) unobserved hypothetical variables
- identifying and noting relations between variables
- finding intervening variables: looking for other variables that appear to be ‘getting in the way’ of accounting for what one would expect to be strong relationships between variables
- building a logical chain of evidence: noting causality and making inferences
- making conceptual/theoretical coherence: moving from metaphors to constructs, to theories to explain the phenomena.

This progression, though perhaps positivist in its tone, is a useful way of moving from the specific to the general in data analysis. Running through the suggestions from Miles and Huberman (1994) is the importance that they attach to the coding of data, partially as a way of reducing what is typically data overload from qualitative data.
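The first of these tactics, counting frequencies of occurrence, is straightforward to illustrate in code. The sketch below uses invented coded transcript segments (the segment texts and code labels are hypothetical) to tally how often each researcher-assigned code appears:

```python
from collections import Counter

# Hypothetical transcript segments, each paired with the codes a researcher assigned
coded_segments = [
    ("I enjoyed the group work",               ["collaboration", "enjoyment"]),
    ("The deadlines were stressful",           ["workload"]),
    ("Working with others kept me motivated",  ["collaboration", "motivation"]),
    ("Too much to do in too little time",      ["workload"]),
]

# Count frequencies of occurrence of each code across the data set
code_counts = Counter(code for _, codes in coded_segments for code in codes)

for code, n in code_counts.most_common():
    print(f"{code}: {n}")
```

Such counts can hint at patterns and themes (the second tactic), but, as the cautions above about positive instances and internal consistency suggest, a frequent code is not automatically the most important one.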
Miles and Huberman (1994) suggest that analysis through coding can be performed both within-site and cross-site, enabling causal chains, networks and matrices to be established, all of these addressing what they see as the major issue of reducing data overload through careful data display.

Content analysis involves reading and judgement; Brenner et al. (1985) set out several steps in undertaking a content analysis of open-ended data:

- briefing: understanding the problem and its context in detail
- sampling: of people, including the types of sample sought (see Chapter 4)