account for this dramatic improvement of 50 per cent’ (Mason et al. 1992). Although the Shevington researchers attempted to exercise control over extraneous variables, readers may well ask whether threats to internal and external validity such as those alluded to earlier were sufficiently met as to allow such a categorical conclusion as ‘the pupils ... achieved greater success in public examinations as a result of taking part in the project’ (Mason et al. 1992).

Example 3: a ‘true’ experimental design

Another investigation (Bhadwal and Panda 1991), concerned with effecting improvements in pupils’ performance as a consequence of changing teaching strategies, used a more robust experimental design. In rural India, the researchers drew a sample of seventy-eight pupils, matched by socio-economic backgrounds and non-verbal IQs, from three primary schools that were themselves matched by location, physical facilities, teachers’ qualifications and skills, school evaluation procedures and degree of parental involvement. Twenty-six pupils were randomly selected to comprise the experimental group, the remaining fifty-two being equally divided into two control groups. Before the introduction of the changed teaching strategies to the experimental group, all three groups completed questionnaires on their study habits and attitudes. These instruments were specifically designed for use with younger children and were subjected to the usual item analyses and test-retest and split-half reliability inspections.

Bhadwal and Panda’s research design can be represented as:

Experimental     R  O1  X  O2
First control    R  O3      O4
Second control   R  O5      O6

Recalling Kerlinger’s (1970) discussion of a ‘good’ experimental design, the version of the pretest-post-test control group design employed here (unlike the design used in Example 2 above) resorted to randomization which, in theory, controls all possible independent variables.
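The random allocation step in this design can be sketched in a few lines of Python. This is a hypothetical illustration: the pupil identifiers and the seed are invented for reproducibility, not drawn from Bhadwal and Panda’s study.

```python
import random

random.seed(42)  # fixed seed only so the sketch is reproducible

pupils = list(range(78))  # 78 matched pupils, identified here by index
random.shuffle(pupils)

experimental = pupils[:26]   # twenty-six randomly selected pupils
control_1 = pupils[26:52]    # first control group of twenty-six
control_2 = pupils[52:]      # second control group of twenty-six
```

Because every pupil has an equal chance of landing in any group, systematic differences between groups can, in theory, arise only by chance.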
Kerlinger (1970) adds, however, that ‘in practice, it is only when enough subjects are included in the experiment that the principle of randomization has a chance to operate as a powerful control’. It is doubtful whether twenty-six pupils in each of the three groups in Bhadwal and Panda’s (1991) study constituted ‘enough subjects’.

In addition to the matching procedures in drawing up the sample, and the random allocation of pupils to experimental and control groups, the researchers also used analysis of covariance as a further means of controlling for initial differences between the experimental and control groups on their pretest mean scores on the independent variables, study habits and attitudes.

The experimental programme involved improving teaching skills, classroom organization, teaching aids, pupil participation, remedial help, peer-tutoring and continuous evaluation. In addition, provision was made in the experimental group for ensuring parental involvement and extra reading materials. It would be startling if such a package of teaching aids and curriculum strategies did not effect significant changes in its recipients, and such was the case in the experimental results. The experimental group made highly significant gains in its level of study habits as compared with Control Group 2, whose students did not show a marked change. What did surprise the investigators, we suspect, was the significant increase in levels of study habits in Control Group 1. Maybe, they opined, this unexpected result occurred because Control Group 1 pupils were tested immediately prior to the beginning of their annual examinations. On the other hand, they conceded, some unaccountable variables might have been operating. There is, surely, a lesson here for all researchers! (For a set of examples of problematic experiments see http://www.routledge.com/textbooks/9780415368780 – Chapter 13, file 13.1.doc.)
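Analysis of covariance of the kind used in the study above can be approximated by regressing post-test scores on a group dummy plus the pretest covariate; the coefficient on the dummy is then the group difference adjusted for initial differences. The sketch below uses entirely invented data: the means, gains and seed are assumptions for illustration, not Bhadwal and Panda’s figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 26  # pupils per group, as in the study

# invented pretest scores; post-test adds a larger gain for the experimental group
pre_e = rng.normal(50, 10, n)
post_e = pre_e + rng.normal(8, 5, n)
pre_c = rng.normal(50, 10, n)
post_c = pre_c + rng.normal(2, 5, n)

pre = np.concatenate([pre_e, pre_c])
post = np.concatenate([post_e, post_c])
group = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = experimental

# design matrix: intercept, group dummy, pretest covariate
X = np.column_stack([np.ones(2 * n), group, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

adjusted_effect = beta[1]  # group difference adjusted for pretest scores
```

Adjusting for the pretest in this way removes the part of the post-test difference that merely reflects where the groups started.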
Evidence-based educational research and meta-analysis

Evidence-based research

In an age of evidence-based education (Thomas and Pring 2004), meta-analysis is an increasingly
used method of investigation, bringing together different studies to provide evidence to inform policy-making and planning. Meta-analysis is a research strategy in itself. That this is happening significantly is demonstrated in the establishment of the EPPI-Centre (Evidence for Policy and Practice Information and Coordinating Centre) at the University of London (http://eppi.ioe.ac.uk/EPPIWeb/home.aspx); the Social, Psychological, Educational and Criminological Controlled Trials Register (SPECTR), later transferred to the Campbell Collaboration (http://www.campbellcollaboration.org), a parallel to the Cochrane Collaboration in medicine (http://www.cochrane.org/index0.htm), which undertakes systematic reviews and meta-analyses of, typically, experimental evidence in medicine; and the Curriculum, Evaluation and Management (CEM) centre at the University of Durham (http://www.cemcentre.org). ‘Evidence’ here typically comes from randomized controlled trials of one hue or another (Tymms 1999; Coe et al. 2000; Thomas and Pring 2004: 95), with their emphasis on careful sampling, control of variables, both extraneous and included, and measurement of effect size. The cumulative evidence from collected RCTs is intended to provide a reliable body of knowledge on which to base policy and practice (Coe et al. 2000). Such accumulated data, it is claimed, deliver evidence of ‘what works’, although Morrison (2001b) suggests that this claim is suspect.

The roots of evidence-based practice lie in medicine, where the advocacy by Cochrane (1972) of randomized controlled trials, together with their systematic review and documentation, led to the foundation of the Cochrane Collaboration (Maynard and Chalmers 1997), which is now worldwide. The careful, quantitative research studies that can contribute to the accretion of an evidential base are seen to be a powerful counter to the often untried and undertested schemes that are injected into practice.
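The pooling at the heart of such a meta-analysis can be sketched minimally as fixed-effect inverse-variance weighting of study effect sizes: studies with smaller sampling variance contribute more to the combined estimate. The function name and the numbers below are illustrative assumptions, not figures from any of the cited reviews.

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate across studies."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# three hypothetical studies reporting effect sizes with sampling variances
combined = pooled_effect([0.20, 0.40, 0.30], [0.01, 0.01, 0.02])
```

Real systematic reviews add random-effects models and heterogeneity checks on top of this basic weighted average, but the principle of combining studies by precision is the same.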
More recently evidence-based education has entered the worlds of social policy, social work (MacDonald 1997) and education (Fitz-Gibbon 1997). At the forefront of educational research in this area are Fitz-Gibbon (1996; 1997; 1999) and Tymms (1996), who, at the Curriculum, Evaluation and Management Centre at the University of Durham, have established one of the world’s largest monitoring centres in education. Fitz-Gibbon’s work is critical of multilevel modelling and, instead, suggests how indicator systems can be used with experimental methods to provide clear evidence of causality and a ready answer to her own question, ‘How do we know what works?’ (Fitz-Gibbon 1999: 33).

Echoing Anderson and Biddle (1991), Fitz-Gibbon suggests that policy-makers shun evidence in the development of policy and that practitioners, in the hurly-burly of everyday activity, call upon tacit knowledge rather than the knowledge derived from RCTs. However, in a compelling argument (Fitz-Gibbon 1997: 35–6), she suggests that evidence-based approaches are necessary in order to challenge the imposition of unproven practices, solve problems, avoid harmful procedures and create improvement that leads to more effective learning. Further, such evidence, she contends, should examine effect sizes rather than statistical significance.

While the nature of information in evidence-based education might be contested by researchers whose sympathies (for whatever reason) lie outside randomized controlled trials, the message from Fitz-Gibbon will not go away: the educational community needs evidence on which to base its judgements and actions. The development of indicator systems worldwide attests to the importance of this, be it through assessment and examination data, inspection findings, national and international comparisons of achievement, or target setting.
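The effect size that Fitz-Gibbon argues for is typically a standardized mean difference such as Cohen’s d: the gap between two group means expressed in units of their pooled standard deviation, which, unlike a significance test, does not grow automatically with sample size. A minimal sketch (the score lists in the usage line are invented):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                          / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

d = cohens_d([68, 72, 75, 71, 70], [64, 66, 69, 65, 67])  # hypothetical scores
```

Conventionally, values around 0.2, 0.5 and 0.8 are read as small, medium and large effects, though such thresholds are rules of thumb rather than fixed laws.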
Rather than being a shot in the dark, evidence-based education suggests that policy formation should be informed, and policy decision-making should be based on the best information to date rather than on hunch, ideology or political will. It is bordering on the unethical to implement untried and untested recommendations in educational practice, just as it is unethical to use untested products and procedures on hospital patients without their consent.