requested is higher after the experiment has already been finished' (Frick et al. 1999: 4), i.e. it is better to ask for personal information at the beginning. Reips (2002a) also advocates the use of 'warm-up' techniques in Internet-based research in conjunction with the 'high hurdle' technique (see also Frick et al. 1999). He suggests that most dropouts occur earlier rather than later in data collection, or, indeed, at the very beginning (non-participation), and that most such initial dropouts occur because participants are overloaded with information early on. Rather, he suggests, it is preferable to introduce some simple-to-complete items early on, so that participants build up an idea of how to respond to the later items and can try out practice materials. Frick et al. (1999) report that offering financial incentives may be useful in reducing dropout and in ensuring that respondents continue an online survey to completion (making completion up to twice as likely), and that such incentives may be useful where intrinsic motivation is insufficient to guarantee completion.

Internet-based experiments

A growing field in psychological research is the use of the Internet for experiments (e.g. http://www.psych.unizh.ch/genpsy/Ulf/Lab/webExpPsyLab.html). Hewson et al. (2003) classify these into four principal types:

those that present static printed materials (for example, printed text or graphics); second are those that make use of non-printed materials (for example, video or sound); third are reaction-time experiments; and fourth are experiments that involve some form of interpersonal interaction.
(Hewson et al. 2003: 48)

The first kind of experiment is akin to a survey in that it sends formulated material to respondents (e.g. graphically presented material) by email or by web page, and the intervention will be to send different groups different materials.
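Sending different groups different materials rests on assigning participants to conditions at random. A minimal sketch of balanced random assignment follows (a hypothetical illustration in Python; the condition names and group size are invented, not taken from any of the studies cited):

```python
import random

def assign_conditions(participant_ids, conditions, seed=None):
    # Shuffle the participants, then deal them out round-robin so
    # that group sizes differ by at most one (balanced assignment).
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    return {pid: conditions[i % len(conditions)]
            for i, pid in enumerate(shuffled)}

# Hypothetical example: two groups receive different versions of
# the same printed materials.
groups = assign_conditions(range(10),
                           ["text_only", "text_plus_graphics"],
                           seed=1)
```

Each group would then be emailed, or directed to, its own version of the materials, with the assignment recorded so that responses can later be matched to conditions.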
Here all the cautions and comments that were made about Internet-based surveys apply, particularly the problems of download times and of different browsers and platforms. However, the matter of download time applies more strongly to the second type of Internet-based experiment, which uses video clips or sound, and some software packages will reproduce higher quality than others, even though the original that is transmitted is the same for everyone. This can be addressed by ensuring that the material runs at its optimum even on the slowest computer (Hewson et al. 2003: 49) or by stating the minimum hardware required for the experiment to be run successfully.

Reaction-time experiments, those that require very precise timing (e.g. to milliseconds), are difficult in remote situations, as different platforms, different Internet connection speeds and congestion on the Internet at busy times with multiple users can render standardization virtually impossible. One solution to this is to have the experiment downloaded and then run offline, before loading it back onto the computer and sending it.

The fourth type involves interaction, and is akin to Internet interviewing (discussed below), facilitated by chat rooms. However, this is solely a written medium, and so intonation, inflection, hesitancies, non-verbal cues, and extra-linguistic and paralinguistic factors are ruled out. It is, in a sense, incomplete, although the increasing availability and use of simple screen-top video cameras is mitigating this. Indeed, this latter development renders observational studies an increasing possibility in the Internet age.

Reips (2002a) reports that, in comparison to laboratory experiments, Internet-based experiments experienced greater problems of dropout; that the dropout rate in an Internet experiment was very varied (from 1 per cent to 87 per cent); and that dropout could be reduced by offering incentives, e.g. payments or lottery tickets, bringing a difference of as much as 31 per cent to dropout rates. Dropout in Internet-based research was due to a range of factors, for example motivation and how interesting the experiment was, not least of which was the non-compulsory nature of the experiment (in contrast, for example, to the compulsory nature of experiments undertaken by university student participants as part of their degree studies). The discussion of the 'high hurdle' technique earlier is applicable to experiments here.

Reips (2002b: 245–6) also reports that greater variance in results is likely in an Internet-based experiment than in a conventional experiment, due to technical matters (e.g. network connection speed, computer speed, multiple software packages running in parallel). On the other hand, Reips (2002b: 247) also reports that Internet-based experiments have attractions over laboratory and conventional experiments:

- They have greater generalizability because of their wider sampling.
- They demonstrate greater ecological validity, as typically they are conducted in settings that are familiar to the participants and at times suitable to the participant ('the experiment comes to the participant, not vice versa'), though, of course, the obverse of this is that the researcher has no control over the experimental setting (Reips 2002b: 250).
- They have a high degree of voluntariness, such that more authentic behaviours can be observed.

How correct these claims are is an empirical matter. For example, the use of sophisticated software packages (e.g. Java) can reduce experimenter control, as these packages may interact with other programming languages. Indeed, Schwarz and Reips (2001) report that the use of JavaScript led to a 13 per cent higher dropout rate in an experiment compared to an identical experiment that did not use JavaScript. Further, multiple returns by a single participant could confound reliability (discussed above in connection with survey methods).

Reips (2002a, 2002b) provides a series of 'dos' and 'don'ts' in Internet experimenting. In terms of 'dos' he gives five main points:

- Use dropout as a dependent variable.
- Use dropout to detect motivational confounding (i.e. to identify boredom and motivation levels in experiments).
- Place questions for personal information at the beginning of the Internet study.
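Treating dropout itself as a dependent variable, as the first of these 'dos' recommends, amounts to comparing completion rates across conditions. A minimal sketch in Python follows; the counts are invented, chosen only to echo the 13 per cent JavaScript difference reported above:

```python
def dropout_rate(started, completed):
    # Dropout rate = proportion of those who started but did not finish.
    if started == 0:
        raise ValueError("no participants started this condition")
    return (started - completed) / started

# Invented counts for two versions of the same experiment:
# (participants who started, participants who completed).
conditions = {"with_javascript": (200, 140), "plain_html": (200, 166)}
rates = {name: dropout_rate(s, c) for name, (s, c) in conditions.items()}
difference = rates["with_javascript"] - rates["plain_html"]  # about 0.13
```

Comparing such rates across conditions is also how motivational confounding (the second 'do') would show up: a condition that bores participants reveals itself through an elevated dropout rate, not only through its completers' scores.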
Reips (2002b) suggests that asking for personal information may assist in keeping participants in an experiment, and that this is part of the 'high hurdle' technique, whereby dropouts self-select out of the study rather than dropping out during the study.

- Use techniques that help to ensure quality in data collection over the Internet (e.g. the 'high hurdle' and 'warm-up' techniques discussed earlier, subsampling to detect and ensure consistency of results, using single passwords to ensure data integrity, providing contact information, and reducing dropout).
- Use Internet-based tools and services to develop and announce your study (using commercially produced software to ensure that technical and presentational problems are overcome). There are also web sites (e.g. that of the American Psychological Society) that announce experiments.

In terms of 'don'ts' Reips gives five main points:

- Do not allow external access to unprotected directories. This can violate ethical and legal requirements, as it provides access to confidential data. It might also allow participants to gain access to the structure of the experiment, thereby contaminating the experiment.
- Do not allow public display of confidential participant data through URLs (uniform resource locators). This is a problem if forms are submitted via the GET protocol, a way of requesting an HTML page in which any form data are appended to the URL as query parameters; this, again, violates ethical codes.
- Do not accidentally reveal the experiment's structure (as this could affect participant behaviour). This might happen through including the experiment's details in a related file or in a file in the same directory.
- Do not ignore the technical variance inherent in the Internet (configuration details, browsers, platforms, bandwidth and software might all distort the experiment, as discussed above).
- Do not bias results through improper use of form elements, such as measurement errors, where omitting particular categories (e.g.