
… requested is higher after the experiment has already been finished’ (Frick et al. 1999: 4); i.e. it is better to ask for personal information at the beginning. Reips (2002a) also advocates the use of ‘warm-up’ techniques in Internet-based research in conjunction with the ‘high hurdle’ technique (see also Frick et al. 1999). He suggests that most dropouts occur earlier rather than later in data collection, or, indeed, at the very beginning (non-participation), and that most such initial dropouts occur because participants are overloaded with information early on. Instead, he suggests, it is preferable to introduce some simple-to-complete items early on, so that participants build up an idea of how to respond to the later items and can try out practice materials. Frick et al. (1999) report that offering financial incentives may be useful in reducing dropout, making respondents up to twice as likely to continue an online survey to completion, and that such incentives may be useful where intrinsic motivation is insufficient to guarantee completion.

Internet-based experiments

A growing field in psychological research is the use of the Internet for experiments (e.g. http://www.psych.unizh.ch/genpsy/Ulf/Lab/webExpPsyLab.html). Hewson et al. (2003) classify these into four principal types: those that present static printed materials (for example, printed text or graphics); second are those that make use of non-printed materials (for example, video or sound); third are reaction-time experiments; and fourth are experiments that involve some form of interpersonal interaction (Hewson et al. 2003: 48).

The first kind of experiment is akin to a survey, in that it sends formulated material (e.g. graphically presented material) to respondents by email or by web page, and the intervention consists of sending different groups different materials. Here all the cautions and comments that were made about Internet-based surveys apply, particularly the problems of download times and of different browsers and platforms. However, the matter of download time applies more strongly to the second type of Internet-based experiment, which uses video clips or sound: some software packages will reproduce higher quality than others, even though the original that is transmitted is the same for everyone. This can be addressed by ensuring that the material runs at its optimum even on the slowest computer (Hewson et al. 2003: 49) or by stating the minimum hardware required for the experiment to run successfully.

Reaction-time experiments, i.e. those that require very precise timing (e.g. to milliseconds), are difficult in remote situations, as different platforms, varying Internet connection speeds and congestion on the Internet at busy times can render standardization virtually impossible. One solution is to have the experiment downloaded and run offline, with the results then uploaded and sent back to the researcher (a sketch of this idea follows below).

The fourth type involves interaction, and is akin to Internet interviewing (discussed below), facilitated by chat rooms. However, this is solely a written medium, and so intonation, inflection, hesitancies, non-verbal cues, and extra-linguistic and paralinguistic factors are ruled out. It is, in a sense, incomplete, although the increasing availability and use of simple screen-top video cameras is mitigating this. Indeed, this latter development renders observational studies an increasing possibility in the Internet age.
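
As one illustration of the reaction-time point above, timing can be performed entirely on the participant's machine, so that network latency affects only the upload of results, not the measurements themselves. The following is a minimal sketch, assuming a browser environment; the element ID, endpoint and trial count are illustrative rather than taken from the studies cited, and the approach does not remove platform differences in clock resolution:

```typescript
// Measure reaction times locally with the browser's high-resolution clock,
// then upload all results in a single batch at the end of the session.

function runTrial(stimulus: HTMLElement): Promise<number> {
  return new Promise((resolve) => {
    const shown = performance.now();        // monotonic, sub-millisecond clock
    stimulus.style.visibility = "visible";  // stimulus appears

    const onResponse = () => {
      const rt = performance.now() - shown; // elapsed ms, free of network delay
      stimulus.style.visibility = "hidden";
      document.removeEventListener("keydown", onResponse);
      resolve(rt);
    };
    document.addEventListener("keydown", onResponse);
  });
}

async function runExperiment(): Promise<void> {
  const stimulus = document.getElementById("stimulus")!; // hypothetical element
  const reactionTimes: number[] = [];
  for (let trial = 0; trial < 20; trial++) {
    reactionTimes.push(await runTrial(stimulus));
  }
  // Only this final upload touches the network.
  await fetch("/submit", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ reactionTimes }),
  });
}
```
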
Reips (2002a) reports that, in comparison to laboratory experiments, Internet-based experiments experienced greater problems of dropout, that the dropout rate in an Internet experiment was very varied (from 1 per cent to 87 per cent), and that dropout could be reduced by offering incentives, e.g. payments or lottery tickets, making a difference of as much as 31 per cent to dropout rates. Dropout in Internet-based research was due to a range of factors, for example motivation and how interesting the experiment was, not least of which was the non-compulsory nature of the experiment (in contrast, for example, to the compulsory nature of experiments undertaken by university student participants as part of their degree studies). The discussion of the ‘high hurdle’ technique earlier is applicable to experiments here.
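
Given dropout rates this variable, dropout is worth capturing as data in its own right (anticipating Reips's first ‘do’ below). A minimal sketch, assuming a hypothetical /progress endpoint: each page of the experiment reports the furthest point a participant reached, so that dropout can later be analysed per experimental condition:

```typescript
// Record the furthest page each participant reaches, so dropout itself
// becomes an analysable variable. Endpoint and field names are illustrative.

function logProgress(participantId: string, page: number, condition: string): void {
  // sendBeacon survives page unloads, so the last page before dropout is logged
  const payload = JSON.stringify({ participantId, page, condition });
  navigator.sendBeacon("/progress", payload);
}

// Later, per condition: the dropout rate is the share of starters
// who never reached the final page.
function dropoutRate(lastPages: number[], finalPage: number): number {
  if (lastPages.length === 0) return 0; // no starters, no rate
  const dropped = lastPages.filter((p) => p < finalPage).length;
  return dropped / lastPages.length;
}
```
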

Reips (2002b: 245–6) also reports that greater variance in results is likely in an Internet-based experiment than in a conventional experiment, due to technical matters (e.g. network connection speed, computer speed, multiple software packages running in parallel). On the other hand, Reips (2002b: 247) also reports that Internet-based experiments have attractions over laboratory and conventional experiments:

- They have greater generalizability because of their wider sampling.
- They demonstrate greater ecological validity, as typically they are conducted in settings that are familiar to the participants and at times that suit the participant (‘the experiment comes to the participant, not vice versa’), though, of course, the obverse of this is that the researcher has no control over the experimental setting (Reips 2002b: 250).
- They have a high degree of voluntariness, such that more authentic behaviours can be observed.

How correct these claims are is an empirical matter. For example, the use of sophisticated software packages (e.g. Java) can reduce experimenter control, as these packages may interact with other programming languages. Indeed, Schwarz and Reips (2001) report that the use of JavaScript led to a 13 per cent higher dropout rate in an experiment compared to an identical experiment that did not use JavaScript. Further, multiple returns by a single participant could confound reliability (discussed above in connection with survey methods).

Reips (2002a, 2002b) provides a series of ‘dos’ and ‘don’ts’ in Internet experimenting. In terms of ‘dos’ he gives five main points:

- Use dropout as a dependent variable.
- Use dropout to detect motivational confounding (i.e. to identify boredom and motivation levels in experiments).
- Place questions for personal information at the beginning of the Internet study. Reips (2002b) suggests that asking for personal information may assist in keeping participants in an experiment, and that this is part of the ‘high hurdle’ technique, where dropouts self-select out before the study rather than dropping out during it.
- Use techniques that help ensure quality in data collection over the Internet (e.g. the ‘high hurdle’ and ‘warm-up’ techniques discussed earlier, subsampling to detect and ensure consistency of results, using single passwords to ensure data integrity (see the sketch after this list), providing contact information, and reducing dropout).
- Use Internet-based tools and services to develop and announce your study (using commercially produced software to ensure that technical and presentational problems are overcome). There are also web sites (e.g. the American Psychological Society) that announce experiments.
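
The ‘single passwords’ point, together with the earlier caution about multiple returns by a single participant, can be made concrete with one-time access tokens: each invited participant receives a token, and a token is accepted only once. This is a minimal sketch in Node-flavoured TypeScript; the in-memory storage and function names are illustrative assumptions, not Reips's own procedure:

```typescript
// One-time tokens: a submission is accepted only if its token was issued
// and has not been used, so repeat returns cannot enter the data set twice.

import { randomUUID } from "node:crypto";

const issuedTokens = new Map<string, boolean>(); // token -> already used?

function issueToken(): string {
  const token = randomUUID();
  issuedTokens.set(token, false);
  return token; // given to exactly one invited participant
}

function acceptSubmission(token: string, data: unknown): boolean {
  const used = issuedTokens.get(token);
  if (used === undefined || used) {
    return false; // unknown token, or a repeat submission: reject
  }
  issuedTokens.set(token, true); // mark the token spent before storing
  storeResponse(data);
  return true;
}

function storeResponse(data: unknown): void {
  // persistence is out of scope for this sketch
  console.log("stored:", data);
}
```
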
In terms of ‘don’ts’ Reips gives five main points:

- Do not allow external access to unprotected directories. This can violate ethical and legal requirements, as it provides access to confidential data. It might also allow participants access to the structure of the experiment, thereby contaminating it.
- Do not allow public display of confidential participant data through URLs (uniform resource locators). This is a problem if the experiment submits responses via the GET protocol, a way of requesting an HTML page in which any submitted data appear in the URL itself as query parameters; this, again, violates ethical codes (a sketch follows after this list).
- Do not accidentally reveal the experiment’s structure (as this could affect participant behaviour). This might happen through including the experiment’s details in a related file or a file in the same directory.
- Do not ignore the technical variance inherent in the Internet (configuration details, browsers, platforms, bandwidth and software might all distort the experiment, as discussed above).
- Do not bias results through improper use of form elements, such as measurement errors, where omitting particular categories (e.g.
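
Relating to the second ‘don’t’ above: with the GET method, submitted form data are appended to the URL as query parameters, where they can linger in server logs, browser histories and referrer headers. Sending the same data in a POST body keeps them out of the URL. A minimal sketch, with an illustrative endpoint:

```typescript
// With GET, responses leak into the URL, e.g. /submit?participant=jane&score=12,
// and thus into logs and histories. A POST body carries the same data privately.

async function submitResponses(responses: Record<string, string>): Promise<void> {
  await fetch("/submit", {
    method: "POST",                  // data travel in the request body
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(responses), // nothing confidential in the URL
  });
}
```
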
