INTERNET-BASED SURVEYS 227

packages are discussed at http://www.tucows.com/, which lists and reviews a range of packages, while http://www.my3q.com/misc/register/register.phtml provides free online survey software. For presentational matters, Dillman and his colleagues (1998a; 1999) make the point that in a paper-based survey the eyes and the hands are focused on the same area, while in a web-based survey the eyes are focused on the screen while the hands are on the keyboard or the mouse, so completion is more difficult. This is one reason to avoid asking respondents to type in many responses to open-ended questions, replacing these instead with radio buttons or mouse clicks that automatically insert a tick into a box (Witte et al. 1999: 139). Further, some respondents may have less developed computer skills than others; the authors therefore suggest a mixed mode of operation (paper-based together with web-based versions of the same questionnaire). The researchers also found that ‘check-all-that-apply’ lists of factors had questionable reliability, as respondents tended to complete the items at the top of the list and ignore the remainder. Hence they recommend avoiding check-all-that-apply questions in a web-based survey. Similarly, they advocate keeping the introduction to the questionnaire short (no more than one screen), informative (e.g. about how to move on), and free of long lists of instructions. Further, as the first question in a survey tends to establish a particular mind-set in respondents, care is needed in setting the first question so that it entices participants rather than putting them off (e.g. not too difficult, not too easy, interesting, straightforward to complete, avoiding drop-down boxes and scrolling). Dillman et al.
(1998a; 1998b; 1999) make specific recommendations about the layout of the screen, for example keeping the response categories close to the question for ease of following, and using features like brightness, large fonts and spacing for clarity in the early parts of the survey. They also suggest following the natural movement of the eyes from the top left quadrant of the screen (the most important part, hence the part in which the question is located) to the bottom right quadrant (the least important part, which might contain the researcher’s logo). They comment that the natural movement of the eye is to read prose unevenly, with the risk of missing critical words, and that this is particularly true of long lines; hence they advocate keeping lines and sentences short (e.g. by inserting a hard break in the text, or by using table-editing features to locate the text in a table frame). Taking this further, they also advocate using some marker to indicate to the respondent where he or she has reached in the questionnaire (e.g. a progress bar or a table that indicates what proportion of the questionnaire has been completed so far). Respondents may not be familiar with web-based questionnaires, e.g. with radio buttons, scroll bars, the use of the mouse, the use of drop-down menus, or where to insert open-ended responses, and the survey designer must not overestimate the capability of the respondent to use the software, though Roztocki and Lahri (2002) suggest that there is no relationship between perceived level of computer literacy and preference for web-based surveys. Indeed, the use of such features may have to be explained in the survey itself. Dillman et al.
(1999) suggest that the problem of differential expertise in computer usage can be addressed in three ways:

- having the instructions for how to complete the item next to the item itself (not all placed together at the start of the questionnaire)
- asking the respondents at the beginning about their level of computer expertise and, if they are more expert, offering them the questionnaire with certain instructions omitted, while, if they are less experienced, directing them to instructions and further assistance
- having a ‘floating window’ that accompanies each screen and which can be maximized for further instructions.

Some web-based surveys prevent respondents from proceeding until they have completed all the items on the screen in question. While this might ensure coverage, it can also anger respondents – such that they give up and abandon the survey – or prevent them from having a
deliberate non-response (e.g. if they do not wish to reveal particular information, or if, in fact, the question does not apply to them, or if they do not know the answer). Hence the advice of Dillman et al. (1999) is to avoid this practice. One way to address this matter is to give respondents the opportunity to answer an item with ‘prefer not to answer’ or ‘don’t know’. A related point is that it is much easier for participants in a web-based survey to abandon the survey – a simple click of a button – so more attention has to be given to keeping them participating than in a paper-based survey.

Redline et al. (2002) suggest that branching instructions (e.g. ‘skip to item 13’, ‘go to item 10’, ‘if ‘‘yes’’ go to item 12, if ‘‘no’’ then continue’) can create problems in web-based surveys, as respondents may skip over items and series of questions that they should have addressed. This concerns the location of the instruction (e.g. to the right of the item, underneath the item, to the right of the answer box). Locating the instruction too far to the right of the answer box (e.g. more than nine characters of text to the right) can mean that it falls outside the foveal view (2 degrees) of the respondent’s vision and, hence, can be overlooked. Further, they report that setting a branching instruction in the same font size and colour as the rest of the text can result in its being regarded as unimportant, not least because respondents frequently expect the completion of a form to be easier than it actually is. Hence they advocate making the instruction easier to detect by locating it within the natural field of vision of the reader, printing it in a larger font to make it bolder, and using a different colour. They report that, for the most part, branching instruction errors occur because the instructions are overlooked and respondents are unaware of them, rather than because respondents deliberately disregard them (Redline et al. 2002: 18).
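One way a web-based survey can sidestep these problems is to automate the branching, so that the respondent never has to read or execute a ‘go to item 12’ instruction, while an explicit ‘prefer not to answer’ category keeps deliberate non-response possible. A minimal sketch in Python (the item numbers, question wording and names here are illustrative only, not drawn from Redline et al. or Dillman et al.):

```python
# Minimal sketch of automated skip logic for a web survey.
# All item numbers and wording are illustrative assumptions.
from dataclasses import dataclass, field

PREFER_NOT_TO_ANSWER = "prefer not to answer"

@dataclass
class Question:
    item: int                           # item number, e.g. 10, 12, 13
    text: str
    options: list                       # response categories (radio buttons)
    branch: dict = field(default_factory=dict)  # answer -> next item number

def next_item(q: Question, answer: str) -> int:
    """Route to the next item: the branch target for this answer,
    or simply the following item if no branch applies."""
    return q.branch.get(answer, q.item + 1)

# Example: 'if "yes" go to item 12, if "no" then continue'
q10 = Question(
    item=10,
    text="Have you completed a web-based survey before?",
    options=["yes", "no", PREFER_NOT_TO_ANSWER],
    branch={"yes": 12},
)

print(next_item(q10, "yes"))                  # branch taken automatically
print(next_item(q10, "no"))                   # continue to item 11
print(next_item(q10, PREFER_NOT_TO_ANSWER))   # skipping remains possible
```

In a real survey tool this routing would run server-side or in the page script; the point is simply that the software, not the respondent, applies the branching rule.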
The researchers also investigated a range of other variables that affected the success of branching instructions, and reported the following:

- The number of words in the question has an impact on the respondent: the greater the number of words, the lower the likelihood of correct branching by the reader, as the respondent is absorbed with the question rather than with the instructions.
- Using large fonts, strategies and verbal design to draw attention to branching instructions leads to greater observance of these instructions.
- The number of answer categories can exert an effect on the respondent: with more than seven categories the respondent may make errors and also overlook branching instructions.
- Having to read branching instructions at the same time as looking at answer categories results in the branching instructions being overlooked.
- Locating the branching instruction next to the final category of a series of answer boxes is a much safer guarantee of its being observed than placing it further up a list; this may mean changing the order of the list of response categories, so that the final category naturally leads to the branching instruction.
- Branching instructions should be placed where they are to be used and where they can be seen.
- Response-order effects operate in surveys, such that respondents in a self-administered survey tend to choose earlier rather than later items in a list (the primacy effect), thereby erroneously acting on branching instructions that appear with later items in a list.
- Questions with alternating branches (i.e. more than one branch) may be forgotten by the time they need to be acted upon, after respondents have completed an item.
- If every answer has a branch, then respondents may overlook the instructions for branching, as all the branches appear to be similar.
- If respondents are required to write an open-ended response, this may cause them to overlook a branching instruction, as they are so absorbed in composing their own response, and the branching instruction may be out of their field of vision while they are writing in their answer.
- Items that are located at the bottom of a page are more likely to elicit a non-response than items further up a page; hence if branching