
Anne Jelfs and Chetz Colwell

distance education we realise that all universities face similar issues: that is, providing good electronic support to staff and students. Evaluating how students work with module materials and online provision is therefore paramount.

2. Evaluating internet-based educational resources

As a relatively new form of educational delivery, there is no clear definition of how educational resources presented electronically can be evaluated. To feed into the development of our framework we looked at current practices and the accepted methods of data collection for evaluating electronic materials. Our methods are based on valid evaluation practices, but our findings inform our framework.

A number of recommendations relating to the evaluation of interactive systems have been established by researchers (see Hix & Hartson, 1993; Nielsen, 1994; Newman & Lamming, 1995; Joynes, 2000; Sheard & Markham, 2005; Light, 2006). For example, participants should be recruited from the target population of users, and they should be observed as they complete a set of pre-planned, representative tasks (van Schaik, 1999). There is also an assumption that a comprehensive evaluation of a web-based environment needs to consider both the technical and the pedagogical aspects of the system (Sheard & Markham, 2005).

A major challenge for designers and human-computer interaction researchers is to develop software tools able to engage novice learners and to support their learning even at a distance, because a poorly designed interface that makes students feel lost, confused, or frustrated will hinder effective learning and information retention (Ardito, Costabile, De Marsico et al, 2006). Ihamäki & Vilpola (2004) argue that if the interface of a VLE is not usable, the user's focus on the actual content is diminished, because using the VLE itself requires a considerable amount of concentration. Good usability, on the other hand, allows the user to focus on the content, thus improving learning results. Making websites accessible is also of growing importance as the internet becomes the main delivery channel for eLearning resources (Phipps & Kelly, 2006).

Educational evaluations often focus on learning outcomes; however, with the growth in online provision, our research focuses on making systems easy to use and on improving access to high-quality education. In the next section we consider the recognised forms of usability and accessibility evaluation before moving to the methods we have adopted in our joint work.

One of the methods at the forefront of software evaluations is verbal protocol, also known as think-aloud protocol, in which a participant is asked to speak about what they are doing and why. The think-aloud protocol requires the participant to complete a set of tasks using a prototype website and to 'think aloud', that is, to say everything they are thinking while they complete the tasks (Turnbow, Kasianovitz et al, 2005). The participant needs to maintain a commentary on what they are doing and why they are taking a particular path. This allows the researcher to have some understanding of the choices the participant has made.

It is also valuable if the participants are observed and video recorded so that the participants' actions can be used in further data analysis (Hughes & Parkes, 2003). Usually there are two observers: one notes everything the participant says and whether and how the participant completes the task, while the other directs the participant and answers any questions. This type of user test provides essential real-time feedback on potential problems in the design and organisation of a website. It is useful because it gives the researcher immediate information that might otherwise be forgotten by the participant by the end of the evaluation session. An approach we adopt is to break the session into chunks of tasks so that the students are only thinking aloud for short periods within the longer session.

When conducting think-aloud sessions, Dumas (2001) considers there to be three levels of verbalisation. At Level 1, the person conducting the research sits behind and out of view of the participant and does not speak unless the participant fails to follow the instructions; if participants stop talking they are prompted to keep talking. This level is usually used in problem-solving tasks. At Level 2, participants manipulate non-verbal information such as geometric shapes. At Level 3, participants are asked to verbalise but also to explain why they are thinking or doing something. We usually work at Level 3, although we have on occasion tried Level 1; our work is primarily not problem solving but understanding how we can improve on the current provision.

