learning - Academic Conferences Limited


Nabil Ben Abdallah and Françoise Poyet

notion of project. The difference between other constructivist environments and what virtual environments potentially offer can be described as making students not only active, but also actors, i.e. members of and contributors to the social and information space". For example, if we consider the learning user, these theoretical possibilities offered by a VLE should not make us forget that there is always a gap between the use of a VLE as defined by its designers, according to their representation of learning activities, and the use actually made of it by the learner in pursuit of his objectives. The fact that the learner's use of the VLE differs from that defined by the designer is not in itself a design problem or a dysfunction in learning; it is inherent in what Perriault (Perriault 2008) called the logic of use, that is, the individual's construction of a choice of instrument and a way of employing it to achieve a project.

This description of the various facets a VLE can have is necessary to better understand the difficulties we face in evaluating it. Indeed, for this evaluation to be reasonably reliable, it must cover both the VLE as a product (interface and functionalities) and the learning it supports. We believe that the ease with which a VLE can be used and the variety of its functionalities do not alone explain the success or failure of its design. Other factors must be taken into account to understand, and perhaps predict, whether the learning actors will adopt a given VLE.

This specific conception of evaluation takes up the distinction made by Nielsen (Nielsen 1994) between the practical acceptability and the social acceptability of a computer system. Nielsen does not describe the components of social acceptability in detail, but he clearly emphasizes its significance in the overall acceptability of a system.
We are here dealing with social acceptability, which refers to the way in which the learning actors perceive the different stakes related to the use of a VLE in a learning situation. These actors react favorably or unfavorably to the use of a VLE according to the opportunities, risks, and benefits that may arise from this use. Practical acceptability refers to a kind of "diagnostic of use" in which the designers as well as the learning actors are invited to measure the utility, for learning tasks, of the functionalities offered by a VLE and the way in which they can be used. These are the two principal categories of computer system evaluation: usability and utility (we will return to this idea later). Practical acceptability can also take into account the factors (such as compatibility with existing systems, cost, reliability, etc.) that support the integration of a VLE into a given learning situation.

3. Evaluation of computer systems and VLEs

The concept of usability, introduced by Eason (Eason 1984) and developed by other authors (Nielsen 1994; Bastien 2001; Rosson 2001; Shneiderman 2005), generally refers to the quality of the interface. Bastien suggests that usability be defined much more broadly, designating aspects related to the use of computer systems as well as those related to their utility. The two dimensions (usability and utility) are therefore not completely independent: an interface with good ergonomic qualities (optimal usability) cannot alone satisfy the user's expectations if functions essential for carrying out the task (real utility) are not implemented in the application. Standard ISO 9241-11 defines the usability of a computer system as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use".
Two approaches are generally used to evaluate computer systems: analytical and empirical; these approaches are not necessarily opposed but complementary. The data collected by empirical methods are primarily behavioral; they are used to locate the problems encountered by users. Data collected by analytical methods, on the other hand, provide information on the ergonomic quality of the interface without the user interacting directly with the system.

The evaluation methods can be complementary depending on the objective of the evaluation, the means available, and the moment in the life cycle of the application. Dix (Dix 1998) defined eight criteria to help the appraiser choose a suitable evaluation technique: the time of the evaluation (in relation to the life cycle of the application), the type of data to be collected, the human and material resources available, the constraints imposed by the evaluation, the mode of evaluation (laboratory or field), the degree of objectivity or subjectivity sought, the information available, and the data acquisition mode (online/offline). One or several methods may be used to collect the maximum amount of information on the system being observed in order to bring the evaluation to a successful conclusion. For example, interviews and questionnaires may supplement a cognitive walkthrough (Lewis 1992; Mahatody 2010) by providing information on the characteristics of the users and the tasks to be performed.

In an article devoted to inspection methods ("heuristic evaluation, cognitive walkthrough, formal usability inspections, and pluralistic usability walkthrough"), Hollingsed and Novick (Hollingsed 2007) assert that heuristic evaluation and cognitive walkthrough are methods that are still actively used and make it possible to obtain the best evaluation in terms of usability. A VLE, as a particular computer system, offers a set of functionalities for carrying out various activities directly or indirectly related to learning processes. The utility of these functionalities, as well as the way they are used, may be evaluated with HCI methods based on an analytical and/or empirical approach (Huart 2004; Hvannberg 2007).

Let us examine here whether the cognitive walkthrough method allows us to evaluate the usability and utility of a VLE. This method requires a specific description of the tasks to be carried out with the computer system, a description of the actions the user must perform to accomplish these tasks and, finally, a general description of the users and the context of use. The appraiser explores the interface across four stages of the human-computer interaction to evaluate the ease with which actions will be carried out. The tests make it possible to determine the parts of the interface that are likely to hinder optimal use of the application. For a VLE, the description of the tasks to be performed and the sequence of actions to be carried out poses a problem for the appraiser. The tasks carried out, for example, by a user-learner are indeed complex and closely interconnected. The appraiser must be able to describe the reality of a learning activity involving different actions that are not naturally carried out in a predefined sequence; within this learning activity, the commitment of the learner is not always predictable.
If the learning situation had been broken down into scenarios, the appraiser could use the scenario that seems most relevant to the elements being evaluated. He must be able to operationalize and implement the chosen scenario on the VLE. This method is of greatest interest when it is focused on "simple tasks" for which the series of actions required to carry them out is easy to describe. For example, evaluating the ease with which a VLE user-administrator chooses the actions to be carried out is perfectly possible using the cognitive walkthrough method.

Similar to a cognitive walkthrough, a heuristic evaluation can be applied to VLEs, since the principles at the root of the heuristics, such as flexibility and consistency, are fairly general and can consequently be used when evaluating different types of computer systems, including VLEs. However, for this evaluation to be relevant, it must be adapted to these specific environments. We can therefore acknowledge that all the methods presented in the preceding paragraphs are potentially valid for measuring the usability of VLEs and provide, under certain conditions, information on their practical utility. Nevertheless, as a VLE must make it possible to carry out various tasks directly or indirectly related to the complex process of learning, it is necessary to consider specific criteria relating to the teaching uses of a VLE. A VLE, as we have already mentioned, should be seen as a social space where the learner can be an active participant in learning. The evaluation must therefore be comprehensive, relating both the practical acceptability (utility and usability) and the social acceptability of a VLE. These two dimensions (practical and social) are not independent; they are articulated and complement each other.
In view of the complexity of evaluating VLEs, other theoretical approaches are worth examining: this is the case with activity theory.

4. Origins and nature of activity theory

Activity theory stems from the writings of Leontiev, a disciple of Vygotsky, in the 1930s. It finds its roots in the theory of social development, according to which social interactions play a considerable part in the development of cognition, and it holds that the social dimension is at the heart of human activity. Engeström (Engeström 1987) completed the triad (subject, tool, object) initially worked out by Vygotsky, adding the element "community" and two mediating elements: rules and the division of labor. Activity is thereby viewed as a system with its own structure, its own internal transformations, and its own development. The diagram below shows a system of human activity according to Engeström. The subject-community-object triangle represents a process connecting the object to a community of work which plays a part, with the subject, in producing or transforming the object. The relationship between the subject and the object is mediated by the tool; the relationship between the community and the subject is mediated by explicit or tacit rules; and finally, the relationship between the community and the object

