
Deliverable 2.3 - the School of Engineering and Design - Brunel ...



ICT Project 3D VIVANT – Deliverable 2.3

Contract no.: 248420

User Acceptance Validation Plan

2 METHODOLOGY

There are major differences between testing functions and features involving interactivity and the assessment of acceptance of perception quality (using vs. viewing). While, for instance, usability testing requires tests on whether users understand what they are expected to do and how easily they can find out and actually do so, validating acceptance of a novel perception of 3D video and audio will require very different research questions. These differences will influence the choice of methods to a large extent.

The following section will describe the objectives and the adopted approach for conducting user validation tests for the developments in 3D VIVANT.

2.1 QUALITATIVE VS. QUANTITATIVE ASSESSMENT METHODS

As the main objective of the User Validation Activities in 3D VIVANT is to investigate whether the results of 3D VIVANT, from holoscopic 3D to novel opportunities to view and interact with video content, will be accepted by potential users as an added value to their previous experience, many subjective aspects will have to be taken into account. In order to explore the limitations of 3D VIVANT's features and functions, the chosen evaluation methods will have to allow maximum openness and flexibility.

For both interactivity and viewing characteristics there is broad agreement that they cannot be sufficiently validated with objective measures alone. While performance, colour display and various other aspects may be measured according to objective criteria, subjective qualities like acceptance are best evaluated with qualitative methods.

All functions and features to be validated by project externals will be pilot tested by the partners involved in the development of these features before they are subjected to tests involving external users. These internal tests will include several objective tests, dictated by the specifications determined in the respective work packages. User acceptance, on the other hand, will be validated primarily through qualitative methods.

While quantitative evaluation methods require large samples of testers in order to produce statistics, derive objective conclusions from subjective statements, and confirm or falsify the scientists' expectations, qualitative methods are a source of recommendations and inspiration. Focus group discussions, open interviews and the thinking-aloud method, for instance, may produce answers that were not planned, let alone prepared, by the scientists leading the evaluation. This means, at the same time, that test results are less predictable, as testers can come up with answers to questions that were not asked or not even thought of.

This relative openness, however, does not preclude generalisation (Seale 1999) and transferability (Lincoln and Guba 1985) of the test results. Approved quality criteria ensure that both the choice of instrument and the evaluation of the test results will lead to an understanding of positive and negative potentials, and give valuable input as to how to avoid negative and achieve positive results (De Abreu et al. 2006).

The data to be obtained from users typically fall into two categories:

• Process data – Observations about what the users were doing and thinking as they proceeded through the tests.

• Summary data – 'Bottom-line' information like the time taken to finish a task, error rate, etc.
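The two categories above could be captured in a simple session record; the following is a minimal sketch in Python, where the names (`TaskResult`, `error_rate`) and the sample tasks are illustrative assumptions, not part of the 3D VIVANT test plan:

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    """Hypothetical record for one task in a user test session."""
    task_id: str
    completion_time_s: float   # summary data: time taken to finish the task
    errors: int                # summary data: number of errors observed
    # process data: free-text observer notes on what the user did and said
    observations: list[str] = field(default_factory=list)

def error_rate(results: list[TaskResult]) -> float:
    """Fraction of tasks completed with at least one error (summary measure)."""
    return sum(1 for r in results if r.errors > 0) / len(results)

# Example session with two illustrative tasks
results = [
    TaskResult("select_viewpoint", 12.4, 0, ["hesitated at the menu"]),
    TaskResult("adjust_depth", 30.1, 2),
]
print(error_rate(results))  # 0.5
```

The point of the split is visible in the structure: the numeric fields feed quantitative summaries such as `error_rate`, while the `observations` list holds the qualitative process data that later feeds interviews and analysis.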

Relevant data collection methods are:

• Questionnaires – comprising both closed and open-ended questions. Questionnaires in this
01.09.11 8
