learning - Academic Conferences Limited
Martin Cápay et al.

On the other hand, the question remains why students struggle with a task on paper (in printed form) even when they are able to solve the same task easily in electronic form. After discussion with psychologists, we suppose that the reason lies in the higher acceptability of the electronic form of testing, which unfortunately weakens students’ critical thinking as well as their ability to write in a formally correct way. Elaboration of a given task is faster in this case, but merely mechanical. Observation also showed that electronic tasks offering the possibility to “undo” or “clear the desktop” (e.g. delete the configurations in a Karnaugh map) are more likely to be solved well. In a printed test, a bad solution cannot be taken back or redirected; this significantly shortens the time spent and decreases students’ willingness to revise the solution process. Interactive animations and interactive problem tasks are undoubtedly a very interesting way of enhancing the educational process. However, several problems occur in their use that need to be investigated in greater detail and eliminated in the future. We can assume that this testing method may help to mobilise students but, on the other hand, it can also partially suppress critical thinking and the ability to follow formal conventions in writing.

Acknowledgements

This paper is published thanks to the financial support of the ESF project ITMS 26110230012 “Virtual faculty – Distance Learning at FSVaZ UKF in Nitra” and the national project KEGA 080-019SPU-4/2010 “Development of creative potential in eLearning”.
e-Assessment Using Digital Pens – a Pilot Study to Improve Feedback and Assessment Processes

Tim Cappelli
University of Manchester, UK
Timothy.cappelli-2@manchester.ac.uk

Abstract: Manchester Medical School is the largest medical school in the UK, with over 2000 students on the MBChB programme. During the final three years of the programme, all students undergo regular assessments of their clinical skills through a series of Objective Structured Clinical Examinations (OSCEs). The OSCEs require students to carry out a series of simulated exercises in front of an examiner. The examiner completes a score-sheet for each student, giving a mark between 1 and 7 for each of four criteria together with a ‘global mark’, again between 1 and 7. The examiner also leaves a small piece of written feedback at the bottom of each form. Following the exam, all the score-sheets from each of the four teaching hospitals attached to the University are scanned using an optical reader. This involves a large amount of effort and provides many opportunities for error. Due to the work involved and logistical problems, student feedback from the OSCEs is currently limited to a single mark. Although the examiners provide written feedback on the score-sheet, it is only made available to students scoring less than 4 on their global mark. The students and the School are increasingly motivated to give all students access to the written feedback. Hence, in an effort to increase the efficiency of the OSCE process and enable the delivery of student feedback, the Medical School has piloted the use of digital pens as a method of capturing and processing scoring and feedback. This case study presents the process and evaluation of the pilot. It examines the choice of technology, the aims of the pilot and an evaluation of the technology to assess whether the objectives have been achieved. An impact analysis of the use of the pens over a five-year period also shows the return on investment.
Keywords: e-assessment, digital pens, feedback

1. Background

Manchester Medical School is the largest medical school in the UK, with over 2000 students across a five-year MBChB programme. The final three years of the programme are spent based at one of five teaching hospitals across the North-West of England. With so many students dispersed across so many sites, the logistics of curriculum delivery and examination are immensely challenging. This is particularly so when it comes to the assessment of students’ clinical skills. In common with most medical schools, a student’s clinical skills are assessed using a series of Objective Structured Clinical Examinations (OSCEs) (Harden and Gleeson 1979). OSCEs have been used for over three decades as a reliable and objective method of testing students’ clinical abilities. OSCEs consist of a series of short tests – or stations – designed to test a student’s clinical performance along with competences in history taking and clinical examination. Each station has a different examiner and students rotate through the stations to complete all the stations on their circuit. In this way, all candidates take the same stations with the same set of examiners, providing the basis of an objective, structured assessment. Students at the University of Manchester undertake OSCEs twice a year from year three onwards, with each of the four base hospitals delivering 16 stations per circuit, four times a day over a week, to assess all the students in a year group. Each examiner has one form for each student; with 16 stations and 16 students per circuit, four circuits per day per hospital and four hospitals, the number of forms soon mounts up. At present all the forms are marked, collected and checked manually, and then returned to the Medical Exams Office (MEO) at the University for scanning and collating.
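To put the paper load in concrete terms, the figures above can be multiplied out directly; a quick sketch of the arithmetic, using only the numbers stated in the text (the length of the exam week is not specified, so the five-day figure is an assumption):

```python
# Forms generated per OSCE sitting, from the figures stated in the text.
stations_per_circuit = 16   # one examiner, and one form per student, per station
students_per_circuit = 16   # each examiner fills one form per student
circuits_per_day = 4        # per hospital
hospitals = 4

forms_per_circuit = stations_per_circuit * students_per_circuit   # 16 x 16
forms_per_day = forms_per_circuit * circuits_per_day * hospitals  # all sites

print(forms_per_circuit)  # 256
print(forms_per_day)      # 4096

# The exam runs "over a week"; assuming a 5-day working week (an
# assumption -- the text does not give the number of exam days):
print(forms_per_day * 5)  # 20480
```

So each examination period produces on the order of tens of thousands of forms, all of which must be collected, checked and scanned by hand under the current process.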
The opportunities for making this process more efficient and secure through the introduction of electronic marking are clear; several studies show that e-assessment improves information management and ‘avoids the meltdown of paper systems’ (Buzzetto-More and Alade 2006, Ridgeway, McCusker and Pead 2007). However, efficiency savings were not the driving force behind the Medical School’s commissioning of a pilot scheme in June 2010; it was rather the lack of feedback given to students after the exam. Each form completed by the examiner consists of a series of criteria on which the student is given a score between one and seven, with four representing the minimum standard required. The examiner then gives a ‘global mark’ for the student’s performance, again between one and seven, and is encouraged to write one or two lines of feedback in a text box at the bottom of the form. However, due to difficulties in making the manual forms available to students, only those students who have scored less than four on any given station are provided with this feedback, and then only some time after the exam. This is clearly not in keeping with best practice for summative feedback, with most authors agreeing that feedback should be constructive, timely and freely available (Nicol and Macfarlane-Dick 2006, Gibbs 2010). Increasing