Showing all 4 results
Peer reviewed
Jeske, Heimo J.; Lall, Manoj; Kogeda, Okuthe P. – Journal of Information Technology Education: Innovations in Practice, 2018
Aim/Purpose: The aim of this article is to develop a tool that detects plagiarism in real time among students being evaluated in a computer-based assessment setting. Background: Cheating by copying all or part of a program's source code is a serious concern for academic institutions. Many academic institutions apply a combination of…
Descriptors: Plagiarism, Identification, Computer Software, Computer Assisted Testing
Hsin-Ling Hung; James W. Altschuld – Sage Research Methods Cases, 2014
The Asia-Pacific program evaluation development and challenge project in 2006-2007 was a collaboration between researchers with varied cultural backgrounds from two different countries. It focused on the Asia-Pacific region and used a Delphi technique involving experts from 11 countries/areas (Australia, China, Hong Kong, Indonesia, Japan,…
Descriptors: Program Evaluation, Cultural Differences, Foreign Countries, Surveys
Peer reviewed
Veerasamy, Ashok Kumar; D'Souza, Daryl; Laakso, Mikko-Jussi – Journal of Educational Technology Systems, 2016
This article presents a study that examined novice students' answers in an introductory programming final e-exam to identify misconceptions and types of errors. Our study used the Delphi concept inventory to identify student misconceptions and the skill-, rule-, and knowledge-based errors approach to identify the types of errors made by novices…
Descriptors: Computer Science Education, Programming, Novices, Misconceptions
Peer reviewed
Nevo, Dorit; McClean, Ron; Nevo, Saggi – Journal of Information Systems Education, 2010
This paper discusses the relative advantage offered by online Students' Evaluations of Teaching (SET) and describes a study conducted at a Canadian university to identify critical success factors of online evaluations from the students' point of view. Factors identified as important by the students include anonymity, ease of use (of both SET survey…
Descriptors: Student Evaluation of Teacher Performance, Information Technology, Surveys, Evaluation Methods