Showing all 9 results
Peer reviewed
Dai, Jing; Gu, Xiaoqing; Zhu, Jiawen – Journal of Educational Computing Research, 2023
Personalized recommendation plays an important role in content selection during the adaptive learning process. Recommending effective items that improve learning performance remains a challenge. The aim of this study was to examine the feasibility of applying adaptive testing technology for personalized recommendation. We proposed the…
Descriptors: Individualized Instruction, Intelligent Tutoring Systems, Evaluation Methods, Tests
Peer reviewed
Pejic, Marko; Savic, Goran; Segedinac, Milan – Journal of Educational Computing Research, 2021
This study proposes a software system for determining gaze patterns in on-screen testing. The system applies machine learning techniques to eye-movement data obtained from an eye-tracking device to categorize students according to their gaze behavior pattern while solving an on-screen test. These patterns are determined by converting eye movement…
Descriptors: Eye Movements, Computer Assisted Testing, Computer Software, Evaluation Methods
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. When faced with large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
Peer reviewed
Stowell, Jeffrey R.; Allan, Wesley D.; Teoro, Samantha M. – Journal of Educational Computing Research, 2012
Emotions experienced during online academic examinations may differ from emotions experienced in the traditional classroom testing situation. Students in a "Psychology of Learning" course (n = 61) completed assessments of emotions before and after a quiz in each of the following settings: online at their own choice of time and location; online in…
Descriptors: Student Evaluation, Test Anxiety, Emotional Response, Evaluation Methods
Peer reviewed
Graham, Charles R.; Tripp, Tonya; Wentworth, Nancy – Journal of Educational Computing Research, 2009
This study explores the efforts at Brigham Young University to improve preservice candidates' technology integration using the Teacher Work Sample (TWS) as an assessment tool. Baseline data analyzed from 95 TWSs indicated that students were predominantly using technology for productivity and information presentation purposes even though…
Descriptors: Field Instruction, Work Sample Tests, Technology Integration, Educational Technology
Peer reviewed
Bodmann, Shawn M.; Robinson, Daniel H. – Journal of Educational Computing Research, 2004
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed…
Descriptors: Computer Assisted Testing, Higher Education, Undergraduate Students, Evaluation Methods
Peer reviewed
Koul, Ravinder; Clariana, Roy B.; Salehi, Roya – Journal of Educational Computing Research, 2005
This article reports the results of an investigation of the convergent criterion-related validity of two computer-based tools for scoring concept maps and essays as part of the ongoing formative evaluation of these tools. In pairs, participants researched a science topic online and created a concept map of the topic. Later, participants…
Descriptors: Scoring, Essay Tests, Test Validity, Formative Evaluation
Peer reviewed
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures
Peer reviewed
Miller, Tristan – Journal of Educational Computing Research, 2003
Latent semantic analysis (LSA) is an automated, statistical technique for comparing the semantic similarity of words or documents. In this article, I examine the application of LSA to automated essay scoring. I compare LSA methods to earlier statistical methods for assessing essay quality, and critically review contemporary essay-scoring systems…
Descriptors: Semantics, Test Scoring Machines, Essays, Semantic Differential