Showing all 12 results
Peer reviewed
Pejic, Marko; Savic, Goran; Segedinac, Milan – Journal of Educational Computing Research, 2021
This study proposes a software system for determining gaze patterns in on-screen testing. The system applies machine learning techniques to eye-movement data obtained from an eye-tracking device to categorize students according to their gaze behavior pattern while solving an on-screen test. These patterns are determined by converting eye movement…
Descriptors: Eye Movements, Computer Assisted Testing, Computer Software, Evaluation Methods
Peer reviewed
Wang, Zhen; Cao, Yang; Gong, Shaoying – Journal of Educational Computing Research, 2023
Although learner characteristics have been identified as important moderator variables for feedback effectiveness, the question of why learners benefit differently from feedback has only received limited attention. In this study, we investigated: (1) whether learners' dominant goal orientation moderated the effects of computer-based elaborated…
Descriptors: Goal Orientation, Feedback (Response), Cues, Student Characteristics
Peer reviewed
Smolinsky, Lawrence; Marx, Brian D.; Olafsson, Gestur; Ma, Yanxia A. – Journal of Educational Computing Research, 2020
Computer-based testing is an expanding use of technology offering advantages to teachers and students. We studied Calculus II classes for science, technology, engineering, and mathematics majors using different testing modes. Three sections with a total of 324 students used paper-and-pencil testing, computer-based testing, or both. Computer tests gave…
Descriptors: Test Format, Computer Assisted Testing, Paper (Material), Calculus
Peer reviewed
Esteve-Mon, Francesc M.; Cela-Ranilla, Jose María; Gisbert-Cervera, Mercè – Journal of Educational Computing Research, 2016
The acquisition of teacher digital competence is a key aspect in the initial training of teachers. However, most existing evaluation instruments do not provide sufficient evidence of this teaching competence. In this study, we describe the design and development process of a three-dimensional (3D) virtual environment for evaluating the teacher…
Descriptors: Foreign Countries, Preservice Teachers, Undergraduate Students, Technological Literacy
Peer reviewed
Paiva, Rui C.; Ferreira, Milton S.; Mendes, Ana G.; Eusébio, Augusto M. J. – Journal of Educational Computing Research, 2015
This article presents a research study addressing the development, implementation, evaluation, and use of Interactive Modules for Online Training (MITO) of mathematics in higher education. This work was carried out in the context of the MITO project, which combined several features of the learning management system Moodle, the computer-aided…
Descriptors: Foreign Countries, Computer Assisted Testing, Computer Assisted Instruction, College Mathematics
Peer reviewed
Thompson, Meredith Myra; Braude, Eric John – Journal of Educational Computing Research, 2016
The assessment of learning in large online courses requires tools that are valid, reliable, easy to administer, and can be automatically scored. We have evaluated an online assessment and learning tool called Knowledge Assembly, or Knowla. Knowla measures a student's knowledge in a particular subject by having the student assemble a set of…
Descriptors: Computer Assisted Testing, Teaching Methods, Online Courses, Critical Thinking
Peer reviewed
Kealy, W. A.; Ritzhaupt, A. D. – Journal of Educational Computing Research, 2010
Educational researchers have "rarely" addressed the problem of how to provide feedback on constructed responses. All participants (N = 76) read a story and completed short-answer questions based on the text, with some receiving feedback consisting of the exact material on which the questions were based. During feedback, two groups receiving…
Descriptors: Feedback (Response), Reading Comprehension, Recall (Psychology), Short Term Memory
Peer reviewed
Mason, B. Jean; Patry, Marc; Bernstein, Daniel J. – Journal of Educational Computing Research, 2001
Discussion of adapting traditional paper-and-pencil tests to electronic formats focuses on a study of undergraduates that examined the equivalence between computer-based and traditional tests when the computer testing provided opportunities comparable to paper testing conditions. Results showed no difference between scores from the two test types.…
Descriptors: Comparative Analysis, Computer Assisted Testing, Higher Education, Intermode Differences
Peer reviewed
Bodmann, Shawn M.; Robinson, Daniel H. – Journal of Educational Computing Research, 2004
This study investigated the effect of several different modes of test administration on scores and completion times. In Experiment 1, paper-based assessment was compared to computer-based assessment. Undergraduates completed the computer-based assessment faster than the paper-based assessment, with no difference in scores. Experiment 2 assessed…
Descriptors: Computer Assisted Testing, Higher Education, Undergraduate Students, Evaluation Methods
Peer reviewed
Smith, Brooke; Caputi, Peter – Journal of Educational Computing Research, 2004
Test equivalence can be evaluated in terms of four aspects: psychometric, behavioral, experiential, and individual differences (i.e., relativity of equivalence) (Honaker, 1988). This study examined the psychometric properties of the Attitude Towards Computerized Assessment Scale (ATCAS) designed to assess one of these criteria, namely,…
Descriptors: Measures (Individuals), Psychometrics, Testing, Factor Analysis
Peer reviewed
Shermis, Mark D.; Mzumara, Howard R.; Bublitz, Scott T. – Journal of Educational Computing Research, 2001
This study of undergraduates examined differences between computer adaptive testing (CAT) and self-adaptive testing (SAT), including feedback conditions and gender differences. Results of the Test Anxiety Inventory, Computer Anxiety Rating Scale, and a Student Attitude Questionnaire showed measurement efficiency is differentially affected by test…
Descriptors: Adaptive Testing, Computer Anxiety, Computer Assisted Testing, Gender Issues
Peer reviewed
Clariana, Roy B.; Wallace, Patricia – Journal of Educational Computing Research, 2007
This proof-of-concept investigation describes a computer-based approach for deriving the knowledge structure of individuals and of groups from their written essays, and considers the convergent criterion-related validity of the computer-based scores relative to human rater essay scores and multiple-choice test scores. After completing a…
Descriptors: Computer Assisted Testing, Multiple Choice Tests, Construct Validity, Cognitive Structures