Showing 1 to 15 of 32 results
Peer reviewed
Direct link
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous work on ASAG has mainly used non-neural or neural methods. However, the former depends on handcrafted features and is limited by its inflexibility and high cost, and the latter ignores global word co-occurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs
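The Tan et al. abstract above notes that neural ASAG methods tend to ignore global word co-occurrence in a corpus. As a rough illustration of what a corpus-level co-occurrence statistic looks like (a minimal sketch, not the authors' implementation; the window size and toy corpus are assumptions):

```python
# Illustrative sketch only: count how often word pairs co-occur within a
# fixed window across an entire corpus of student responses. Such counts
# could serve, for example, as edge weights in a corpus-level word graph.
from collections import Counter

def cooccurrence_counts(corpus, window=3):
    """Count co-occurring word pairs within a sliding window, corpus-wide."""
    counts = Counter()
    for response in corpus:
        tokens = response.lower().split()
        for i in range(len(tokens)):
            for j in range(i + 1, min(i + window, len(tokens))):
                if tokens[i] != tokens[j]:
                    counts[tuple(sorted((tokens[i], tokens[j])))] += 1
    return counts

# Toy corpus of short answers (illustrative data, not from the paper)
corpus = [
    "photosynthesis converts light energy into chemical energy",
    "plants use light energy to make chemical energy",
]
for pair, n in cooccurrence_counts(corpus).most_common(3):
    print(pair, n)
```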
Peer reviewed
Direct link
Aditya Shah; Ajay Devmane; Mehul Ranka; Prathamesh Churi – Education and Information Technologies, 2024
Online learning has grown with advances in technology and the flexibility it offers. Online examinations measure students' knowledge and skills. Traditional question papers suffer from inconsistent difficulty levels, arbitrary question allocation, and poor grading. The proposed model calibrates question-paper difficulty based on student performance to…
Descriptors: Computer Assisted Testing, Difficulty Level, Grading, Test Construction
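Shah et al. describe calibrating question-paper difficulty from student performance. A minimal sketch of one common calibration signal, assuming a classical-test-theory difficulty index (the proportion of past students who answered an item correctly; item IDs and response data are hypothetical, and this is not the model proposed in the paper):

```python
def difficulty_index(responses):
    """responses: list of 0/1 correctness flags for one item across students."""
    return sum(responses) / len(responses) if responses else None

# Hypothetical response data per item
item_responses = {
    "Q1": [1, 1, 1, 0, 1],  # mostly answered correctly -> easier item
    "Q2": [0, 1, 0, 0, 1],  # often missed -> harder item
}
for item, flags in item_responses.items():
    print(item, round(difficulty_index(flags), 2))
```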
Peer reviewed
PDF on ERIC Download full text
Telles-Langdon, David M. – Journal of Teaching and Learning, 2020
As the world reeled from the realization that a pandemic of a magnitude not seen in a century was upon us, and that physical distancing to reduce the speed of transmission was going to necessitate suspension of regular classes, university faculty members scrambled to convert their planned lectures from in-person to online formats. This article…
Descriptors: COVID-19, Pandemics, Online Courses, Educational Technology
Guskey, Thomas R.; Jung, Lee Ann – Educational Leadership, 2016
Many educators consider grades calculated from statistical algorithms more accurate, objective, and reliable than grades they calculate themselves. But in this research, the authors first asked teachers to use their professional judgment to choose a summary grade for hypothetical students. When the researchers compared the teachers' grade with the…
Descriptors: Grading, Computer Assisted Testing, Interrater Reliability, Grades (Scholastic)
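Guskey and Jung discuss grades "calculated from statistical algorithms". As a minimal, hypothetical example of how two such summary rules can disagree about the same record (which is why comparing them with professional judgment is interesting), consider a mean versus a median of category scores; the numbers are illustrative, not data from the study:

```python
from statistics import mean, median

scores = [95, 92, 90, 88, 40]  # one outlier, e.g. a missed assignment
print("mean:  ", round(mean(scores), 1))  # pulled down by the outlier
print("median:", median(scores))          # robust to the outlier
```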
Peer reviewed
PDF on ERIC Download full text
Thompson, Darrall G. – Journal of Learning Analytics, 2016
This paper attempts to address the possibility of real change after a hundred years of exam-based assessments that produce a single mark or grade as feedback on students' progress and abilities. It uses visual feedback and analysis of graduate attribute assessment to foreground the diversity of aspects of a student's performance across subject…
Descriptors: Evaluation Methods, Student Evaluation, Self Evaluation (Individuals), Feedback (Response)
Peer reviewed
PDF on ERIC Download full text
Koneru, Indira – Turkish Online Journal of Distance Education, 2017
Current and emerging technologies enable Open Distance Learning (ODL) institutions to integrate e-Learning in innovative ways and add value to existing teaching-learning and assessment processes. ODL e-Assessment systems have evolved from Computer Assisted/Aided Assessment (CAA) systems through intelligent assessment and feedback systems.…
Descriptors: Online Courses, Educational Technology, Technology Uses in Education, Distance Education
Peer reviewed
Direct link
Fisteus, Jesus Arias; Pardo, Abelardo; García, Norberto Fernández – Journal of Science Education and Technology, 2013
Although technology for automatic grading of multiple choice exams has existed for several decades, it is not yet as widely available or affordable as it should be. The main reasons preventing this adoption are the cost and the complexity of the setup procedures. In this paper, "Eyegrade," a system for automatic grading of multiple…
Descriptors: Multiple Choice Tests, Grading, Computer Assisted Testing, Man Machine Systems
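A system like Eyegrade recognizes students' marked answers and then grades them against a key. The sketch below illustrates only that final grading step; the data format and the one-point-per-correct rule are assumptions for illustration, not Eyegrade's actual API:

```python
def grade_mcq(detected, key):
    """detected/key: dicts mapping question number -> chosen option letter."""
    correct = sum(1 for q, ans in key.items() if detected.get(q) == ans)
    return correct, len(key)

# Hypothetical recognition output; None marks a blank or unreadable answer.
detected = {1: "B", 2: "D", 3: "A", 4: None}
key = {1: "B", 2: "C", 3: "A", 4: "D"}
score, total = grade_mcq(detected, key)
print(f"{score}/{total}")
```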
Peer reviewed
Direct link
Elliott, Victoria – Changing English: Studies in Culture and Education, 2014
Automated essay scoring programs are becoming more common and more technically advanced. They provoke strong reactions from both their advocates and their detractors. Arguments tend to fall into two categories: technical and principled. This paper argues that since technical difficulties will be overcome with time, the debate ought to be held in…
Descriptors: English, English Instruction, Grading, Computer Assisted Testing
Montacute, Rebecca; Holt-White, Erica – Sutton Trust, 2020
The COVID-19 pandemic poses significant challenges for higher education across the UK. This year's cohort of university applicants now face months of uncertainty, as they try to make decisions on their future amid exam cancellations and a new system to determine grades, all without face-to-face support from their school. For students currently…
Descriptors: Disease Control, Distance Education, Online Courses, Access to Computers
Peer reviewed
Direct link
Cope, Bill; Kalantzis, Mary – Open Review of Educational Research, 2015
This article sets out to explore a shift in the sources of evidence-of-learning in the era of networked computing. One of the key features of recent developments has been popularly characterized as "big data". We begin by examining, in general terms, the frame of reference of contemporary debates on machine intelligence and the role of…
Descriptors: Data Analysis, Evidence, Computer Uses in Education, Artificial Intelligence
Peer reviewed
Direct link
Chen, Yao-Hsien; Cheng, Ching-Hsue; Liu, Jing-Wei – Computers & Education, 2010
In order to evaluate student learning achievement, several aspects should be considered, such as exercises, examinations, and observations. Traditionally, such an evaluation calculates a final score using a weighted average method after awarding numerical scores, and then determines a grade according to a set of established crisp criteria.…
Descriptors: Feedback (Response), Academic Achievement, Student Evaluation, Grading
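Chen, Cheng and Liu describe the traditional evaluation they set out to improve: a weighted average over assessment components followed by a grade assigned from crisp cutoff criteria. A minimal sketch of that baseline, with illustrative weights and cutoffs (not the authors' proposed method):

```python
def final_score(scores, weights):
    """Weighted average of component scores; weights are assumed to sum to 1."""
    return sum(scores[k] * weights[k] for k in weights)

def crisp_grade(score, cutoffs=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Map a numeric score to a grade using fixed (crisp) thresholds."""
    for threshold, grade in cutoffs:
        if score >= threshold:
            return grade
    return "F"

scores = {"exercises": 85, "midterm": 78, "final": 92}
weights = {"exercises": 0.3, "midterm": 0.3, "final": 0.4}
s = final_score(scores, weights)
print(round(s, 1), crisp_grade(s))  # 85.7 B
```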
Peer reviewed
Direct link
He, Yulan; Hui, Siu Cheung; Quan, Tho Thanh – Computers & Education, 2009
Summary writing is an important part of many English language examinations. As grading students' summaries is a very time-consuming task, computer-assisted assessment can help teachers carry out the grading more effectively. Several techniques such as latent semantic analysis (LSA), n-gram co-occurrence and BLEU have been proposed to…
Descriptors: Semantics, Intelligent Tutoring Systems, Grading, Computer Assisted Testing
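He, Hui and Quan mention n-gram co-occurrence among the techniques used for automated summary assessment. A minimal sketch of that idea, the fraction of a student summary's n-grams that also occur in a reference summary (a BLEU-like precision without clipping, smoothing, or a brevity penalty; the texts are illustrative):

```python
def ngrams(text, n):
    tokens = text.lower().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_precision(student, reference, n=2):
    """Share of the student's n-grams that also appear in the reference."""
    cand = ngrams(student, n)
    ref = set(ngrams(reference, n))
    return sum(1 for g in cand if g in ref) / len(cand) if cand else 0.0

reference = "the water cycle moves water between the ocean the air and the land"
student = "water moves between the ocean the air and the land"
print(round(ngram_precision(student, reference), 2))
```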
Peer reviewed
Direct link
Chen, Li-Ju; Ho, Rong-Guey; Yen, Yung-Chin – Educational Technology & Society, 2010
This study aimed to explore the effects of marking and metacognition-evaluated feedback (MEF) in computer-based testing (CBT) on student performance and review behavior. Marking is a strategy in which students place a question mark next to a test item to indicate an uncertain answer. The MEF provided students with feedback on test results…
Descriptors: Feedback (Response), Test Results, Test Items, Testing
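Chen, Ho and Yen study marking, where students flag items they are unsure about. One simple way such flags can drive post-test feedback is to cross-tabulate them with correctness, so learners see where their confidence was miscalibrated. The data structure below is a hypothetical illustration, not the MEF system described in the paper:

```python
def marking_feedback(answers):
    """answers: list of dicts with 'item', 'correct' (bool), 'marked' (bool)."""
    buckets = {"confident & correct": [], "confident & wrong": [],
               "unsure & correct": [], "unsure & wrong": []}
    for a in answers:
        key = ("unsure" if a["marked"] else "confident") + \
              (" & correct" if a["correct"] else " & wrong")
        buckets[key].append(a["item"])
    return buckets

# Hypothetical post-test record
answers = [
    {"item": 1, "correct": True,  "marked": False},
    {"item": 2, "correct": False, "marked": True},
    {"item": 3, "correct": False, "marked": False},  # worth reviewing first
]
for category, items in marking_feedback(answers).items():
    print(category, items)
```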
Peer reviewed
Direct link
Davies, Phil – Assessment & Evaluation in Higher Education, 2009
This article details the implementation and use of a "Review Stage" within the CAP (computerised assessment by peers) tool as part of the assessment process for a post-graduate module in e-learning. It reports on the effect of providing students with a "second chance" at marking and commenting on their peers' essays, having been able to view the…
Descriptors: Feedback (Response), Student Evaluation, Computer Assisted Testing, Peer Evaluation
Peer reviewed
Direct link
Lingard, Jennifer; Minasian-Batmanian, Laura; Vella, Gilbert; Cathers, Ian; Gonzalez, Carlos – Assessment & Evaluation in Higher Education, 2009
Effective criterion-referenced assessment requires grade descriptors to clarify to students what skills are required to gain higher grades. But do students and staff actually share the same perception of the grading system, and do students whose perceptions align closely with those of staff perform better than those whose perceptions are less accurately aligned? Since…
Descriptors: Feedback (Response), Prior Learning, Physics, Difficulty Level