Showing 1 to 15 of 20 results
Peer reviewed
Direct link
Elizabeth L. Wetzler; Kenneth S. Cassidy; Margaret J. Jones; Chelsea R. Frazier; Nickalous A. Korbut; Chelsea M. Sims; Shari S. Bowen; Michael Wood – Teaching of Psychology, 2025
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores. Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human…
Descriptors: Essays, Writing Evaluation, Scores, Evaluators
Peer reviewed
Direct link
Peter Daly; Emmanuelle Deglaire – Innovations in Education and Teaching International, 2025
AI-enabled assessment of student papers has the potential to provide both summative and formative feedback and reduce the time spent on grading. Using auto-ethnography, this study compares AI-enabled and human assessment of business student examination papers in a law module based on previously established rubrics. Examination papers were…
Descriptors: Artificial Intelligence, Computer Software, Technology Integration, College Faculty
Peer reviewed
PDF on ERIC (download full text)
Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that can consume much time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
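As the Zhang and Litman entry notes, AES systems typically learn from a human-graded essay corpus. Purely as an illustrative sketch, and not the authors' system, the following Python fragment shows that basic supervised setup; the essays, scores, and model choices here are invented for illustration.

# Minimal supervised AES sketch (illustrative only): TF-IDF features plus ridge
# regression, trained on a tiny invented human-graded corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = [
    "The experiment shows a clear causal link between the variables.",
    "I think it is good because it is good and I like it.",
    "The author develops the argument with evidence and counterexamples.",
]
human_scores = [5.0, 2.0, 6.0]   # invented holistic scores from human graders

# Fit a scorer on the human-graded examples, then score an unseen essay.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, human_scores)
print(model.predict(["A well organised essay with clear supporting evidence."]))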
Leech, Tony; Chambers, Lucy – Research Matters, 2022
Two of the central issues in comparative judgement (CJ), which are perhaps underexplored compared to questions of the method's reliability and technical quality, are "what processes do judges use to make their decisions" and "what features do they focus on when making their decisions?" This article discusses both, in the…
Descriptors: Comparative Analysis, Decision Making, Evaluators, Reliability
Peer reviewed
PDF on ERIC (download full text)
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
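The Koraishi entry compares ChatGPT's IELTS Task 2 band scores with those of official raters via means and reliability. A minimal sketch of that kind of agreement check, using invented scores and standard scipy/scikit-learn routines (not the study's own analysis), might look like this:

# Illustrative agreement check between hypothetical AI and human band scores.
from scipy.stats import pearsonr, ttest_rel
from sklearn.metrics import cohen_kappa_score

human = [6.0, 6.5, 7.0, 5.5, 8.0, 6.0, 7.5, 5.0]   # invented human rater bands
ai    = [6.5, 6.5, 7.0, 6.0, 7.5, 6.0, 7.0, 5.5]   # invented AI-assigned bands

t_stat, p_val = ttest_rel(human, ai)   # do the mean scores differ?
r, _ = pearsonr(human, ai)             # linear association between the two raters

def to_half_bands(score):
    # Recode 6.5 -> 13 etc. so quadratic weighting respects ordinal spacing.
    return int(score * 2)

qwk = cohen_kappa_score([to_half_bands(s) for s in human],
                        [to_half_bands(s) for s in ai],
                        weights="quadratic")
print(f"paired t = {t_stat:.2f} (p = {p_val:.3f}), r = {r:.2f}, QWK = {qwk:.2f}")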
Peer reviewed
Direct link
Hao, Qiang; Smith, David H., IV; Ding, Lu; Ko, Amy; Ottaway, Camille; Wilson, Jack; Arakawa, Kai H.; Turcan, Alistair; Poehlman, Timothy; Greer, Tyler – Computer Science Education, 2022
Background and Context: Automated feedback for programming assignments has great potential in promoting just-in-time learning, but there has been little work investigating the design of feedback in this context. Objective: To investigate the impacts of different designs of automated feedback on student learning at a fine-grained level, and how…
Descriptors: Computer Science Education, Feedback (Response), Teaching Methods, Comparative Analysis
Peer reviewed
PDF on ERIC (download full text)
Seval Kemal; Aysegül Liman-Kaban – Asian Journal of Distance Education, 2025
This study conducts a comprehensive analysis of the assessment of journal writing in English as a Foreign Language (EFL) at the secondary school level, comparing the performance of a Generative Artificial Intelligence (GenAI) platform with two human graders. Employing a convergent parallel mixed methods design, quantitative data were collected…
Descriptors: Artificial Intelligence, Secondary School Students, Feedback (Response), Writing Assignments
Peer reviewed
Direct link
Swapna Haresh Teckwani; Amanda Huee-Ping Wong; Nathasha Vihangi Luke; Ivan Cherh Chiet Low – Advances in Physiology Education, 2024
The advent of artificial intelligence (AI), particularly large language models (LLMs) like ChatGPT and Gemini, has significantly impacted the educational landscape, offering unique opportunities for learning and assessment. In the realm of written assessment grading, traditionally viewed as a laborious and subjective process, this study sought to…
Descriptors: Accuracy, Reliability, Computational Linguistics, Standards
Peer reviewed
Direct link
Wrigley, Stuart – Active Learning in Higher Education, 2019
This article discusses and challenges the increasing use of plagiarism detection services such as Turnitin and Grammarly by students, arguing that the increasingly online nature of composition is having a profound effect on student composition processes. This dependence on the Internet is leading to a strategy I term 'de-plagiarism', in which…
Descriptors: Plagiarism, Essays, Writing Processes, Computer Software
Peer reviewed
PDF on ERIC (download full text)
Unnam, Abhishek; Takhar, Rohit; Aggarwal, Varun – International Educational Data Mining Society, 2019
Email has become the preferred form of business communication. Writing "good" email is now an essential skill in industry. "Good" email writing not only facilitates clear communication, but also makes a positive impression on the recipient, whether a colleague or a customer. The aim of this paper…
Descriptors: Grading, Electronic Mail, Feedback (Response), Written Language
Peer reviewed
Direct link
Hamer, John; Purchase, Helen; Luxton-Reilly, Andrew; Denny, Paul – Assessment & Evaluation in Higher Education, 2015
We report on a study comparing peer feedback with feedback written by tutors on a large undergraduate software engineering programming class. Feedback generated by peers is generally held to be of lower quality than feedback from experienced tutors, and this study sought to explore the extent and nature of this difference. We looked at how…
Descriptors: Feedback (Response), Programming, Engineering Education, Undergraduate Students
Peer reviewed
Direct link
Heinrich, Eva; Milne, John; Granshaw, Bruce – Australasian Journal of Educational Technology, 2012
This article investigates the support e-learning can provide for the management and marking of assignments. The work is contextualised in the importance of assessment with assignments in tertiary education, in the theories about high quality marking of assignments, and the practical experiences of academics at tertiary institutions. The tasks that…
Descriptors: Electronic Learning, Assignments, College Faculty, Integrated Learning Systems
Peer reviewed
Direct link
Kieftenbeld, Vincent; Natesan, Prathiba – Applied Psychological Measurement, 2012
Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…
Descriptors: Test Length, Markov Processes, Item Response Theory, Monte Carlo Methods
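For context on the Kieftenbeld and Natesan entry, the graded response model whose parameters are being recovered is usually written with one cumulative logistic curve per category boundary (standard textbook form, not notation drawn from the article itself):

\Pr(X_j \ge k \mid \theta) = \frac{\exp\{a_j(\theta - b_{jk})\}}{1 + \exp\{a_j(\theta - b_{jk})\}},
\qquad
\Pr(X_j = k \mid \theta) = \Pr(X_j \ge k \mid \theta) - \Pr(X_j \ge k + 1 \mid \theta),

where a_j is the item discrimination and b_{jk} the threshold for category k. MML integrates over an assumed latent trait distribution to estimate a_j and b_{jk}, while Gibbs sampling draws them from their full conditional posteriors.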
Peer reviewed
Direct link
Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C. – IEEE Transactions on Learning Technologies, 2012
Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…
Descriptors: Foreign Countries, Open Source Technology, Computer Assisted Testing, Computer Graphics
Peer reviewed
Direct link
Johnson, Martin; Hopkin, Rebecca; Shiell, Hannah – E-Learning and Digital Media, 2012
Technological developments are impacting upon UK assessment practices in many ways. For qualification awarding bodies, a key example of such impact is the ongoing shift towards examiners marking digitally scanned copies of examination scripts on screen rather than the original paper documents. This digitisation process has obvious benefits,…
Descriptors: Foreign Countries, Secondary Education, Exit Examinations, Technological Advancement