Showing all 5 results
Peer reviewed
PDF on ERIC (full text available)
Ikkyu Choi; Jiangang Hao; Chen Li; Michael Fauss; Jakub Novák – ETS Research Report Series, 2024
A frequently encountered security issue in writing tests is nonauthentic text submission: test takers submit texts that are not their own but rather copies of texts prepared by someone else. In this report, we propose AutoESD, a human-in-the-loop, automated system to detect nonauthentic texts for large-scale writing tests, and report its…
Descriptors: Writing Tests, Automation, Cheating, Plagiarism
Peer reviewed
Direct link
Davis, Larry; Papageorgiou, Spiros – Assessment in Education: Principles, Policy & Practice, 2021
Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that…
Descriptors: Scoring, English for Academic Purposes, Oral English, Speech Tests
Peer reviewed
Direct link
Gong, Kaixuan – Asian-Pacific Journal of Second and Foreign Language Education, 2023
The extensive use of automated speech scoring in large-scale speaking assessment can be revolutionary not only for test design and rating but also for the learning and instruction of speaking, depending on how students and teachers perceive and react to this technology. However, its washback has remained underexplored. This mixed-method study aimed to…
Descriptors: Second Language Learning, Language Tests, English (Second Language), Automation
Peer reviewed
PDF on ERIC (full text available)
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is first to survey the current research on automated scoring of language and then to highlight how automated scoring affects the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Peer reviewed
PDF on ERIC (full text available)
Daniels, Paul – TESL-EJ, 2022
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, "Speech Assessment for Moodle" ("SAM"), is an open-source solution developed by the author that makes use of Google's…
Descriptors: Speech Communication, Auditory Perception, Computer Uses in Education, Computer Assisted Testing