Showing all 13 results
Peer reviewed
Ikkyu Choi; Jiangang Hao; Chen Li; Michael Fauss; Jakub Novák – ETS Research Report Series, 2024
A frequently encountered security issue in writing tests is nonauthentic text submission: Test takers submit texts that are not their own but rather are copies of texts prepared by someone else. In this report, we propose AutoESD, a human-in-the-loop, automated system to detect nonauthentic texts for large-scale writing tests, and report its…
Descriptors: Writing Tests, Automation, Cheating, Plagiarism
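The report's AutoESD system is not detailed in this abstract; as a rough, generic illustration of similarity-based detection of nonauthentic text, the sketch below flags submissions whose TF-IDF cosine similarity to a pool of known prepared texts exceeds a threshold, leaving the final decision to a human reviewer. The function names and the threshold are assumptions, not details from the report.

# Illustrative sketch only: flag likely nonauthentic submissions by their
# cosine similarity to known prepared texts, routing flags to human review.
# This is a generic technique, not the AutoESD system from the report; the
# threshold and all names here are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_suspect_essays(submissions, known_texts, threshold=0.8):
    vectorizer = TfidfVectorizer().fit(known_texts + submissions)
    known_vecs = vectorizer.transform(known_texts)
    sub_vecs = vectorizer.transform(submissions)
    # Highest similarity of each submission to any known prepared text.
    max_sim = cosine_similarity(sub_vecs, known_vecs).max(axis=1)
    return [(text, sim) for text, sim in zip(submissions, max_sim)
            if sim >= threshold]  # flagged pairs go to a human reviewer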
Peer reviewed
Davis, Larry; Papageorgiou, Spiros – Assessment in Education: Principles, Policy & Practice, 2021
Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that…
Descriptors: Scoring, English for Academic Purposes, Oral English, Speech Tests
Peer reviewed
Gong, Kaixuan – Asian-Pacific Journal of Second and Foreign Language Education, 2023
The extensive use of automated speech scoring in large-scale speaking assessment can be revolutionary not only for test design and rating, but also for the learning and instruction of speaking, depending on how students and teachers perceive and react to this technology. However, its washback remains underexplored. This mixed-method study aimed to…
Descriptors: Second Language Learning, Language Tests, English (Second Language), Automation
Peer reviewed
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is to first survey the current research on automated scoring of language, then highlight how automated scoring impacts the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Peer reviewed
Daniels, Paul – TESL-EJ, 2022
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, "Speech Assessment for Moodle" ("SAM"), is an open-source solution developed by the author that makes use of Google's…
Descriptors: Speech Communication, Auditory Perception, Computer Uses in Education, Computer Assisted Testing
Peer reviewed
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland on the basis of 2 tasks with 2 associated prompts, each from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Peer reviewed
Loukina, Anastassia; Buzick, Heather – ETS Research Report Series, 2017
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that automated scoring of open-ended spoken responses is relatively new and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…
Descriptors: Automation, Scoring, Language Tests, Speech Tests
Guo, Liang; Crossley, Scott A.; McNamara, Danielle S. – Grantee Submission, 2013
This study explores whether linguistic features can predict second language writing proficiency in the Test of English as a Foreign Language (TOEFL iBT) integrated and independent writing tasks and, if so, whether there are differences and similarities in the two sets of predictive linguistic features. Linguistic features related to lexical…
Descriptors: English (Second Language), Linguistics, Second Language Learning, Writing Skills
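As a hedged sketch of the kind of predictive modeling this abstract describes, the snippet below regresses holistic writing scores on two simple lexical features (type-token ratio and mean word length). The features and toy data are invented placeholders; the study's actual feature set covers lexical and other linguistic properties not shown here.

# Minimal sketch of predicting writing scores from linguistic features.
# The two features and the toy data are placeholder assumptions; the study
# used a richer set of lexical and other linguistic indices.
from sklearn.linear_model import LinearRegression

def features(essay):
    words = essay.lower().split()
    ttr = len(set(words)) / len(words)            # type-token ratio
    mean_len = sum(map(len, words)) / len(words)  # mean word length
    return [ttr, mean_len]

essays = ["the cat sat on the mat",
          "economic policy shapes incentives decisively"]
scores = [2.0, 4.5]  # toy human ratings
model = LinearRegression().fit([features(e) for e in essays], scores)
print(model.predict([features("students compose increasingly elaborate arguments")]))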
Peer reviewed
Evanini, Keelan; Heilman, Michael; Wang, Xinhao; Blanchard, Daniel – ETS Research Report Series, 2015
This report describes the initial automated scoring results that were obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the TOEFL Junior® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring…
Descriptors: Computer Assisted Testing, Automation, Language Tests, Second Language Learning
Attali, Yigal – Educational Testing Service, 2011
The e-rater® automated essay scoring system is used operationally in the scoring of TOEFL iBT® independent essays. Previous research has found support for a 3-factor structure of the e-rater features. This 3-factor structure has an attractive hierarchical linguistic interpretation with a word choice factor, a grammatical convention within a…
Descriptors: Essay Tests, Language Tests, Test Scoring Machines, Automation
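As an illustration of how a 3-factor structure over scoring features can be examined (not the study's actual procedure), the sketch below fits a three-factor model to a feature matrix and inspects the loadings. The matrix is random placeholder data, not e-rater feature values.

# Sketch of fitting a 3-factor model over scoring features; the data
# matrix here is random placeholder input, not e-rater feature values.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 9))       # 500 essays x 9 hypothetical features
fa = FactorAnalysis(n_components=3).fit(X)
print(fa.components_)               # loadings of each factor on each feature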
Attali, Yigal – Educational Testing Service, 2011
This paper proposes an alternative content measure for essay scoring, based on the "difference" in the relative frequency of a word in high-scored versus low-scored essays. The "differential word use" (DWU) measure is the average of these differences across all words in the essay. A positive value indicates the essay is using…
Descriptors: Scoring, Essay Tests, Word Frequency, Content Analysis
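The DWU definition above is concrete enough to sketch directly: compute each word's relative frequency in high-scored and in low-scored training essays, then average the per-word differences over the words of a new essay. The whitespace tokenization, variable names, and toy training sets below are simplifying assumptions, not details from the paper.

# Sketch of the differential word use (DWU) measure as defined above:
# the average, over an essay's words, of (relative frequency in high-scored
# essays) minus (relative frequency in low-scored essays). A positive value
# suggests the essay favors words characteristic of high-scored essays.
from collections import Counter

def rel_freq(essays):
    counts = Counter(w for e in essays for w in e.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def dwu(essay, high, low):
    words = essay.lower().split()
    diffs = [high.get(w, 0.0) - low.get(w, 0.0) for w in words]
    return sum(diffs) / len(diffs) if diffs else 0.0

high = rel_freq(["a cogent argument with clear evidence"])  # toy high-scored set
low = rel_freq(["is good and is very good yes"])            # toy low-scored set
print(dwu("clear evidence supports a cogent claim", high, low))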
Peer reviewed
Weigle, Sara Cushing – ETS Research Report Series, 2011
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study addresses two validity-related issues regarding the use of e-rater® with the…
Descriptors: Scoring, English (Second Language), Second Language Instruction, Automation
Peer reviewed
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis