Showing 1 to 15 of 28 results
Peer reviewed
Paul Leeming; Justin Harris – Language Teaching Research, 2025
Measurement of language learners' development in speaking proficiency is important for practicing language teachers, not only for assessment purposes, but also for evaluating the effectiveness of materials and approaches used. However, doing so effectively and efficiently presents challenges. Commercial speaking tests are often costly, and beyond…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, College Students
Peer reviewed
Bamdev, Pakhi; Grover, Manraj Singh; Singla, Yaman Kumar; Vafaee, Payman; Hama, Mika; Shah, Rajiv Ratn – International Journal of Artificial Intelligence in Education, 2023
English proficiency assessments have become a necessary metric for filtering and selecting prospective candidates for both academia and industry. With the rise in demand for such assessments, it has become increasingly necessary to have automated, human-interpretable results to prevent inconsistencies and ensure meaningful feedback to the…
Descriptors: Language Proficiency, Automation, Scoring, Speech Tests
Peer reviewed
Erik Voss – Language Testing, 2025
An increasing number of language testing companies are developing and deploying deep learning-based automated essay scoring systems (AES) to replace traditional approaches that rely on handcrafted feature extraction. However, there is hesitation to accept neural network approaches to automated essay scoring because the features are automatically…
Descriptors: Artificial Intelligence, Automation, Scoring, English (Second Language)
Peer reviewed
Gong, Kaixuan – Asian-Pacific Journal of Second and Foreign Language Education, 2023
The extensive use of automated speech scoring in large-scale speaking assessment can be revolutionary not only for test design and rating, but also for the learning and instruction of speaking, depending on how students and teachers perceive and react to this technology. However, its washback remains underexplored. This mixed-method study aimed to…
Descriptors: Second Language Learning, Language Tests, English (Second Language), Automation
Peer reviewed | PDF on ERIC
Somayeh Fathali; Fatemeh Mohajeri – Technology in Language Teaching & Learning, 2025
The International English Language Testing System (IELTS) is a high-stakes exam where Writing Task 2 significantly influences the overall scores, requiring reliable evaluation. While trained human raters perform this task, concerns about subjectivity and inconsistency have led to growing interest in artificial intelligence (AI)-based assessment…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Artificial Intelligence
Peer reviewed | PDF on ERIC
Xinming Chen; Ziqian Zhou; Malila Prado – International Journal of Assessment Tools in Education, 2025
This study explores the efficacy of ChatGPT-3.5, an AI chatbot, used as an Automatic Essay Scoring (AES) system and feedback provider for IELTS essay preparation. It investigates the alignment between scores given by ChatGPT-3.5 and those assigned by official IELTS examiners to establish its reliability as an AES. It also identifies the strategies…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Automation
Peer reviewed
Xiaoqin Shi; Xiaoqing Wang; Wei Zhang – Language Testing in Asia, 2024
Automatic Speech Scoring (ASS) has increasingly become a useful tool in oral proficiency testing for Second Language (L2) learners. However, few studies have investigated the alignment of ASS indices with Complexity, Accuracy, and Fluency (CAF), the three dimensions used in evaluating L2 speakers' oral proficiency, and the subsequent impact indices…
Descriptors: Speech Communication, Oral Language, Language Proficiency, Scoring
Peer reviewed | PDF on ERIC
Kornwipa Poonpon; Paiboon Manorom; Wirapong Chansanam – Contemporary Educational Technology, 2023
Automated essay scoring (AES) has become a valuable tool in educational settings, providing efficient and objective evaluations of student essays. However, the majority of AES systems have primarily focused on native English speakers, leaving a critical gap in the evaluation of non-native speakers' writing skills. This research addresses this gap…
Descriptors: Automation, Essays, Scoring, English (Second Language)
Peer reviewed
Chen, Huimei; Pan, Jie – Asian-Pacific Journal of Second and Foreign Language Education, 2022
The role of internet technology in higher education, and particularly in teaching English as a foreign language, is increasingly prominent because of interest in the ways in which technology can be applied to support students. The automated evaluation scoring system is a typical demonstration of the application of network technology in the…
Descriptors: Comparative Analysis, Automation, Scoring, Feedback (Response)
Peer reviewed | PDF on ERIC
Loukina, Anastassia; Zechner, Klaus; Yoon, Su-Youn; Zhang, Mo; Tao, Jidong; Wang, Xinhao; Lee, Chong Min; Mulholland, Matthew – ETS Research Report Series, 2017
This report presents an overview of the "SpeechRater" automated scoring engine model building and evaluation process for several item types, with a focus on a low-English-proficiency test-taker population. We discuss each stage of speech scoring, including automatic speech recognition, filtering models for nonscorable responses, and…
Descriptors: Automation, Scoring, Speech Tests, Test Items
Peer reviewed
Davis, Larry; Papageorgiou, Spiros – Assessment in Education: Principles, Policy & Practice, 2021
Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that…
Descriptors: Scoring, English for Academic Purposes, Oral English, Speech Tests
Peer reviewed | PDF on ERIC
Loukina, Anastassia; Buzick, Heather – ETS Research Report Series, 2017
This study is an evaluation of the performance of automated speech scoring for speakers with documented or suspected speech impairments. Given that the use of automated scoring of open-ended spoken responses is relatively nascent and there is little research to date that includes test takers with disabilities, this small exploratory study focuses…
Descriptors: Automation, Scoring, Language Tests, Speech Tests
Peer reviewed | PDF on ERIC
Jones, Daniel Marc; Cheng, Liying; Tweedie, M. Gregory – Canadian Journal of Learning and Technology, 2022
This article reviews recent literature (2011-present) on the automated scoring (AS) of writing and speaking. Its purpose is to first survey the current research on automated scoring of language, then highlight how automated scoring impacts the present and future of assessment, teaching, and learning. The article begins by outlining the general…
Descriptors: Automation, Computer Assisted Testing, Scoring, Writing (Composition)
Peer reviewed | PDF on ERIC
Daniels, Paul – TESL-EJ, 2022
This paper compares the speaking scores generated by two online systems that are designed to automatically grade student speech and provide personalized speaking feedback in an EFL context. The first system, "Speech Assessment for Moodle" ("SAM"), is an open-source solution developed by the author that makes use of Google's…
Descriptors: Speech Communication, Auditory Perception, Computer Uses in Education, Computer Assisted Testing
Peer reviewed | PDF on ERIC
Stuart McLean; Paul Raine; Geoffrey Pinchbeck; Laura Huston; Young Ae Kim; Suzuka Nishiyama; Shotaro Ueno – Vocabulary Learning and Instruction, 2021
Vocableveltest.org is a testing platform on which users can create online self-marking meaning-recall (reading or listening) and form-recall (typing) tests that address a number of limitations of the existing vocabulary level tests and vocabulary size tests. A major limitation of many existing vocabulary tests is the written receptive…
Descriptors: Accuracy, Automation, Scoring, Writing (Composition)