Mark Chapman; Meg Montee; Yangting Wang; Gordon Blaine West; Jason A. Kemp; Ahyoung Alicia Kim – Language Testing, 2026
This paper reports the results of a study designed to explore the relationships between speaking test task variables and linguistic features of spoken responses on a speaking assessment for Grade 7 multilingual English learners (ages 12-13) in U.S. public schools. Speaking task responses from 30 high-proficiency test takers were transcribed and…
Descriptors: English Learners, Language Tests, Language Proficiency, Language Fluency
Michael Suhan; Mikyung Kim Wolf – Language Testing, 2026
Large language models, such as OpenAI's GPT-4, have the potential to revolutionize automated writing evaluation (AWE). The present study examines the performance of the GPT-4 model in evaluating the writing of young English as a foreign language (EFL) learners. Responses to three constructed response tasks (n = 1908) on Educational Testing Service's…
Descriptors: Language Tests, Automation, Computer Assisted Testing, Scoring
Ying Xu; Xiaodong Li; Jin Chen – Language Testing, 2025
This article provides a detailed review of the Computer-based English Listening Speaking Test (CELST) used in Guangdong, China, as part of the National Matriculation English Test (NMET) to assess students' English proficiency. The CELST measures listening and speaking skills as outlined in the "English Curriculum for Senior Middle…
Descriptors: Computer Assisted Testing, English (Second Language), Language Tests, Listening Comprehension Tests
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification