Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item response theory to model rater effects offers an alternative to standard performance metrics for rater monitoring and diagnosis. To fit such models, however, the ratings data must be sufficiently connected for rater effects to be estimable. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
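The Casabianca et al. abstract turns on the ratings data being "sufficiently connected." As an illustration only (not the paper's method), a minimal sketch of the underlying idea: treat raters and examinees as nodes of a bipartite graph and check whether the observed (rater, examinee) pairs form a single connected component, a necessary condition for placing all raters on a common scale. All names and data below are hypothetical.

```python
# Minimal connectivity check for a rating design (illustrative sketch):
# if the rater-examinee graph splits into disconnected components,
# rater effects in the separate components cannot be linked.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def is_connected(pairs):
    """pairs: iterable of (rater_id, examinee_id) tuples."""
    parent = {}
    for rater, examinee in pairs:
        r, e = ("r", rater), ("e", examinee)
        for node in (r, e):
            parent.setdefault(node, node)
        parent[find(parent, r)] = find(parent, e)  # union the two components
    roots = {find(parent, n) for n in parent}
    return len(roots) <= 1

# Rater B shares no examinee with rater A -> disconnected design:
print(is_connected([("A", 1), ("A", 2), ("B", 3)]))   # False
print(is_connected([("A", 1), ("B", 1), ("B", 2)]))   # True
```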
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing, by introducing two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
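The abstract breaks off at the logistic transformation it names. For reference only, the standard log-odds form (an assumption here, not necessarily the exact transformation the authors used) maps a raw score x on a scale with maximum m to a logit:

```latex
% Standard log-odds (logit) transformation of a raw score x with
% maximum possible score m; an assumed general form, not taken
% from the paper itself.
\operatorname{logit}(x) = \ln\!\left(\frac{x/m}{1 - x/m}\right), \qquad 0 < x < m
```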
Alexander James Kwako – ProQuest LLC, 2023
Automated assessment using Natural Language Processing (NLP) has the potential to make English speaking assessments more reliable, authentic, and accessible. Without careful examination, however, NLP may exacerbate social prejudices based on gender or native language (L1). Current NLP-based assessments are prone to such biases, yet research and…
Descriptors: Gender Bias, Natural Language Processing, Native Language, Computational Linguistics
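Kwako's abstract concerns score biases tied to gender or L1. A common first diagnostic (a sketch under assumptions, not the dissertation's actual analysis) is the standardized mean difference in machine-assigned scores between two groups; all scores below are made up.

```python
# Illustrative bias screen: compare machine scores across two groups
# with a standardized mean difference (Cohen's d). Hypothetical data.
from statistics import mean, stdev

def cohens_d(scores_a, scores_b):
    na, nb = len(scores_a), len(scores_b)
    # pooled standard deviation across the two groups
    sp = (((na - 1) * stdev(scores_a) ** 2 +
           (nb - 1) * stdev(scores_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(scores_a) - mean(scores_b)) / sp

group_l1_a = [3.8, 4.1, 3.5, 4.0, 3.9]   # hypothetical machine scores
group_l1_b = [3.4, 3.6, 3.2, 3.7, 3.5]
print(f"d = {cohens_d(group_l1_a, group_l1_b):.2f}")
```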
Yuko Hayashi; Yusuke Kondo; Yutaka Ishii – Innovation in Language Learning and Teaching, 2024
Purpose: This study builds a new system for automatically assessing learners' speech elicited from an oral discourse completion task (DCT), and evaluates the prediction capability of the system with a view to better understanding factors deemed influential in predicting speaking proficiency scores and the pedagogical implications of the system.…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Japanese
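Hayashi, Kondo, and Ishii evaluate the "prediction capability" of their system. A routine way to quantify this (a sketch only, not the paper's reported procedure) is the correlation and error between machine-predicted and human-assigned proficiency scores; the data below are invented.

```python
# Sketch of a prediction-capability check: Pearson correlation and
# RMSE between machine-predicted and human-assigned scores.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

human   = [2.0, 3.5, 4.0, 2.5, 3.0]   # hypothetical rater scores
machine = [2.2, 3.4, 3.7, 2.8, 3.1]   # hypothetical system predictions
print(f"r = {pearson_r(human, machine):.2f}, RMSE = {rmse(human, machine):.2f}")
```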
Na Liu; Jeferd Saong – International Journal of Education and Literacy Studies, 2025
The study examined the use of the Oral Proficiency Interview (OPI) in assessing Chinese as a second language and the challenges faced by teachers in the Chinese Language Scholarship (CLS) program. A concurrent triangulation mixed-methods design was used, in which qualitative and quantitative data were collected simultaneously,…
Descriptors: Chinese, Second Language Learning, Oral Language, Language Proficiency
Seedhouse, Paul – ELT Journal, 2019
This article investigates the central role of topic in the IELTS Speaking Test (IST). Topic has developed a dual personality in this interactional setting: topic-as-script is the scripted statement of topic on the examiner's cards prior to the interaction, whereas topic-as-action is how topic is developed by the candidate during the course of the…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Personality Traits
Comparing Rating Modes: Analysing Live, Audio, and Video Ratings of IELTS Speaking Test Performances
Nakatsuhara, Fumiyo; Inoue, Chihiro; Taylor, Lynda – Language Assessment Quarterly, 2021
This mixed methods study compared IELTS examiners' scores when assessing spoken performances under live and two 'non-live' testing conditions using audio and video recordings. Six IELTS examiners assessed 36 test-takers' performances under the live, audio, and video rating conditions. Scores in the three rating modes were calibrated using the…
Descriptors: Video Technology, Audio Equipment, English (Second Language), Language Tests
Ockey, Gary J.; Chukharev-Hudilainen, Evgeny – Applied Linguistics, 2021
A challenge of large-scale oral communication assessments is to feasibly assess a broad construct that includes interactional competence. One possible approach in addressing this challenge is to use a spoken dialog system (SDS), with the computer acting as a peer to elicit a ratable speech sample. With this aim, an SDS was built and four trained…
Descriptors: Oral Language, Grammar, Language Fluency, Language Tests
Jin Soo Choi – ProQuest LLC, 2022
Nonverbal behavior is essential in human interaction (Gullberg, de Bot, & Volterra, 2008; McNeill, 1992, 2005). For second language speakers, nonverbal features can be helpful for successful and efficient communication (e.g., Dahl & Ludvigsen, 2014). However, due to the complexity of nonverbal features, language testing institutions have…
Descriptors: Language Tests, Language Proficiency, Videoconferencing, Second Language Learning
Uzun, Kutay – Contemporary Educational Technology, 2018
Classroom assessment in crowded classes is difficult because of the time that must be devoted to providing feedback on student work. In this respect, the present study aimed to develop an automated essay scoring environment as a potential means of overcoming this problem. Secondarily, the study aimed to test…
Descriptors: Computer Assisted Testing, Essays, Scoring, English Literature
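Uzun's abstract describes building an automated essay scoring environment. A minimal sketch of the general approach (an assumption about how such systems are commonly built, not Uzun's actual implementation) maps essay text to human scores with TF-IDF features and ridge regression; requires scikit-learn, and all essays and scores are invented.

```python
# Minimal AES sketch: learn a text-to-score mapping from
# human-scored essays. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = [
    "The novel explores themes of isolation and identity.",
    "Story good. Character sad. End happy.",
    "A nuanced reading of the protagonist's moral conflict.",
    "The book is about a man.",
]
human_scores = [5.0, 2.0, 5.0, 2.5]   # hypothetical rater scores

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, human_scores)
print(model.predict(["An insightful analysis of identity and conflict."]))
```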
Wind, Stefanie A.; Wolfe, Edward W.; Engelhard, George, Jr.; Foltz, Peter; Rosenstein, Mark – International Journal of Testing, 2018
Automated essay scoring engines (AESEs) are becoming increasingly popular as an efficient method for performance assessments in writing, including many language assessments that are used worldwide. Before they can be used operationally, AESEs must be "trained" using machine-learning techniques that incorporate human ratings. However, the…
Descriptors: Computer Assisted Testing, Essay Tests, Writing Evaluation, Scoring
Karim Sadeghi; Neda Bakhshi – International Journal of Language Testing, 2025
Assessing language skills in an integrated form has drawn the attention of assessment experts in recent years. While some research exists on integrated listening/reading-to-write assessment, there is comparatively little on integrated listening-to-speak assessment. Also, little attention has been devoted to the role of…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Computer Assisted Testing
Gu, Lin; Davis, Larry; Tao, Jacob; Zechner, Klaus – Assessment in Education: Principles, Policy & Practice, 2021
Recent technology advancements have increased the prospects for automated spoken language technology to provide feedback on speaking performance. In this study we examined user perceptions of using an automated feedback system for preparing for the TOEFL iBT® test. Test takers and language teachers evaluated three types of machine-generated…
Descriptors: Audio Equipment, Test Preparation, Feedback (Response), Scores
Linlin, Cao – English Language Teaching, 2020
Through many-facet Rasch analysis, this study explores rating differences between one automated computer rater and five expert teacher raters in scoring 119 students on a computerized English listening-speaking test. Results indicate that both the automatic and the teacher raters demonstrate good inter-rater reliability, though the automatic rater…
Descriptors: Language Tests, Computer Assisted Testing, English (Second Language), Second Language Learning
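Linlin's study rests on a many-facet Rasch analysis. For reference, the standard three-facet form of the model (the general formulation, not parameter values from this study) expresses the log-odds that examinee n receives category k rather than k−1 on item i from rater j as:

```latex
% General three-facet Rasch (MFRM) model: theta_n = examinee ability,
% delta_i = item difficulty, alpha_j = rater severity, tau_k = threshold
% for category k. Standard formulation, not estimates from this study.
\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \delta_i - \alpha_j - \tau_k
```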
Burton, John Dylan – Language Assessment Quarterly, 2020
An assumption underlying speaking tests is that scores reflect the ability to produce online, non-rehearsed speech. Speech produced in testing situations may, however, be less spontaneous if extensive test preparation takes place, resulting in memorized or rehearsed responses. If raters detect these patterns, they may conceptualize speech as…
Descriptors: Language Tests, Oral Language, Scores, Speech Communication