Showing 1 to 15 of 65 results (Source: Language Testing)
Peer reviewed
Ying Xu; Xiaodong Li; Jin Chen – Language Testing, 2025
This article provides a detailed review of the Computer-based English Listening Speaking Test (CELST) used in Guangdong, China, as part of the National Matriculation English Test (NMET) to assess students' English proficiency. The CELST measures listening and speaking skills as outlined in the "English Curriculum for Senior Middle…
Descriptors: Computer Assisted Testing, English (Second Language), Language Tests, Listening Comprehension Tests
Peer reviewed
Junlan Pan; Emma Marsden – Language Testing, 2024
"Tests of Aptitude for Language Learning" (TALL) is an openly accessible internet-based battery to measure the multifaceted construct of foreign language aptitude, using language domain-specific instruments and L1-sensitive instructions and stimuli. This brief report introduces the components of this theory-informed battery and…
Descriptors: Language Tests, Aptitude Tests, Second Language Learning, Test Construction
Peer reviewed
Hitoshi Nishizawa – Language Testing, 2024
Corpus-based studies have informed the domain definition inference for test developers. Yet corpus-based studies of temporal fluency measures (e.g., speech rate) have been limited, especially in academic lecture settings. This has made it difficult for test developers to sample representative fluency features to create authentic…
Descriptors: High Stakes Tests, Language Tests, Second Language Learning, Computer Assisted Testing
Peer reviewed
Jung Youn, Soo – Language Testing, 2023
As access to smartphones and emerging technologies has become ubiquitous in our daily lives and in language learning, technology-mediated social interaction has become common in teaching and assessing L2 speaking. The changing ecology of L2 spoken interaction provides language educators and testers with opportunities for renewed test design and…
Descriptors: Test Construction, Test Validity, Second Language Learning, Telecommunications
Peer reviewed
Andrew Runge; Sarah Goodwin; Yigal Attali; Mya Poe; Phoebe Mulcaire; Kai-Ling Lo; Geoffrey T. LaFlair – Language Testing, 2025
A longstanding criticism of traditional high-stakes writing assessments is their use of static prompts in which test takers compose a single text in response to a prompt. These static prompts do not allow measurement of the writing process. This paper describes the development and validation of an innovative interactive writing task. After the…
Descriptors: Material Development, Writing Evaluation, Writing Assignments, Writing Skills
Peer reviewed
Isbell, Daniel R.; Kremmel, Benjamin – Language Testing, 2020
Administration of high-stakes language proficiency tests has been disrupted in many parts of the world as a result of the 2019 novel coronavirus pandemic. Institutions that rely on test scores have been forced to adapt, and in many cases this means using scores from a different test, or a new online version of an existing test, that can be taken…
Descriptors: Language Tests, High Stakes Tests, Language Proficiency, Second Language Learning
Peer reviewed
Aryadoust, Vahid; Foo, Stacy; Ng, Li Ying – Language Testing, 2022
The aim of this study was to investigate how test methods affect listening test takers' performance and cognitive load. Test methods were defined and operationalized as while-listening performance (WLP) and post-listening performance (PLP) formats. To achieve the goal of the study, we examined test takers' (N = 80) brain activity patterns…
Descriptors: Listening Comprehension Tests, Language Tests, Eye Movements, Brain Hemisphere Functions
Peer reviewed
Kaya, Elif; O'Grady, Stefan; Kalender, Ilker – Language Testing, 2022
Language proficiency testing serves an important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially using computerized adaptive…
Descriptors: Item Response Theory, Test Items, Language Tests, Classification
Peer reviewed
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
Peer reviewed
Guzman-Orth, Danielle; Steinberg, Jonathan; Albee, Traci – Language Testing, 2023
Standardizing accessible test design and development to meet students' individual access needs is a complex task. The following study provides one approach to accessible test design and development using participatory design methods with school community members. Participatory research provides opportunities to empower collaborators by co-creating…
Descriptors: English Language Learners, Blindness, Visual Impairments, Testing Accommodations
Peer reviewed
Min, Shangchao; Bishop, Kyoungwon; Gary Cook, Howard – Language Testing, 2022
This study explored the interplay between content knowledge and reading ability in a large-scale multistage adaptive English for academic purposes (EAP) reading assessment at a range of ability levels across grades 1-12. The datasets for this study were item-level responses to the reading tests of ACCESS for ELLs Online 2.0. A sample of 10,000…
Descriptors: Item Response Theory, English Language Learners, Correlation, Reading Ability
Peer reviewed
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
Peer reviewed
Motteram, Johanna; Spiby, Richard; Bellhouse, Gemma; Sroka, Katarzyna – Language Testing, 2023
This article describes the implementation of a special accommodations policy for a suite of localised English language and numeracy tests, the Workplace Literacy and Numeracy (WPLN) Assessments. The WPLN Assessments are computer-delivered and form part of the WPLN training and assessment programme, which exists to provide access to workforce skills…
Descriptors: Language Tests, Educational Policy, Testing Accommodations, Futures (of Society)
Peer reviewed
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) oral proficiency interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: Assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning
Peer reviewed
Sawaki, Yasuyo; Sinharay, Sandip – Language Testing, 2018
The present study examined the reliability of the reading, listening, speaking, and writing section scores for the TOEFL iBT® test and their interrelationship in order to collect empirical evidence to support, respectively, the "generalization" inference and the "explanation" inference in the TOEFL iBT validity argument…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Computer Assisted Testing