Showing all 11 results
Peer reviewed
Emma Bruce; Karen Dunn; Tony Clark – Language Testing, 2025
Several high-stakes English proficiency tests, including but not limited to IELTS, PTE Academic, and TOEFL iBT, recommend a 2-year validity limit on score usage. Although this timeframe provides a useful rule of thumb for the recency of testing, it can have far-reaching consequences. In response to stakeholder queries around IELTS validity…
Descriptors: High Stakes Tests, Language Tests, Test Validity, Scores
Peer reviewed
Ramsey L. Cardwell; Steven W. Nydick; J.R. Lockwood; Alina A. von Davier – Language Testing, 2024
Applicants must often demonstrate adequate English proficiency when applying to postsecondary institutions by taking an English language proficiency test, such as the TOEFL iBT, IELTS Academic, or Duolingo English Test (DET). Concordance tables aim to provide equivalent scores across multiple assessments, helping admissions officers to make fair…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Language Proficiency
Peer reviewed
Ying Xu; Xiaodong Li; Jin Chen – Language Testing, 2025
This article provides a detailed review of the Computer-based English Listening Speaking Test (CELST) used in Guangdong, China, as part of the National Matriculation English Test (NMET) to assess students' English proficiency. The CELST measures listening and speaking skills as outlined in the "English Curriculum for Senior Middle…
Descriptors: Computer Assisted Testing, English (Second Language), Language Tests, Listening Comprehension Tests
Peer reviewed
Maria Treadaway; John Read – Language Testing, 2024
Standard-setting is an essential component of test development, supporting the meaningfulness and appropriate interpretation of test scores. However, in the high-stakes testing environment of aviation, standard-setting studies are underexplored. To address this gap, we document two stages in the standard-setting procedures for the Overseas Flight…
Descriptors: Standard Setting, Diagnostic Tests, High Stakes Tests, English for Special Purposes
Peer reviewed
Lukácsi, Zoltán – Language Testing, 2021
In second language writing assessment, rating scales and scores from human-mediated assessment have been criticized for a number of shortcomings including problems with adequacy, relevance, and reliability (Hamp-Lyons, 1990; McNamara, 1996; Weigle, 2002). In its testing practice, Euroexam International also detected that the rating scales for…
Descriptors: Test Construction, Test Validity, Test Items, Check Lists
Peer reviewed
Isbell, Daniel R.; Kremmel, Benjamin – Language Testing, 2020
Administration of high-stakes language proficiency tests has been disrupted in many parts of the world as a result of the 2019 novel coronavirus pandemic. Institutions that rely on test scores have been forced to adapt, and in many cases this means using scores from a different test, or a new online version of an existing test, that can be taken…
Descriptors: Language Tests, High Stakes Tests, Language Proficiency, Second Language Learning
Peer reviewed
LaFlair, Geoffrey T.; Staples, Shelley – Language Testing, 2017
Investigations of the validity of a number of high-stakes language assessments are conducted using an argument-based approach, which requires evidence for inferences that are critical to score interpretation (Chapelle, Enright, & Jamieson, 2008b; Kane, 2013). The current study investigates the extrapolation inference for a high-stakes test of…
Descriptors: Computational Linguistics, Language Tests, Test Validity, Inferences
Peer reviewed
Kane, Michael – Language Testing, 2012
The argument-based approach to validation involves two steps: specification of the proposed interpretations and uses of the test scores as an interpretive argument, and evaluation of the plausibility of the proposed interpretive argument. More ambitious interpretations and uses tend to involve an extended network of inferences and assumptions…
Descriptors: Testing, Language Tests, Inferences, Test Validity
Peer reviewed
Coombe, Christine; Davidson, Peter – Language Testing, 2014
The Common Educational Proficiency Assessment (CEPA) is a large-scale, high-stakes, English language proficiency/placement test administered in the United Arab Emirates to Emirati nationals in their final year of secondary education or Grade 12. The purpose of the CEPA is to place students into English classes at the appropriate government…
Descriptors: Language Tests, High Stakes Tests, English (Second Language), Second Language Learning
Peer reviewed
Malone, Margaret E. – Language Testing, 2010
This article presents a review of the Canadian Academic English Language (CAEL) Assessment, a high-stakes standardized test of the English language. It is a topic-based test that integrates listening, reading, writing, and speaking. The test is designed to describe the level of English language proficiency of test takers planning to study at…
Descriptors: Test Reliability, Language Tests, Standardized Tests, Test Validity
Peer reviewed
Van Moere, Alistair – Language Testing, 2006
This article investigates a group oral test administered at a university in Japan to determine whether it is appropriate to use its scores for higher-stakes decision making. It is one component of an in-house English proficiency test used for placing students, evaluating their progress, and making informed decisions for the development of the English…
Descriptors: Foreign Countries, Generalizability Theory, Achievement Tests, English (Second Language)