Showing 1 to 15 of 32 results
Peer reviewed
Louise Palmour – Language Testing, 2024
This article explores the nature of the construct underlying classroom-based English for academic purposes (EAP) oral presentation assessments, which are used, in part, to determine admission to programmes of study at UK universities. Through analysis of qualitative data (from questionnaires, interviews, rating discussions, and fieldnotes), the…
Descriptors: English for Academic Purposes, Public Speaking, College Students, Foreign Countries
Peer reviewed
Daniel R. Isbell; Dustin Crowther; Hitoshi Nishizawa – Language Testing, 2024
The extrapolation of test scores to a target domain, that is, the association between test performances and relevant real-world outcomes, is critical to valid score interpretation and use. This study examined the relationship between Duolingo English Test (DET) speaking scores and university stakeholders' evaluation of DET speaking performances. A…
Descriptors: Language Proficiency, Language Tests, Higher Education, Stakeholders
Peer reviewed
Park, Yena; Lee, Senyung; Shin, Sun-Young – Language Testing, 2022
Despite consistent calls for authentic stimuli in listening tests for better construct representation, unscripted texts have rarely been adopted in high-stakes listening tests due to perceived inefficiency. This study details how a local academic listening test was developed using authentic unscripted audio-visual texts from the local target…
Descriptors: Listening Comprehension Tests, English for Academic Purposes, Test Construction, Foreign Students
Peer reviewed
Knoch, Ute; Huisman, Annemiek; Elder, Cathie; Kong, Xiaoxiao; McKenna, Angela – Language Testing, 2020
A key concern of washback research in language testing is the value of test preparation for facilitating learning and improving test performance. Although test takers may draw on a wide range of preparation activities, the majority of research studies examining test preparation have taken place in classroom settings, leaving self-access…
Descriptors: Test Preparation, Repetition, Language Tests, English for Academic Purposes
Peer reviewed
Chan, Sathena; May, Lyn – Language Testing, 2023
Despite the increased use of integrated tasks in high-stakes academic writing assessment, research on rating criteria which reflect the unique construct of integrated summary writing skills is comparatively rare. Using a mixed-method approach of expert judgement, text analysis, and statistical analysis, this study examines writing features that…
Descriptors: Scoring, Writing Evaluation, Reading Tests, Listening Skills
Peer reviewed
Kyle, Kristopher; Eguchi, Masaki; Choe, Ann Tai; LaFlair, Geoff – Language Testing, 2022
In the realm of language proficiency assessments, the domain description inference and the extrapolation inference are key components of a validity argument. Biber et al.'s description of the lexicogrammatical features of the spoken and written registers in the T2K-SWAL corpus has served as support for the TOEFL iBT test's domain description and…
Descriptors: Language Variation, Written Language, Speech Communication, Inferences
Peer reviewed
Min, Shangchao; Bishop, Kyoungwon; Gary Cook, Howard – Language Testing, 2022
This study explored the interplay between content knowledge and reading ability in a large-scale multistage adaptive English for academic purposes (EAP) reading assessment at a range of ability levels across grades 1-12. The datasets for this study were item-level responses to the reading tests of ACCESS for ELLs Online 2.0. A sample of 10,000…
Descriptors: Item Response Theory, English Language Learners, Correlation, Reading Ability
Peer reviewed
Nicklin, Christopher; Vitta, Joseph P. – Language Testing, 2022
Instrument measurement conducted with Rasch analysis is a common process in language assessment research. A recent systematic review of 215 studies involving Rasch analysis in language testing and applied linguistics research reported that 23 different software packages had been utilized. However, none of the analyses were conducted with one of…
Descriptors: Programming Languages, Vocabulary Development, Language Tests, Computer Software
Peer reviewed
Oruç Ertürk, Nesrin; Mumford, Simon E. – Language Testing, 2017
This study, conducted by two researchers who were also multiple-choice question (MCQ) test item writers at a private English-medium university in an English as a foreign language (EFL) context, was designed to shed light on the factors that influence test-takers' perceptions of difficulty in English for academic purposes (EAP) vocabulary, with the…
Descriptors: English for Academic Purposes, Vocabulary, Language Tests, Difficulty Level
Peer reviewed
Frost, Kellie; Clothier, Josh; Huisman, Annemiek; Wigglesworth, Gillian – Language Testing, 2020
Integrated speaking tasks requiring test takers to read and/or listen to stimulus texts and to incorporate their content into oral performances are now used in large-scale, high-stakes tests, including the TOEFL iBT. These tasks require test takers to identify, select, and combine relevant source text information to recognize key relationships…
Descriptors: Discourse Analysis, Scoring Rubrics, Speech Communication, English (Second Language)
Peer reviewed
Trace, Jonathan; Janssen, Gerriet; Meier, Valerie – Language Testing, 2017
Previous research in second language writing has shown that, when scoring performance assessments, even trained raters can exhibit significant differences in severity. When raters disagree, using discussion to try to reach a consensus is one popular form of score resolution, particularly in contexts with limited resources, as it does not require…
Descriptors: Performance Based Assessment, Second Language Learning, Scoring, Evaluators
Peer reviewed
LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan – Language Testing, 2017
Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…
Descriptors: Language Tests, Equated Scores, Testing Programs, Comparative Analysis
Peer reviewed
Shin, Sun-Young; Lidster, Ryan – Language Testing, 2017
In language programs, it is crucial to place incoming students into appropriate levels to ensure that course curriculum and materials are well targeted to their learning needs. Deciding how and where to set cutscores on placement tests is thus of central importance to programs, but previous studies in educational measurement disagree as to which…
Descriptors: Language Tests, English (Second Language), Standard Setting (Scoring), Student Placement
Peer reviewed
Isaacs, Talia; Trofimovich, Pavel; Foote, Jennifer Ann – Language Testing, 2018
There is growing research on the linguistic features that most contribute to making second language (L2) speech easy or difficult to understand. Comprehensibility, which is usually captured through listener judgments, is increasingly viewed as integral to the L2 speaking construct. However, there are shortcomings in how this construct is…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Language of Instruction
Peer reviewed
Chapelle, Carol A.; Cotos, Elena; Lee, Jooyoung – Language Testing, 2015
Two examples demonstrate an argument-based approach to validation of diagnostic assessment using automated writing evaluation (AWE). "Criterion"® was developed by Educational Testing Service to analyze students' papers grammatically, providing sentence-level error feedback. An interpretive argument was developed for its use as part of…
Descriptors: Diagnostic Tests, Writing Evaluation, Automation, Test Validity