Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 6
Since 2016 (last 10 years): 10
Since 2006 (last 20 years): 12
Source
Language Testing: 12
Author
Batty, Aaron Olaf: 1
Carey, Michael D.: 1
Choe, Ann Tai: 1
Dunn, Peter K.: 1
Ebling, Sarah: 1
Eguchi, Masaki: 1
Harding, Luke: 1
Haug, Tobias: 1
Housen, Alex: 1
Hsu, Tammy Huei-Lien: 1
Jarvis, Scott: 1
Publication Type
Journal Articles: 12
Reports - Research: 12
Education Level
Higher Education: 7
Postsecondary Education: 5
Adult Education: 1
Location
India: 2
Australia: 1
California (San Francisco): 1
China: 1
Europe: 1
France: 1
New York (New York): 1
Ohio: 1
South Korea: 1
Switzerland: 1
United Kingdom: 1
Assessments and Surveys
Test of English as a Foreign…: 3
International English…: 1
Michigan Test of English…: 1
Test of English for…: 1
Olson, Daniel J. – Language Testing, 2023
Measuring language dominance, broadly defined as the relative strength of each of a bilingual's two languages, remains a crucial methodological issue in bilingualism research. Although various methods have been proposed, the Bilingual Language Profile (BLP) has been one of the most widely used tools for measuring language dominance. While previous…
Descriptors: Bilingualism, Language Dominance, Native Language, Second Language Learning
Batty, Aaron Olaf; Haug, Tobias; Ebling, Sarah; Tissi, Katja; Sidler-Miserez, Sandra – Language Testing, 2023
Sign languages present particular challenges to language assessors in relation to variation in signs, weakly defined citation forms, and a general lack of standard-setting work even in long-established measures of productive sign proficiency. The present article addresses and explores these issues via a mixed-methods study of a human-rated…
Descriptors: Sign Language, Language Tests, Standard Setting, Barriers
Vandeweerd, Nathan; Housen, Alex; Paquot, Magali – Language Testing, 2023
This study investigates whether re-thinking the separation of lexis and grammar in language testing could lead to more valid inferences about proficiency across modes. As argued by Römer, typical scoring rubrics ignore important information about proficiency encoded at the lexis-grammar interface, in particular how the co-selection of lexical and…
Descriptors: French, Language Tests, Grammar, Second Language Learning
Kyle, Kristopher; Eguchi, Masaki; Choe, Ann Tai; LaFlair, Geoff – Language Testing, 2022
In the realm of language proficiency assessments, the domain description inference and the extrapolation inference are key components of a validity argument. Biber et al.'s description of the lexicogrammatical features of the spoken and written registers in the T2K-SWAL corpus has served as support for the TOEFL iBT test's domain description and…
Descriptors: Language Variation, Written Language, Speech Communication, Inferences
Shin, Sun-Young; Lee, Senyung; Lidster, Ryan – Language Testing, 2021
In this study we investigated the potential for a shared-first-language (shared-L1) effect on second language (L2) listening test scores using differential item functioning (DIF) analyses. We did this in order to understand how accented speech may influence performance at the item level, while controlling for key variables including listening…
Descriptors: Listening Comprehension Tests, Language Tests, Native Language, Scores
Miao, Yongzhi – Language Testing, 2023
Scholars have argued for the inclusion of different spoken varieties of English in high-stakes listening tests to better represent the global use of English. However, doing so may introduce additional construct-irrelevant variance due to accent familiarity and the shared first language (L1) advantage, which could threaten test fairness. However,…
Descriptors: Pronunciation, Metalinguistics, Native Language, Intelligibility
Kang, Okim; Rubin, Don; Kermad, Alyssa – Language Testing, 2019
Because judgments of non-native speech are closely tied to social biases, oral proficiency ratings are susceptible to error arising from rater background and social attitudes. In the present study we seek first to estimate the variance attributable to rater background and attitudinal variables on novice raters' assessments of L2…
Descriptors: Evaluators, Second Language Learning, Language Tests, English (Second Language)
LaFlair, Geoffrey T.; Staples, Shelley – Language Testing, 2017
Investigations of the validity of a number of high-stakes language assessments are conducted using an argument-based approach, which requires evidence for inferences that are critical to score interpretation (Chapelle, Enright, & Jamieson, 2008b; Kane, 2013). The current study investigates the extrapolation inference for a high-stakes test of…
Descriptors: Computational Linguistics, Language Tests, Test Validity, Inferences
Jarvis, Scott – Language Testing, 2017
The present study discusses the relevance of measures of lexical diversity (LD) to the assessment of learner corpora. It also argues that existing measures of LD, many of which have become specialized for use with language corpora, are fundamentally measures of lexical repetition, are based on an etic perspective of language, and lack construct…
Descriptors: Computational Linguistics, English (Second Language), Second Language Learning, Native Speakers
Hsu, Tammy Huei-Lien – Language Testing, 2016
This study explores the attitudes of raters of English speaking tests towards the global spread of English and the challenges in rating speakers of Indian English in descriptive speaking tasks. The claims put forward by language attitude studies indicate a validity issue in English speaking tests: listeners tend to hold negative attitudes towards…
Descriptors: Evaluators, Language Tests, English (Second Language), Second Language Learning
Carey, Michael D.; Mannell, Robert H.; Dunn, Peter K. – Language Testing, 2011
This study investigated factors that could affect inter-examiner reliability in the pronunciation assessment component of speaking tests. We hypothesized that pronunciation ratings are susceptible to variation driven by the amount of exposure examiners have had to nonnative English accents. An inter-rater variability analysis was…
Descriptors: Oral Language, Pronunciation, Phonology, Interlanguage
Harding, Luke – Language Testing, 2012
This paper reports on an investigation of the potential for a shared-L1 advantage on an academic English listening test featuring speakers with L2 accents. Two hundred and twelve second-language listeners (including 70 Mandarin Chinese L1 listeners and 60 Japanese L1 listeners) completed three versions of the University Test of English as a Second…
Descriptors: Test Bias, Listening Comprehension Tests, Mandarin Chinese, Pronunciation