Source: Language Testing. Showing 46 to 60 of 93 results.
Peer reviewed
Elgort, Irina – Language Testing, 2013
This study examines the development and evaluation of a bilingual Vocabulary Size Test (VST, Nation, 2006). A bilingual (English-Russian) test was developed and administered to 121 intermediate proficiency EFL learners (native speakers of Russian), alongside the original monolingual (English-only) version of the test. A comparison of the bilingual…
Descriptors: Test Construction, Vocabulary, Language Tests, English
Peer reviewed
Davies, Alan – Language Testing, 2010
This article presents the author's response to Xiaoming Xi's paper titled "How do we go about investigating test fairness?" In the paper, Xi offers "a means to fully integrate fairness investigations and practice". Given the current importance accorded to fairness in the language testing community, Xi makes a case for viewing fairness as an aspect…
Descriptors: Investigations, Testing, Language Tests, Validity
Peer reviewed
Bax, Stephen – Language Testing, 2013
The research described in this article investigates test takers' cognitive processing while completing onscreen IELTS (International English Language Testing System) reading test items. The research aims, among other things, to contribute to our ability to evaluate the cognitive validity of reading test items (Glaser, 1991; Field, in press). The…
Descriptors: Reading Tests, Eye Movements, Cognitive Processes, Language Tests
Peer reviewed
Green, Anthony; Hawkey, Roger – Language Testing, 2012
The important yet under-researched role of item writers in the selection and adaptation of texts for high-stakes reading tests is investigated through a case study involving a group of trained item writers working on the International English Language Testing System (IELTS). In the first phase of the study, participants were invited to reflect in…
Descriptors: Test Items, Semantics, Reading Tests, Language Tests
Peer reviewed
Gao, Lingyun; Rogers, W. Todd – Language Testing, 2011
The purpose of this study was to explore whether the results of Tree Based Regression (TBR) analyses, informed by a validated cognitive model, would enhance the interpretation of item difficulties in terms of the cognitive processes involved in answering the reading items included in two forms of the Michigan English Language Assessment Battery…
Descriptors: Test Items, Reading Tests, Item Analysis, Reading Processes
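The Gao and Rogers (2011) entry above refers to Tree Based Regression (TBR) for relating item difficulties to coded cognitive features. The sketch below is only a generic illustration of that idea using scikit-learn's DecisionTreeRegressor on invented item codings; the feature names, values, and difficulties are hypothetical, and the study's own MELAB data and coding scheme are not reproduced.

```python
# Rough illustration of Tree Based Regression (TBR): regress item difficulty
# on coded cognitive features. All data below are invented for the sketch.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical item-level data: each row is one reading item.
# Columns: [vocabulary_load, inference_required (0/1), text_length_bucket]
X = np.array([
    [1, 0, 1],
    [2, 0, 1],
    [2, 1, 2],
    [3, 1, 2],
    [3, 1, 3],
    [1, 0, 3],
    [2, 1, 1],
    [3, 0, 2],
])
# Hypothetical item difficulties (e.g., logit difficulties).
y = np.array([-1.2, -0.6, 0.3, 0.9, 1.4, -0.8, 0.5, 0.2])

# A shallow tree keeps the splits interpretable as cognitive-process groupings.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["vocab_load", "inference", "text_length"]))
```

The printed tree shows which coded features split the items into groups of similar difficulty, which is the kind of interpretation the TBR approach is meant to support.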
Peer reviewed
Harding, Luke – Language Testing, 2012
This paper reports on an investigation of the potential for a shared-L1 advantage on an academic English listening test featuring speakers with L2 accents. Two hundred and twelve second-language listeners (including 70 Mandarin Chinese L1 listeners and 60 Japanese L1 listeners) completed three versions of the University Test of English as a Second…
Descriptors: Test Bias, Listening Comprehension Tests, Mandarin Chinese, Pronunciation
Peer reviewed
Pae, Tae-Il – Language Testing, 2012
This study tracked gender differential item functioning (DIF) on the English subtest of the Korean College Scholastic Aptitude Test (KCSAT) over a nine-year period across three data points, using both the Mantel-Haenszel (MH) and item response theory likelihood ratio (IRT-LR) procedures. Further, the study identified two factors (i.e. reading…
Descriptors: Aptitude Tests, Academic Aptitude, Language Tests, Test Items
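The Pae (2012) entry above names the Mantel-Haenszel (MH) procedure for detecting differential item functioning. As a rough, generic illustration (not the study's actual analysis of the KCSAT data), the sketch below computes the MH common odds ratio and the ETS delta for a single item, stratifying on total score; the function name and input layout are assumptions of this sketch.

```python
# Minimal Mantel-Haenszel DIF sketch for one dichotomous item.
import numpy as np

def mantel_haenszel_dif(correct, group, total_score):
    """Return the MH common odds ratio and ETS delta for one item.

    correct     : 0/1 array of responses to the studied item
    group       : 0 = reference group, 1 = focal group
    total_score : matching variable (e.g., total test score)
    """
    correct = np.asarray(correct)
    group = np.asarray(group)
    total_score = np.asarray(total_score)

    num, den = 0.0, 0.0
    for k in np.unique(total_score):           # one 2x2 table per score stratum
        s = total_score == k
        a = np.sum((group[s] == 0) & (correct[s] == 1))  # reference, correct
        b = np.sum((group[s] == 0) & (correct[s] == 0))  # reference, incorrect
        c = np.sum((group[s] == 1) & (correct[s] == 1))  # focal, correct
        d = np.sum((group[s] == 1) & (correct[s] == 0))  # focal, incorrect
        n = a + b + c + d
        if n == 0:                              # skip empty strata
            continue
        num += a * d / n
        den += b * c / n

    alpha_mh = num / den                        # common odds ratio (assumes den > 0)
    delta_mh = -2.35 * np.log(alpha_mh)         # ETS delta scale
    return alpha_mh, delta_mh
```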
Peer reviewed
Filipi, Anna – Language Testing, 2012
The Assessment of Language Competence (ALC) certificates form an annual, international testing program developed by the Australian Council for Educational Research to test the listening and reading comprehension skills of students in the lower to middle years of secondary school. The tests are developed for three levels in French, German, Italian and…
Descriptors: Listening Comprehension Tests, Item Response Theory, Statistical Analysis, Foreign Countries
Peer reviewed
Chapelle, Carol A.; Chung, Yoo-Ree; Hegelheimer, Volker; Pendar, Nick; Xu, Jing – Language Testing, 2010
This study piloted test items that will be used in a computer-delivered and scored test of productive grammatical ability in English as a second language (ESL). Findings from research on learners' development of morphosyntactic, syntactic, and functional knowledge were synthesized to create a framework of grammatical features. We outline the…
Descriptors: Test Items, Grammar, Developmental Stages, Computer Assisted Testing
Peer reviewed
Currie, Michael; Chiramanee, Thanyapa – Language Testing, 2010
Noting the widespread use of multiple-choice items in tests in English language education in Thailand, this study compared their effect against that of constructed-response items. One hundred and fifty-two university undergraduates took a test of English structure first in constructed-response format, and later in three stem-equivalent…
Descriptors: Experimental Groups, Multiple Choice Tests, Foreign Countries, Language Tests
Peer reviewed
Chapelle, Carol A.; Chung, Yoo-Ree – Language Testing, 2010
Advances in natural language processing (NLP) and automatic speech recognition and processing technologies offer new opportunities for language testing. Despite their potential uses on a range of language test item types, relatively little work has been done in this area, and it is therefore not well understood by test developers, researchers or…
Descriptors: Test Items, Computational Linguistics, Testing, Language Tests
Peer reviewed
Beglar, David – Language Testing, 2010
The primary purpose of this study was to provide preliminary validity evidence for a 140-item form of the Vocabulary Size Test, which is designed to measure written receptive knowledge of the first 14,000 words of English. Nineteen native speakers of English and 178 native speakers of Japanese participated in the study. Analyses based on the Rasch…
Descriptors: Test Items, Native Speakers, Test Validity, Vocabulary
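The Beglar (2010) entry above mentions Rasch-based analyses of the Vocabulary Size Test. A minimal sketch of the dichotomous Rasch model follows, with arbitrary ability and difficulty values; the study itself relied on dedicated Rasch analysis software, which this toy example does not reproduce.

```python
# Minimal dichotomous Rasch model sketch.
import numpy as np

def rasch_p(theta, b):
    """Probability of a correct response given ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Toy example: one test taker (theta = 0.5) facing items of varying difficulty.
difficulties = np.array([-1.0, 0.0, 0.5, 1.5])
probs = rasch_p(0.5, difficulties)
print(probs)         # item-level success probabilities
print(probs.sum())   # expected raw score on these four items
```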
Peer reviewed
Bunch, Michael B. – Language Testing, 2011
Title III of Public Law 107-110 (No Child Left Behind; NCLB) provided for the creation of assessments of English language learners (ELLs) and established, through the Enhanced Assessment Grant program, a platform from which four consortia of states developed ELL tests aligned to rigorous statewide content standards. Those four tests (ACCESS for ELLs,…
Descriptors: Test Items, Student Evaluation, Federal Legislation, Formative Evaluation
Peer reviewed
Jang, Eunice Eunhee – Language Testing, 2009
With recent statistical advances in cognitive diagnostic assessment (CDA), the CDA approach has been increasingly applied to non-diagnostic tests partly to meet accountability demands for student achievement. The study aimed to evaluate critically the validity of the CDA application to an existing non-diagnostic L2 reading comprehension test and…
Descriptors: Feedback (Response), Reading Comprehension, Test Items, Validity
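The Jang (2009) entry above concerns cognitive diagnostic assessment (CDA). As a generic illustration of how a CDA model links item responses to skill mastery (not the specific model used in the study), the sketch below implements the simple DINA response probability with a hypothetical Q-matrix row, skill profile, and slip/guess values.

```python
# Sketch of the DINA model, a simple cognitive diagnostic model.
import numpy as np

def dina_p_correct(q_row, alpha, slip, guess):
    """P(correct) for one item under the DINA model.

    q_row : 0/1 vector of attributes the item requires (one row of the Q-matrix)
    alpha : 0/1 vector of attributes the examinee has mastered
    slip  : probability of answering incorrectly despite full mastery
    guess : probability of answering correctly without full mastery
    """
    eta = int(np.all(alpha >= q_row))   # 1 iff every required skill is mastered
    return (1 - slip) ** eta * guess ** (1 - eta)

# Toy example: item requires skills 1 and 3; examinee has mastered skills 1-3.
print(dina_p_correct(np.array([1, 0, 1]), np.array([1, 1, 1]), slip=0.1, guess=0.2))
```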
Peer reviewed
Malabonga, Valerie; Kenyon, Dorry M.; Carlo, Maria; August, Diane; Louguit, Mohammed – Language Testing, 2008
This paper describes the development and validation of the Cognate Awareness Test (CAT), which measures cognate awareness in Spanish-speaking English Language Learners (ELLs) in fourth and fifth grade. An investigation of differential performance on the two subtests of the CAT (cognates and noncognates) provides evidence that the instrument is…
Descriptors: Speech Communication, Second Language Learning, Grade 4, Grade 5