Ludewig, Ulrich; Schwerter, Jakob; McElvany, Nele – Journal of Psychoeducational Assessment, 2023
A better understanding of how distractor features influence the plausibility of distractors is essential for efficient multiple-choice (MC) item construction in educational assessment. The plausibility of distractors has a major influence on the psychometric characteristics of MC items. Our analysis utilizes the nominal categories model to…
Descriptors: Vocabulary, Language Tests, German, Grade 4
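The nominal categories model mentioned above assigns each response option (key and distractors) its own slope and intercept, so a distractor's plausibility shows up in its category parameters. A minimal sketch of the model's category probabilities, with illustrative parameter values (all numbers here are made up for demonstration):

```python
import math

def nominal_model_probs(theta, slopes, intercepts):
    """Bock's nominal categories model: probability of choosing each
    response option given ability theta.
    P(k | theta) = exp(a_k*theta + c_k) / sum_j exp(a_j*theta + c_j)
    """
    logits = [a * theta + c for a, c in zip(slopes, intercepts)]
    m = max(logits)  # subtract the max to stabilise the softmax
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative 4-option item: option 0 is the key (highest slope);
# options 1-3 are distractors of varying plausibility.
probs = nominal_model_probs(theta=1.0,
                            slopes=[1.2, -0.2, -0.4, -0.6],
                            intercepts=[0.0, 0.5, 0.2, -0.7])
```

A more plausible distractor has a higher intercept (it attracts more low-ability examinees), which is the kind of parameter the study relates to distractor features.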
Baghaei, Purya; Christensen, Karl Bang – Language Testing, 2023
C-tests are gap-filling tests mainly used as rough and economical measures of second-language proficiency for placement and research purposes. A C-test usually consists of several short independent passages where the second half of every other word is deleted. Owing to their interdependent structure, C-test items violate the local independence…
Descriptors: Item Response Theory, Language Tests, Language Proficiency, Second Language Learning
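The deletion rule described in the abstract (the second half of every other word is removed) is mechanical enough to sketch in code. A minimal illustration of the classic rule, keeping the first half of each mutilated word; real C-tests typically leave the first and last sentences intact and skip one-letter words, refinements this sketch ignores:

```python
import math

def make_c_test(passage, start=1):
    """Toy C-test constructor: starting at word index `start`, delete
    the second half of every other word, keeping the first
    ceil(n/2) letters and marking each deleted letter with '_'.
    """
    words = passage.split()
    out = []
    for i, w in enumerate(words):
        if i >= start and (i - start) % 2 == 0 and len(w) > 1:
            keep = math.ceil(len(w) / 2)
            out.append(w[:keep] + "_" * (len(w) - keep))
        else:
            out.append(w)
    return " ".join(out)

item = make_c_test("Language tests measure proficiency in many ways")
# every other word from index 1 onward is half-deleted
```

Because each gap depends on the surrounding mutilated text, gaps within a passage are not independent of one another, which is the local-independence violation the study addresses.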
Stefan O'Grady – International Journal of Listening, 2025
Language assessment is increasingly computer-mediated. This development presents opportunities with new task formats and equally a need for renewed scrutiny of established conventions. Recent recommendations to increase integrated skills assessment in lecture comprehension tests are premised on empirical research that demonstrates enhanced construct…
Descriptors: Language Tests, Lecture Method, Listening Comprehension Tests, Multiple Choice Tests
Sharareh Sadat Sarsarabi; Zeinab Sazegar – International Journal of Language Testing, 2023
The stem of a multiple-choice question can be written using two types of sentences: interruptive (periodic) and cumulative (loose). This study deals with different kinds of stems in designing multiple-choice (MC) items. To fill the existing gap in the literature, two groups of student teachers taking general English courses…
Descriptors: Language Tests, Test Format, Multiple Choice Tests, Student Placement
Choi, Ikkyu; Zu, Jiyun – ETS Research Report Series, 2022
Synthetically generated speech (SGS) has become an integral part of our oral communication in a wide variety of contexts. It can be generated instantly at a low cost and allows precise control over multiple aspects of output, all of which can be highly appealing to second language (L2) assessment developers who have traditionally relied upon human…
Descriptors: Test Wiseness, Multiple Choice Tests, Test Items, Difficulty Level
Liao, Ray J. T. – Language Testing, 2023
Among the variety of selected response formats used in L2 reading assessment, multiple-choice (MC) is the most commonly adopted, primarily due to its efficiency and objectiveness. Given the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which the MC format reliably measures learners' L2 reading…
Descriptors: Reading Tests, Language Tests, Second Language Learning, Second Language Instruction
Ayako Aizawa – Vocabulary Learning and Instruction, 2024
The Vocabulary Size Test (VST) measures English learners' decontextualised receptive vocabulary knowledge of written English and has nine bilingual versions with multiple-choice options written in other languages. This study used the English-Japanese version of the VST to investigate the extent to which loanword items were answered correctly by…
Descriptors: Linguistic Borrowing, Second Language Learning, Native Language, English (Second Language)
Lee, Yi-Hsuan; Haberman, Shelby J.; Dorans, Neil J. – Journal of Educational Measurement, 2019
In many educational tests, both multiple-choice (MC) and constructed-response (CR) sections are used to measure different constructs. In many common cases, security concerns lead to the use of form-specific CR items that cannot be used for equating test scores, along with MC sections that can be linked to previous test forms via common items. In…
Descriptors: Scores, Multiple Choice Tests, Test Items, Responses
Budi Waluyo; Ali Zahabi; Luksika Ruangsung – rEFLections, 2024
The increasing popularity of the Common European Framework of Reference (CEFR) in non-native English-speaking countries has generated a demand for concrete examples in the creation of CEFR-based tests that assess the four main English skills. In response, this research endeavors to provide insight into the development and validation of a…
Descriptors: Language Tests, Language Proficiency, Undergraduate Students, Language Skills
Tomkowicz, Joanna; Kim, Dong-In; Wan, Ping – Online Submission, 2022
In this study we evaluated the stability of item parameters and student scores, using the pre-equated (pre-pandemic) parameters from Spring 2019 and post-equated (post-pandemic) parameters from Spring 2021 in two calibration and equating designs related to item parameter treatment: re-estimating all anchor parameters (Design 1) and holding the…
Descriptors: Equated Scores, Test Items, Evaluation Methods, Pandemics
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial, yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention as they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
Cheewasukthaworn, Kanchana – PASAA: Journal of Language Teaching and Learning in Thailand, 2022
In 2016, the Office of the Higher Education Commission issued a directive requiring all higher education institutions in Thailand to have their students take a standardized English proficiency test. According to the directive, the test's results had to align with the Common European Framework of Reference for Languages (CEFR). In response to this…
Descriptors: Test Construction, Standardized Tests, Language Tests, English (Second Language)
O'Grady, Stefan – Language Teaching Research, 2023
The current study explores the impact of varying multiple-choice question preview and presentation formats in a test of second language listening proficiency targeting different levels of text comprehension. In a between-participant design, participants completed a 30-item test of listening comprehension featuring implicit and explicit information…
Descriptors: Language Tests, Multiple Choice Tests, Scores, Second Language Learning
Holzknecht, Franz; McCray, Gareth; Eberharter, Kathrin; Kremmel, Benjamin; Zehentner, Matthias; Spiby, Richard; Dunlea, Jamie – Language Testing, 2021
Studies from various disciplines have reported that spatial location of options in relation to processing order impacts the ultimate choice of the option. A large number of studies have found a primacy effect, that is, the tendency to prefer the first option. In this paper we report on evidence that position of the key in four-option…
Descriptors: Language Tests, Test Items, Multiple Choice Tests, Listening Comprehension Tests
Qiao Wang; Ralph L. Rose; Ayaka Sugawara; Naho Orita – Vocabulary Learning and Instruction, 2025
VocQGen is an automated tool designed to generate multiple-choice cloze (MCC) questions for vocabulary assessment in second language learning contexts. It leverages several natural language processing (NLP) tools and OpenAI's GPT-4 model to produce MCC items quickly from user-specified word lists. To evaluate its effectiveness, we used the first…
Descriptors: Vocabulary Skills, Artificial Intelligence, Computer Software, Multiple Choice Tests
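The multiple-choice cloze (MCC) format that VocQGen produces has a simple underlying shape: a stem with the target word blanked out, plus a key and sampled distractors. A toy sketch of that item structure, with distractors drawn naively from the same word list; VocQGen itself uses NLP tools and GPT-4 for sentence and distractor generation, so this only illustrates the output format, not the tool's method (all names and data here are hypothetical):

```python
import random

def make_mcc_item(sentence, target, word_list, n_distractors=3, seed=0):
    """Toy multiple-choice cloze (MCC) item: blank out the target word
    in the carrier sentence and sample distractors from the word list.
    """
    stem = sentence.replace(target, "_____", 1)
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    pool = [w for w in word_list if w != target]
    options = rng.sample(pool, n_distractors) + [target]
    rng.shuffle(options)
    return {"stem": stem, "options": options, "key": target}

item = make_mcc_item("The committee will convene next week.",
                     "convene",
                     ["convene", "abolish", "distort", "persist", "emerge"])
```

In practice, distractor quality (same part of speech, similar frequency, not a synonym of the key) is the hard part, which is why the study evaluates the generated items rather than assuming they work.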