Showing 1 to 15 of 76 results
Peer reviewed
Direct link
Xueliang Chen; Vahid Aryadoust; Wenxin Zhang – Language Testing, 2025
The growing diversity among test takers in second or foreign language (L2) assessments puts the importance of fairness front and center. This systematic review aimed to examine how fairness in L2 assessments was evaluated through differential item functioning (DIF) analysis. A total of 83 articles from 27 journals were included in a systematic…
Descriptors: Second Language Learning, Language Tests, Test Items, Item Analysis
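Several entries in this listing rely on DIF analysis. As a hedged, minimal sketch (not the review's own procedure), a Mantel-Haenszel screen for a single dichotomous item could look like the following; the function name and the simulated data are illustrative assumptions, not material from any of the studies listed here.

```python
# Minimal sketch of one common DIF screening technique (Mantel-Haenszel).
# Assumed data layout: binary item responses plus a group label and a
# total-score matching variable.
import numpy as np

def mantel_haenszel_dif(item, group, total_score):
    """Return the MH common odds-ratio estimate for one dichotomous item.

    item        : 0/1 responses to the studied item
    group       : 0 = reference group, 1 = focal group
    total_score : matching variable used to stratify examinees
    """
    num, den = 0.0, 0.0
    for s in np.unique(total_score):
        mask = total_score == s
        ref, foc = group[mask] == 0, group[mask] == 1
        a = np.sum(item[mask][ref] == 1)   # reference correct
        b = np.sum(item[mask][ref] == 0)   # reference incorrect
        c = np.sum(item[mask][foc] == 1)   # focal correct
        d = np.sum(item[mask][foc] == 0)   # focal incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else np.nan  # ~1.0 suggests no DIF

# Simulated example: values near 1 indicate the item functions similarly
# for both groups after matching on total score.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 500)
total_score = rng.integers(0, 21, 500)
item = rng.integers(0, 2, 500)
print(mantel_haenszel_dif(item, group, total_score))
```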
Peer reviewed
PDF on ERIC Download full text
Diyorjon Abdullaev; Djuraeva Laylo Shukhratovna; Jamoldinova Odinaxon Rasulovna; Jumanazarov Umid Umirzakovich; Olga V. Staroverova – International Journal of Language Testing, 2024
Local item dependence (LID) refers to the situation where responses to items in a test or questionnaire are influenced by responses to other items in the test. This could be due to shared prompts, item content similarity, and deficiencies in item construction. LID due to a shared prompt is highly probable in cloze tests where items are nested…
Descriptors: Undergraduate Students, Foreign Countries, English (Second Language), Second Language Learning
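The local item dependence described in the entry above is often screened with residual correlations in the spirit of Yen's Q3. The sketch below is a rough, hypothetical stand-in that uses total-score-based expectations instead of a fitted IRT model, so it only illustrates the idea and is not the authors' analysis.

```python
# Crude residual-correlation screen for local item dependence.
import numpy as np

def residual_correlations(responses: np.ndarray) -> np.ndarray:
    """responses: examinees x items 0/1 matrix.

    Each item's residual is the observed response minus the mean response of
    examinees with the same total score; item pairs with large positive
    residual correlations are flagged as potentially locally dependent.
    """
    total = responses.sum(axis=1)
    expected = np.zeros_like(responses, dtype=float)
    for s in np.unique(total):
        mask = total == s
        expected[mask] = responses[mask].mean(axis=0)
    residuals = responses - expected
    return np.corrcoef(residuals, rowvar=False)

# Simulated example: inspect the upper-left corner of the residual matrix.
rng = np.random.default_rng(2)
data = rng.integers(0, 2, size=(400, 10))
print(np.round(residual_correlations(data)[:3, :3], 2))
```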
Peer reviewed
Direct link
Baghaei, Purya; Christensen, Karl Bang – Language Testing, 2023
C-tests are gap-filling tests mainly used as rough and economical measures of second-language proficiency for placement and research purposes. A C-test usually consists of several short independent passages where the second half of every other word is deleted. Owing to their interdependent structure, C-test items violate the local independence…
Descriptors: Item Response Theory, Language Tests, Language Proficiency, Second Language Learning
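The deletion rule that defines a C-test ("the second half of every other word is deleted") can be illustrated with a short, hypothetical sketch. Real C-tests typically start the deletions in the second sentence and follow stricter conventions for odd-length words; the function and example passage below are assumptions for illustration only.

```python
# Illustrative C-test gap construction: keep the first word intact, then
# remove the second half of every second word (rounded up; conventions vary).
def make_c_test(passage: str) -> str:
    words = passage.split()
    gapped = []
    for i, word in enumerate(words):
        if i > 0 and i % 2 == 0:
            keep = (len(word) + 1) // 2      # keep the first half
            gapped.append(word[:keep] + "_" * (len(word) - keep))
        else:
            gapped.append(word)
    return " ".join(gapped)

print(make_c_test("Language testing research often uses short passages like this one."))
# -> "Language testing rese____ often us__ short pass____ like th__ one."
```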
Peer reviewed
Direct link
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Peer reviewed
Direct link
Alpizar, David; Li, Tongyun; Norris, John M.; Gu, Lixiong – Language Testing, 2023
The C-test is a type of gap-filling test designed to efficiently measure second language proficiency. The typical C-test consists of several short paragraphs with the second half of every second word deleted. The words with deleted parts are treated as items nested within the corresponding paragraph. Given this testlet structure, it is commonly…
Descriptors: Psychometrics, Language Tests, Second Language Learning, Test Items
Peer reviewed
Direct link
Ghaemi, Hamed – Language Testing in Asia, 2022
Listening comprehension in English, as one of the most fundamental skills, plays an essential role in the process of learning English. Non-parametric Item Response Theory (NIRT) is a probabilistic, nonparametric approach to item response theory (IRT) that determines the unidimensionality and adaptability of a test. NIRT techniques are a useful tool…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Listening Comprehension Tests
Peer reviewed
PDF on ERIC Download full text
Choi, Ikkyu; Zu, Jiyun – ETS Research Report Series, 2022
Synthetically generated speech (SGS) has become an integral part of our oral communication in a wide variety of contexts. It can be generated instantly at a low cost and allows precise control over multiple aspects of output, all of which can be highly appealing to second language (L2) assessment developers who have traditionally relied upon human…
Descriptors: Test Wiseness, Multiple Choice Tests, Test Items, Difficulty Level
Peer reviewed
Direct link
Moradi, Elahe; Ghabanchi, Zargham; Pishghadam, Reza – Language Testing in Asia, 2022
Given the significance of test fairness, this study aimed to investigate a reading comprehension test for evidence of differential item functioning (DIF) based on English as a Foreign Language (EFL) learners' gender and their mode of learning (conventional vs. distance learning). To this end, 514 EFL learners were asked to take a 30-item…
Descriptors: Reading Comprehension, Test Bias, Test Items, Second Language Learning
Peer reviewed
Direct link
Elahe Moradi; Zargham Ghabanchi – Journal of College Reading and Learning, 2025
The present study scrutinized Iranian EFL learners' mode of learning (distance vs. conventional) as a probable source of bias in employing cognitive and metacognitive reading comprehension strategies. To this end, a total of 514 Iranian distance and conventional EFL learners were asked to take a 30-item multiple-choice reading comprehension test…
Descriptors: Reading Strategies, Reading Instruction, Conventional Instruction, In Person Learning
Wenyue Ma – ProQuest LLC, 2023
Foreign language placement testing, an important component in university foreign language programs, has received considerable, but not copious, attention over the years in second language (L2) testing research (Norris, 2004), and that attention has mostly concentrated on L2 English. In contrast to validation research on L2 English placement testing, the…
Descriptors: Second Language Learning, Chinese, Student Placement, Placement Tests
Peer reviewed
PDF on ERIC Download full text
Huu Thanh Minh Nguyen; Nguyen Van Anh Le – TESL-EJ, 2024
Comparing language tests and test preparation materials holds important implications for the latter's validity and reliability. However, few studies have compared such materials across a wide range of indices. Therefore, this study investigated the text complexity of IELTS academic reading tests (IRT) and IELTS reading practice tests (IRPrT).…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Readability
Peer reviewed
PDF on ERIC Download full text
Sarallah Jafaripour; Omid Tabatabaei; Hadi Salehi; Hossein Vahid Dastjerdi – International Journal of Language Testing, 2024
The purpose of this study was to examine gender and discipline-based Differential Item Functioning (DIF) and Differential Distractor Functioning (DDF) on the Islamic Azad University English Proficiency Test (IAUEPT). The study evaluated DIF and DDF across genders and disciplines using the Rasch model. To conduct DIF and DDF analysis, the examinees…
Descriptors: Item Response Theory, Test Items, Language Tests, Language Proficiency
Peer reviewed
Direct link
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention as they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
Peer reviewed
PDF on ERIC Download full text
Firoozi, Fatemeh – International Journal of Language Testing, 2021
Large-scale standardized ESL tests such as the International English Language Testing System (IELTS) are widely used around the world to measure the language proficiency of test-takers and make different decisions based on their scores. Reading comprehension is an integral part of such tests which requires test-takers to read passages and answer a…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Standardized Tests
Peer reviewed
PDF on ERIC Download full text
Tatarinova, Galiya; Neamah, Nour Raheem; Mohammed, Aisha; Hassan, Aalaa Yaseen; Obaid, Ali Abdulridha; Ismail, Ismail Abdulwahhab; Maabreh, Hatem Ghaleb; Afif, Al Khateeb Nashaat Sultan; Viktorovna, Shvedova Irina – International Journal of Language Testing, 2023
Unidimensionality is an important assumption of measurement, but it is very often violated. Most of the time, tests are deliberately constructed to be multidimensional to cover all aspects of the intended construct. In such situations, the application of unidimensional item response theory (IRT) models is not justified due to poor model fit and…
Descriptors: Item Response Theory, Test Items, Language Tests, Correlation
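A common first screen for the unidimensionality assumption discussed in the entry above is to inspect the eigenvalues of the inter-item correlation matrix; a dominant first eigenvalue is consistent with a single dimension, while several comparable eigenvalues point toward multidimensional IRT models. The sketch below is illustrative only and is not the analysis reported in the entry; the simulated data are an assumption.

```python
# Rough eigenvalue screen for unidimensionality.
import numpy as np

def eigenvalue_screen(responses: np.ndarray) -> np.ndarray:
    """responses: examinees x items matrix of scored answers; returns eigenvalues, largest first."""
    corr = np.corrcoef(responses, rowvar=False)
    return np.linalg.eigvalsh(corr)[::-1]

# Simulated (structureless) data: the first/second eigenvalue ratio stays small.
rng = np.random.default_rng(1)
data = rng.integers(0, 2, size=(300, 12))
vals = eigenvalue_screen(data)
print(np.round(vals[:3], 2), "first/second ratio:", round(vals[0] / vals[1], 2))
```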