Showing 16 to 30 of 526 results
Peer reviewed
Direct link
TsungHan Ho – Applied Measurement in Education, 2023
An operational multistage adaptive test (MST) requires the development of a large item bank and the effort to continuously replenish the item bank due to concerns about test security and validity over the long term. New items should be pretested and linked to the item bank before being used operationally. The linking item volume fluctuations in…
Descriptors: Bayesian Statistics, Regression (Statistics), Test Items, Pretesting
Peer reviewed
Direct link
Lim, Hwanggyu; Choe, Edison M. – Journal of Educational Measurement, 2023
The residual differential item functioning (RDIF) detection framework was developed recently under a linear testing context. To explore the potential application of this framework to computerized adaptive testing (CAT), the present study investigated the utility of the RDIF_R statistic both as an index for detecting uniform DIF of…
Descriptors: Test Items, Computer Assisted Testing, Item Response Theory, Adaptive Testing
Peer reviewed
Direct link
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage adaptive tests (MSTs) is difficult, particularly when the test spans several grade levels, and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
Peer reviewed
Direct link
Andreas Frey; Christoph König; Aron Fink – Journal of Educational Measurement, 2025
The highly adaptive testing (HAT) design is introduced as an alternative test design for the Programme for International Student Assessment (PISA). The principle of HAT is to be as adaptive as possible when selecting items while accounting for PISA's nonstatistical constraints and addressing issues concerning PISA such as item position effects.…
Descriptors: Adaptive Testing, Test Construction, Alternative Assessment, Achievement Tests
Peer reviewed
Direct link
Hyo Jeong Shin; Christoph König; Frederic Robin; Andreas Frey; Kentaro Yamamoto – Journal of Educational Measurement, 2025
Many international large-scale assessments (ILSAs) have switched to multistage adaptive testing (MST) designs to improve measurement efficiency in measuring the skills of the heterogeneous populations around the world. In this context, previous literature has reported the acceptable level of model parameter recovery under the MST designs when the…
Descriptors: Robustness (Statistics), Item Response Theory, Adaptive Testing, Test Construction
Peer reviewed
Direct link
Xiuxiu Tang; Yi Zheng; Tong Wu; Kit-Tai Hau; Hua-Hua Chang – Journal of Educational Measurement, 2025
Multistage adaptive testing (MST) has been recently adopted for international large-scale assessments such as Programme for International Student Assessment (PISA). MST offers improved measurement efficiency over traditional nonadaptive tests and improved practical convenience over single-item-adaptive computerized adaptive testing (CAT). As a…
Descriptors: Reaction Time, Test Items, Achievement Tests, Foreign Countries
Montserrat Beatriz Valdivia Medinaceli – ProQuest LLC, 2023
My dissertation examines three current challenges of international large-scale assessments (ILSAs) associated with the transition from linear testing to an adaptive testing design. ILSAs are important for making comparisons among populations and informing countries about the quality of their educational systems. ILSA's results inform policymakers…
Descriptors: International Assessment, Achievement Tests, Adaptive Testing, Test Items
Peer reviewed
Direct link
Lae Lae Shwe; Sureena Matayong; Suntorn Witosurapot – Education and Information Technologies, 2024
Multiple Choice Questions (MCQs) are an important evaluation technique for both examinations and learning activities. However, the manual creation of questions is time-consuming and challenging for teachers. Hence, there is a notable demand for an Automatic Question Generation (AQG) system. Several systems have been created for this aim, but the…
Descriptors: Difficulty Level, Computer Assisted Testing, Adaptive Testing, Multiple Choice Tests
Peer reviewed
Direct link
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, studies are limited that consider the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
Peer reviewed
Direct link
Gorgun, Guher; Bulut, Okan – Large-scale Assessments in Education, 2023
In low-stakes assessment settings, students' performance is not only influenced by students' ability level but also their test-taking engagement. In computerized adaptive tests (CATs), disengaged responses (e.g., rapid guesses) that fail to reflect students' true ability levels may lead to the selection of less informative items and thereby…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Algorithms
Peer reviewed
PDF on ERIC Download full text
Baryktabasov, Kasym; Jumabaeva, Chinara; Brimkulov, Ulan – Research in Learning Technology, 2023
Many examinations with thousands of participating students are organized worldwide every year. Usually, this large number of students sit the exams simultaneously and answer almost the same set of questions. This method of learning assessment requires tremendous effort and resources to prepare the venues, print question books and organize the…
Descriptors: Information Technology, Computer Assisted Testing, Test Items, Adaptive Testing
Peer reviewed
PDF on ERIC Download full text
Diyorjon Abdullaev; Djuraeva Laylo Shukhratovna; Jamoldinova Odinaxon Rasulovna; Jumanazarov Umid Umirzakovich; Olga V. Staroverova – International Journal of Language Testing, 2024
Local item dependence (LID) refers to the situation where responses to items in a test or questionnaire are influenced by responses to other items in the test. This could be due to shared prompts, item content similarity, and deficiencies in item construction. LID due to a shared prompt is highly probable in cloze tests where items are nested…
Descriptors: Undergraduate Students, Foreign Countries, English (Second Language), Second Language Learning
Peer reviewed
Direct link
Umi Laili Yuhana; Eko Mulyanto Yuniarno; Wenny Rahayu; Eric Pardede – Education and Information Technologies, 2024
In an online learning environment, it is important to establish a suitable assessment approach that can be adapted on the fly to accommodate the varying learning paces of students. At the same time, it is essential that assessment criteria remain compliant with the expected learning outcomes of the relevant education standard which predominantly…
Descriptors: Adaptive Testing, Electronic Learning, Elementary School Students, Student Evaluation
Yiqin Pan – ProQuest LLC, 2022
Item preknowledge refers to the phenomenon in which some examinees have access to live items before taking a test. It is one of the most common and significant concerns within the testing industry. Thus, various statistical methods have been proposed to detect item preknowledge in computerized linear or adaptive testing. However, the success of…
Descriptors: Artificial Intelligence, Prior Learning, Test Items, Algorithms
Ozge Ersan Cinar – ProQuest LLC, 2022
In educational tests, a group of questions related to a shared stimulus is called a testlet (e.g., a reading passage with multiple related questions). Use of testlets is very common in educational tests. Additionally, computerized adaptive testing (CAT) is a mode of testing where the test forms are created in real time tailoring to the test…
Descriptors: Test Items, Computer Assisted Testing, Adaptive Testing, Educational Testing