Showing all 6 results
Peer reviewed
Direct link
Wang, Xi; Liu, Yang; Robin, Frederic; Guo, Hongwen – International Journal of Testing, 2019
In an on-demand testing program, some items are repeatedly used across test administrations. This poses a risk to test security. In this study, we considered a scenario wherein a test was divided into two subsets: one consisting of secure items and the other consisting of possibly compromised items. In a simulation study of multistage adaptive…
Descriptors: Identification, Methods, Test Items, Cheating
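The secure/compromised split suggests a simple person-level screen; the sketch below is a generic two-proportion z-test on an examinee's scored responses (illustrative only: the function name, cutoff, and data are invented, and the study's actual detection statistic may differ).

    import numpy as np
    from math import sqrt

    def preknowledge_flag(secure_resp, compromised_resp, z_crit=2.33):
        """Two-proportion z-test: does an examinee score unusually higher
        on possibly compromised items than on secure items?
        secure_resp / compromised_resp: 0/1 arrays of scored responses."""
        n1, n2 = len(secure_resp), len(compromised_resp)
        p1, p2 = np.mean(secure_resp), np.mean(compromised_resp)
        p = (np.sum(secure_resp) + np.sum(compromised_resp)) / (n1 + n2)  # pooled
        se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        if se == 0:
            return 0.0, False
        z = (p2 - p1) / se          # positive z: better on compromised items
        return z, z > z_crit        # one-sided flag

    # Example: 0.50 correct on secure items, 0.90 on compromised ones
    secure = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0] * 2)
    comp = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1] * 2)
    z, flagged = preknowledge_flag(secure, comp)
    print(f"z = {z:.2f}, flagged = {flagged}")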
Peer reviewed
Direct link
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
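A minimal sketch of response-time thresholding in the spirit of Wise and Ma's normative-threshold method (a fixed percentage of each item's mean response time, capped at 10 seconds); the percentage, cap, and data below are illustrative assumptions, not the study's calibrated values.

    import numpy as np

    def normative_thresholds(rt, pct=0.10, max_sec=10.0):
        """Per-item thresholds: a fixed percentage of the item's mean
        response time, capped at max_sec seconds.
        rt: (n_examinees, n_items) response times in seconds."""
        return np.minimum(pct * rt.mean(axis=0), max_sec)

    def flag_noneffortful(rt, thresholds):
        """True where a response was faster than its item's threshold."""
        return rt < thresholds          # broadcasts across rows

    # Illustrative data: 5 examinees x 4 items (seconds)
    rng = np.random.default_rng(0)
    rt = rng.gamma(shape=4.0, scale=10.0, size=(5, 4))
    rt[2, 1] = 1.5                      # plant one rapid guess
    flags = flag_noneffortful(rt, normative_thresholds(rt))
    rte = 1.0 - flags.mean(axis=1)      # response-time effort per examinee
    print(flags)
    print(np.round(rte, 2))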
Peer reviewed
Direct link
Cui, Zhongmin; Liu, Chunyan; He, Yong; Chen, Hanwei – Journal of Educational Measurement, 2018
Allowing item review in computerized adaptive testing (CAT) has been receiving more attention in the educational measurement field as a growing number of testing programs adopt CAT. The research literature has shown that allowing item review in an educational test could result in more accurate estimates of examinees' abilities. The practice of item review in…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Wiseness
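The effect of review on ability estimates can be made concrete with a toy rescoring example under a generic 2PL model with EAP scoring (not the authors' design; the item parameters below are invented): changing one wrong answer during review shifts the estimate upward.

    import numpy as np

    def eap_theta(resp, a, b, grid=np.linspace(-4, 4, 81)):
        """EAP ability estimate under a 2PL model, standard-normal prior."""
        p = 1.0 / (1.0 + np.exp(-a[:, None] * (grid - b[:, None])))  # items x grid
        lik = np.prod(np.where(resp[:, None] == 1, p, 1 - p), axis=0)
        post = lik * np.exp(-0.5 * grid**2)
        post /= post.sum()
        return float(np.sum(grid * post))

    a = np.array([1.0, 1.2, 0.9, 1.5]); b = np.array([-0.5, 0.0, 0.5, 1.0])
    before = np.array([1, 0, 1, 0])
    after = before.copy(); after[1] = 1   # examinee changes one answer on review
    print(round(eap_theta(before, a, b), 3), round(eap_theta(after, a, b), 3))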
Peer reviewed
Direct link
Cappaert, Kevin J.; Wen, Yao; Chang, Yu-Feng – Measurement: Interdisciplinary Research and Perspectives, 2018
Events such as curriculum changes or practice effects can lead to item parameter drift (IPD) in computer adaptive testing (CAT). The current investigation introduced a point- and weight-adjusted D² method for IPD detection for use in a CAT environment when items are suspected of drifting across test administrations. Type I error and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Items, Identification
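For intuition, drift can be quantified as a weighted squared distance between an item's old and new characteristic curves; the sketch below is a generic stand-in for such an index, not the point- and weight-adjusted D² statistic the article proposes.

    import numpy as np

    def icc_2pl(theta, a, b):
        """2PL item characteristic curve."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def drift_d2(a_old, b_old, a_new, b_new, grid=None, weights=None):
        """Illustrative drift index: integrates the squared gap between
        old and new ICCs over an ability grid, weighted by a
        standard-normal density."""
        if grid is None:
            grid = np.linspace(-4, 4, 81)
        if weights is None:
            weights = np.exp(-0.5 * grid**2)
            weights /= weights.sum()
        gap = icc_2pl(grid, a_new, b_new) - icc_2pl(grid, a_old, b_old)
        return np.sum(weights * gap**2)

    # An item that drifted 0.5 logits easier between administrations
    print(f"D2 = {drift_d2(1.2, 0.0, 1.2, -0.5):.4f}")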
Peer reviewed
Direct link
Bao, Yu; Bradshaw, Laine – Measurement: Interdisciplinary Research and Perspectives, 2018
Diagnostic classification models (DCMs) can provide multidimensional diagnostic feedback about students' mastery levels of knowledge components or attributes. One advantage of using DCMs is the ability to accurately and reliably classify students into mastery levels with a relatively small number of items per attribute. Combining DCMs with…
Descriptors: Test Items, Selection, Adaptive Testing, Computer Assisted Testing
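A bare-bones illustration of DCM classification under a DINA model (maximum-likelihood search over all attribute profiles; the Q-matrix, slip, and guessing values below are invented, and the article's DCM need not be DINA):

    import numpy as np
    from itertools import product

    def dina_classify(resp, Q, slip, guess):
        """Attribute-profile classification under a DINA model.
        resp: 0/1 response vector; Q: (items x attributes) Q-matrix;
        slip/guess: per-item slip and guessing parameters."""
        K = Q.shape[1]
        best_profile, best_loglik = None, -np.inf
        for alpha in product([0, 1], repeat=K):         # all 2^K profiles
            alpha = np.array(alpha)
            eta = np.all(alpha >= Q, axis=1).astype(float)  # masters all required attrs
            p = eta * (1 - slip) + (1 - eta) * guess        # P(correct | profile)
            loglik = np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))
            if loglik > best_loglik:                    # flat prior -> MAP = ML
                best_profile, best_loglik = alpha, loglik
        return best_profile

    Q = np.array([[1, 0], [0, 1], [1, 1]])              # 3 items, 2 attributes
    slip = np.array([0.1, 0.1, 0.1]); guess = np.array([0.2, 0.2, 0.2])
    resp = np.array([1, 0, 0])                          # misses items needing attr 2
    print(dina_classify(resp, Q, slip, guess))          # likely [1 0]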
Peer reviewed
PDF on ERIC (full text available)
Sahin, Alper; Ozbasi, Durmus – Eurasian Journal of Educational Research, 2017
Purpose: This study aims to reveal the effects of content balancing and item selection method on ability estimation in computerized adaptive tests by comparing Fisher's maximum information (FMI) and likelihood weighted information (LWI) methods. Research Methods: Four groups of examinees (250, 500, 750, 1000) and a bank of 500 items with 10 different…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Test Content
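Fisher's maximum information selection under a simple content-balancing rule can be sketched as follows (a generic 2PL example; the blueprint scheme and parameters are illustrative assumptions, not the article's design). For a 2PL item, information at ability theta is a² · P(theta) · (1 − P(theta)), and the eligible item maximizing it is administered next.

    import numpy as np

    def fisher_info_2pl(theta, a, b):
        """Fisher information of 2PL items at ability theta: a^2 * P * (1-P)."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1 - p)

    def select_item(theta, a, b, content, administered, needed_area):
        """Maximum-information selection restricted to the content area the
        blueprint says is due next (a simple content-balancing scheme)."""
        info = fisher_info_2pl(theta, a, b)
        eligible = (content == needed_area) & ~administered
        info = np.where(eligible, info, -np.inf)
        return int(np.argmax(info))

    rng = np.random.default_rng(1)
    n = 500
    a = rng.uniform(0.8, 2.0, n); b = rng.normal(0, 1, n)
    content = rng.integers(0, 10, n)                    # 10 content areas
    administered = np.zeros(n, dtype=bool)
    item = select_item(theta=0.0, a=a, b=b, content=content,
                       administered=administered, needed_area=3)
    print(item, content[item])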