Showing all 15 results
Peer reviewed
Kylie Gorney; Mark D. Reckase – Journal of Educational Measurement, 2025
In computerized adaptive testing, item exposure control methods are often used to provide a more balanced usage of the item pool. Many of the most popular methods, including the restricted method (Revuelta and Ponsoda), use a single maximum exposure rate to limit the proportion of times that each item is administered. However, Barrada et al.…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
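For intuition, a minimal sketch of selection under a single maximum exposure rate, in the spirit of restricted-style control; the maximum-information fallback and all names here are illustrative, not the authors' implementation:

```python
import numpy as np

def select_item(info, admin_counts, tests_started, r_max=0.25):
    """Pick the most informative item whose empirical exposure rate
    (administrations / tests started) is still below the ceiling r_max.
    Items at the ceiling are temporarily ineligible."""
    info = np.asarray(info, float)
    rates = np.asarray(admin_counts, float) / max(tests_started, 1)
    eligible = np.flatnonzero(rates < r_max)
    if eligible.size == 0:  # every item at the ceiling: fall back to the full pool
        eligible = np.arange(info.size)
    return int(eligible[np.argmax(info[eligible])])
```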
Peer reviewed
Ersen, Rabia Karatoprak; Lee, Won-Chan – Journal of Educational Measurement, 2023
The purpose of this study was to compare calibration and linking methods for placing pretest item parameter estimates on the item pool scale in a 1-3 computerized multistage adaptive testing design in terms of item parameter recovery. Two models were used: embedded-section, in which pretest items were administered within a separate module, and…
Descriptors: Pretesting, Test Items, Computer Assisted Testing, Adaptive Testing
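As background, here is a minimal mean/sigma linking sketch for placing new difficulty estimates on a reference scale via common items; it is one standard linking method, not necessarily among those compared in the study:

```python
import numpy as np

def mean_sigma_link(b_new, b_ref):
    """Linear transformation b* = A*b + B, with A and B chosen so the
    linked common-item difficulties match the reference-scale mean and
    standard deviation."""
    b_new, b_ref = np.asarray(b_new, float), np.asarray(b_ref, float)
    A = b_ref.std(ddof=1) / b_new.std(ddof=1)
    B = b_ref.mean() - A * b_new.mean()
    return A * b_new + B
```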
Peer reviewed
Xu, Lingling; Wang, Shiyu; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2021
Designing a multidimensional multistage adaptive test (M-MST) based on a multidimensional item response theory (MIRT) model is critical for making full use of the advantages of both MST and MIRT in multidimensional assessments. This study proposed two types of automated test assembly (ATA) algorithms and one set of routing rules that can facilitate…
Descriptors: Item Response Theory, Adaptive Testing, Automation, Test Construction
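The snippet does not spell out the proposed routing rules, but a common cut-point rule for MST routing looks like the sketch below (purely illustrative):

```python
def route_module(theta_hat, cutpoints):
    """Send the examinee to the next-stage module whose difficulty band
    contains the provisional ability estimate (cutpoints sorted ascending)."""
    for module, cut in enumerate(cutpoints):
        if theta_hat < cut:
            return module
    return len(cutpoints)  # hardest module

# e.g., cutpoints [-0.5, 0.5] route into easy / medium / hard modules
```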
Peer reviewed
van der Linden, Wim J.; Choi, Seung W. – Journal of Educational Measurement, 2020
One method of controlling test security in adaptive testing is to impose random item-ineligibility constraints on item selection, with probabilities automatically updated to maintain a predetermined upper bound on the exposure rates. Three major improvements to the method are presented. First, a few modifications to improve the…
Descriptors: Adaptive Testing, Item Response Theory, Feedback (Response), Item Analysis
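A heuristic sketch of the feedback idea in the abstract: before each test, items are declared ineligible at random, and the eligibility probabilities are nudged so observed exposure rates stay under the bound. The multiplicative update here is illustrative, not the authors' exact rule:

```python
import numpy as np

def update_eligibility(p_elig, exposure_rates, r_max):
    """Shrink eligibility probabilities for items running above the target
    exposure rate; items at or under the target drift back toward full
    eligibility (probability 1)."""
    rates = np.asarray(exposure_rates, float)
    ratio = np.full_like(rates, np.inf)
    np.divide(r_max, rates, out=ratio, where=rates > 0)
    return np.minimum(1.0, np.asarray(p_elig, float) * ratio)

def draw_ineligible(p_elig, rng):
    """Bernoulli draws deciding which items this examinee may NOT see."""
    p = np.asarray(p_elig, float)
    return rng.random(p.size) >= p
```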
Peer reviewed
Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when giving any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework to measure the amount of adaptation of Rasch-based CATs based on the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
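The snippet does not give the framework's exact statistics; one plausible summary in the same spirit is the mean gap between administered item locations and the provisional ability estimates:

```python
import numpy as np

def mean_location_gap(b_selected, theta_path):
    """Average absolute difference between each administered Rasch
    difficulty and the ability estimate in force when it was selected;
    a highly adaptive CAT should keep this gap small. Hypothetical
    summary, not the authors' measure."""
    b = np.asarray(b_selected, float)
    th = np.asarray(theta_path, float)
    return float(np.abs(b - th).mean())
```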
Peer reviewed
Kang, Hyeon-Ah; Zhang, Susu; Chang, Hua-Hua – Journal of Educational Measurement, 2017
The development of cognitive diagnostic-computerized adaptive testing (CD-CAT) has provided a new perspective for gaining information about examinees' mastery of a set of cognitive attributes. This study proposes a new item selection method within the framework of dual-objective CD-CAT that simultaneously addresses examinees' attribute mastery…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Test Items
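Schematically, a dual-objective criterion scores each candidate item on both targets at once; the simple weighted blend below is illustrative only, not the proposed method:

```python
import numpy as np

def dual_objective_index(attr_info, theta_info, w=0.5):
    """Blend of an attribute-classification information measure and an
    ability information measure for each candidate item; w sets the
    relative emphasis on the diagnostic objective."""
    return w * np.asarray(attr_info, float) + (1 - w) * np.asarray(theta_info, float)
```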
Peer reviewed
Yao, Lihua – Journal of Educational Measurement, 2014
The intent of this research was to find an item selection procedure in the multidimensional computer adaptive testing (CAT) framework that yielded higher precision for both the domain and composite abilities, had a higher usage of the item pool, and controlled the exposure rate. Five multidimensional CAT item selection procedures (minimum angle;…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
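As an illustrative stand-in for the sequential hypothesis tests described, a one-sided CUSUM on standardized Rasch residuals flags an item once its responses drift from what its banked parameters predict:

```python
import numpy as np

def cusum_flags_item(responses, thetas, b, k=0.5, h=5.0):
    """One-sided CUSUM over standardized residuals for a single item.
    k is the drift allowance, h the decision limit; crossing h signals
    that the item's statistical behavior has likely changed."""
    s = 0.0
    for y, theta in zip(responses, thetas):
        p = 1.0 / (1.0 + np.exp(-(theta - b)))   # Rasch success probability
        z = (y - p) / np.sqrt(p * (1.0 - p))     # standardized residual
        s = max(0.0, s + z - k)                  # CUSUM recursion
        if s > h:
            return True
    return False
```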
Peer reviewed
Han, Kyung T. – Journal of Educational Measurement, 2012
Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
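The abstract names the trade-off without a formula; a hypothetical selection index illustrating the balance (not Han's actual criterion) discounts information by prior usage:

```python
import numpy as np

def utilization_weighted_info(info, admin_counts, alpha=1.0):
    """Fisher information discounted by how often each item has already
    been administered, so lightly used items become more competitive."""
    return np.asarray(info, float) / (1.0 + alpha * np.asarray(admin_counts, float))
```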
Peer reviewed
Stocking, Martha L.; Ward, William C.; Potenza, Maria T. – Journal of Educational Measurement, 1998
Explored, using simulations, the use of disclosed items under continuous testing conditions in a worst-case scenario that assumes disclosed items are always answered correctly. Some item pool and test designs were identified in which the use of disclosed items produces effects on test scores that may be viewed as negligible. (Author/MAK)
Descriptors: Adaptive Testing, Cheating, Computer Assisted Testing, Item Banks
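The worst-case assumption is easy to state in simulation code; a minimal sketch, far simpler than the paper's designs:

```python
import numpy as np

def simulate_responses(theta, b_items, disclosed, rng):
    """Rasch-model responses, except disclosed items are always answered
    correctly -- the worst-case scenario in the abstract."""
    b = np.asarray(b_items, float)
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    y = rng.random(b.size) < p
    return np.where(np.asarray(disclosed, bool), True, y)
```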
Peer reviewed
Potenza, Maria T.; Stocking, Martha L. – Journal of Educational Measurement, 1997
Common strategies for dealing with flawed items in conventional testing, grounded in principles of fairness to examinees, are re-examined in the context of adaptive testing. The additional strategy of retesting from a pool cleansed of flawed items is found, through a Monte Carlo study, to bring about no practical improvement. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Monte Carlo Methods
Peer reviewed
Wainer, Howard; Kiely, Gerard L. – Journal of Educational Measurement, 1987
The testlet, a bundle of test items, alleviates some problems associated with computerized adaptive testing: context effects, lack of robustness, and item difficulty ordering. While testlets may be linear or hierarchical, the most useful ones are four-level hierarchical units, containing 15 items and partitioning examinees into 16 classes. (GDC)
Descriptors: Adaptive Testing, Computer Assisted Testing, Context Effect, Item Banks
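The arithmetic behind the 15-item/16-class figure: a four-level binary routing tree holds 1 + 2 + 4 + 8 = 15 items, and four right/wrong answers land the examinee in one of 2^4 = 16 terminal classes. A minimal sketch:

```python
def testlet_class(responses):
    """Route through a four-level hierarchical testlet: at each level the
    answer (0/1) picks the next item, so a path through 4 of the 15
    items ends in one of 16 classes."""
    node = 0
    for correct in responses[:4]:
        node = 2 * node + int(correct)
    return node  # terminal class, 0..15
```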
Peer reviewed
Ariel, Adelaide; Veldkamp, Bernard P.; van der Linden, Wim J. – Journal of Educational Measurement, 2004
Preventing items from being over- or underexposed is one of the main problems in computerized adaptive testing. Though the problem of overexposed items can be solved using a probabilistic item-exposure control method, such methods are unable to deal with underexposed items. Using a system of rotating item pools,…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Test Construction
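A minimal sketch of pool rotation; the assembly of the pools themselves, which the paper treats carefully, is elided here:

```python
def pool_for_examinee(examinee_index, pools):
    """Rotate operational item pools across examinees round-robin, so the
    whole master pool stays in circulation and underexposed items get used."""
    return pools[examinee_index % len(pools)]
```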
Peer reviewed
Haladyna, Thomas M.; Roid, Gale H. – Journal of Educational Measurement, 1983
The present study showed that Rasch-based adaptive tests--when item domains were finite and specifiable--had greater precision in domain score estimation than test forms created by random sampling of items. Results were replicated across four data sources representing a variety of criterion-referenced, domain-based tests varying in length.…
Descriptors: Adaptive Testing, Criterion Referenced Tests, Error of Measurement, Estimation (Mathematics)
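Domain score estimation from a Rasch ability estimate, the quantity whose precision the study compared, can be sketched as the model-expected proportion correct over the finite item domain:

```python
import numpy as np

def rasch_domain_score(theta_hat, b_domain):
    """Expected proportion correct over the full, specifiable item domain
    at the examinee's estimated ability."""
    b = np.asarray(b_domain, float)
    return float((1.0 / (1.0 + np.exp(-(theta_hat - b)))).mean())
```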
Peer reviewed
Wainer, Howard; And Others – Journal of Educational Measurement, 1992
Computer simulations were run to measure the relationship between testlet validity and factors of item pool size and testlet length for both adaptive and linearly constructed testlets. Making a testlet adaptive yields only modest increases in aggregate validity because of the peakedness of the typical proficiency distribution. (Author/SLD)
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation