Showing all 11 results
Peer reviewed
Lewis, Jennifer; Lim, Hwanggyu; Padellaro, Frank; Sireci, Stephen G.; Zenisky, April L. – Educational Measurement: Issues and Practice, 2022
Setting cut scores on multistage tests (MSTs) is difficult, particularly when the test spans several grade levels and the selection of items from MST panels must reflect the operational test specifications. In this study, we describe, illustrate, and evaluate three methods for mapping panelists' Angoff ratings into cut scores on the scale underlying an MST. The…
Descriptors: Cutting Scores, Adaptive Testing, Test Items, Item Analysis
Peer reviewed
Full text available on ERIC
Gu, Lixiong; Ling, Guangming; Qu, Yanxuan – ETS Research Report Series, 2019
Research has found that the "a"-stratified item selection strategy (STR) for computerized adaptive tests (CATs) may lead to insufficient use of high-a items at later stages of the tests and thus to reduced measurement precision. A refined approach, unequal item selection across strata (USTR), effectively improves test precision over the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Use, Test Items
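The a-stratified (STR) strategy mentioned in this abstract can be sketched as follows: items are grouped into strata by ascending discrimination (a), and within the active stratum the item whose difficulty (b) is closest to the current ability estimate is chosen. This is a minimal illustration only; the pool sizes, parameter names, and stratum bookkeeping are assumptions, not the authors' implementation of STR or USTR.

```python
import numpy as np

def a_stratified_select(a, b, theta_hat, stratum, n_strata, administered):
    """Pick the next item under a-stratified selection (STR):
    sort items by discrimination a into n_strata strata (low-a strata
    are used early in the test), then, within the active stratum,
    choose the unused item whose difficulty b is closest to the
    current ability estimate theta_hat."""
    order = np.argsort(a)                      # item indices, ascending a
    strata = np.array_split(order, n_strata)   # stratum 0 = lowest-a items
    pool = [i for i in strata[stratum] if i not in administered]
    return min(pool, key=lambda i: abs(b[i] - theta_hat))

# Hypothetical 40-item pool for illustration.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.5, 40)
b = rng.normal(0.0, 1.0, 40)
item = a_stratified_select(a, b, theta_hat=0.2, stratum=0,
                           n_strata=4, administered=set())
```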
Peer reviewed
Stark, Stephen; Chernyshenko, Oleksandr S. – International Journal of Testing, 2011
This article delves into a relatively unexplored area of measurement by focusing on adaptive testing with unidimensional pairwise preference items. The use of such tests is becoming more common in applied non-cognitive assessment because research suggests that this format may help to reduce certain types of rater error and response sets commonly…
Descriptors: Test Length, Simulation, Adaptive Testing, Item Analysis
Peer reviewed
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. S. – Journal of Educational and Behavioral Statistics, 2008
During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher's information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Evaluation Criteria, Item Analysis
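The Fisher-information-based selection that this abstract contrasts with Kullback-Leibler and D-optimality criteria has a standard form for dichotomous items: administer the unused item with maximum information at the current ability estimate. A minimal sketch under a 2PL model (the function names and greedy rule are illustrative assumptions, not the article's procedure):

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P(theta) * (1 - P(theta)),
    where P is the probability of a correct response."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_max_info(theta_hat, a, b, administered):
    """Greedy CAT rule: choose the unused item with maximum
    Fisher information at the current ability estimate."""
    info = fisher_info_2pl(theta_hat, a, b)
    info[list(administered)] = -np.inf   # mask already-administered items
    return int(np.argmax(info))
```

Information peaks where difficulty matches ability (P = 0.5, so I = a^2/4), which is why this criterion is unstable early on, when the ability estimate is poor.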
Peer reviewed
van der Linden, Wim J.; Veldkamp, Bernard P. – Journal of Educational and Behavioral Statistics, 2007
Two conditional versions of the exposure-control method with item-ineligibility constraints for adaptive testing in van der Linden and Veldkamp (2004) are presented. The first version is for unconstrained item selection, the second for item selection with content constraints imposed by the shadow-test approach. In both versions, the exposure rates…
Descriptors: Law Schools, Adaptive Testing, Item Analysis, Probability
Peer reviewed
Shuqun, Yang; Shuliang, Ding; Zhiqiang, Yao – International Journal of Distance Education Technologies, 2009
Cognitive diagnosis (CD) plays an important role in intelligent tutoring systems. Computerized adaptive testing (CAT) is adaptive, fair, and efficient, making it well suited to large-scale examinations. Traditional cognitive diagnostic tests require quite a large number of items; an efficient, tailored CAT could be a remedy, so the CAT with…
Descriptors: Monte Carlo Methods, Distance Education, Adaptive Testing, Intelligent Tutoring Systems
Peer reviewed
Xu, Xueli; Douglas, Jeff – Psychometrika, 2006
Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using…
Descriptors: Simulation, Nonparametric Statistics, Item Analysis, Item Response Theory
Kalisch, Stanley J. – Journal of Computer-Based Instruction, 1974
A tailored testing model is proposed that employs the beta distribution, whose mean equals the difficulty of an item and whose variance is approximately equal to the sampling variance of the item difficulty, together with conditional item difficulties. (Author)
Descriptors: Adaptive Testing, Computer Assisted Testing, Evaluation Methods, Item Analysis
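The beta distribution in this abstract is pinned down by two moments: its mean must equal the item difficulty and its variance the sampling variance of that difficulty. Solving those two method-of-moments equations for the shape parameters gives alpha = m*nu and beta = (1 - m)*nu with nu = m(1 - m)/v - 1. A sketch of that calculation (the function name and example values are illustrative, not from the article):

```python
def beta_params(mean, var):
    """Method-of-moments shape parameters for a Beta(alpha, beta)
    distribution with the given mean and variance.  For a Beta
    distribution, mean = alpha/(alpha+beta) and
    var = alpha*beta / ((alpha+beta)**2 * (alpha+beta+1))."""
    nu = mean * (1.0 - mean) / var - 1.0
    if nu <= 0:
        raise ValueError("variance too large for a Beta distribution")
    return mean * nu, (1.0 - mean) * nu

# Hypothetical item: difficulty 0.6, sampling variance 0.01.
alpha, beta = beta_params(0.6, 0.01)
```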
Peer reviewed
Lord, Frederic M. – Educational and Psychological Measurement, 1971
Descriptors: Ability, Adaptive Testing, Computer Oriented Programs, Difficulty Level
Peer reviewed
Brennan, Robert L. – Journal of Educational Technology Systems, 1973
A review of four areas in which the computer has influenced the theory and practice of achievement testing: (a) test scoring and item analysis, (b) item sampling, (c) item generation, and (d) the sequencing of items, resulting in various types of adaptive testing. Reference is also made to the impact of computer-assisted achievement testing upon the…
Descriptors: Achievement Tests, Adaptive Testing, Computer Assisted Instruction, Data Processing
Peer reviewed
Liu, Chao-Lin – Educational Technology & Society, 2005
The author analyzes properties of mutual information between dichotomous concepts and test items. The properties generalize some common intuitions about item comparison, and provide principled foundations for designing item-selection heuristics for student assessment in computer-assisted educational systems. The proposed item-selection strategies…
Descriptors: Test Items, Heuristics, Classification, Item Analysis
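The mutual information between a dichotomous concept and a dichotomous item response, which this abstract takes as the basis for item-selection heuristics, reduces to a sum over a 2x2 joint distribution. A minimal sketch (the function name and example tables are illustrative assumptions, not the author's formulation):

```python
import math

def mutual_information(joint):
    """Mutual information I(C; X) in bits for a 2x2 joint distribution
    over concept mastery C (rows) and item response X (columns):
    I(C; X) = sum_{c,x} p(c,x) * log2( p(c,x) / (p(c) * p(x)) )."""
    pc = [sum(row) for row in joint]           # marginal over the concept
    px = [sum(col) for col in zip(*joint)]     # marginal over the response
    mi = 0.0
    for i in range(2):
        for j in range(2):
            p = joint[i][j]
            if p > 0.0:                        # 0 * log 0 is taken as 0
                mi += p * math.log2(p / (pc[i] * px[j]))
    return mi
```

An item whose response is independent of the concept carries 0 bits and is useless for assessment; a perfectly diagnostic item carries the full 1 bit, which is the intuition such selection heuristics formalize.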