Showing all 6 results
Peer reviewed
Yao, Lihua – Applied Psychological Measurement, 2013
Using simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared under different stopping rules. Fixed item exposure rates are used for all items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
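For context (not part of the abstract): a standard-error stopping rule typically ends the adaptive test once the precision of the current ability estimate reaches a preset level, which in the unidimensional case is often written as
\[ \mathrm{SE}(\hat\theta) \approx \frac{1}{\sqrt{I(\hat\theta)}} \le \delta, \]
where \(I(\hat\theta)\) is the test information accumulated over the administered items and \(\delta\) is the chosen precision threshold; in MCAT the scalar information is replaced by the Fisher information matrix.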
Peer reviewed
Green, Bert F. – Applied Psychological Measurement, 2011
This article refutes a recent claim that computer-based tests produce biased scores for very proficient test takers who make mistakes on one or two initial items and that the "bias" can be reduced by using a four-parameter IRT model. Because the same effect occurs with pattern scores on nonadaptive tests, the effect results from IRT scoring, not…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Bias, Item Response Theory
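As background (a standard formulation, not quoted from the article): the four-parameter logistic model adds an upper asymptote \(d_i < 1\) to the 3PL form, so even very proficient examinees retain a nonzero chance of missing an item,
\[ P_i(\theta) = c_i + \frac{d_i - c_i}{1 + e^{-a_i(\theta - b_i)}}, \]
where \(a_i\), \(b_i\), and \(c_i\) are the usual discrimination, difficulty, and lower-asymptote parameters.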
Peer reviewed
Sykes, Robert C.; Ito, Kyoko – Applied Psychological Measurement, 1997
Evaluated the equivalence of scores and one-parameter logistic model item difficulty estimates obtained from computer-based and paper-and-pencil forms of a licensure examination taken by 418 examinees. There was no effect of either order or mode of administration on equivalence. (SLD)
Descriptors: Computer Assisted Testing, Estimation (Mathematics), Health Personnel, Item Response Theory
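For reference (standard definition, not from the abstract): the one-parameter logistic model gives the probability of a correct response as a function of examinee ability \(\theta\) and item difficulty \(b_i\),
\[ P_i(\theta) = \frac{e^{\theta - b_i}}{1 + e^{\theta - b_i}}, \]
so a mode-of-administration effect would appear as systematic shifts in the estimated \(b_i\) between the computer-based and paper-and-pencil forms.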
Peer reviewed
Luecht, Richard M. – Applied Psychological Measurement, 1996
The example of a medical licensure test is used to demonstrate situations in which complex, integrated content must be balanced at the total test level for validity reasons, but items assigned to reportable subscore categories may be used under a multidimensional item response theory adaptive paradigm to improve subscore reliability. (SLD)
Descriptors: Adaptive Testing, Certification, Computer Assisted Testing, Licensing Examinations (Professions)
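For orientation (an illustrative compensatory form, not quoted from the article): a multidimensional IRT model replaces the single ability with a vector \(\boldsymbol\theta = (\theta_1, \ldots, \theta_m)\) spanning the reportable subscore dimensions, e.g.
\[ P_i(\boldsymbol\theta) = \frac{1}{1 + e^{-(\mathbf{a}_i^{\top}\boldsymbol\theta + d_i)}}, \]
so a response to any item can contribute information to several correlated subscores at once, which is the usual mechanism cited for improved subscore reliability.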
Peer reviewed
Zwick, Rebecca; And Others – Applied Psychological Measurement, 1994
Simulated data were used to investigate the performance of modified versions of the Mantel-Haenszel method of differential item functioning (DIF) analysis in computerized adaptive tests (CAT). Results indicate that CAT-based DIF procedures perform well and support the use of item response theory-based matching variables in DIF analysis. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Error of Measurement
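As a reminder of the method (the standard Mantel-Haenszel formulation, not taken from the abstract): examinees are stratified on a matching variable, here an IRT-based ability estimate rather than a number-right score, and the common odds ratio across strata \(k\) is
\[ \hat\alpha_{MH} = \frac{\sum_k A_k D_k / N_k}{\sum_k B_k C_k / N_k}, \]
where \(A_k, B_k\) are the counts of reference-group examinees answering the studied item correctly and incorrectly, \(C_k, D_k\) are the corresponding focal-group counts, and \(N_k\) is the stratum total; it is often reported on the ETS delta scale as \(\mathrm{MH\ D\text{-}DIF} = -2.35 \ln \hat\alpha_{MH}\).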
Peer reviewed
Luecht, Richard M.; Hirsch, Thomas M. – Applied Psychological Measurement, 1992
Derivations of several item selection algorithms for use in fitting test items to target information functions (IFs) are described. These algorithms, which use an average growth approximation of target IFs, were tested by generating six test forms and were found to provide reliable fit. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Equations (Mathematics), Goodness of Fit
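For context (standard IRT definitions, not quoted from the abstract): under the two-parameter logistic model an item's information function is
\[ I_i(\theta) = a_i^2\, P_i(\theta)\,[1 - P_i(\theta)], \qquad T(\theta) = \sum_{i \in \text{form}} I_i(\theta), \]
and automated assembly selects items so that the summed test information \(T(\theta)\) tracks the target IF across the ability range of interest.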