Showing all 5 results
Peer reviewed
Xu, Xueli; Douglas, Jeff – Psychometrika, 2006
Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using…
Descriptors: Simulation, Nonparametric Statistics, Item Analysis, Item Response Theory
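As an illustration only (not the authors' procedure), the sketch below shows one way adaptive item selection can proceed when item response functions are stored nonparametrically as tabulated curves on an ability grid: Fisher information is approximated by finite differences and the most informative remaining item is chosen at the provisional ability estimate. The grid, item names, and function names are all hypothetical.

```python
import numpy as np

def pointwise_information(p_grid, theta_grid):
    """Approximate Fisher information I(theta) = P'(theta)^2 / (P(1-P))
    for a dichotomous item whose response function is tabulated on theta_grid."""
    dp = np.gradient(p_grid, theta_grid)          # finite-difference slope P'(theta)
    p = np.clip(p_grid, 1e-6, 1 - 1e-6)           # guard against division by zero
    return dp ** 2 / (p * (1.0 - p))

def select_next_item(irf_table, administered, theta_hat, theta_grid):
    """Pick the not-yet-administered item with maximum approximate information
    at the current ability estimate theta_hat.

    irf_table: dict item_id -> array of P(correct) values on theta_grid
    """
    idx = int(np.argmin(np.abs(theta_grid - theta_hat)))  # grid point nearest theta_hat
    best_item, best_info = None, -np.inf
    for item_id, p_grid in irf_table.items():
        if item_id in administered:
            continue
        info = pointwise_information(p_grid, theta_grid)[idx]
        if info > best_info:
            best_item, best_info = item_id, info
    return best_item

# Toy usage with two hypothetical tabulated items
theta_grid = np.linspace(-3, 3, 61)
irf_table = {
    "item_easy": 1 / (1 + np.exp(-(theta_grid + 1.0))),
    "item_hard": 1 / (1 + np.exp(-2.0 * (theta_grid - 1.0))),
}
print(select_next_item(irf_table, administered=set(), theta_hat=0.8, theta_grid=theta_grid))
```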
Roussos, Louis; Nandakumar, Ratna; Cwikla, Julie – 2000
CATSIB is a differential item functioning (DIF) assessment methodology for computerized adaptive test (CAT) data. Kernel smoothing (KS) is a technique for nonparametric estimation of item response functions. In this study an attempt has been made to develop a more efficient DIF procedure for CAT data, KS-CATSIB, by combining CATSIB with kernel…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Bias, Item Response Theory
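For readers unfamiliar with the kernel-smoothing step named in this abstract, here is a minimal Nadaraya-Watson sketch of nonparametric item response function estimation: each examinee's 0/1 response to an item is smoothed against provisional ability estimates with a Gaussian kernel. The bandwidth and variable names are illustrative, and the paper's KS-CATSIB procedure involves additional DIF machinery not shown here.

```python
import numpy as np

def kernel_smoothed_irf(theta_hat, responses, eval_points, bandwidth=0.4):
    """Nadaraya-Watson estimate of P(correct | theta) at each evaluation point.

    theta_hat   : array of provisional ability estimates, one per examinee
    responses   : array of 0/1 responses to a single item, same length
    eval_points : ability values at which the smoothed IRF is evaluated
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    responses = np.asarray(responses, dtype=float)
    est = np.empty(len(eval_points))
    for k, t in enumerate(eval_points):
        w = np.exp(-0.5 * ((theta_hat - t) / bandwidth) ** 2)  # Gaussian kernel weights
        est[k] = np.sum(w * responses) / np.sum(w)             # weighted proportion correct
    return est

# Toy usage: simulate responses from a logistic item and recover its curve
rng = np.random.default_rng(0)
theta = rng.normal(size=2000)
p_true = 1 / (1 + np.exp(-1.5 * (theta - 0.5)))
u = rng.binomial(1, p_true)
grid = np.linspace(-2, 2, 9)
print(np.round(kernel_smoothed_irf(theta, u, grid), 2))
```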
Yan, Duanli; Lewis, Charles; Stocking, Martha – 1998
It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all new and currently considered computer-based tests. In addition to developing new models, researchers will need to give some attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Response Theory
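One model-light alternative discussed in this line of work (Yan, Lewis, and Stocking later pursued tree-based adaptive testing) is to route examinees through pre-assembled item blocks by cumulative number-correct rather than by an IRT ability estimate. The routing table below is a toy illustration under that assumption, not the authors' design; block contents and cut scores are invented.

```python
# Toy three-stage routing sketch: after each block, the cumulative number-correct
# decides which block is administered next. Blocks and cut scores are hypothetical.
ROUTING = {
    "stage1": {"block": ["i1", "i2", "i3"], "next": lambda s: "easy2" if s <= 1 else "hard2"},
    "easy2":  {"block": ["i4", "i5", "i6"], "next": lambda s: "easy3" if s <= 3 else "mid3"},
    "hard2":  {"block": ["i7", "i8", "i9"], "next": lambda s: "mid3" if s <= 4 else "hard3"},
}

def administer(answer_item, start="stage1", n_stages=2):
    """Walk the routing table; answer_item(item_id) returns 0 or 1."""
    node, total, path = start, 0, []
    for _ in range(n_stages):
        spec = ROUTING[node]
        total += sum(answer_item(i) for i in spec["block"])
        path.append(node)
        node = spec["next"](total)
    return path + [node], total

# Usage with a stub examinee who answers every item correctly
path, score = administer(lambda item: 1)
print(path, score)  # ['stage1', 'hard2', 'hard3'] 6
```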
Peer reviewed
Cliff, Norman; And Others – Applied Psychological Measurement, 1979
Monte Carlo research with TAILOR, a program using implied orders as a basis for tailored testing, is reported. TAILOR typically required about half the available items to estimate, for each simulated examinee, the responses on the remainder. (Author/CTM)
Descriptors: Adaptive Testing, Computer Programs, Item Sampling, Nonparametric Statistics
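The phrase "implied orders" refers to dominance relations among items: when item A is empirically harder than item B, passing A implies passing B, and failing B implies failing A, so observed responses can imply responses to items not yet administered. The sketch below illustrates that inference step only; TAILOR's actual order-estimation and item-selection logic is not reproduced, and the dominance relation is supplied as a hypothetical input.

```python
def implied_responses(observed, harder_than):
    """Infer unobserved item responses from an item dominance order.

    observed    : dict item -> 0/1 responses already collected
    harder_than : set of (a, b) pairs meaning item a dominates (is harder than) item b
    Returns a dict of inferred responses for items not yet administered.
    """
    inferred = {}
    changed = True
    while changed:                      # propagate until no new implication appears
        changed = False
        known = {**observed, **inferred}
        for a, b in harder_than:
            # Passing the harder item a implies passing the easier item b.
            if known.get(a) == 1 and b not in known:
                inferred[b] = 1
                changed = True
            # Failing the easier item b implies failing the harder item a.
            if known.get(b) == 0 and a not in known:
                inferred[a] = 0
                changed = True
    return inferred

# Toy usage: i3 harder than i2, i2 harder than i1
order = {("i3", "i2"), ("i2", "i1")}
print(implied_responses({"i2": 1}, order))  # {'i1': 1}
print(implied_responses({"i2": 0}, order))  # {'i3': 0}
```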
Peer reviewed
Meijer, Rob R. – Journal of Educational Measurement, 2004
Two new methods have been proposed to determine unexpected sum scores on sub-tests (testlets) both for paper-and-pencil tests and computer adaptive tests. A method based on a conservative bound using the hypergeometric distribution, denoted p, was compared with a method where the probability for each score combination was calculated using a…
Descriptors: Probability, Adaptive Testing, Item Response Theory, Scores
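As a hedged illustration of the second idea in this abstract (computing the probability of each score combination), the sketch below evaluates the exact distribution of a testlet sum score from item-level probabilities of a correct response (a Poisson-binomial recursion) and reports the tail probability of a score at least as low as the one observed. The item probabilities, the one-sided direction, and the function names are assumptions for the example; the paper's conservative hypergeometric bound p is not reproduced here.

```python
def sum_score_distribution(p_items):
    """Exact distribution of the number-correct score on a testlet,
    given each item's probability of a correct response (Poisson-binomial)."""
    dist = [1.0]                       # P(score = 0) before any item is added
    for p in p_items:
        new = [0.0] * (len(dist) + 1)
        for s, prob in enumerate(dist):
            new[s] += prob * (1 - p)   # item answered incorrectly
            new[s + 1] += prob * p     # item answered correctly
        dist = new
    return dist

def lower_tail_probability(p_items, observed_score):
    """P(sum score <= observed_score): how unexpectedly low the testlet score is."""
    dist = sum_score_distribution(p_items)
    return sum(dist[: observed_score + 1])

# Toy usage: five testlet items with model-based probabilities of success
p_items = [0.9, 0.8, 0.75, 0.7, 0.6]
print(round(lower_tail_probability(p_items, observed_score=1), 4))
```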