Showing all 6 results
Peer reviewed
van der Linden, Wim J.; Glas, Cees A. W. – Applied Measurement in Education, 2000
Performed a simulation study to demonstrate the dramatic impact that capitalization on estimation errors has on ability estimation in adaptive testing. Discusses four strategies for minimizing the likelihood of capitalization in computerized adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
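The capitalization effect at issue arises because adaptive item selection typically maximizes Fisher information at the current ability estimate, so items whose parameters were overestimated during calibration are chosen disproportionately often. A minimal simulation sketch in Python/NumPy illustrates the general phenomenon, assuming a two-parameter logistic model, normally distributed calibration error, and selection at a single ability level; it is an illustration only, not the authors' study design.

import numpy as np

rng = np.random.default_rng(0)
n_items, test_len = 500, 30
a_true = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)  # true discriminations
b_true = rng.normal(0.0, 1.0, size=n_items)                # true difficulties

# Calibration error: the estimated parameters differ from the true ones.
a_est = a_true + rng.normal(0.0, 0.25, size=n_items)
b_est = b_true + rng.normal(0.0, 0.25, size=n_items)

def info_2pl(a, b, theta):
    # Fisher information of a 2PL item at ability theta.
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

theta = 0.0                                    # ability at which items are selected
order = np.argsort(info_2pl(a_est, b_est, theta))[::-1]
picked = order[:test_len]                      # maximum-information selection on ESTIMATED parameters

print("mean estimated a of selected items:", a_est[picked].mean())
print("mean true a of the same items:     ", a_true[picked].mean())
# On average, the selected items' estimated discriminations exceed their true
# values: the selection rule capitalizes on favorable calibration errors,
# overstating precision and distorting ability estimation.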
Peer reviewed
Du, Yi; And Others – Applied Measurement in Education, 1993
A new computerized mastery test is described that builds on the Lewis and Sheehan (1990) sequential-testlet procedure but uses fuzzy set decision theory to determine stopping rules and the Rasch model to calibrate items and estimate abilities. Differences between fuzzy set and Bayesian methods are illustrated through an example. (SLD)
Descriptors: Bayesian Statistics, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
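For context, the Rasch model used for item calibration and ability estimation specifies the probability of a correct response to item $j$ by examinee $i$ as

P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},

where $\theta_i$ is the examinee's ability and $b_j$ the item's difficulty; ability is then estimated, for example, by maximizing $\prod_j P_{ij}^{x_{ij}}(1 - P_{ij})^{1 - x_{ij}}$ over $\theta_i$ given the calibrated $b_j$. The fuzzy set stopping rule is specific to the article and is not reproduced here.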
Peer reviewed
Rocklin, Thomas R. – Applied Measurement in Education, 1994
Reviews the effects of self-adapted testing (SAT), in which examinees choose item difficulty themselves, on ability estimates, precision, and efficiency, along with the mechanisms of SAT effects and examinee reactions to SAT. SAT is less efficient than computerized adaptive testing but more efficient than fixed-item testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
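Precision and efficiency in such comparisons are conventionally expressed through the test information function: the standard error of an ability estimate is the inverse square root of the information accumulated over the administered items,

SE(\hat{\theta}) = \frac{1}{\sqrt{I(\hat{\theta})}}, \qquad I(\theta) = \sum_{j \in \text{administered}} I_j(\theta),

so a more efficient testing procedure reaches a target standard error with fewer items. This standard relation is background for the efficiency ordering summarized above.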
Peer reviewed
De Ayala, R. J.; And Others – Applied Measurement in Education, 1992
A study involving 1,000 simulated examinees compared the partial credit and graded response models in computerized adaptive testing (CAT). The graded response model fit the data well and provided slightly more accurate ability estimates than those of the partial credit model. Benefits of polytomous model-based CATs are discussed. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Computer Simulation
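For context, the two polytomous models compared differ in how they form category response probabilities (standard formulations, not reproduced from the article itself). The partial credit model, a Rasch-family model with step difficulties $\delta_{jv}$, is divide-by-total:

P(X_j = k \mid \theta) = \frac{\exp\sum_{v=1}^{k}(\theta - \delta_{jv})}{\sum_{c=0}^{m_j}\exp\sum_{v=1}^{c}(\theta - \delta_{jv})}, \quad k = 0, \dots, m_j,

with the empty sum for $c = 0$ taken as 0. The graded response model instead models cumulative boundary probabilities with a discrimination parameter $a_j$ and differences them:

P^{*}_{jk}(\theta) = \frac{1}{1 + \exp[-a_j(\theta - b_{jk})]}, \qquad P(X_j = k \mid \theta) = P^{*}_{jk}(\theta) - P^{*}_{j,k+1}(\theta),

where $P^{*}_{j0} = 1$ and $P^{*}_{j,m_j+1} = 0$.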
Peer reviewed
Vispoel, Walter P.; And Others – Applied Measurement in Education, 1994
Vocabulary fixed-item (FIT), computerized-adaptive (CAT), and self-adapted (SAT) tests were compared with 121 college students. CAT was more precise and efficient than SAT, which was more precise and efficient than FIT. SAT also yielded higher ability estimates for individuals with lower verbal self-concepts. (SLD)
Descriptors: Ability, Adaptive Testing, College Students, Comparative Analysis
Peer reviewed
Stone, Clement A.; Lane, Suzanne – Applied Measurement in Education, 1991
A model-testing approach for evaluating the stability of item response theory item parameter estimates (IPEs) in a pretest-posttest design is illustrated. Nineteen items from the Head Start Measures Battery were used. A moderately high degree of stability in the IPEs for 5,510 children assessed on 2 occasions was found. (TJH)
Descriptors: Comparative Testing, Compensatory Education, Computer Assisted Testing, Early Childhood Education
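The model-testing approach itself is not detailed in this abstract. As a loose illustration of the underlying idea, the stability of item parameter estimates across two occasions can be examined by comparing the two sets of calibrated difficulties directly, for example through their correlation and root-mean-square difference. The Python sketch below uses small hypothetical values in place of actual calibration output and is not the authors' procedure.

import numpy as np

# Hypothetical item difficulty estimates from two calibration occasions;
# real values would come from separate IRT calibrations of the pretest
# and posttest response data.
b_time1 = np.array([-1.2, -0.8, -0.3, 0.0, 0.4, 0.9, 1.3])
b_time2 = np.array([-1.1, -0.9, -0.2, 0.1, 0.5, 0.8, 1.4])

r = np.corrcoef(b_time1, b_time2)[0, 1]              # linear agreement
rmsd = np.sqrt(np.mean((b_time1 - b_time2) ** 2))    # average drift per item

print(f"correlation between occasions: {r:.3f}")
print(f"root-mean-square difference:   {rmsd:.3f}")
# A high correlation and a small RMSD are consistent with stable item
# parameter estimates between the two occasions.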