Showing all 9 results
Peer reviewed
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan – Educational and Psychological Measurement, 2012
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of these procedures with that of a mixed-integer programming (MIP) method for assembling multiple parallel test… (the assembly task is sketched below).
Descriptors: Test Items, Selection, Test Construction, Item Response Theory
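The Cell Only and Cell and Cube procedures are not spelled out in this abstract, so the following is only a minimal sketch of the underlying parallel-forms assembly task that an MIP solver would handle exactly. It uses a greedy heuristic and simulated 2PL item parameters; the bank size echoes the abstract, everything else is an assumption.

```python
# Hedged sketch (not the paper's Cell/Cube or MIP method): greedily build
# parallel forms whose test-information profiles stay close to each other
# at a few fixed theta points. All item parameters are simulated.
import math, random

random.seed(1)

def item_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

bank = [{"a": random.uniform(0.5, 2.0), "b": random.uniform(-2, 2)}
        for _ in range(540)]          # bank size taken from the abstract

thetas = [-1.0, 0.0, 1.0]             # points where forms should match
n_forms, form_len = 3, 30
forms = [[] for _ in range(n_forms)]
totals = [[0.0] * len(thetas) for _ in range(n_forms)]
pool = list(range(len(bank)))

for _ in range(form_len):
    for f in range(n_forms):
        # Average information profile across forms; pick the item that
        # keeps this form closest to that running average.
        target = [sum(t[k] for t in totals) / n_forms for k in range(len(thetas))]
        best = min(pool, key=lambda i: sum(
            (totals[f][k] + item_info(thetas[k], bank[i]["a"], bank[i]["b"])
             - target[k]) ** 2 for k in range(len(thetas))))
        pool.remove(best)
        forms[f].append(best)
        for k, th in enumerate(thetas):
            totals[f][k] += item_info(th, bank[best]["a"], bank[best]["b"])

for f, info in enumerate(totals):
    print(f"form {f}: info at thetas =", [round(x, 2) for x in info])
```

An MIP formulation would instead impose the matching as explicit constraints and optimize over all forms jointly; the greedy pass above only approximates that.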
Peer reviewed
Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung – Applied Psychological Measurement, 2012
In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on such models were implemented, and their performance under a variety of… (the shared CAT loop is sketched below).
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Simulation
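The hierarchical (higher order) structure is beyond a short snippet, but the CAT algorithms being compared share a common skeleton: select the most informative unused item, score the response, update the ability estimate. A minimal sketch of that loop, assuming a plain unidimensional 2PL model and simulated parameters:

```python
# Hedged sketch of a generic CAT loop; the higher-order structure from
# the paper is omitted and a unidimensional 2PL model is used instead.
import math, random

random.seed(2)

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """2PL Fisher information."""
    p = p2pl(theta, a, b)
    return a * a * p * (1 - p)

bank = [{"a": random.uniform(0.5, 2.0), "b": random.uniform(-2.5, 2.5)}
        for _ in range(300)]
true_theta = 0.8
grid = [g / 10.0 for g in range(-40, 41)]        # theta grid for EAP
posterior = [math.exp(-g * g / 2) for g in grid] # N(0, 1) prior

administered = set()
theta_hat = 0.0
for step in range(20):
    # Select the unused item with maximum information at theta_hat.
    best = max((i for i in range(len(bank)) if i not in administered),
               key=lambda i: info(theta_hat, bank[i]["a"], bank[i]["b"]))
    administered.add(best)
    resp = random.random() < p2pl(true_theta, bank[best]["a"], bank[best]["b"])
    # Bayesian update over the grid, then EAP ability estimate.
    for k, g in enumerate(grid):
        p = p2pl(g, bank[best]["a"], bank[best]["b"])
        posterior[k] *= p if resp else (1 - p)
    s = sum(posterior)
    theta_hat = sum(g * w for g, w in zip(grid, posterior)) / s

print("true theta:", true_theta, " EAP estimate:", round(theta_hat, 3))
```

A higher-order variant would carry a posterior over both the overall trait and the domain traits, but the select/respond/update cycle is the same.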
Deng, Hui; Chang, Hua-Hua – 2001
The purpose of this study was to compare a proposed revised a-stratified (alpha-stratified) USTR method of test-item selection with both the original a-stratified multistage computerized adaptive testing approach (STR) and maximum Fisher information selection (FSH), with respect to test efficiency and item pool usage, using simulated computerized… (exposure tallying is sketched below).
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
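One of the abstract's outcome measures is item pool usage. A minimal sketch of how exposure rates are typically tallied in such comparisons, with the actual selection rules (USTR, STR, FSH) replaced by a random stand-in:

```python
# Hedged sketch of the exposure measure only: exposure rate =
# administrations / examinees. The selection rule is stubbed out with
# random sampling; a real study would plug USTR, STR, or FSH in here.
import random
from collections import Counter

random.seed(3)
n_items, n_examinees, test_len = 200, 1000, 20
counts = Counter()
for _ in range(n_examinees):
    # Stand-in for a real selection rule: a random 20-item test.
    for item in random.sample(range(n_items), test_len):
        counts[item] += 1

rates = {i: counts[i] / n_examinees for i in range(n_items)}
print("max exposure rate:", round(max(rates.values()), 3))
print("unused items:", sum(1 for r in rates.values() if r == 0))
```

Information-greedy rules like FSH tend to concentrate exposure on high-discrimination items; stratified rules spread usage across the pool, which is what these exposure tallies make visible.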
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1999
Proposes a new multistage adaptive-testing procedure that factors the discrimination parameter (a) into the item-selection process. Simulation studies indicate that the new strategy yields tests that are both efficient and well balanced with respect to item exposure (see the sketch below). (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
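A minimal sketch of the a-stratified idea: partition the bank into strata by ascending discrimination, administer early items from low-a strata, and save high-a items for later stages, matching difficulty to the current ability estimate within each stratum. Parameters are simulated and the within-stratum rule is assumed; the authors' exact procedure may differ.

```python
# Hedged sketch of a-stratified multistage selection with a simulated
# 2PL-style bank; b-matching within strata is an assumed selection rule.
import random

random.seed(4)
bank = sorted(({"a": random.uniform(0.4, 2.2), "b": random.uniform(-2, 2)}
               for _ in range(120)), key=lambda it: it["a"])
n_strata, test_len = 4, 20
size = len(bank) // n_strata
strata = [bank[k * size:(k + 1) * size] for k in range(n_strata)]

theta_hat = 0.0
test = []
for stage in range(n_strata):
    pool = strata[stage]
    for _ in range(test_len // n_strata):
        # Within the current stratum, pick the item whose difficulty is
        # closest to the current ability estimate (b-matching).
        item = min(pool, key=lambda it: abs(it["b"] - theta_hat))
        pool.remove(item)
        test.append(item)
        # theta_hat would be re-estimated from responses here (omitted).

print("mean a by quarter of the test:",
      [round(sum(it["a"] for it in test[k*5:(k+1)*5]) / 5, 2)
       for k in range(4)])
```

The printed means should rise across the quarters of the test, which is the point of the design: high-a items are spent only once the ability estimate is stable enough to use them well, balancing exposure.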
Knol, Dirk L. – 1989
Two iterative procedures for constructing Rasch scales are presented. A log-likelihood ratio test, based on a quasi-loglinear formulation of the Rasch model, is given by which one item at a time can be deleted from or added to an initial item set. In the so-called "top-down" algorithm (sketched below), items are stepwise deleted from a relatively large…
Descriptors: Algorithms, Item Banks, Latent Trait Theory, Mathematical Models
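A minimal sketch of the "top-down" loop only: repeatedly delete the single worst-fitting item until every remaining item passes a fit test. The paper's quasi-loglinear log-likelihood ratio statistic is replaced here by a caller-supplied function; the toy statistic in the demonstration is just a stand-in.

```python
# Hedged sketch of the top-down deletion loop. fit_statistic(i, items)
# should return a misfit statistic for item i within the current set;
# in the paper this is a log-likelihood ratio test under the Rasch model.
def top_down(items, fit_statistic, critical_value):
    items = set(items)
    while len(items) > 1:
        stats = {i: fit_statistic(i, items) for i in items}
        worst = max(stats, key=stats.get)
        if stats[worst] <= critical_value:
            break                    # every remaining item fits the scale
        items.remove(worst)          # delete the most misfitting item
    return items

# Toy demonstration: "misfit" is just the item index, so high-index
# items are pruned until all statistics fall at or below the cutoff.
kept = top_down(range(10), lambda i, s: float(i), critical_value=5.0)
print(sorted(kept))   # -> [0, 1, 2, 3, 4, 5]
```

The "bottom-up" counterpart mentioned in the abstract would invert the loop: start small and add the best-fitting candidate item while the test still passes.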
Stocking, Martha L. – 1988
Constructing parallel editions of conventional tests, which preserves test security while maintaining score comparability, has long been a recognized and difficult problem in psychometrics and test construction. The introduction of new modes of test construction, e.g., adaptive testing, changes the nature of the problem, but does not make…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Identification
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the… (a numeric illustration follows below).
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
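The abstract's claim is easy to verify numerically with the standard 3PL information function: at a fixed guessing level c and a fixed distance d between ability and difficulty, information is not monotone in the discrimination parameter a. The values below are illustrative, not taken from the paper.

```python
# Numeric illustration: under the 3PL model, the highest-a item need
# not be the most informative once guessing is present.
import math

def info_3pl(a, d, c):
    """3PL Fisher information at distance d = theta - b from difficulty:
    I = a^2 * ((1 - p) / p) * ((p - c) / (1 - c))^2."""
    p = c + (1 - c) / (1 + math.exp(-a * d))
    return a * a * ((1 - p) / p) * ((p - c) / (1 - c)) ** 2

c, d = 0.25, -1.0          # guessing level; examinee one unit below b
for a in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(f"a={a}: info={info_3pl(a, d, c):.4f}")
```

With these values the information peaks near a = 1.5 and then falls, so a selection rule that simply favors the largest a would pick a less informative item for this examinee.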
Mills, Craig N.; Stocking, Martha L. – 1995
Computerized adaptive testing (CAT), while well-grounded in psychometric theory, has had few large-scale applications for high-stakes, secure tests in the past. This is now changing as the cost of computing has declined rapidly. As is always true where theory is translated into practice, many practical issues arise. This paper discusses a number…
Descriptors: Adaptive Testing, Computer Assisted Testing, High Stakes Tests, Item Banks
Spray, Judith A.; Reckase, Mark D. – 1994
The issue of test-item selection in support of decision making in adaptive testing is considered. The number of items needed to make a decision is compared for two approaches: selecting items from the pool that are most informative at the decision point, or selecting items that are most informative at the examinee's current ability estimate (both rules are sketched below). The first…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
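A minimal sketch of the contrast the abstract describes, assuming a 2PL bank with simulated parameters: one rule maximizes information at the cut score, the other at the current ability estimate, and when the two targets differ the rules can pick different items.

```python
# Hedged sketch of the two selection rules being compared; item
# parameters are simulated, and the cut score and ability estimate
# are arbitrary illustrative values.
import math, random

random.seed(5)

def info_2pl(theta, a, b):
    """2PL Fisher information at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1 - p)

bank = [{"a": random.uniform(0.5, 2.0), "b": random.uniform(-2, 2)}
        for _ in range(100)]
cut_score, theta_hat = 0.5, 1.4   # decision point vs. current estimate

at_cut = max(range(len(bank)),
             key=lambda i: info_2pl(cut_score, bank[i]["a"], bank[i]["b"]))
at_theta = max(range(len(bank)),
               key=lambda i: info_2pl(theta_hat, bank[i]["a"], bank[i]["b"]))
print("item chosen for the cut score:", bank[at_cut])
print("item chosen for theta_hat:   ", bank[at_theta])
```

For a pass/fail decision the first rule concentrates measurement precision where the classification is actually made, which is why, per the abstract, it can reach a decision in fewer items even though it measures the examinee's own ability less precisely.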