Showing all 9 results
Peer reviewed
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
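The maximum Fisher information (MFI) rule the abstract contrasts with AST can be sketched briefly: at each step, administer the unused item with the greatest Fisher information at the current ability estimate. The sketch below assumes the three-parameter logistic (3PL) model; the function and pool names are illustrative, not from the paper.

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b, c):
    """3PL item information: a^2 * (q/p) * ((p - c) / (1 - c))^2."""
    p = p_3pl(theta, a, b, c)
    q = 1 - p
    return (a ** 2) * (q / p) * ((p - c) / (1 - c)) ** 2

def mfi_select(theta, pool, administered):
    """MFI rule: pick the unused item most informative at theta."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *pool[i]))

pool = [(1.2, -0.5, 0.2), (0.8, 0.0, 0.2), (1.5, 0.4, 0.2)]  # (a, b, c) triples
print(mfi_select(0.3, pool, administered={2}))
```

Because MFI greedily reuses the most discriminating items, a few items dominate administration, which is the pool-usage imbalance that a-stratified selection with b-blocking is designed to mitigate.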
Peer reviewed
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
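Of the comparison procedures named in the abstract, the randomesque method is the simplest to sketch: instead of always administering the single most informative item, choose at random among the k most informative unused items. The sketch below assumes the 2PL model and a parameter k of 5; names are illustrative.

```python
import math
import random

def item_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)."""
    p = 1 / (1 + math.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def randomesque_select(theta, pool, administered, k=5, rng=random):
    """Randomesque exposure control: pick at random among the k most
    informative unused items rather than the single best one."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    top_k = sorted(candidates,
                   key=lambda i: item_info_2pl(theta, *pool[i]),
                   reverse=True)[:k]
    return rng.choice(top_k)
```

Sympson-Hetter differs in mechanism: each item carries a pre-calibrated exposure-control parameter, and a selected item is actually administered only with that probability, requiring iterative simulations to calibrate before operational use.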
Peer reviewed
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
Peer reviewed
Han, Kyung T. – Journal of Educational Measurement, 2012
Successful administration of computerized adaptive testing (CAT) programs in educational settings requires that test security and item exposure control issues be taken seriously. Developing an item selection algorithm that strikes the right balance between test precision and level of item pool utilization is the key to successful implementation…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
Peer reviewed
Ackerman, Terry A.; Evans, John A. – Applied Psychological Measurement, 1994
The effect of the conditioning score on the results of differential item functioning (DIF) analysis was examined with simulated data. The study demonstrates that results of DIF that rely on a conditioning score can be quite different depending on the conditioning variable that is selected. (SLD)
Descriptors: Construct Validity, Identification, Item Bias, Selection
Peer reviewed
Stocking, Martha L.; Swanson, Len – Applied Psychological Measurement, 1993
A method is presented for incorporating a large number of constraints on adaptive item selection in the construction of computerized adaptive tests. The method, which emulates practices of expert test specialists, is illustrated for verbal and quantitative measures. Its foundation is application of a weighted deviations model and algorithm. (SLD)
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Expert Systems
Knol, Dirk L. – 1989
Two iterative procedures for constructing Rasch scales are presented. A log-likelihood ratio test based on a quasi-loglinear formulation of the Rasch model is given by which one item at a time can be deleted from or added to an initial item set. In the so-called "top-down" algorithm, items are stepwise deleted from a relatively large…
Descriptors: Algorithms, Item Banks, Latent Trait Theory, Mathematical Models
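The "top-down" algorithm the abstract describes can be sketched as a loop that repeatedly drops the worst-fitting item from a large initial set. This sketch does not implement the paper's quasi-loglinear log-likelihood ratio test; as a hypothetical stand-in, it ranks items by raw Rasch log-likelihood and deletes until a target set size, so all names and the stopping rule here are illustrative assumptions.

```python
import math

def rasch_p(theta, b):
    """Rasch probability of a correct response at ability theta, difficulty b."""
    return 1 / (1 + math.exp(-(theta - b)))

def item_loglik(responses, thetas, b):
    """Log-likelihood of one item's 0/1 responses given person abilities."""
    ll = 0.0
    for x, theta in zip(responses, thetas):
        p = rasch_p(theta, b)
        ll += math.log(p) if x == 1 else math.log(1 - p)
    return ll

def top_down(data, thetas, difficulties, min_items=2):
    """Illustrative top-down loop: stepwise delete the item with the lowest
    log-likelihood (a stand-in misfit criterion) until min_items remain."""
    items = list(range(len(difficulties)))
    while len(items) > min_items:
        worst = min(items, key=lambda j: item_loglik(
            [row[j] for row in data], thetas, difficulties[j]))
        items.remove(worst)
    return items
```

The complementary "bottom-up" direction would instead add one item at a time to a small initial set, accepting an item only when the fit test does not reject it.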
Pine, Steven M.; Weiss, David J. – 1976
This report examines how selection fairness is influenced by the item characteristics of a selection instrument in terms of its distribution of item difficulties, level of item discrimination, and degree of item bias. Computer simulation was used in the administration of conventional ability tests to a hypothetical target population consisting of…
Descriptors: Aptitude Tests, Bias, Computer Programs, Culture Fair Tests