Showing all 8 results
Peer reviewed
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
Chang, Shun-Wen; Twu, Bor-Yaun – 2001
To satisfy the security requirements of computerized adaptive tests (CATs), efforts have been made to control the exposure rates of optimal items directly by incorporating statistical methods into the item selection procedure. Since differences are likely to occur between the exposure control parameter derivation stage and the operational CAT…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
Peer reviewed
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob – Applied Psychological Measurement, 1999
Theoretical null distributions of several fit statistics have been derived for paper-and-pencil tests. This simulation study examined whether these distributions also hold for computerized adaptive tests. Rates for the two statistics studied were found to be similar in most cases. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
Peer reviewed
Nering, Michael L. – Applied Psychological Measurement, 1997
Evaluated, through simulation, the distribution of person fit within the computerized adaptive testing (CAT) environment. Found that, within the CAT environment, these indexes tend not to follow a standard normal distribution: person-fit indexes had means and standard deviations quite different from those expected. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
Peer reviewed
Dodd, Barbara G.; Koch, William R. – Educational and Psychological Measurement, 1994
Simulated data were used to investigate the impact of characteristics of threshold values (number, symmetry, and distance between adjacent threshold values) and delta values on the distribution of item information in the successive intervals Rasch model. Implications for computerized adaptive attitude measurement are discussed. (SLD)
Descriptors: Adaptive Testing, Attitude Measures, Computer Assisted Testing, Item Response Theory
Zwick, Rebecca – 1995
This paper describes a study, now in progress, of new methods for representing the sampling variability of Mantel-Haenszel differential item functioning (DIF) results, based on the system for categorizing the severity of DIF that is now in place at the Educational Testing Service. The methods, which involve a Bayesian elaboration of procedures…
Descriptors: Adaptive Testing, Bayesian Statistics, Classification, Computer Assisted Testing
Veldkamp, Bernard P.; van der Linden, Wim J. – 1999
A method of item pool design is proposed that uses an optimal blueprint for the item pool calculated from the test specifications. The blueprint is a document that specifies the attributes that the items in the computerized adaptive test (CAT) pool should have. The blueprint can be a starting point for the item writing process, and it can be used…
Descriptors: Ability, Adaptive Testing, Classification, Computer Assisted Testing
Ito, Kyoko; Sykes, Robert C. – 1994
Responses to previously calibrated items administered in a computerized adaptive testing (CAT) mode may be used to recalibrate those items. This live-data simulation study investigated the possibility, and the limitations, of on-line adaptive recalibration of precalibrated items. Responses to items of a Rasch-based paper-and-pencil licensure examination…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level