Showing all 4 results
Peer reviewed
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes a novel method to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on an analytic derivation of the standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
Peer reviewed
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing: the randomesque, Sympson-Hetter (SH), and no-exposure-control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
Peer reviewed
Choi, Seung W.; Grady, Matthew W.; Dodd, Barbara G. – Educational and Psychological Measurement, 2011
The goal of the current study was to introduce a new stopping rule for computerized adaptive testing (CAT). The predicted standard error reduction (PSER) stopping rule uses the predictive posterior variance to determine the reduction in standard error that would result from the administration of additional items. The performance of the PSER was…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Evaluation Methods
Peer reviewed
Zwick, Rebecca – Educational and Psychological Measurement, 1997
Recent simulations have shown that, for a given sample size, the Mantel-Haenszel (MH) variances tend to be larger when items are administered to randomly selected examinees than when they are administered adaptively. Results suggest that adaptive testing may lead to more efficient application of MH differential item functioning analyses. (SLD)
Descriptors: Adaptive Testing, Item Bias, Sample Size, Simulation