Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 5 |
Descriptor
Adaptive Testing | 6 |
Classification | 6 |
Computer Assisted Testing | 6 |
Test Items | 3 |
Accuracy | 2 |
Comparative Analysis | 2 |
Computation | 2 |
Item Banks | 2 |
Item Response Theory | 2 |
Selection | 2 |
Simulation | 2 |
Source
Educational and Psychological Measurement | 6 |
Author
Chung, Hyewon | 2 |
Dodd, Barbara G. | 2 |
Kim, Jiseon | 2 |
Park, Ryoungsun | 2 |
Eggen, T. J. H. M. | 1 |
Glasnapp, Douglas R. | 1 |
Lin, Chuan-Ju | 1 |
Liu, Chen-Wei | 1 |
Poggio, John C. | 1 |
Straetmans, G. J. J. M. | 1 |
Wang, Wen-Chung | 1 |
Publication Type
Journal Articles | 6 |
Reports - Evaluative | 3 |
Reports - Research | 3 |
Location
Netherlands | 1 |
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G. – Educational and Psychological Measurement, 2017
The current study proposes a novel method to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on the analytic derivation of standard errors of ability estimates across theta levels. We compared the analytically derived standard errors to simulation results to demonstrate the…
Descriptors: Testing, Performance, Prediction, Error of Measurement
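The core idea in the abstract above, deriving standard errors analytically from test information rather than from simulation, can be sketched for a simple case. This is a minimal illustration assuming a two-parameter logistic (2PL) IRT model with hypothetical item parameters; it is not the authors' MST-specific derivation, which operates across stages and routing paths.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information contributed by one 2PL item at ability theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def analytic_se(theta, items):
    """Analytic standard error of the ability estimate:
    SE(theta) = 1 / sqrt(test information), where test information
    is the sum of the item informations."""
    info = sum(item_information(theta, a, b) for a, b in items)
    return 1.0 / math.sqrt(info)

# Four identical items (a=1, b=0): each contributes 0.25 information
# at theta=0, so the test information is 1.0 and SE is 1.0.
se = analytic_se(0.0, [(1.0, 0.0)] * 4)
```

Evaluating `analytic_se` across a grid of theta levels gives the kind of standard-error curve that the study compares against simulated results.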
Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.; Park, Ryoungsun – Educational and Psychological Measurement, 2012
This study compared various panel designs of the multistage test (MST) using mixed-format tests in the context of classification testing. Simulations varied the design of the first-stage module. The first stage was constructed according to three levels of test information functions (TIFs) with three different TIF centers. Additional computerized…
Descriptors: Test Format, Comparative Analysis, Computer Assisted Testing, Adaptive Testing
Lin, Chuan-Ju – Educational and Psychological Measurement, 2011
This study compares four item selection criteria for two-category computerized classification testing: (1) Fisher information (FI), (2) Kullback-Leibler information (KLI), (3) weighted log-odds ratio (WLOR), and (4) mutual information (MI), with respect to the efficiency and accuracy of classification decisions using the sequential probability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Test Items
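Of the four criteria compared above, Fisher information is the most common baseline: at each step the test administers the unused item that is most informative at the current ability estimate. A minimal sketch, again assuming a 2PL model and an illustrative item pool (the function names and parameters here are my own, not the study's):

```python
import math

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_item_fi(theta_hat, pool, administered):
    """Return the index of the not-yet-administered item with
    maximum Fisher information at the current ability estimate."""
    best, best_info = None, -1.0
    for idx, (a, b) in enumerate(pool):
        if idx in administered:
            continue
        info = fisher_info(theta_hat, a, b)
        if info > best_info:
            best, best_info = idx, info
    return best

# With equal discriminations, FI favors the item whose difficulty
# is closest to the current estimate (b=0.0 when theta_hat=0.0).
pool = [(1.0, -2.0), (1.0, 0.0), (1.0, 2.0)]
chosen = select_item_fi(0.0, pool, set())
```

The other criteria in the study (KLI, WLOR, MI) differ mainly in that they evaluate informativeness relative to the classification cut point rather than at the point estimate alone.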
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has recently been developed to describe responses to Likert-type (agree-disagree) items in attitude measurement. In this study, the authors (a) developed two item selection methods for computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R. – Educational and Psychological Measurement, 2006
The effects of five ability estimators, namely the maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performance of the item response theory-based adaptive classification procedure with multiple categories were studied via simulations. The following…
Descriptors: Classification, Computation, Simulation, Item Response Theory
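One of the five estimators compared above, expected a posteriori (EAP), is easy to sketch because it needs no iteration: it averages theta over the posterior on a quadrature grid. This is a simple grid-based illustration assuming a 2PL model and a standard-normal prior, not the study's exact implementation:

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_estimate(responses, items):
    """Expected a posteriori (EAP) ability estimate on a fixed grid,
    using a standard-normal prior. `responses` are 0/1 scores and
    `items` are (a, b) parameter pairs."""
    grid = [g / 10.0 for g in range(-40, 41)]  # theta from -4 to 4
    num = den = 0.0
    for theta in grid:
        prior = math.exp(-0.5 * theta * theta)  # unnormalized N(0, 1)
        like = 1.0
        for u, (a, b) in zip(responses, items):
            p = p_2pl(theta, a, b)
            like *= p if u == 1 else (1.0 - p)
        post = prior * like
        num += theta * post
        den += post
    return num / den

# One correct and one incorrect response on identical items yields a
# symmetric posterior, so the EAP estimate sits at the prior mean, 0.
est = eap_estimate([1, 0], [(1.0, 0.0), (1.0, 0.0)])
```

Unlike maximum likelihood, EAP returns a finite estimate even for all-correct or all-incorrect response patterns, which is one reason such comparisons matter for short adaptive classification tests.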

Eggen, T. J. H. M.; Straetmans, G. J. J. M. – Educational and Psychological Measurement, 2000
Studied the use of adaptive testing when examinees are classified into three categories. Established testing algorithms with two different statistical computation procedures and evaluated them through simulation using an operational item bank from Dutch basic adult education. Results suggest a reduction of at least 22% in the mean number of items…
Descriptors: Adaptive Testing, Adult Education, Algorithms, Classification
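Several of the classification studies listed here build on the sequential probability ratio test (SPRT), which stops testing as soon as the accumulated evidence clearly favors one side of a cut score. A minimal two-category sketch under an assumed 2PL model (the two theta points bracketing the cut, and all parameter values, are illustrative):

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_decision(responses, items, theta_lo, theta_hi,
                  alpha=0.05, beta=0.05):
    """SPRT for a two-category classification: accumulate the
    log-likelihood ratio of the responses at two ability points
    on either side of the cut score, then compare it to the
    Wald decision bounds."""
    log_lr = 0.0
    for u, (a, b) in zip(responses, items):
        p_hi = p_2pl(theta_hi, a, b)
        p_lo = p_2pl(theta_lo, a, b)
        if u == 1:
            log_lr += math.log(p_hi / p_lo)
        else:
            log_lr += math.log((1.0 - p_hi) / (1.0 - p_lo))
    upper = math.log((1.0 - beta) / alpha)
    lower = math.log(beta / (1.0 - alpha))
    if log_lr >= upper:
        return "pass"
    if log_lr <= lower:
        return "fail"
    return "continue"

# Twenty correct responses push the log ratio well past the upper bound.
decision = sprt_decision([1] * 20, [(1.0, 0.0)] * 20, -0.5, 0.5)
```

A three-category procedure like the one in the Eggen and Straetmans study can be built from two such tests run against two cut scores; the reported item savings come from the early-stopping behavior of these bounds.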