Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has concluded that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation

van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – Journal of Educational and Behavioral Statistics, 2001
Proposed person-fit statistics that are designed for use in a computerized adaptive test (CAT) and derived critical values for these statistics using cumulative sum (CUSUM) procedures so that item-score patterns can be classified as fitting or misfitting. Compared nominal Type I errors with empirical Type I errors through simulation studies. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Test Construction
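
As a rough illustration of the CUSUM-based person-fit check this abstract describes, the sketch below accumulates upper and lower CUSUM statistics over response residuals under an assumed two-parameter logistic (2PL) model. The item parameters, residual scaling, and threshold are illustrative stand-ins, not the critical values derived in the article.

    import numpy as np

    def p_2pl(theta, a, b):
        """Probability of a correct response under the 2PL model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def cusum_person_fit(responses, a, b, theta_hat, threshold=0.3):
        """Classify an item-score pattern as fitting or misfitting with upper and
        lower CUSUM statistics on residuals scaled by test length. The threshold
        is illustrative; the article derives critical values by simulation."""
        n = len(responses)
        c_plus, c_minus = 0.0, 0.0
        for x, ai, bi in zip(responses, a, b):
            resid = (x - p_2pl(theta_hat, ai, bi)) / n
            c_plus = max(0.0, c_plus + resid)
            c_minus = min(0.0, c_minus + resid)
            if c_plus > threshold or c_minus < -threshold:
                return "misfitting"
        return "fitting"

    # Toy aberrant pattern: easy items answered wrong, hard items answered right.
    a = np.full(20, 1.2)
    b = np.linspace(-2, 2, 20)
    responses = (b > 0.5).astype(int)
    print(cusum_person_fit(responses, a, b, theta_hat=0.0))   # -> misfitting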

Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 2000
Showed how a Taylor approximation can be used to generate a linear approximation to a logistic item characteristic curve and a linear ability estimator. Demonstrated how, for a specific simulation, this could result in the special case of a Robbins-Monro item selection procedure for adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Selection
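
The sketch below illustrates the kind of first-order Taylor linearization of a logistic item characteristic curve that the abstract mentions, using assumed 2PL item parameters and an arbitrary expansion point; the Robbins-Monro selection procedure itself is not reproduced here.

    import numpy as np

    def icc(theta, a=1.2, b=0.0):
        """Logistic item characteristic curve (2PL)."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def icc_linearized(theta, theta0, a=1.2, b=0.0):
        """First-order Taylor expansion of the ICC around theta0:
        P(theta) ~ P(theta0) + a * P(theta0) * (1 - P(theta0)) * (theta - theta0)."""
        p0 = icc(theta0, a, b)
        return p0 + a * p0 * (1.0 - p0) * (theta - theta0)

    # Compare the exact curve with its linear approximation near theta0 = 0.5.
    theta0 = 0.5
    for th in (0.3, 0.5, 0.7):
        print(th, round(icc(th), 4), round(icc_linearized(th, theta0), 4))
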
Yan, Duanli; Lewis, Charles; Stocking, Martha – Journal of Educational and Behavioral Statistics, 2004
It is unrealistic to suppose that standard item response theory (IRT) models will be appropriate for all the new and currently considered computer-based tests. In addition to developing new models, we also need to give attention to the possibility of constructing and analyzing new tests without the aid of strong models. Computerized adaptive…
Descriptors: Nonparametric Statistics, Regression (Statistics), Adaptive Testing, Computer Assisted Testing
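
As a loose illustration of constructing and scoring an adaptive test without a strong IRT model, the sketch below grows a binary routing tree from simulated calibration data and predicts a criterion score from leaf means. The splitting rule, tree depth, and simulated data are assumptions for illustration, not the authors' procedure.

    import numpy as np

    def grow_tree(X, y, depth):
        """Greedy binary routing tree: at each node pick the item whose
        right/wrong split most reduces squared error in the criterion y.
        Leaves predict the observed group mean (no IRT model involved)."""
        if depth == 0 or len(y) < 10:
            return float(np.mean(y))
        best = None
        for j in range(X.shape[1]):
            right, wrong = y[X[:, j] == 1], y[X[:, j] == 0]
            if len(right) == 0 or len(wrong) == 0:
                continue
            sse = ((right - right.mean()) ** 2).sum() + ((wrong - wrong.mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j)
        if best is None:
            return float(np.mean(y))
        j = best[1]
        return {"item": j,
                "correct": grow_tree(X[X[:, j] == 1], y[X[:, j] == 1], depth - 1),
                "incorrect": grow_tree(X[X[:, j] == 0], y[X[:, j] == 0], depth - 1)}

    def administer(tree, respond):
        """Walk the routing tree, calling respond(item) for each routing item."""
        while isinstance(tree, dict):
            tree = tree["correct"] if respond(tree["item"]) else tree["incorrect"]
        return tree

    # Toy calibration data: 500 examinees, 15 items, criterion = number correct.
    rng = np.random.default_rng(0)
    theta = rng.normal(size=500)
    b = np.linspace(-1.5, 1.5, 15)
    X = (rng.random((500, 15)) < 1 / (1 + np.exp(-(theta[:, None] - b)))).astype(int)
    y = X.sum(axis=1)
    tree = grow_tree(X, y, depth=3)
    print(administer(tree, respond=lambda j: X[0, j]))
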

van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 1999
Proposes an algorithm that minimizes the asymptotic variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The criterion results in a closed-form expression that is easy to evaluate. Also shows how the algorithm can be modified if the interest is in a test with a "simple ability structure."…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
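
The sketch below illustrates the general idea of selecting the item that minimizes the asymptotic variance of the estimator of a linear combination of abilities, using the inverse of the accumulated Fisher information under an assumed multidimensional 2PL model. It evaluates the variance numerically rather than through the closed-form criterion the article derives.

    import numpy as np

    def item_info_m2pl(theta, a, d):
        """Fisher information matrix of a multidimensional 2PL item at theta."""
        p = 1.0 / (1.0 + np.exp(-(a @ theta + d)))
        return p * (1.0 - p) * np.outer(a, a)

    def select_item(theta_hat, info_so_far, pool_a, pool_d, lam, administered):
        """Pick the pool item minimizing the asymptotic variance of the ML
        estimator of lam' theta, i.e. lam' (I + I_item)^{-1} lam."""
        best_j, best_var = None, np.inf
        for j, (a, d) in enumerate(zip(pool_a, pool_d)):
            if j in administered:
                continue
            info = info_so_far + item_info_m2pl(theta_hat, a, d)
            var = lam @ np.linalg.solve(info, lam)
            if var < best_var:
                best_j, best_var = j, var
        return best_j, best_var

    # Toy two-dimensional pool and a composite of interest lam.
    rng = np.random.default_rng(2)
    pool_a = rng.uniform(0.5, 1.5, size=(30, 2))
    pool_d = rng.uniform(-1, 1, size=30)
    lam = np.array([0.7, 0.3])
    info = np.eye(2) * 0.1                     # weak starting information
    print(select_item(np.zeros(2), info, pool_a, pool_d, lam, administered=set()))
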
Segall, Daniel O. – Journal of Educational and Behavioral Statistics, 2004
A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection…
Descriptors: Scoring, Item Analysis, Item Response Theory, Adaptive Testing
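
As a heavily simplified illustration of why modeling shared item content matters, the sketch below uses a generic preknowledge mixture: the examinee answers correctly if the item has been shared, and otherwise responds under a 2PL model. This mixture is an assumed stand-in for exposition, not the SIRT parameterization presented in the article.

    import numpy as np

    def p_2pl(theta, a, b):
        """Probability of a correct response under the 2PL model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def p_with_sharing(theta, a, b, pi_share):
        """Mixture-style response probability: with probability pi_share the
        examinee has seen the item and answers correctly; otherwise the 2PL
        governs the response. Generic preknowledge mixture, not the SIRT model."""
        return pi_share + (1.0 - pi_share) * p_2pl(theta, a, b)

    # Expected score gain on a hard item when half the pool has been shared.
    print(p_with_sharing(-0.5, a=1.3, b=1.0, pi_share=0.5) - p_2pl(-0.5, 1.3, 1.0))
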
Revuelta, Javier – Journal of Educational and Behavioral Statistics, 2004
This article presents a psychometric model for estimating ability and item-selection strategies in self-adapted testing. In contrast to computer adaptive testing, in self-adapted testing the examinees are allowed to select the difficulty of the items. The item-selection strategy is defined as the distribution of difficulty conditional on the…
Descriptors: Psychometrics, Adaptive Testing, Test Items, Evaluation Methods
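
The sketch below simulates a self-adapted test in which the examinee picks a difficulty category before each item, with an assumed categorical selection strategy conditioned on the previous outcome and a Rasch response model. Both choices are illustrative; the article's conditioning and model are not reproduced here.

    import numpy as np

    rng = np.random.default_rng(3)
    difficulty_levels = np.array([-1.5, -0.5, 0.5, 1.5])   # difficulty categories

    def choose_level(prev_correct):
        """Illustrative item-selection strategy: a categorical distribution over
        difficulty categories that shifts upward after a correct response."""
        probs = np.array([0.1, 0.2, 0.3, 0.4]) if prev_correct else np.array([0.4, 0.3, 0.2, 0.1])
        return rng.choice(len(difficulty_levels), p=probs)

    def simulate_self_adapted_test(theta, n_items=20):
        """Simulate a self-adapted test under a Rasch response model."""
        prev_correct, chosen, scores = True, [], []
        for _ in range(n_items):
            k = choose_level(prev_correct)
            b = difficulty_levels[k]
            p = 1.0 / (1.0 + np.exp(-(theta - b)))
            x = int(rng.random() < p)
            chosen.append(k)
            scores.append(x)
            prev_correct = bool(x)
        return chosen, scores

    print(simulate_self_adapted_test(theta=0.3))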