Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 3
Descriptor
Adaptive Testing: 7
Computation: 7
Evaluation Methods: 7
Computer Assisted Testing: 6
Test Items: 6
Computer Simulation: 4
Item Banks: 4
Models: 3
Comparative Analysis: 2
Difficulty Level: 2
Item Response Theory: 2
Source
Applied Psychological Measurement: 3
Applied Measurement in Education: 1
Educational Testing Service: 1
Journal of Educational and Behavioral Statistics: 1
Perspectives in Education: 1
Author
Armstrong, Ronald D.: 1
Belov, Dmitry I.: 1
Belur, Madhu N.: 1
Chaporkar, Prasanna: 1
Davey, Tim: 1
Dodd, Barbara: 1
Fitzpatrick, Steven: 1
Gorin, Joanna: 1
Herbert, Erin: 1
Li, Yuan H.: 1
Moothedath, Shana: 1
Publication Type
Journal Articles: 6
Reports - Descriptive: 3
Reports - Research: 3
Reports - Evaluative: 1
Education Level
Higher Education: 1
Assessments and Surveys
California Achievement Tests: 1
Moothedath, Shana; Chaporkar, Prasanna; Belur, Madhu N. – Perspectives in Education, 2016
In recent years, the computerised adaptive test (CAT) has gained popularity over conventional exams in evaluating student capabilities with desired accuracy. However, the key limitation of CAT is that it requires a large pool of pre-calibrated questions. In the absence of such a pre-calibrated question bank, offline exams with uncalibrated…
Descriptors: Guessing (Tests), Computer Assisted Testing, Adaptive Testing, Maximum Likelihood Statistics
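The entry above turns on standard item response theory machinery: a calibrated item model plus maximum-likelihood ability estimation. As a point of reference only, and not the authors' offline-calibration procedure, here is a minimal Python sketch of 2PL response probabilities and a Newton-Raphson maximum-likelihood ability estimate; the item parameters and function names are made up for illustration.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle_theta(responses, a, b, iters=20):
    """Newton-Raphson maximum-likelihood ability estimate from scored
    responses (0/1) and 2PL parameters a (discrimination), b (difficulty).
    Assumes a mixed response pattern so the MLE is finite."""
    theta = 0.0
    for _ in range(iters):
        p = p_correct(theta, a, b)
        grad = np.sum(a * (responses - p))      # d/dtheta of the log-likelihood
        info = np.sum(a**2 * p * (1.0 - p))     # Fisher information at theta
        theta += grad / info
    return theta

# Toy usage with three items answered correct, wrong, correct
a = np.array([1.0, 1.2, 0.8])
b = np.array([-0.5, 0.3, 1.0])
print(mle_theta(np.array([1, 0, 1]), a, b))
```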
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander – Applied Psychological Measurement, 2008
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
Descriptors: Test Items, Monte Carlo Methods, Law Schools, Adaptive Testing
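The Belov, Armstrong, and Weissman abstract layers Monte Carlo methods on shadow-CAT methodology to satisfy content constraints. The sketch below is not their algorithm; it only illustrates, with assumed inputs, the generic idea of randomly sampling content-feasible item sets, one reason randomization can help hold down item exposure rates.

```python
import random
from collections import Counter

def sample_feasible_set(pool, content_bounds, test_length, tries=1000):
    """Randomly sample an item set that meets simple content bounds.

    pool: list of (item_id, content_area) pairs.
    content_bounds: {content_area: (min_count, max_count)}.
    Returns a list of item_ids, or None if no feasible set was drawn."""
    for _ in range(tries):
        candidate = random.sample(pool, test_length)
        counts = Counter(area for _, area in candidate)
        if all(lo <= counts.get(area, 0) <= hi
               for area, (lo, hi) in content_bounds.items()):
            return [item_id for item_id, _ in candidate]
    return None

# Toy pool: 20 items split across two content areas
pool = [(i, "algebra" if i < 12 else "geometry") for i in range(20)]
bounds = {"algebra": (5, 7), "geometry": (3, 5)}
print(sample_feasible_set(pool, bounds, test_length=10))
```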
Penfield, Randall D. – Applied Measurement in Education, 2006
This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…
Descriptors: Bayesian Statistics, Adaptive Testing, Computer Assisted Testing, Test Items
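For the partial credit model, item information at a given ability equals the conditional variance of the item score, and a posterior-weighted criterion such as MPI averages that information over the current ability posterior. The sketch below shows those two standard quantities on a quadrature grid; it is illustrative only and not the study's simulation code (the helper names and toy step difficulties are assumptions).

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities of a partial credit item with step
    difficulties deltas (delta_0 = 0 is implied)."""
    steps = np.concatenate(([0.0], theta - np.asarray(deltas, dtype=float)))
    logits = np.cumsum(steps)
    expl = np.exp(logits - logits.max())
    return expl / expl.sum()

def pcm_information(theta, deltas):
    """PCM item information at theta: the conditional variance of the score."""
    p = pcm_probs(theta, deltas)
    scores = np.arange(len(p))
    mean = np.sum(scores * p)
    return np.sum(scores**2 * p) - mean**2

def posterior_weighted_info(deltas, grid, posterior):
    """MPI-style criterion: item information averaged over the ability posterior."""
    info = np.array([pcm_information(t, deltas) for t in grid])
    return np.sum(info * posterior)

# Toy selection between two 3-category items given a stand-in posterior
grid = np.linspace(-4, 4, 81)
posterior = np.exp(-0.5 * grid**2)
posterior /= posterior.sum()
items = {"item_A": [-0.5, 0.5], "item_B": [1.0, 2.0]}
print(max(items, key=lambda k: posterior_weighted_info(items[k], grid, posterior)))
```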
Gorin, Joanna; Dodd, Barbara; Fitzpatrick, Steven; Shieh, Yann – Applied Psychological Measurement, 2005
The primary purpose of this research is to examine the impact of estimation methods, actual latent trait distributions, and item pool characteristics on the performance of a simulated computerized adaptive testing (CAT) system. In this study, three estimation procedures are compared for accuracy of estimation: maximum likelihood estimation (MLE),…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computation, Test Items
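The excerpt names only MLE before it truncates, so the study's other procedures are not visible here. Purely as a hedged illustration, the sketch below contrasts grid-search MLE with expected a posteriori (EAP) estimation under a 2PL model; EAP is assumed for contrast and may or may not be one of the procedures the study compared.

```python
import numpy as np

def likelihood_on_grid(responses, a, b, grid):
    """Likelihood of a 0/1 response pattern under the 2PL model,
    evaluated on a grid of ability values."""
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))   # shape: grid x items
    return np.prod(np.where(responses == 1, p, 1.0 - p), axis=1)

grid = np.linspace(-4, 4, 161)
a = np.array([1.0, 1.3, 0.9])
b = np.array([-0.2, 0.4, 1.1])
responses = np.array([1, 1, 0])

like = likelihood_on_grid(responses, a, b, grid)
theta_mle = grid[np.argmax(like)]         # grid-search MLE

prior = np.exp(-0.5 * grid**2)            # standard normal prior (unnormalized)
post = like * prior
post /= post.sum()
theta_eap = np.sum(grid * post)           # EAP estimate

print(theta_mle, theta_eap)
```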
Revuelta, Javier – Journal of Educational and Behavioral Statistics, 2004
This article presents a psychometric model for estimating ability and item-selection strategies in self-adapted testing. In contrast to computer adaptive testing, in self-adapted testing the examinees are allowed to select the difficulty of the items. The item-selection strategy is defined as the distribution of difficulty conditional on the…
Descriptors: Psychometrics, Adaptive Testing, Test Items, Evaluation Methods
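The excerpt defines the item-selection strategy as a distribution over item difficulty, conditional on a variable that is cut off above. A minimal sketch of the empirical ingredient of such a strategy, assuming discrete difficulty levels and a single examinee's recorded choices (all values hypothetical):

```python
from collections import Counter

def empirical_choice_distribution(chosen_levels, n_levels):
    """Empirical distribution over the difficulty levels an examinee
    chose during a self-adapted test (hypothetical helper)."""
    counts = Counter(chosen_levels)
    total = len(chosen_levels)
    return [counts.get(level, 0) / total for level in range(n_levels)]

# Toy data: difficulty levels 0..4 chosen across 12 self-selected items
choices = [2, 3, 2, 1, 3, 4, 2, 3, 3, 2, 4, 3]
print(empirical_choice_distribution(choices, n_levels=5))
```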
Li, Yuan H.; Schafer, William D. – Applied Psychological Measurement, 2005
Under a multidimensional item response theory (MIRT) computerized adaptive testing (CAT) scenario, a trait estimate (theta) in one dimension will provide clues for subsequently seeking a solution in other dimensions. This feature may enhance the efficiency of MIRT CAT's item selection and its scoring algorithms compared with its…
Descriptors: Adaptive Testing, Item Banks, Computation, Psychological Studies
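The claim that a trait estimate in one dimension provides clues about the others can be seen in the compensatory multidimensional 2PL form, where the dimensions enter through a single linear combination. A minimal sketch under that assumption (not the authors' item pool or selection algorithm):

```python
import numpy as np

def m2pl_prob(theta, a, d):
    """Compensatory multidimensional 2PL: P(correct) for ability vector theta,
    discrimination vector a, and intercept d."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) + d)))

# With theta_1 held near its current estimate, a correct response to this item
# is consistent with a narrower band of theta_2 values.
a = np.array([1.2, 0.8])
d = -0.3
theta1_hat = 0.5
for theta2 in (-1.0, 0.0, 1.0):
    print(theta2, round(m2pl_prob(np.array([theta1_hat, theta2]), a, d), 3))
```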
Rizavi, Saba; Way, Walter D.; Davey, Tim; Herbert, Erin – Educational Testing Service, 2004
Item parameter estimates vary for a variety of reasons, including estimation error, characteristics of the examinee samples, and context effects (e.g., item location effects, section location effects, etc.). Although we expect variation based on theory, there is reason to believe that observed variation in item parameter estimates exceeds what…
Descriptors: Adaptive Testing, Test Items, Computation, Context Effect
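A common way to ask whether observed variation in item parameter estimates exceeds what estimation error alone would produce is to compare the across-administration variance of an item's difficulty estimates with the average squared standard error of those estimates. The sketch below is an illustrative diagnostic of that kind, not the analysis reported in the paper, and the numbers are invented.

```python
import numpy as np

def excess_variation_ratio(b_estimates, standard_errors):
    """Ratio of the observed variance in an item's difficulty estimates across
    administrations to the variance expected from estimation error alone.
    Values well above 1 suggest variation beyond sampling error, such as
    sample or context effects."""
    observed = np.var(np.asarray(b_estimates, dtype=float), ddof=1)
    expected = np.mean(np.asarray(standard_errors, dtype=float) ** 2)
    return observed / expected

# Toy example: one item calibrated in four separate administrations
b_hats = [0.42, 0.55, 0.31, 0.58]
ses = [0.05, 0.06, 0.05, 0.07]
print(excess_variation_ratio(b_hats, ses))
```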