Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 2 |
Descriptor
Difficulty Level | 9 |
Item Banks | 9 |
Simulation | 9 |
Test Items | 8 |
Computer Assisted Testing | 7 |
Adaptive Testing | 6 |
Item Response Theory | 4 |
Algorithms | 3 |
Item Analysis | 3 |
Test Construction | 3 |
Ability | 2 |
Source
Journal of Educational… | 2 |
Author
Berger, Martijn P. F. | 1 |
Bergstrom, Betty | 1 |
Cliff, Norman | 1 |
Curry, Allen R. | 1 |
Gershon, Richard | 1 |
Ito, Kyoko | 1 |
Lau, C. Allen | 1 |
Li, Jie | 1 |
McBride, James R. | 1 |
Schnipke, Deborah L. | 1 |
Sykes, Robert C. | 1 |
Publication Type
Reports - Research | 5 |
Reports - Evaluative | 4 |
Speeches/Meeting Papers | 4 |
Journal Articles | 2 |
Assessments and Surveys
National Assessment of… | 1 |
Stanford Binet Intelligence… | 1 |
Wyse, Adam E.; McBride, James R. – Journal of Educational Measurement, 2021
A key consideration when giving any computerized adaptive test (CAT) is how much adaptation is present when the test is used in practice. This study introduces a new framework for measuring the amount of adaptation of Rasch-based CATs based on the differences between the selected item locations (Rasch item difficulty parameters) of the…
Descriptors: Item Response Theory, Computer Assisted Testing, Adaptive Testing, Test Items
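One generic way to gauge how much a Rasch-based CAT actually adapts (a crude stand-in, not the specific indices Wyse and McBride propose) is to compare the spread of administered item difficulties across examinees with the spread of difficulties in the full pool: a barely adaptive test gives nearly the same difficulties to everyone. A minimal Python sketch, assuming a matrix of administered difficulties with one row per examinee:

    import numpy as np

    def adaptation_ratio(administered_b, pool_b):
        """Crude adaptation index: variance of each examinee's mean
        administered Rasch difficulty, relative to the difficulty variance
        of the full pool. Near 0 = everyone saw similar items;
        near 1 = selected difficulties spread out like the pool."""
        admin = np.asarray(administered_b, dtype=float)   # examinees x items
        between_examinees = np.var(admin.mean(axis=1))
        return between_examinees / np.var(np.asarray(pool_b, dtype=float))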
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
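As a rough illustration of the general idea (not the authors' specific statistic), one can accumulate standardized Rasch residuals for a suspect item over successive CAT administrations and flag the item when the running statistic drifts past a threshold:

    import numpy as np

    def monitor_item(responses, theta_hats, b, z_crit=3.0):
        """Sequentially accumulate standardized residuals (observed minus
        Rasch-expected) for one item; return the administration index at
        which |z| first exceeds z_crit, else None. A real procedure would
        also control the overall false-alarm rate across the sequence."""
        cum = 0.0
        for t, (u, th) in enumerate(zip(responses, theta_hats), start=1):
            p = 1.0 / (1.0 + np.exp(-(th - b)))       # Rasch probability
            cum += (u - p) / np.sqrt(p * (1.0 - p))   # standardized residual
            if abs(cum) / np.sqrt(t) > z_crit:
                return t
        return None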
Lau, C. Allen; Wang, Tianyou – 1999
A study was conducted to extend the sequential probability ratio testing (SPRT) procedure with the polytomous model under some practical constraints in computerized classification testing (CCT), such as methods to control item exposure rate, and to study the effects of other variables, including item information algorithms, test difficulties, item…
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
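The SPRT machinery itself is standard Wald sequential testing; the sketch below applies it to the dichotomous Rasch model rather than the polytomous model studied here, classifying an examinee against a cut score with an indifference half-width delta (parameter names and defaults are illustrative):

    import math

    def sprt_classify(responses, difficulties, cut=0.0, delta=0.5,
                      alpha=0.05, beta=0.05):
        """Wald SPRT around a cut score using the dichotomous Rasch model.
        Returns 'pass', 'fail', or 'undecided' if items run out first."""
        theta_lo, theta_hi = cut - delta, cut + delta
        upper = math.log((1 - beta) / alpha)    # cross it: decide pass
        lower = math.log(beta / (1 - alpha))    # cross it: decide fail
        llr = 0.0
        for u, b in zip(responses, difficulties):
            p_hi = 1.0 / (1.0 + math.exp(-(theta_hi - b)))
            p_lo = 1.0 / (1.0 + math.exp(-(theta_lo - b)))
            llr += math.log(p_hi / p_lo) if u else math.log((1 - p_hi) / (1 - p_lo))
            if llr >= upper:
                return "pass"
            if llr <= lower:
                return "fail"
        return "undecided"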
Schnipke, Deborah L. – 2002
A common practice in some certification fields (e.g., information technology) is to draw items from an item pool randomly and apply a common passing score, regardless of the items administered. Because these tests are commonly used, it is important to determine how accurate the pass/fail decisions are for such tests and whether fairly small,…
Descriptors: Decision Making, Difficulty Level, Item Banks, Pass Fail Grading
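A small simulation makes the question concrete; assuming a calibrated Rasch pool, it draws a random form for each simulated examinee, applies one common raw cut score, and estimates how often the pass/fail decision matches the examinee's true status (all names and defaults are illustrative, not the paper's design):

    import numpy as np

    rng = np.random.default_rng(0)

    def decision_accuracy(pool_b, n_items=30, n_examinees=2000, theta_cut=0.0):
        """Estimate pass/fail decision accuracy when each simulated examinee
        answers a random draw of n_items from a calibrated Rasch pool and a
        single raw cut score is applied to every form."""
        pool_b = np.asarray(pool_b, dtype=float)
        # common raw cut: expected number-correct of a borderline (theta_cut)
        # examinee over the whole pool, scaled to the form length
        raw_cut = (1 / (1 + np.exp(-(theta_cut - pool_b)))).mean() * n_items
        thetas = rng.normal(0.0, 1.0, n_examinees)
        agree = 0
        for theta in thetas:
            form = rng.choice(pool_b, size=n_items, replace=False)
            p = 1 / (1 + np.exp(-(theta - form)))
            score = (rng.random(n_items) < p).sum()
            agree += (score >= raw_cut) == (theta >= theta_cut)
        return agree / n_examinees

    print(decision_accuracy(rng.normal(0.0, 1.0, 200)))   # proportion of matching decisions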
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing
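To see why the strategy is plausible in principle, consider a toy step-rule Rasch CAT (an assumed simplification, not the study's actual algorithm): each deliberate wrong answer lowers the provisional estimate, so progressively easier items are selected, which review would then let the examinee answer at leisure.

    import numpy as np

    def run_cat(pool_b, answer_fn, n_items=15, step=0.4):
        """Toy CAT: pick the unused item closest to the current estimate,
        then move the estimate up or down by a fixed step."""
        pool = list(np.sort(pool_b))
        theta_hat, administered = 0.0, []
        for _ in range(n_items):
            i = int(np.argmin([abs(b - theta_hat) for b in pool]))
            b = pool.pop(i)
            u = answer_fn(b)
            administered.append((b, u))
            theta_hat += step if u == 1 else -step
        return theta_hat, administered

    pool = np.linspace(-3, 3, 61)
    # Answering everything wrong on purpose drives the administered item
    # difficulties (and the provisional estimate) steadily downward:
    _, admin = run_cat(pool, answer_fn=lambda b: 0)
    print([round(float(b), 1) for b, _ in admin])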
Veerkamp, Wim J. J.; Berger, Martijn P. F. – 1994
Items with the highest discrimination parameter values in a logistic item response theory (IRT) model do not necessarily give maximum information. This paper shows which discrimination parameter values (as a function of the guessing parameter and the distance between person ability and item difficulty) give maximum information for the…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
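The effect falls out of the Birnbaum three-parameter logistic information function, I(theta) = a^2 * (1 - P)/P * ((P - c)/(1 - c))^2: when the item is harder than the examinee and guessing is nonzero, raising the discrimination a eventually pushes information back down, so a finite a is optimal. A quick numeric check with illustrative values:

    import numpy as np

    def info_3pl(theta, a, b, c):
        """Fisher information of a 3PL item at ability theta
        (logistic metric, no D scaling constant)."""
        p = c + (1 - c) / (1 + np.exp(-a * (theta - b)))
        return a**2 * (1 - p) / p * ((p - c) / (1 - c)) ** 2

    # Item one logit harder than the examinee (theta - b = -1), guessing c = 0.25:
    # information peaks at a finite discrimination rather than growing with a.
    a_grid = np.linspace(0.1, 4.0, 400)
    info = info_3pl(theta=0.0, a=a_grid, b=1.0, c=0.25)
    print("a maximizing information:", a_grid[info.argmax()])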
Ito, Kyoko; Sykes, Robert C. – 1994
Responses to previously calibrated items administered in a computerized adaptive testing (CAT) mode may be used to recalibrate the items. This live-data simulation study investigated the possibility, and limitations, of on-line adaptive recalibration of precalibrated items. Responses to items of a Rasch-based paper-and-pencil licensure examination…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Difficulty Level
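As a minimal illustration of on-line recalibration (not the study's procedure), one can re-estimate a single Rasch difficulty from CAT responses by Newton-Raphson, treating the examinees' ability estimates as fixed and known; the study's question is essentially how far this kind of shortcut can be trusted:

    import numpy as np

    def recalibrate_rasch_b(responses, theta_hats, b0=0.0, n_iter=20):
        """Re-estimate one Rasch item difficulty from CAT responses,
        treating the ability estimates as known constants (a strong
        simplification of true on-line recalibration)."""
        u = np.asarray(responses, dtype=float)
        th = np.asarray(theta_hats, dtype=float)
        b = b0
        for _ in range(n_iter):
            p = 1 / (1 + np.exp(-(th - b)))
            step = (p - u).sum() / (p * (1 - p)).sum()   # Newton-Raphson step
            b += step
            if abs(step) < 1e-8:
                break
        return b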
Curry, Allen R.; And Others – 1978
The efficacy of employing subsets of items from a calibrated item pool to estimate the Rasch model person parameters was investigated. Specifically, the degree of invariance of Rasch model ability-parameter estimates was examined across differing collections of simulated items. The ability-parameter estimates were obtained from a simulation of…
Descriptors: Career Development, Difficulty Level, Equated Scores, Error of Measurement
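The invariance claim is easy to probe in simulation: estimate the same simulated examinee's Rasch ability from two different calibrated item subsets and compare the results. A small sketch with illustrative values:

    import numpy as np

    rng = np.random.default_rng(1)

    def rasch_theta_mle(responses, b, theta0=0.0, n_iter=30):
        """Rasch ability MLE with known item difficulties (Newton-Raphson).
        Undefined for all-correct or all-incorrect response patterns;
        callers should screen those out."""
        u = np.asarray(responses, dtype=float)
        b = np.asarray(b, dtype=float)
        theta = theta0
        for _ in range(n_iter):
            p = 1 / (1 + np.exp(-(theta - b)))
            step = (u - p).sum() / (p * (1 - p)).sum()
            theta += step
            if abs(step) < 1e-8:
                break
        return theta

    # Invariance check: one simulated examinee, two disjoint 20-item subsets
    pool_b = rng.normal(0.0, 1.0, 40)
    theta_true = 0.7
    u = (rng.random(40) < 1 / (1 + np.exp(-(theta_true - pool_b)))).astype(float)
    est_1 = rasch_theta_mle(u[:20], pool_b[:20])
    est_2 = rasch_theta_mle(u[20:], pool_b[20:])
    print(est_1, est_2)   # both should recover theta_true up to sampling error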
Cliff, Norman; And Others – 1977
TAILOR is a computer program that uses the implied orders concept as the basis for computerized adaptive testing. The basic characteristics of TAILOR, which does not involve pretesting, are reviewed here and two studies of it are reported. One is a Monte Carlo simulation based on the four-parameter Birnbaum model and the other uses a matrix of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Programs, Difficulty Level