Publication Date
In 2025: 1
Since 2024: 1
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 3
Since 2006 (last 20 years): 3
Descriptor
Algorithms: 5
Item Response Theory: 5
Models: 3
Accuracy: 2
Ability: 1
Adaptive Testing: 1
Bayesian Statistics: 1
Computation: 1
Computer Assisted Testing: 1
Equated Scores: 1
Estimation (Mathematics): 1
Author
Cai, Li: 1
Fox, Jean-Paul: 1
Huang, Sijia: 1
Kim, Yunsung: 1
Kurz, Terri Barber: 1
Luo, Jinwen: 1
Piech, Chris: 1
Sreechan: 1
Thille, Candace: 1
van der Linden, Wim J.: 1
Publication Type
Reports - Descriptive: 5
Journal Articles: 2
Speeches/Meeting Papers: 2
Assessments and Surveys
Law School Admission Test: 1
Fox, Jean-Paul – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly because they include a random factor variable (latent variable). This random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are added. IRT models therefore require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
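As a minimal illustration of the estimation issue the abstract raises (not the article's method), the sketch below marginalizes the latent ability out of a two-parameter logistic (2PL) likelihood with Gauss-Hermite quadrature, so no person-specific parameter needs to be estimated; the item parameters and response pattern are hypothetical.

# Minimal sketch (not the article's method): marginal likelihood of a 2PL
# response pattern, with the latent ability integrated out by Gauss-Hermite
# quadrature so no person parameter has to be estimated per respondent.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response given ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def marginal_likelihood(responses, a, b, n_nodes=41):
    """Integrate the conditional likelihood over theta ~ N(0, 1)."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2 * np.pi)  # probabilists' Hermite weights -> N(0, 1)
    like = np.ones_like(nodes)
    for x, ai, bi in zip(responses, a, b):
        p = p_correct(nodes, ai, bi)
        like *= p**x * (1 - p)**(1 - x)
    return np.sum(weights * like)

# Hypothetical item parameters and one response pattern
a = np.array([1.2, 0.8, 1.5])
b = np.array([-0.5, 0.0, 1.0])
print(marginal_likelihood([1, 1, 0], a, b))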
Kim, Yunsung; Sreechan; Piech, Chris; Thille, Candace – International Educational Data Mining Society, 2023
Dynamic Item Response Models extend the standard Item Response Theory (IRT) to capture temporal dynamics in learner ability. While these models have the potential to allow instructional systems to actively monitor the evolution of learner proficiency in real time, existing dynamic item response models rely on expensive inference algorithms that…
Descriptors: Item Response Theory, Accuracy, Inferences, Algorithms
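As a toy illustration of what a dynamic item response model can look like (not the paper's model or its inference algorithm), the sketch below lets a Rasch-type ability follow a Gaussian random walk and tracks it online with a simple grid filter; all parameter values are assumptions.

# Toy sketch (hypothetical, not the paper's inference algorithm): a Rasch-type
# dynamic IRT model in which ability follows a Gaussian random walk, tracked
# online with a grid filter (predict with the walk, update with each response).
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-4, 4, 161)            # discretized ability values
belief = np.exp(-0.5 * grid**2)           # prior: theta_0 ~ N(0, 1)
belief /= belief.sum()
walk_sd = 0.3                             # assumed step size of the ability random walk

def predict(belief):
    """Diffuse the belief by convolving it with the random-walk kernel."""
    kernel = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / walk_sd) ** 2)
    kernel /= kernel.sum(axis=0)
    return kernel @ belief

def update(belief, response, difficulty):
    """Multiply by the Rasch likelihood of the observed response."""
    p = 1.0 / (1.0 + np.exp(-(grid - difficulty)))
    like = p if response == 1 else 1.0 - p
    post = belief * like
    return post / post.sum()

# Simulated learner whose true ability drifts upward over five items
theta, difficulties = -0.5, [0.0, 0.5, -0.2, 1.0, 0.3]
for b in difficulties:
    theta += rng.normal(0, walk_sd) + 0.2  # drift plus random-walk noise
    y = int(rng.random() < 1 / (1 + np.exp(-(theta - b))))
    belief = update(predict(belief), y, b)
    print(f"item b={b:+.1f}  response={y}  posterior mean={grid @ belief:+.2f}")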
Huang, Sijia; Luo, Jinwen; Cai, Li – Educational and Psychological Measurement, 2023
Random item effects item response theory (IRT) models, which treat both person and item effects as random, have received much attention for more than a decade. The random item effects approach has several advantages in many practical settings. The present study introduced an explanatory multidimensional random item effects rating scale model. The…
Descriptors: Rating Scales, Item Response Theory, Models, Test Items
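For readers unfamiliar with the random item effects idea, the following simulation sketch (assumptions mine, not the authors' explanatory multidimensional model) draws the item locations of an Andrich-style rating scale model from a population distribution, so items as well as persons are treated as random.

# Illustrative simulation (assumptions mine, not the authors' model): a rating
# scale model in which item locations are random effects drawn from a population,
# alongside randomly drawn person abilities.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items, n_cats = 500, 10, 4          # 4 ordered categories (0..3)
theta = rng.normal(0.0, 1.0, n_persons)          # person abilities (random)
delta = rng.normal(0.0, 0.5, n_items)            # item locations as random effects
tau = np.array([-1.0, 0.0, 1.0])                 # shared category thresholds

def category_probs(theta_p, delta_i):
    """Rating-scale (Andrich) category probabilities for one person-item pair."""
    cum = np.concatenate(([0.0], np.cumsum(theta_p - delta_i - tau)))
    logits = cum - cum.max()                      # stabilize before exponentiating
    probs = np.exp(logits)
    return probs / probs.sum()

# Draw a full person-by-item matrix of ordered-category responses
data = np.array([[rng.choice(n_cats, p=category_probs(t, d)) for d in delta]
                 for t in theta])
print(data.shape, data.mean())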
Kurz, Terri Barber – 1999
Multiple-choice tests are generally scored using a conventional number-right scoring method. While this method is easy to use, it has several weaknesses, including decreased validity due to guessing and failure to credit partial knowledge. In an attempt to address these weaknesses, psychometricians have developed various scoring…
Descriptors: Algorithms, Guessing (Tests), Item Response Theory, Multiple Choice Tests
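To make the contrast concrete, the snippet below compares conventional number-right scoring with formula scoring, one well-known correction for guessing; it is only an illustration of the kind of alternative the report surveys, and the responses are hypothetical.

# Illustration of the contrast discussed in the report: conventional number-right
# scoring versus formula scoring with a correction for guessing (one of several
# alternatives psychometricians have proposed).
def number_right(responses):
    """Count correct answers; omitted and wrong answers both score zero."""
    return sum(1 for r in responses if r == "correct")

def formula_score(responses, n_options=4):
    """Penalize wrong answers by 1/(k-1) so blind guessing has expected score zero."""
    right = sum(1 for r in responses if r == "correct")
    wrong = sum(1 for r in responses if r == "wrong")
    return right - wrong / (n_options - 1)

answers = ["correct", "wrong", "correct", "omitted", "wrong", "correct"]
print(number_right(answers))   # 3
print(formula_score(answers))  # 3 - 2/3, approximately 2.33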
van der Linden, Wim J. – 1999
A constrained computerized adaptive testing (CAT) algorithm is presented that automatically equates the number-correct scores on adaptive tests. The algorithm can be used to equate number-correct scores across different administrations of the same adaptive test as well as to an external reference test. The constraints are derived from a set of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
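The article's algorithm is not reproduced here; the toy greedy sketch below only conveys the general idea of constrained item selection: maximize information while keeping the running expected number-correct close to a reference target so that number-correct scores stay comparable. The item pool, constraint values, and reference target are hypothetical.

# Toy greedy sketch (not the article's algorithm): pick each next CAT item to
# maximize Fisher information while keeping the running test characteristic value
# close to that of an assumed reference test, so number-correct scores line up.
import numpy as np

rng = np.random.default_rng(2)
a = rng.uniform(0.8, 2.0, 200)            # hypothetical 2PL item pool
b = rng.normal(0.0, 1.0, 200)

def prob(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def info(theta, a, b):
    p = prob(theta, a, b)
    return a**2 * p * (1 - p)

theta_hat, used = 0.0, []
ref_tcc_per_item = 0.6                    # assumed reference expected score per item
for step in range(1, 21):
    target = ref_tcc_per_item * step      # reference number-correct after `step` items
    p = prob(theta_hat, a, b)
    current = prob(theta_hat, a[used], b[used]).sum() if used else 0.0
    # Feasibility: only items that keep the running expected score near the target.
    feasible = [i for i in range(len(a))
                if i not in used and abs(current + p[i] - target) <= 0.5]
    candidates = feasible or [i for i in range(len(a)) if i not in used]
    best = max(candidates, key=lambda i: info(theta_hat, a[i], b[i]))
    used.append(best)
    # (Responses and ability updating are omitted; theta_hat stays at its prior value.)

print("selected items:", used[:10], "...")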