Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 2 |
Since 2006 (last 20 years) | 3 |
Descriptor
Difficulty Level | 4 |
Error of Measurement | 4 |
Item Response Theory | 4 |
Test Items | 3 |
Computation | 2 |
Item Analysis | 2 |
Simulation | 2 |
Accuracy | 1 |
Adaptive Testing | 1 |
Bayesian Statistics | 1 |
Classification | 1 |
Source
Journal of Educational Measurement | 4 |
Author
Clauser, Brian E. | 1 |
Clauser, Jerome C. | 1 |
Jiao, Hong | 1 |
Jin, Ying | 1 |
Kamata, Akihito | 1 |
Kane, Michael | 1 |
Li, Jie | 1 |
Li, Yuan H. | 1 |
Lissitz, Robert W. | 1 |
Wang, Shudong | 1 |
Zhang, Jinming | 1 |
Publication Type
Journal Articles | 4 |
Reports - Evaluative | 2 |
Reports - Research | 2 |
Clauser, Brian E.; Kane, Michael; Clauser, Jerome C. – Journal of Educational Measurement, 2020
An Angoff standard setting study generally yields judgments on a number of items by a number of judges (who may or may not be nested in panels). Variability associated with judges (and possibly panels) contributes error to the resulting cut score. The variability associated with items plays a more complicated role. To the extent that the mean item…
Descriptors: Cutting Scores, Generalization, Decision Making, Standard Setting
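A minimal sketch of the generalizability logic in this abstract, assuming hypothetical ratings and a simple judges-by-items layout rather than the authors' actual design: the cut score is taken as the grand mean of the Angoff ratings, and judge-to-judge variability in mean ratings contributes a standard error to that cut score.

```python
# Hypothetical illustration of judge variability propagating into an Angoff
# cut score. Not the authors' generalizability analysis -- just a simple
# judges-by-items layout with the cut score taken as the grand mean rating.
import numpy as np

rng = np.random.default_rng(0)

n_judges, n_items = 12, 40
# Simulated Angoff ratings: each judge's estimated probability that a
# minimally competent examinee answers each item correctly.
item_difficulty = rng.uniform(0.3, 0.9, size=n_items)     # item means
judge_severity = rng.normal(0.0, 0.05, size=n_judges)     # judge effects
noise = rng.normal(0.0, 0.08, size=(n_judges, n_items))
ratings = np.clip(item_difficulty + judge_severity[:, None] + noise, 0, 1)

# Cut score (proportion-correct scale) = mean rating over judges and items.
cut_score = ratings.mean()

# Error attributable to judges: variance of judge-level mean ratings divided
# by the number of judges (judges treated as a random sample of a population).
judge_means = ratings.mean(axis=1)
se_judges = judge_means.std(ddof=1) / np.sqrt(n_judges)

print(f"cut score = {cut_score:.3f}, SE due to judges = {se_judges:.3f}")
```

With more judges (or averaging over panels), the judge-related standard error shrinks, which is the kind of variance-component trade-off the abstract is describing.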
Zhang, Jinming; Li, Jie – Journal of Educational Measurement, 2016
An IRT-based sequential procedure is developed to monitor items for enhancing test security. The procedure uses a series of statistical hypothesis tests to examine whether the statistical characteristics of each item under inspection have changed significantly during CAT administration. This procedure is compared with a previously developed…
Descriptors: Computer Assisted Testing, Test Items, Difficulty Level, Item Response Theory
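A minimal sketch of the general idea, not the authors' specific statistic: after each batch of CAT administrations, compare an item's observed correct count with the count expected from its calibrated 2PL parameters, and flag the item when the standardized discrepancy exceeds a threshold. Batch sizes, the compromise scenario, and the critical value are all illustrative assumptions.

```python
# Hypothetical sketch of sequentially monitoring one CAT item: after each
# batch of administrations, compare observed correct responses with the
# count expected under the item's calibrated 2PL curve and flag large drift.
import numpy as np

rng = np.random.default_rng(1)

a, b = 1.2, 0.0                       # calibrated 2PL discrimination, difficulty
def p_correct(theta):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

batch_size, n_batches, z_crit = 100, 10, 3.0
cum_obs = cum_exp = cum_var = 0.0

for batch in range(1, n_batches + 1):
    theta = rng.normal(size=batch_size)          # abilities of examinees seen
    p = p_correct(theta)
    # From batch 6 on, pretend the item is compromised: responses too easy.
    p_true = np.minimum(p + 0.15, 0.99) if batch >= 6 else p
    responses = rng.binomial(1, p_true)

    cum_obs += responses.sum()
    cum_exp += p.sum()                           # expected under calibration
    cum_var += (p * (1 - p)).sum()

    z = (cum_obs - cum_exp) / np.sqrt(cum_var)   # standardized discrepancy
    flag = "  <-- flag item" if abs(z) > z_crit else ""
    print(f"batch {batch:2d}: z = {z:+.2f}{flag}")
```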
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
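A hedged Monte Carlo sketch of the dual dependence described above, assuming a simple Rasch-type generating model with a school-level random effect (local person dependence from cluster sampling) and person-by-testlet random effects (local item dependence); all parameter values are illustrative, not the authors' model.

```python
# Hypothetical simulation: residuals from a standard Rasch model show extra
# correlation within testlets, and total scores cluster within schools.
import numpy as np

rng = np.random.default_rng(2)

n_schools, persons_per_school = 20, 30
n_testlets, items_per_testlet = 5, 6
n_items = n_testlets * items_per_testlet
n_persons = n_schools * persons_per_school

b = rng.normal(0, 1, n_items)                      # item difficulties
school_eff = rng.normal(0, 0.5, n_schools)         # cluster (school) effects
theta = school_eff.repeat(persons_per_school) + rng.normal(0, 1, n_persons)

testlet_idx = np.repeat(np.arange(n_testlets), items_per_testlet)
gamma = rng.normal(0, 0.8, size=(n_persons, n_testlets))   # testlet effects

logit = theta[:, None] - b[None, :] + gamma[:, testlet_idx]
responses = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Residuals against a standard Rasch model that ignores the testlet effects:
p_std = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
resid = responses - p_std
corr = np.corrcoef(resid.T)                        # item-by-item residual corr

same = testlet_idx[:, None] == testlet_idx[None, :]
off_diag = ~np.eye(n_items, dtype=bool)
print(f"mean residual corr, same testlet:      {corr[same & off_diag].mean():.3f}")
print(f"mean residual corr, different testlet: {corr[~same].mean():.3f}")

# Cluster sampling also correlates examinees within a school: mean total
# scores vary more across schools than simple random sampling would imply.
totals = responses.sum(axis=1).reshape(n_schools, persons_per_school)
print(f"between-school SD of mean total score: {totals.mean(axis=1).std(ddof=1):.2f}")
```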
Li, Yuan H.; Lissitz, Robert W. – Journal of Educational Measurement, 2004
The analytically derived asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be approximated by a mathematical function without examinees' responses to test items, and the empirically determined SEs of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Test Items, Computation, Item Response Theory, Error of Measurement
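A minimal sketch of why response data are not needed for such an approximation, using a 2PL item and a standard normal ability distribution (both assumed here for illustration): the expected Fisher information about the item parameters is a function of the parameters, the ability density, and the sample size alone, so approximate asymptotic SEs follow from inverting it.

```python
# Hypothetical sketch of "SEs without responses": for a 2PL item, the expected
# (Fisher) information about (a, b) depends only on the item parameters, the
# ability distribution, and N, so asymptotic SEs can be approximated before
# any response data exist. Quadrature grid and values are illustrative.
import numpy as np

a, b, n_examinees = 1.5, 0.5, 2000          # assumed calibrated 2PL item, N

# Quadrature over a standard normal ability distribution.
theta = np.linspace(-6, 6, 201)
w = np.exp(-0.5 * theta**2)
w /= w.sum()

p = 1 / (1 + np.exp(-a * (theta - b)))
info_weight = p * (1 - p) * w               # Bernoulli information times weight

# Per-examinee expected information matrix for (a, b):
# the gradient of the logit a*(theta - b) is [(theta - b), -a].
I_aa = np.sum(info_weight * (theta - b) ** 2)
I_ab = np.sum(info_weight * (theta - b) * (-a))
I_bb = np.sum(info_weight * a ** 2)
info = n_examinees * np.array([[I_aa, I_ab], [I_ab, I_bb]])

se_a, se_b = np.sqrt(np.diag(np.linalg.inv(info)))
print(f"approximate asymptotic SE(a) = {se_a:.4f}, SE(b) = {se_b:.4f}")
```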