Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
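The chance-constrained idea in the abstract above can be illustrated with a toy sketch: instead of maximizing test information computed from point estimates, select items so that the information achieved with high probability (a lower quantile over parameter draws) is maximized. Everything here is hypothetical (item bank, standard errors, greedy selection in place of the article's optimization model); it is a minimal illustration of the principle, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical item bank: point estimates and standard errors for
# 2PL discrimination (a) and difficulty (b) parameters.
n_items = 50
a_hat = rng.uniform(0.8, 2.0, n_items)
b_hat = rng.uniform(-2.0, 2.0, n_items)
a_se = np.full(n_items, 0.15)
b_se = np.full(n_items, 0.20)

# Draw R plausible parameter sets to represent estimation uncertainty.
R = 200
a_draws = rng.normal(a_hat, a_se, size=(R, n_items))
b_draws = rng.normal(b_hat, b_se, size=(R, n_items))

def info(a, b, theta=0.0):
    """2PL Fisher information at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

item_info = info(a_draws, b_draws)  # shape (R, n_items)

# Greedy chance-constrained assembly: maximize the 5th percentile of
# test information over the draws, rather than the plug-in value.
test_len, alpha = 10, 0.05
selected, current = [], np.zeros(R)
for _ in range(test_len):
    best, best_q = None, -np.inf
    for j in range(n_items):
        if j in selected:
            continue
        q = np.quantile(current + item_info[:, j], alpha)
        if q > best_q:
            best, best_q = j, q
    selected.append(best)
    current += item_info[:, best]

print(sorted(selected), round(np.quantile(current, alpha), 3))
```

A production ATA model would encode this as a mixed-integer program with the full set of content constraints; the greedy loop above only conveys how parameter uncertainty enters the objective.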
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2021
This research proposes a new statistic for testing latent variable distribution fit for unidimensional item response theory (IRT) models. If the typical assumption of normality is violated, then item parameter estimates will be biased, and dependent quantities such as IRT score estimates will be adversely affected. The proposed statistic compares…
Descriptors: Item Response Theory, Simulation, Scores, Comparative Analysis
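The kind of distributional check described above can be sketched by comparing an observed sum-score distribution against the distribution implied by the normality assumption. The item bank, skewed latent distribution, and chi-square-type discrepancy below are all hypothetical stand-ins; the article's actual statistic is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2PL item bank.
n_items, n_persons = 20, 2000
a = rng.uniform(0.8, 1.8, n_items)
b = rng.uniform(-1.5, 1.5, n_items)

def simulate(theta):
    """Binary 2PL responses for a vector of abilities."""
    p = 1 / (1 + np.exp(-a * (theta[:, None] - b[None, :])))
    return (rng.uniform(size=p.shape) < p).astype(int)

# "Observed" data from a skewed latent distribution (normality violated).
theta_obs = rng.gamma(2.0, 1.0, n_persons) - 2.0
obs = simulate(theta_obs).sum(axis=1)

# Model-implied sum-score distribution under the normal assumption,
# approximated by Monte Carlo.
theta_mod = rng.normal(0, 1, n_persons)
mod = simulate(theta_mod).sum(axis=1)

# Chi-square-type discrepancy over score bins: large values signal
# latent-distribution misfit.
bins = np.arange(n_items + 2)
f_obs = np.histogram(obs, bins)[0] / n_persons
f_mod = np.histogram(mod, bins)[0] / n_persons
D = n_persons * np.sum((f_obs - f_mod) ** 2 / np.maximum(f_mod, 1e-6))
print(round(D, 1))
```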
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
Large-scale assessments (LSAs) use Mislevy's "plausible value" (PV) approach to relate student proficiency to noncognitive variables administered in a background questionnaire. This method requires background variables to be completely observed, a requirement that is seldom fulfilled. In this article, we evaluate and compare the…
Descriptors: Data Analysis, Error of Measurement, Research Problems, Statistical Inference
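The problem above can be made concrete with a toy latent-regression sketch: a background variable with missing values is first imputed, then plausible values (PVs) for proficiency are drawn from the conditional posterior given the score and the imputed background data. The normal-normal model, parameter values, and single stochastic imputation are illustrative assumptions; Grund, Lüdtke, and Robitzsch compare more refined multiple-imputation strategies.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy latent regression: proficiency theta depends on a background
# variable x; we observe a noisy test score y.
n = 5000
beta, sigma_e, sigma_m = 0.6, 0.8, 0.5  # slope, residual SD, measurement SD
x = rng.normal(0, 1, n)
theta = beta * x + rng.normal(0, sigma_e, n)
y = theta + rng.normal(0, sigma_m, n)

# 30% of the background variable is missing completely at random.
miss = rng.uniform(size=n) < 0.3
x_obs = np.where(miss, np.nan, x)

# Step 1: stochastic regression imputation of x from y.
cov = np.cov(x_obs[~miss], y[~miss])
slope = cov[0, 1] / cov[1, 1]
resid_sd = np.sqrt(cov[0, 0] - slope**2 * cov[1, 1])
x_imp = x_obs.copy()
x_imp[miss] = slope * y[miss] + rng.normal(0, resid_sd, miss.sum())

# Step 2: draw PVs from the posterior of theta given y and imputed x
# (conjugate normal-normal form).
post_var = 1 / (1 / sigma_e**2 + 1 / sigma_m**2)
post_mean = post_var * (beta * x_imp / sigma_e**2 + y / sigma_m**2)
pv = post_mean + rng.normal(0, np.sqrt(post_var), n)

# A secondary analysis on the PVs recovers the latent regression slope.
slope_pv = np.cov(x_imp, pv)[0, 1] / np.var(x_imp)
print(round(slope_pv, 2))
```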
Bolsinova, Maria; Tijmstra, Jesper – Journal of Educational and Behavioral Statistics, 2016
Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…
Descriptors: Reaction Time, Accuracy, Statistical Analysis, Robustness (Statistics)
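A posterior predictive check of this kind can be sketched as follows: compute a discrepancy (here, the mean within-person correlation between accuracy and log response time, a deliberately simple stand-in for the article's three measures) on observed data, then compare it to its distribution over replicated data generated under conditional independence. The data-generating values are hypothetical, and the replication step is simplified to simulation from fixed marginals rather than from a fitted joint model.

```python
import numpy as np

rng = np.random.default_rng(3)

n_persons, n_items = 500, 20

def discrepancy(acc, logrt):
    """Mean within-person correlation of accuracy and log response time."""
    rs = []
    for i in range(acc.shape[0]):
        if acc[i].std() > 0:  # skip persons with constant accuracy
            rs.append(np.corrcoef(acc[i], logrt[i])[0, 1])
    return np.mean(rs)

# "Observed" data violating conditional independence: incorrect
# responses take systematically longer for the same person.
p = 0.7
acc_obs = (rng.uniform(size=(n_persons, n_items)) < p).astype(int)
logrt_obs = rng.normal(0, 1, (n_persons, n_items)) + 0.5 * (1 - acc_obs)
d_obs = discrepancy(acc_obs, logrt_obs)

# Replicated data under conditional independence: response time drawn
# independently of accuracy given the person.
d_rep = np.array([
    discrepancy((rng.uniform(size=(n_persons, n_items)) < p).astype(int),
                rng.normal(0, 1, (n_persons, n_items)))
    for _ in range(100)
])

# Posterior-predictive-style p-value: how extreme is d_obs under CI?
ppp = np.mean(d_rep <= d_obs)
print(round(d_obs, 2), round(ppp, 2))
```

A p-value near 0 (or 1) flags the discrepancy as inconsistent with the conditional independence model.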
Cho, Sun-Joo; Cohen, Allan S. – Journal of Educational and Behavioral Statistics, 2010
Mixture item response theory models have been suggested as a potentially useful methodology for identifying latent groups formed along secondary, possibly nuisance dimensions. In this article, we describe a multilevel mixture item response theory (IRT) model (MMixIRTM) that allows for the possibility that this nuisance dimensionality may function…
Descriptors: Simulation, Mathematics Tests, Item Response Theory, Student Behavior

Bloxom, Bruce; And Others – Journal of Educational and Behavioral Statistics, 1995
Develops and evaluates the linkage of the Armed Services Vocational Aptitude Battery to the mathematics scale of the National Assessment of Educational Progress. The accuracy of the proficiency distribution estimated from the projection was close to the accuracy of the distribution estimated from the large-scale assessment. (SLD)
Descriptors: Educational Assessment, Estimation (Mathematics), Evaluation Methods, Mathematics Tests