Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
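A common way to handle chance constraints like the one described above is scenario sampling: draw many plausible realizations of the uncertain coefficients and require the constraint to hold in a specified fraction of draws. The sketch below is only an illustration under hypothetical numbers (item pool, information values, time limits, and the scenario-sampling formulation are all assumptions, not the article's model):

```python
import itertools
import numpy as np

# Scenario-based chance constraint (hypothetical numbers throughout):
# choose 3 of 6 items to maximize expected information, requiring total
# testing time <= 12 minutes with probability >= 0.90 under uncertain
# per-item times.
rng = np.random.default_rng(1)
info_mean = np.array([1.2, 0.8, 1.0, 1.4, 0.6, 1.1])   # expected item information
time_mean = np.array([4.0, 3.0, 3.5, 5.0, 2.5, 4.5])   # mean minutes per item
time_draws = rng.normal(time_mean, 0.5, (500, 6))      # sampled time scenarios

best, best_info = None, -np.inf
for combo in itertools.combinations(range(6), 3):
    total_time = time_draws[:, list(combo)].sum(axis=1)
    if (total_time <= 12.0).mean() >= 0.90:            # chance constraint
        info = info_mean[list(combo)].sum()
        if info > best_info:
            best, best_info = combo, info
print(best, best_info)                                 # feasible max-information test
```

Combinations with higher expected information exist here, but their time budgets violate the 90% reliability requirement, so the chance constraint trades some information for a dependable testing time.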
Monroe, Scott – Journal of Educational and Behavioral Statistics, 2021
This research proposes a new statistic for testing latent variable distribution fit for unidimensional item response theory (IRT) models. If the typical assumption of normality is violated, then item parameter estimates will be biased, and dependent quantities such as IRT score estimates will be adversely affected. The proposed statistic compares…
Descriptors: Item Response Theory, Simulation, Scores, Comparative Analysis
Chan, Wendy – Journal of Educational and Behavioral Statistics, 2018
Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, where the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…
Descriptors: Computation, Generalization, Probability, Sample Size
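Propensity score-based generalization of the kind described above is often implemented with inverse-odds-of-participation weights. The following sketch uses fully simulated data and the true selection model for brevity (in practice the scores would be estimated, e.g. by logistic regression of sample membership on covariates); none of the numbers come from the article:

```python
import numpy as np

# Simulated generalization of an experimental treatment-effect estimate
# to a population via inverse-odds-of-participation weighting.
rng = np.random.default_rng(0)
x_pop = rng.normal(0.0, 1.0, 2000)                 # population covariate
p_select = 1.0 / (1.0 + np.exp(-(x_pop - 1.0)))    # selection favors high x
x_exp = x_pop[rng.random(2000) < p_select]         # experimental sample

treat = rng.random(x_exp.size) < 0.5
# Effect heterogeneity: the treatment effect is 0.5 * x, so the sample
# (high x on average) overstates the population-average effect (x averages 0).
y = 1.0 + 0.5 * x_exp * treat + rng.normal(0.0, 0.1, x_exp.size)

p_exp = 1.0 / (1.0 + np.exp(-(x_exp - 1.0)))       # true scores, for brevity;
w = (1.0 - p_exp) / p_exp                          # in practice, estimate them

naive = y[treat].mean() - y[~treat].mean()
weighted = (np.average(y[treat], weights=w[treat])
            - np.average(y[~treat], weights=w[~treat]))
print(round(naive, 2), round(weighted, 2))
```

The weighted contrast pulls the estimate back toward the population-average effect; the small-sample instability that motivates the article arises when few units carry most of the weight.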
Grund, Simon; Lüdtke, Oliver; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
Large-scale assessments (LSAs) use Mislevy's "plausible value" (PV) approach to relate student proficiency to noncognitive variables administered in a background questionnaire. This method requires background variables to be completely observed, a requirement that is seldom fulfilled. In this article, we evaluate and compare the…
Descriptors: Data Analysis, Error of Measurement, Research Problems, Statistical Inference
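Analyses with plausible values like those described above are conventionally pooled with Rubin's multiple-imputation rules: analyze each set of plausible values separately, then combine the point estimates and their within- and between-imputation variances. A minimal sketch with simulated proficiencies (all numbers hypothetical):

```python
import numpy as np

# Hypothetical plausible values: 5 draws of proficiency for 200 students,
# as a large-scale assessment might release. Estimate mean proficiency by
# analyzing each PV set separately and pooling with Rubin's rules.
rng = np.random.default_rng(42)
n_students, n_pv = 200, 5
true_theta = rng.normal(500, 100, n_students)          # latent proficiency
pvs = true_theta[:, None] + rng.normal(0, 30, (n_students, n_pv))

estimates = pvs.mean(axis=0)                           # one estimate per PV set
variances = pvs.var(axis=0, ddof=1) / n_students       # its sampling variance

pooled = estimates.mean()                              # pooled point estimate
within = variances.mean()                              # within-imputation var
between = estimates.var(ddof=1)                        # between-imputation var
total_var = within + (1 + 1 / n_pv) * between          # Rubin's total variance
se = np.sqrt(total_var)
print(round(pooled, 1), round(se, 2))
```

The between-imputation term is what propagates measurement uncertainty in proficiency into the standard error; the article's question is how to keep this machinery valid when the background variables themselves are incompletely observed.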
Cai, Li – Journal of Educational and Behavioral Statistics, 2010
Item factor analysis (IFA), already well established in educational measurement, is increasingly applied to psychological measurement in research settings. However, high-dimensional confirmatory IFA remains a numerical challenge. The current research extends the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, initially proposed for…
Descriptors: Simulation, Questionnaires, Measurement, Factor Analysis

Vaughan, Roger D.; Begg, Melissa D. – Journal of Educational and Behavioral Statistics, 1999
Explores two methods for the analysis of binary data and presents a proposal for adapting these methods to matched-pair data from school intervention studies. Evaluates the performance of the two methods through simulation and discusses conditions under which each method may be used. (SLD)
Descriptors: Elementary Secondary Education, Intervention, Simulation
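The truncated abstract does not name the two methods, but a standard starting point for matched-pair binary data of this kind is McNemar's test, shown below purely as an illustration with hypothetical pair counts:

```python
import math

# McNemar's test for matched-pair binary outcomes (an illustration of one
# standard method for such data; the article's two methods are not named
# in the abstract). Pairs are cross-classified by outcome:
#                  control success   control failure
# treated success        a=30             b=25
# treated failure        c=10             d=35
b, c = 25, 10                          # discordant pairs drive the test
chi2 = (b - c) ** 2 / (b + c)          # McNemar chi-square statistic, 1 df
# For 1 df, the chi-square survival function equals 2 * (1 - Phi(sqrt(chi2))),
# where Phi is the standard normal CDF.
p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(math.sqrt(chi2) / math.sqrt(2.0))))
print(round(chi2, 3), round(p_value, 4))
```

Only the discordant pairs (b and c) enter the statistic, which is why matched-pair designs can lose power when most pairs agree.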