Showing all 4 results
Peer reviewed
Raykov, Tenko – Measurement: Interdisciplinary Research and Perspectives, 2023
This software review discusses the capabilities of Stata to conduct item response theory modeling. The commands needed for fitting the popular one-, two-, and three-parameter logistic models are initially discussed. The procedure for testing the discrimination parameter equality in the one-parameter model is then outlined. The commands for fitting…
Descriptors: Item Response Theory, Models, Comparative Analysis, Item Analysis
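The Stata commands discussed in Raykov's review are not reproduced in this listing. As a rough sketch of the models the review covers, the following Python snippet (illustrative parameter values, not drawn from the review) evaluates the three-parameter logistic item response function; the two- and one-parameter models are the special cases with the guessing parameter fixed at zero and, for the 1PL, a common discrimination across items.

```python
import numpy as np

def irt_3pl(theta, a, b, c):
    """Three-parameter logistic item response function:
    P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b))).
    The 2PL is the special case c = 0; the 1PL additionally assumes a
    common discrimination a across items."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

# Illustrative values, not taken from the review: an ability grid and one item.
theta = np.linspace(-3, 3, 7)
print(irt_3pl(theta, a=1.2, b=0.5, c=0.2))
```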
Peer reviewed
Peabody, Michael R. – Measurement: Interdisciplinary Research and Perspectives, 2023
Many organizations utilize some form of automation in the test assembly process, either fully algorithmic or heuristically constructed. However, one issue with heuristic models is that when the test assembly problem changes, the entire model may need to be re-conceptualized and recoded. In contrast, mixed-integer programming (MIP) is a mathematical…
Descriptors: Programming Languages, Algorithms, Heuristics, Mathematical Models
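The abstract above outlines the general MIP approach rather than a specific model, so the following Python sketch (using scipy.optimize.milp with made-up item data) only illustrates the idea: select items to maximize information subject to a fixed test length and a content constraint. The item values, content labels, and constraint limits are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Made-up item pool: Fisher information at a target ability and a
# content-area label (0 = area A, 1 = area B) for each of 8 items.
info    = np.array([0.9, 0.4, 0.7, 0.3, 0.8, 0.5, 0.6, 0.2])
content = np.array([0,   0,   1,   1,   0,   1,   0,   1])
n_items = len(info)
test_length = 4      # assumed form length
min_area_b  = 2      # assumed content constraint

# Binary decision variables x_i: item i is (1) or is not (0) selected.
# milp minimizes, so maximizing total information means minimizing -info @ x.
constraints = [
    LinearConstraint(np.ones((1, n_items)), test_length, test_length),
    LinearConstraint((content == 1).astype(float).reshape(1, -1),
                     min_area_b, n_items),
]
res = milp(-info, constraints=constraints,
           integrality=np.ones(n_items),   # all variables integer...
           bounds=Bounds(0, 1))            # ...and bounded to {0, 1}
print("selected items:", np.flatnonzero(res.x > 0.5))
```

In a sketch like this, changing the assembly problem means editing the constraint list rather than re-deriving a heuristic, which is the flexibility the abstract attributes to MIP.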
Peer reviewed
Maydeu-Olivares, Alberto – Measurement: Interdisciplinary Research and Perspectives, 2013
In this rejoinder, Maydeu-Olivares states that, in item response theory (IRT) measurement applications, applying goodness-of-fit (GOF) methods informs researchers of the discrepancy between the model and the data being fitted (the room for improvement). By routinely reporting the GOF of IRT models, together with the substantive results…
Descriptors: Goodness of Fit, Models, Evaluation Methods, Item Response Theory
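The rejoinder does not tie its argument to a particular statistic; as a generic illustration of what a GOF discrepancy measures, the Python snippet below computes a Pearson chi-square between observed counts and the counts a fitted model implies. The counts shown are made up.

```python
import numpy as np

def pearson_x2(observed, expected):
    """Pearson chi-square discrepancy, X^2 = sum (O - E)^2 / E.
    Larger values mean a larger gap between data and fitted model,
    i.e., more 'room for improvement'."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    return np.sum((observed - expected) ** 2 / expected)

# Made-up counts for the four response patterns (00, 01, 10, 11) of an
# item pair; the expected counts would come from the fitted IRT model.
print(pearson_x2([120, 30, 25, 125], [110.0, 40.0, 35.0, 115.0]))
```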
Peer reviewed
Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R. – Measurement: Interdisciplinary Research and Perspectives, 2009
In this study, a database containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…
Descriptors: Test Items, Equated Scores, Methods, Differences
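The abstract does not name the five linear equating methods studied. As an illustration of one common linear method for the NEAT design, the Python sketch below implements chained linear equating on synthetic scores; the score distributions and sample sizes are assumptions, not the study's 40,000-candidate data.

```python
import numpy as np

def linear_link(x, mean_x, sd_x, mean_y, sd_y):
    """Map score x onto the y scale so linked scores match mean_y and sd_y."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

def chained_linear_equate(x, form_x, anchor_p, form_y, anchor_q):
    """Chained linear equating for the NEAT design: link form X to the
    anchor in population P, then the anchor to form Y in population Q."""
    v = linear_link(x, np.mean(form_x), np.std(form_x, ddof=1),
                    np.mean(anchor_p), np.std(anchor_p, ddof=1))
    return linear_link(v, np.mean(anchor_q), np.std(anchor_q, ddof=1),
                       np.mean(form_y), np.std(form_y, ddof=1))

# Synthetic scores purely for illustration.
rng = np.random.default_rng(0)
form_x   = rng.normal(30, 6, 500)   # form X scores, population P
anchor_p = rng.normal(12, 3, 500)   # anchor scores, population P
form_y   = rng.normal(32, 5, 500)   # form Y scores, population Q
anchor_q = rng.normal(13, 3, 500)   # anchor scores, population Q
print(chained_linear_equate(np.array([25.0, 30.0, 35.0]),
                            form_x, anchor_p, form_y, anchor_q))
```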