Showing all 12 results
Peer reviewed
Huang, Qi; Bolt, Daniel M. – Educational and Psychological Measurement, 2023
Previous studies have demonstrated evidence of latent skill continuity even in tests intentionally designed for measurement of binary skills. In addition, the assumption of binary skills when continuity is present has been shown to potentially create a lack of invariance in item and latent ability parameters that may undermine applications. In…
Descriptors: Item Response Theory, Test Items, Skill Development, Robustness (Statistics)
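For reference (our notation, not the authors'), the binary-skill assumption is typified by DINA-type models, while latent continuity is typified by a two-parameter logistic (2PL) item response function:

P(X_{ij} = 1 | \alpha_j) = (1 - s_i)^{\eta_{ij}} g_i^{1 - \eta_{ij}}, \quad \eta_{ij} = \prod_k \alpha_{jk}^{q_{ik}}  (binary skills, \alpha_{jk} \in \{0, 1\})
P(X_{ij} = 1 | \theta_j) = 1 / (1 + \exp[-a_i(\theta_j - b_i)])  (continuous skill, \theta_j)

When the skill is actually continuous, calibration samples with different \theta distributions can yield different item parameter estimates under the binary parameterization, which is the invariance problem the abstract raises.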
Peer reviewed
Bolt, Daniel M.; Deng, Sien; Lee, Sora – Journal of Educational Measurement, 2014
Functional form misfit is frequently a concern in item response theory (IRT), although the practical implications of misfit are often difficult to evaluate. In this article, we illustrate how seemingly negligible amounts of functional form misfit, when systematic, can be associated with significant distortions of the score metric in vertical…
Descriptors: Item Response Theory, Scaling, Goodness of Fit, Models
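For orientation, the usual functional-form assumption is the 2PL; one hedged illustration of systematic misfit (our example, not necessarily the form studied here) is an asymmetric true response function such as a logistic positive exponent curve:

2PL assumption:   P_i(\theta) = 1 / (1 + \exp[-a_i(\theta - b_i)])
Asymmetric truth: P_i(\theta) = \{ 1 / (1 + \exp[-a_i(\theta - b_i)]) \}^{\xi_i}, \quad \xi_i > 0

Because vertical scales chain links across grade levels, small but one-directional discrepancies of this kind can accumulate into distortions of the score metric.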
Peer reviewed
Suh, Youngsuk; Bolt, Daniel M. – Journal of Educational Measurement, 2011
In multiple-choice items, differential item functioning (DIF) in the correct response may or may not be caused by differentially functioning distractors. Identifying distractors as causes of DIF can provide valuable information for potential item revision or the design of new test items. In this paper, we examine a two-step approach based on…
Descriptors: Test Items, Test Bias, Multiple Choice Tests, Simulation
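Although the abstract truncates before naming the model, distractor-level analyses of multiple-choice data are commonly framed with Bock's nominal response model; under that assumption, each option k of item i has its own category function:

P(U_{ij} = k | \theta_j) = \exp(a_{ik}\theta_j + c_{ik}) / \sum_{h=1}^{m_i} \exp(a_{ih}\theta_j + c_{ih})

DIF caused by a distractor then appears as group differences in that option's (a_{ik}, c_{ik}) rather than only in the correct option's parameters.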
Peer reviewed
Johnson, Timothy R.; Bolt, Daniel M. – Journal of Educational and Behavioral Statistics, 2010
Multidimensional item response models are usually implemented to model the relationship between item responses and two or more traits of interest. We show how multidimensional multinomial logit item response models can also be used to account for individual differences in response style. This is done by specifying a factor-analytic model for…
Descriptors: Models, Response Style (Tests), Factor Structure, Individual Differences
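One hedged reading of the truncated final sentence: assign each response category a fixed style score and let a second latent dimension act on those scores. In illustrative notation (the weights w_k are our device, not necessarily the authors'):

P(U_{ij} = k | \theta_j, \eta_j) = \exp(a_{ik}\theta_j + w_k \eta_j + c_{ik}) / \sum_h \exp(a_{ih}\theta_j + w_h \eta_j + c_{ih})

where \theta_j is the substantive trait and \eta_j captures individual differences in response style (e.g., w_k largest for the extreme categories of a rating scale).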
Peer reviewed
Suh, Youngsuk; Bolt, Daniel M. – Psychometrika, 2010
Nested logit item response models for multiple-choice data are presented. Relative to previous models, the new models are suggested to provide a better approximation to multiple-choice items where the application of a solution strategy precedes consideration of response options. In practice, the models also accommodate collapsibility across all…
Descriptors: Computation, Simulation, Psychometrics, Models
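The nesting can be sketched as a two-stage process, assuming a 2PL for solution success and a nominal model over distractors conditional on failure (a sketch consistent with the abstract's description; the published parameterization may differ):

P(correct) = P_i(\theta) = 1 / (1 + \exp[-a_i(\theta - b_i)])
P(distractor k) = [1 - P_i(\theta)] \cdot \exp(\zeta_{ik} + \lambda_{ik}\theta) / \sum_h \exp(\zeta_{ih} + \lambda_{ih}\theta)

so consideration of the response options enters only when the solution strategy fails, matching the sequencing the abstract describes.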
Peer reviewed
Wells, Craig S.; Bolt, Daniel M. – Applied Measurement in Education, 2008
Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…
Descriptors: Test Length, Test Items, Monte Carlo Methods, Nonparametric Statistics
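Douglas and Cohen's idea is to contrast a kernel-smoothed (nonparametric) item characteristic curve with the fitted parametric one; the discrepancy index in this line of work is often written as a root integrated squared error (our summary of the general form):

RISE_i = \sqrt{ \int [ \hat{P}_i(\theta) - P_i(\theta; \hat{a}_i, \hat{b}_i) ]^2 f(\theta) \, d\theta }

with \hat{P}_i the nonparametric curve, P_i(\cdot; \hat{a}_i, \hat{b}_i) the estimated 2PL, and f(\theta) the ability density; large values flag functional-form misfit.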
Peer reviewed
Wollack, James A.; Bolt, Daniel M.; Cohen, Allan S.; Lee, Young-Sun – Applied Psychological Measurement, 2002
Compared the quality of item parameter estimates for marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) estimation with the nominal response model using simulation. The quality of item parameter recovery was nearly identical for MML and MCMC, and both methods tended to produce good estimates. (SLD)
Descriptors: Estimation (Mathematics), Markov Processes, Monte Carlo Methods, Simulation
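Recovery quality in studies like this is conventionally summarized by bias and root mean squared error between generating and estimated parameters; a minimal generic sketch in Python (not the authors' code):

import numpy as np

def recovery_summary(true_params, est_params):
    # Bias and RMSE of estimates against the generating (true) values.
    err = np.asarray(est_params, dtype=float) - np.asarray(true_params, dtype=float)
    return {"bias": float(err.mean()), "rmse": float(np.sqrt((err ** 2).mean()))}

# Hypothetical use: compare MML and MCMC slope estimates for the same items.
# recovery_summary(a_true, a_mml); recovery_summary(a_true, a_mcmc)

Nearly identical bias and RMSE for the two estimators is what "nearly identical recovery" amounts to operationally.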
Peer reviewed
Bolt, Daniel M. – Applied Measurement in Education, 1999
Examined whether the item response theory (IRT) true-score equating method is more adversely affected by the presence of multidimensionality than two conventional equating methods, linear and equipercentile equating. Results of two simulation studies suggest that the IRT method performs as well as the conventional methods when the correlation…
Descriptors: Correlation, Equated Scores, Item Response Theory, Simulation
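IRT true-score equating maps a score on form X to form Y through the two test characteristic curves; in standard notation (not specific to this study):

\tau_X(\theta) = \sum_{i \in X} P_i(\theta), \qquad \tau_Y(\theta) = \sum_{i \in Y} P_i(\theta)
e_Y(x) = \tau_Y( \tau_X^{-1}(x) )

Multidimensionality threatens the method because no single \theta then indexes both characteristic curves, so the inversion step is only an approximation.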
Peer reviewed
Mroch, Andrew A.; Bolt, Daniel M. – Applied Measurement in Education, 2006
Recently, nonparametric methods have been proposed that provide a dimensionally based description of test structure for tests with dichotomous items. Because such methods are based on different notions of dimensionality than are assumed when using a psychometric model, it remains unclear whether these procedures might lead to a different…
Descriptors: Simulation, Comparative Analysis, Psychometrics, Methods Research
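Such nonparametric procedures (DETECT and its relatives) rest on item-pair covariances conditional on the dominant composite; the core quantity, in generic notation:

\sigma_{ij}(\theta) = Cov(U_i, U_j | \Theta = \theta)

Under essential unidimensionality these are near zero for all pairs (slightly negative when conditioning on a total-score estimate), while multidimensionality produces positive values within dimensionally homogeneous item clusters.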
Peer reviewed
Bolt, Daniel M.; Gierl, Mark J. – Journal of Educational Measurement, 2006
Inspection of differential item functioning (DIF) in translated test items can be informed by graphical comparisons of item response functions (IRFs) across translated forms. Due to the many forms of DIF that can emerge in such analyses, it is important to develop statistical tests that can confirm various characteristics of DIF when present.…
Descriptors: Regression (Statistics), Tests, Test Bias, Test Items
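Graphical IRF comparisons of this kind reduce to the difference between reference- and focal-group curves; common summaries (our illustration) are the signed difference and an unsigned area index:

DIF_i(\theta) = P_{iR}(\theta) - P_{iF}(\theta), \qquad UA_i = \int | P_{iR}(\theta) - P_{iF}(\theta) | f(\theta) \, d\theta

Unidirectional DIF keeps DIF_i(\theta) one-signed; crossing DIF changes sign across \theta, and these are the kinds of characteristics a confirmatory statistical test would target.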
Peer reviewed
Bolt, Daniel M. – Applied Psychological Measurement, 2001
Presents a new nonparametric method for constructing a spatial representation of multidimensional test structure, the Conditional Covariance-based SCALing (CCSCAL) method. Describes an index to measure the accuracy of the representation. Uses simulation and real-life data analyses to show that the method provides a suitable approximation to…
Descriptors: Analysis of Covariance, Item Response Theory, Nonparametric Statistics, Scaling
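The published CCSCAL algorithm is not reproduced in the abstract, but the family it belongs to follows a recognizable recipe: estimate conditional covariances for all item pairs, convert them to distances, and embed the items spatially. A simplified Python sketch under those assumptions (hypothetical helpers, not the published method):

import numpy as np

def conditional_covariances(X):
    # X: (n_persons, n_items) 0/1 response matrix. For each item pair, average
    # the within-group covariance across rest-score groups (unweighted, for brevity).
    n, p = X.shape
    C = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            rest = X.sum(axis=1) - X[:, i] - X[:, j]
            vals = [np.cov(X[rest == s, i], X[rest == s, j])[0, 1]
                    for s in np.unique(rest) if (rest == s).sum() > 1]
            C[i, j] = C[j, i] = float(np.mean(vals))
    return C

def classical_mds(D, ndim=2):
    # Classical MDS: double-center squared distances, take the top eigenvectors.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:ndim]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Items with large positive conditional covariance are dimensionally "close",
# so one illustrative distance is D = C.max() - C (not CCSCAL's own choice).

This captures only the general conditional-covariance scaling idea; CCSCAL's distance definition and its accuracy index are given in the paper itself.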
Peer reviewed
Bolt, Daniel M.; Cohen, Allan S.; Wollack, James A. – Journal of Educational and Behavioral Statistics, 2001
Proposes a mixture item response model for investigating individual differences in the selection of response categories in multiple choice items. A real data example illustrates how the model can be used to distinguish examinees disproportionately attracted to different types of distractors, and a simulation study evaluates item parameter recovery…
Descriptors: Classification, Individual Differences, Item Response Theory, Mathematical Models
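In generic notation (a sketch of the general form; the paper's exact parameterization may differ), a mixture version of the nominal response model lets category slopes and intercepts vary by latent class g, with class proportions \pi_g:

P(U_{ij} = k | \theta_j) = \sum_{g=1}^{G} \pi_g \cdot \exp(a_{ikg}\theta_j + c_{ikg}) / \sum_h \exp(a_{ihg}\theta_j + c_{ihg})

so examinees in different classes can show systematically different attraction to particular distractors, which is what the real-data illustration exploits.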