Showing all 8 results
Peer reviewed
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented test items, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
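As a rough illustration of the model family this abstract names, the numpy sketch below simulates responses in which a second factor's loadings grow with serial position, the usual way of encoding an item-position effect; the loading values, variances, and sample sizes are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 500, 20

ability = rng.normal(0.0, 1.0, n_persons)     # target construct
position = rng.normal(0.0, 1.0, n_persons)    # item-position factor

# Ability loads equally on every item; the position factor's loadings
# grow linearly with serial position, encoding the increasing
# dependency among responses to later items.
load_ability = np.full(n_items, 0.6)
load_position = 0.3 * np.arange(1, n_items + 1) / n_items

scores = (ability[:, None] * load_ability
          + position[:, None] * load_position
          + rng.normal(0.0, 1.0, (n_persons, n_items)))
```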
Peer reviewed
Matlock, Ki Lynn; Turner, Ronna – Educational and Psychological Measurement, 2016
When multiple test forms are constructed, they are often matched on the number of items and total test difficulty. However, not all test developers match the number of items and/or average item difficulty within subcontent areas. In this simulation study, six test forms were constructed having an equal number of items and average item difficulty overall.…
Descriptors: Item Response Theory, Computation, Test Items, Difficulty Level
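A minimal sketch of the contrast this study turns on: two forms assembled to match in length and overall mean difficulty while within-area difficulty is left uncontrolled. The item bank, area labels, and assembly rule below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 60-item bank with difficulties and subcontent areas.
difficulty = rng.normal(0.0, 1.0, 60)
area = rng.integers(0, 3, 60)

# Sort by difficulty and deal items alternately to two forms: overall
# length and mean difficulty match, but within-area difficulty is
# left to chance.
order = np.argsort(difficulty)
form_a, form_b = order[0::2], order[1::2]

print(difficulty[form_a].mean(), difficulty[form_b].mean())  # close
for k in range(3):
    print(k,
          difficulty[form_a[area[form_a] == k]].mean(),
          difficulty[form_b[area[form_b] == k]].mean())      # can diverge
```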
Peer reviewed
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
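The sketch below shows, under simplifying assumptions, how such a posterior probability can be computed for one examinee: the unscalable class is modeled here as coin-flip responding, and the ability, difficulties, and mixing proportion are fixed illustrative values rather than draws from a fitted Bayesian model.

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

b = np.linspace(-2, 2, 10)      # item difficulties (assumed)
theta = 0.5                     # one examinee's ability (assumed)
pi_unscalable = 0.1             # mixing proportion (assumed)

x = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])  # observed 0/1 responses

p = rasch_p(theta, b)
lik_rasch = np.prod(p**x * (1 - p)**(1 - x))
lik_unscalable = 0.5**len(x)    # unscalable class as random responding

post = (pi_unscalable * lik_unscalable) / (
    pi_unscalable * lik_unscalable + (1 - pi_unscalable) * lik_rasch)
print(post)  # posterior probability this response vector is "residual"
```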
Peer reviewed
Wang, Wen-Chung; Jin, Kuan-Yu – Educational and Psychological Measurement, 2010
In this study, the authors extend the standard item response model with internal restrictions on item difficulty (MIRID) to fit polytomous items using cumulative logits and adjacent-category logits. Moreover, the new model incorporates discrimination parameters and is rooted in a multilevel framework. It is a nonlinear mixed model so that existing…
Descriptors: Difficulty Level, Test Items, Item Response Theory, Generalization
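A short sketch of the two ingredients named in the abstract, under assumed values: category probabilities derived from cumulative logits with a discrimination parameter, and a MIRID-style restriction in which a composite item's difficulty is a weighted sum of subtask difficulties plus an intercept. The weights and difficulties are hypothetical.

```python
import numpy as np

def cumulative_logit_probs(theta, a, thresholds):
    """P(X = j) from cumulative logits P(X >= j) = sigmoid(a*(theta - b_j))."""
    cum = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(thresholds))))
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower

# MIRID-style internal restriction (illustrative): a composite item's
# difficulty is a weighted sum of its subtask difficulties plus an
# intercept.
subtask_b = np.array([-0.5, 0.2, 0.8])
weights = np.array([0.4, 0.3, 0.3])     # hypothetical weights
composite_b = weights @ subtask_b + 0.1

print(cumulative_logit_probs(theta=0.0, a=1.2,
                             thresholds=[composite_b - 0.5,
                                         composite_b + 0.5]))
```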
Peer reviewed
Weitzman, R. A. – Educational and Psychological Measurement, 2009
Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Test Bias
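One plausible reading of the adjustment step, sketched with simulated data: corrected item-test correlations weight the 0-1 responses before a one-parameter fit, so discrimination enters through the data rather than through an extra item parameter. The article's exact formula may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
x = (rng.random((300, 15)) < 0.6).astype(float)  # fake 0/1 responses

total = x.sum(axis=1)

# Corrected item-test correlations (item removed from the total to
# avoid part-whole inflation).
r = np.array([np.corrcoef(x[:, i], total - x[:, i])[0, 1]
              for i in range(x.shape[1])])

# Weight each response by its item-test correlation prior to fitting
# a single-parameter logistic model.
x_adj = x * r
```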
Peer reviewed
Mazor, Kathleen M.; And Others – Educational and Psychological Measurement, 1994
A variation of the Mantel-Haenszel procedure is proposed that improves detection rates of nonuniform differential item functioning (DIF) without increasing the Type I error rate. The procedure, which is illustrated with simulated examinee responses, involves splitting the sample into low- and high-performing groups. (SLD)
Descriptors: Difficulty Level, Identification, Item Analysis, Item Bias
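A sketch of the proposed split, assuming simulated data and a median split on the matching score: the standard Mantel-Haenszel common odds ratio is computed separately within the low- and high-performing halves, so nonuniform DIF running in opposite directions at the two score levels cannot cancel out.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
group = rng.integers(0, 2, n)                # 0 = reference, 1 = focal
total = rng.integers(0, 21, n)               # matching score, 0-20
correct = (rng.random(n) < 0.6).astype(int)  # fake responses to one item

def mh_odds_ratio(correct, group, strata):
    """Mantel-Haenszel common odds ratio, pooled over score strata."""
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (correct[m] == 1))  # ref correct
        b = np.sum((group[m] == 0) & (correct[m] == 0))  # ref incorrect
        c = np.sum((group[m] == 1) & (correct[m] == 1))  # focal correct
        d = np.sum((group[m] == 1) & (correct[m] == 0))  # focal incorrect
        n_s = m.sum()
        num += a * d / n_s
        den += b * c / n_s
    return num / den

# The variant: the same statistic, evaluated separately in each half.
low = total < np.median(total)
print(mh_odds_ratio(correct[low], group[low], total[low]))
print(mh_odds_ratio(correct[~low], group[~low], total[~low]))
```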
Peer reviewed
Shoemaker, David M. – Educational and Psychological Measurement, 1972
Descriptors: Difficulty Level, Error of Measurement, Item Sampling, Simulation
Peer reviewed
MacDonald, Paul; Paunonen, Sampo V. – Educational and Psychological Measurement, 2002
Examined the behavior of item and person statistics from item response theory and classical test theory frameworks through Monte Carlo methods with simulated test data. Findings suggest that item difficulty and person ability estimates are highly comparable for both approaches. (SLD)
Descriptors: Ability, Comparative Analysis, Difficulty Level, Item Response Theory
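A compact Monte Carlo sketch in the same spirit (sample sizes and the generating model are assumptions, not the article's design): simulate Rasch data, then check how closely classical item p-values and number-correct scores track the generating difficulties and abilities.

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 1000, 30

theta = rng.normal(0, 1, n_persons)   # true abilities
b = rng.normal(0, 1, n_items)         # true Rasch difficulties

p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.random((n_persons, n_items)) < p).astype(int)

# Classical statistics: item p-values and person number-correct scores.
p_value = x.mean(axis=0)
number_correct = x.sum(axis=1)

# CTT difficulty tracks the generating IRT difficulty (inversely,
# since easier items have higher p-values); raw scores track theta.
print(np.corrcoef(p_value, b)[0, 1])             # strongly negative
print(np.corrcoef(number_correct, theta)[0, 1])  # strongly positive
```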