Is the Factor Observed in Investigations on the Item-Position Effect Actually the Difficulty Factor?
Schweizer, Karl; Troche, Stefan – Educational and Psychological Measurement, 2018
In confirmatory factor analysis, quite similar measurement models serve to detect the difficulty factor and the factor due to the item-position effect. The item-position effect refers to the increasing dependency among responses to successively presented test items, whereas the difficulty factor is ascribed to the wide range of…
Descriptors: Investigations, Difficulty Level, Factor Analysis, Models
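
A hedged sketch of the contrast the abstract draws, using one common fixed-links parameterization of the item-position factor (the loading function and notation here are illustrative assumptions, not taken from the article):

\[
x_i = \lambda_i\,\xi_{\text{ability}} + \delta_i\,\xi_{\text{position}} + \varepsilon_i,
\qquad \delta_i = \frac{i-1}{n-1},
\]

so the position factor's loadings are fixed to grow with presentation position i, whereas a difficulty-factor model instead lets the second factor load on items according to their difficulty rather than their position, which is why the two models can be hard to tell apart.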
Matlock, Ki Lynn; Turner, Ronna – Educational and Psychological Measurement, 2016
When multiple test forms are constructed, the number of items and the total test difficulty are often made equivalent. However, not all test developers match the number of items and/or the average item difficulty within subcontent areas. In this simulation study, six test forms were constructed with an equal number of items and equal average item difficulty overall…
Descriptors: Item Response Theory, Computation, Test Items, Difficulty Level
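
A minimal Python sketch of the kind of design the abstract describes: two forms with the same number of items and the same average Rasch difficulty overall, but with difficulty matched (Form A) or mismatched (Form B) across two hypothetical subcontent areas. All values and names are illustrative assumptions, not taken from the study.

# Simulate Rasch responses to two forms that are matched overall but differ within areas.
import numpy as np

rng = np.random.default_rng(0)

def simulate_scores(difficulties, theta):
    """Simulate dichotomous Rasch responses and return raw number-correct scores."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - difficulties[None, :])))
    return (rng.random(p.shape) < p).sum(axis=1)

theta = rng.normal(0.0, 1.0, size=5000)            # examinee abilities
form_a = np.concatenate([np.linspace(-1, 1, 10),   # area 1: centered at 0
                         np.linspace(-1, 1, 10)])  # area 2: centered at 0
form_b = np.concatenate([np.linspace(-2, 0, 10),   # area 1: easier
                         np.linspace(0, 2, 10)])   # area 2: harder
print(form_a.mean(), form_b.mean())                # identical overall average difficulty
print(simulate_scores(form_a, theta).mean(),
      simulate_scores(form_b, theta).mean())       # compare resulting raw scores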
Dardick, William R.; Mislevy, Robert J. – Educational and Psychological Measurement, 2016
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Descriptors: Bayesian Statistics, Probability, Data Analysis, Item Response Theory
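
A hedged sketch of the posterior class probability such a mixture implies (notation mine, not the authors'): with mixing weight $\pi$ for the unscalable latent class and a Rasch likelihood for the scalable class,

\[
P(\text{unscalable}\mid \mathbf{x}) =
\frac{\pi\, P(\mathbf{x}\mid \text{unscalable})}
{\pi\, P(\mathbf{x}\mid \text{unscalable}) + (1-\pi)\displaystyle\int \prod_i P(x_i\mid\theta,b_i)\, f(\theta)\, d\theta},
\qquad
P(x_i=1\mid\theta,b_i)=\frac{e^{\theta-b_i}}{1+e^{\theta-b_i}},
\]

so each response pattern's posterior probability can be read as a residual-like index of how poorly the Rasch part of the model fits that examinee.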
Wang, Wen-Chung; Jin, Kuan-Yu – Educational and Psychological Measurement, 2010
In this study, the authors extend the standard item response model with internal restrictions on item difficulty (MIRID) to fit polytomous items using cumulative logits and adjacent-category logits. Moreover, the new model incorporates discrimination parameters and is rooted in a multilevel framework. It is a nonlinear mixed model so that existing…
Descriptors: Difficulty Level, Test Items, Item Response Theory, Generalization
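
A hedged sketch of the building blocks named in the abstract (notation is mine): the MIRID restriction writes a composite item's difficulty as a weighted combination of its subcomponent difficulties,

\[
\beta_{i} = \sum_{k} \delta_{k}\,\beta_{ik} + \tau,
\]

and an adjacent-category logit formulation with a discrimination parameter models step j of polytomous item i for person p as

\[
\log\frac{P(X_{pi}=j)}{P(X_{pi}=j-1)} = \alpha_i\,(\theta_p - \beta_{ij}).
\]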
Weitzman, R. A. – Educational and Psychological Measurement, 2009
Building on the Kelley and Gulliksen versions of classical test theory, this article shows that a logistic model having only a single item parameter can account for varying item discrimination, as well as difficulty, by using item-test correlations to adjust incorrect-correct (0-1) item responses prior to an initial model fit. The fit occurs…
Descriptors: Item Response Theory, Test Items, Difficulty Level, Test Bias
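
A hedged sketch of the single-item-parameter logistic model the abstract references (notation mine; the correlation-based adjustment itself is described only verbally in the snippet and is not reconstructed here):

\[
P(x_{pi}=1\mid\theta_p, b_i) = \frac{1}{1 + e^{-(\theta_p - b_i)}},
\]

with the 0-1 responses rescaled by their item-test correlations before the initial fit, so that differences in item discrimination are absorbed into the adjusted data rather than into an additional item parameter.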

Mazor, Kathleen M.; And Others – Educational and Psychological Measurement, 1994
A variation of the Mantel-Haenszel procedure is proposed that improves detection rates of nonuniform differential item functioning (DIF) without increasing the Type I error rate. The procedure, which is illustrated with simulated examinee responses, involves splitting the sample into low- and high-performing groups. (SLD)
Descriptors: Difficulty Level, Identification, Item Analysis, Item Bias
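
A minimal Python sketch of the idea in the abstract: compute the Mantel-Haenszel common odds ratio separately for low- and high-scoring examinees, so that nonuniform DIF (an effect that reverses across the ability range) is not averaged away. The simulated data and the median-split rule are illustrative assumptions, not the authors' exact procedure.

import numpy as np

def mh_odds_ratio(item, group, strata):
    """Mantel-Haenszel common odds ratio for one item.
    item: 0/1 responses; group: 0 = reference, 1 = focal; strata: matching scores."""
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((item == 1) & (group == 0) & m)  # reference correct
        b = np.sum((item == 0) & (group == 0) & m)  # reference incorrect
        c = np.sum((item == 1) & (group == 1) & m)  # focal correct
        d = np.sum((item == 0) & (group == 1) & m)  # focal incorrect
        n = a + b + c + d
        if n > 0:
            num += a * d / n
            den += b * c / n
    return num / den if den > 0 else np.nan

rng = np.random.default_rng(1)
n = 4000
group = rng.integers(0, 2, n)                       # 0 = reference, 1 = focal
score = rng.integers(0, 21, n)                      # matching variable (total score)
p = 0.3 + 0.02 * score + 0.10 * group * np.sign(score - 10)   # nonuniform DIF
item = (rng.random(n) < np.clip(p, 0, 1)).astype(int)

low, high = score < np.median(score), score >= np.median(score)
print(mh_odds_ratio(item[low], group[low], score[low]),
      mh_odds_ratio(item[high], group[high], score[high]))      # opposite directions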

Shoemaker, David M. – Educational and Psychological Measurement, 1972
Descriptors: Difficulty Level, Error of Measurement, Item Sampling, Simulation

MacDonald, Paul; Paunonen, Sampo V. – Educational and Psychological Measurement, 2002
Examined the behavior of item and person statistics from item response theory and classical test theory frameworks through Monte Carlo methods with simulated test data. Findings suggest that item difficulty and person ability estimates are highly comparable for both approaches. (SLD)
Descriptors: Ability, Comparative Analysis, Difficulty Level, Item Response Theory
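
A minimal Monte Carlo sketch in the spirit of the abstract: generate Rasch data, then correlate classical item difficulty (proportion correct) and classical person score (number correct) with the generating IRT item and person parameters. This illustrates the kind of comparison described, not the authors' design or estimation procedure.

import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 2000, 40
theta = rng.normal(0, 1, n_persons)                 # true abilities
b = rng.uniform(-2, 2, n_items)                     # true Rasch difficulties
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.random(p.shape) < p).astype(int)           # 0/1 response matrix

ctt_difficulty = x.mean(axis=0)                     # classical p-values
ctt_ability = x.sum(axis=1)                         # classical number-correct scores
print(np.corrcoef(ctt_difficulty, b)[0, 1])         # strongly negative (easy item = high p)
print(np.corrcoef(ctt_ability, theta)[0, 1])        # strongly positive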