Christine E. DeMars; Paulius Satkus – Educational and Psychological Measurement, 2024
Marginal maximum likelihood, a common estimation method for item response theory models, is not inherently a Bayesian procedure. However, due to estimation difficulties, Bayesian priors are often applied to the likelihood when estimating 3PL models, especially with small samples. Little focus has been placed on choosing the priors for marginal…
Descriptors: Item Response Theory, Statistical Distributions, Error of Measurement, Bayesian Statistics
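The prior-stabilized 3PL estimation described above can be sketched minimally: a Beta prior on the guessing parameter is added to the log-likelihood to keep small-sample estimates away from the boundary. The Beta(5, 17) default and all parameter values below are illustrative assumptions, not the authors' choices:

```python
import math

def p_3pl(theta, a, b, c):
    """3PL item response function: P(correct | theta) for discrimination a,
    difficulty b, and lower asymptote (guessing) c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def log_posterior_c(c, responses, thetas, a, b, alpha=5.0, beta=17.0):
    """Log-likelihood of the guessing parameter c for one item, plus the
    log of a Beta(alpha, beta) prior (up to a constant) -- the kind of
    prior often used to stabilize 3PL estimation with small samples."""
    ll = 0.0
    for theta, x in zip(thetas, responses):
        p = p_3pl(theta, a, b, c)
        ll += math.log(p) if x == 1 else math.log(1.0 - p)
    ll += (alpha - 1.0) * math.log(c) + (beta - 1.0) * math.log(1.0 - c)
    return ll
```

Maximizing `log_posterior_c` instead of the bare likelihood pulls c toward the prior mode, which is the stabilizing effect the abstract refers to.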
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
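The decomposition idea can be sketched with one common IRTree for a 5-point rating scale (categories 0-4): each response is split into midpoint, direction, and extremity sub-decisions. This particular tree is a standard example from the IRTree literature, not necessarily the one used in this article:

```python
def irtree_pseudo_items(response):
    """Map a 5-point rating (0-4) to three binary pseudo-items of a common
    IRTree decomposition. None marks sub-decisions that are not reached
    on a given branch (e.g., no direction decision after the midpoint)."""
    if response == 2:
        return {"mid": 1, "dir": None, "ext": None}   # midpoint chosen
    direction = 1 if response > 2 else 0              # agree vs. disagree
    extreme = 1 if response in (0, 4) else 0          # extreme category?
    return {"mid": 0, "dir": direction, "ext": extreme}
```

Each pseudo-item can then be modeled with its own latent variable (trait or response style), which is how IRTree models separate content from style.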
Sinharay, Sandip – Journal of Educational Measurement, 2018
The value-added method of Haberman is arguably one of the most popular methods to evaluate the quality of subscores. The method is based on the classical test theory and deems a subscore to be of added value if the subscore predicts the corresponding true subscore better than does the total score. Sinharay provided an interpretation of the added…
Descriptors: Scores, Value Added Models, Raw Scores, Item Response Theory
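Haberman's criterion compares proportional reductions in mean squared error (PRMSE). A minimal sketch under standard classical-test-theory assumptions (uncorrelated errors), using identities common in the subscore literature rather than this article's exact formulation:

```python
def prmse_subscore(rel_s):
    """PRMSE for predicting the true subscore from the observed subscore:
    under classical test theory this equals the subscore reliability."""
    return rel_s

def prmse_total(rel_x, corr_true):
    """PRMSE for predicting the true subscore from the total score:
    total-score reliability times the squared true-score correlation
    between total and subscore (assuming uncorrelated errors)."""
    return rel_x * corr_true ** 2

def subscore_has_added_value(rel_s, rel_x, corr_true):
    """Haberman's criterion: the subscore has added value when it predicts
    its own true score better than the total score does."""
    return prmse_subscore(rel_s) > prmse_total(rel_x, corr_true)
```

This makes the abstract's point concrete: a subscore with low reliability that correlates highly with the total tends to fail the criterion.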
Aydin, Burak; Leite, Walter L.; Algina, James – Educational and Psychological Measurement, 2016
We investigated methods of including covariates in two-level models for cluster randomized trials to increase power to detect the treatment effect. We compared multilevel models that included either an observed cluster mean or a latent cluster mean as a covariate, as well as the effect of including Level 1 deviation scores in the model. A Monte…
Descriptors: Error of Measurement, Predictor Variables, Randomized Controlled Trials, Experimental Groups
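The power gain from covariate adjustment in a cluster randomized trial can be sketched with the standard variance approximation for the treatment effect in a balanced two-arm design (a textbook formula, e.g. Raudenbush, 1997 — not this article's simulation design). Here `r2_level2` and `r2_level1` are the proportions of between- and within-cluster variance explained by the covariate:

```python
def var_treatment_effect(tau2, sigma2, n, J, r2_level2=0.0, r2_level1=0.0):
    """Approximate sampling variance of the treatment effect in a two-arm
    cluster randomized trial with J clusters of size n: covariates shrink
    the between-cluster variance tau2 by r2_level2 and the within-cluster
    variance sigma2 by r2_level1, which is the source of the power gain."""
    return 4.0 * (tau2 * (1.0 - r2_level2)
                  + sigma2 * (1.0 - r2_level1) / n) / J
```

Because the between-cluster term is divided only by J (not n), a covariate that explains Level-2 variance — such as a cluster mean — typically buys the most power, which is the comparison the abstract describes.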
McDonald, Roderick P. – Psychometrika, 2011
A distinction is proposed between measures and predictors of latent variables. The discussion addresses the consequences of the distinction for the true-score model, the linear factor model, Structural Equation Models, longitudinal and multilevel models, and item-response models. A distribution-free treatment of calibration and…
Descriptors: Measurement, Structural Equation Models, Item Response Theory, Error of Measurement
Fox, Jean-Paul; Glas, Cees A. W. – 2000
This paper focuses on handling measurement error in predictor variables using item response theory (IRT). Measurement error is of great importance in the assessment of theoretical constructs, such as intelligence or school climate. Measurement error is modeled by treating the predictors as unobserved latent variables and using the normal ogive…
Descriptors: Bayesian Statistics, Error of Measurement, Item Response Theory, Predictor Variables
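The consequence of ignoring measurement error in a predictor can be illustrated with a small errors-in-variables simulation (plain ordinary-least-squares regression, not the paper's IRT approach; all values are illustrative): regressing on the error-laden predictor attenuates the slope by the predictor's reliability.

```python
import random

random.seed(3)

# Simulate a predictor measured with error (reliability = 1/(1+1) = 0.5)
# and an outcome generated from the TRUE predictor with slope 2.0.
n = 20000
true_x = [random.gauss(0.0, 1.0) for _ in range(n)]
obs_x = [t + random.gauss(0.0, 1.0) for t in true_x]      # noisy measure
y = [2.0 * t + random.gauss(0.0, 0.5) for t in true_x]

def ols_slope(x, y):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

slope_true = ols_slope(true_x, y)   # close to the generating slope 2.0
slope_obs = ols_slope(obs_x, y)     # attenuated toward 2.0 * 0.5 = 1.0
```

Treating the predictor as a latent variable, as Fox and Glas do with IRT, is one way to recover the unattenuated relationship.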
Fox, Jean-Paul; Glas, Cees A. W. – 1998
A two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that this offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and…
Descriptors: Ability, Bayesian Statistics, Difficulty Level, Error of Measurement
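The latent two-level structure on abilities can be sketched generatively (illustrative parameter values, not the paper's; the IRT response step is noted but omitted for brevity):

```python
import random

random.seed(7)

def simulate_abilities(n_groups=50, n_per_group=20,
                       gamma00=0.0, tau=0.5, sigma=1.0):
    """Generate abilities from the latent two-level regression
    theta_ij = beta_j + r_ij, with beta_j = gamma00 + u_j.
    In the full model, item responses would then be generated from
    theta_ij via an IRT model rather than observed directly."""
    thetas = []
    for _ in range(n_groups):
        beta_j = gamma00 + random.gauss(0.0, tau)          # group mean
        for _ in range(n_per_group):
            thetas.append(beta_j + random.gauss(0.0, sigma))  # person
    return thetas

thetas = simulate_abilities()
grand_mean = sum(thetas) / len(thetas)
```

Using the latent theta_ij (rather than observed test scores) as the Level-1 outcome is what lets the model separate item difficulty from ability, as the abstract notes.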