Publication Date
In 2025: 5
Since 2024: 13
Since 2021 (last 5 years): 22
Since 2016 (last 10 years): 40
Since 2006 (last 20 years): 141
Descriptor
Evaluation Methods: 169
Item Response Theory: 169
Models: 146
Simulation: 48
Psychometrics: 35
Comparative Analysis: 33
Computation: 29
Measurement Techniques: 29
Test Items: 28
Data Analysis: 25
Item Analysis: 24
Author
Bauer, Daniel J.: 4
Cai, Li: 4
Cohen, Allan S.: 4
Wilson, Mark: 4
Dorans, Neil J.: 3
Falk, Carl F.: 3
Gongjun Xu: 3
Maris, Gunter: 3
Sijtsma, Klaas: 3
Woods, Carol M.: 3
Andrich, David: 2
Audience
Researchers: 5
Practitioners: 1
Location
Germany: 3
California: 2
Colorado: 2
Afghanistan: 1
Botswana: 1
Brazil: 1
China: 1
Denmark: 1
Finland: 1
Florida: 1
France: 1
Laws, Policies, & Programs
Education Consolidation…: 1
Education for All Handicapped…: 1
Individuals with Disabilities…: 1
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly because they include a random factor variable (the latent variable). This random factor variable gives rise to the incidental parameters problem, since the number of parameters grows as data from new persons are included. IRT models therefore require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
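As background for the incidental-parameters point in this abstract, a minimal sketch in LaTeX, assuming a two-parameter logistic (2PL) model purely for illustration (the abstract does not name a specific model): treating each person's theta as a fixed parameter makes the parameter count grow with the sample size N, whereas marginal maximum likelihood integrates theta out against a population density g(theta).

    % 2PL response function (illustrative assumption)
    P(x_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}
    % Marginal likelihood: the incidental \theta_i are integrated out
    L(\mathbf{a}, \mathbf{b}) = \prod_{i=1}^{N} \int \Bigl[\prod_{j=1}^{J} P(x_{ij} \mid \theta)\Bigr]\, g(\theta)\, d\theta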
Sohee Kim; Ki Lynn Cole – International Journal of Testing, 2025
This study conducted a comprehensive comparison of item response theory (IRT) linking methods applied to a bifactor model, examining their performance on both multiple-choice (MC) and mixed-format tests within the common-item nonequivalent groups design framework. Four distinct multidimensional IRT linking approaches were explored, consisting of…
Descriptors: Item Response Theory, Comparative Analysis, Models, Item Analysis
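For readers unfamiliar with the bifactor structure this entry builds on, a minimal sketch in LaTeX, assuming a logistic compensatory form (the four linking approaches are truncated in the abstract and are not reconstructed here): every item j loads on a general dimension theta_0 and on exactly one specific dimension s(j).

    % Bifactor item response function (notation assumed here)
    P(x_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\{-(a_{j0}\theta_{i0} + a_{j s(j)}\theta_{i\,s(j)} + d_j)\}}

Linking methods then place item parameters from separate calibrations onto a common scale.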
Kim, Stella Y. – Educational Measurement: Issues and Practice, 2022
In this digital ITEMS module, Dr. Stella Kim provides an overview of multidimensional item response theory (MIRT) equating. Traditional unidimensional item response theory (IRT) equating methods impose the sometimes untenable restriction on data that only a single ability is assessed. This module discusses potential sources of multidimensionality…
Descriptors: Item Response Theory, Models, Equated Scores, Evaluation Methods
Jiaying Xiao; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Accurate item parameters and standard errors (SEs) are crucial for many multidimensional item response theory (MIRT) applications. A recent study proposed the Gaussian Variational Expectation Maximization (GVEM) algorithm to improve computational efficiency and estimation accuracy (Cho et al., 2021). However, the SE estimation procedure has yet to…
Descriptors: Error of Measurement, Models, Evaluation Methods, Item Analysis
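A brief sketch of the general idea behind variational methods such as GVEM, hedged because the paper's exact bound is not given in the abstract: the intractable marginal log-likelihood is replaced by an evidence lower bound (ELBO) over an approximating density q(theta), and the M-step maximizes that bound with respect to the item parameters lambda.

    \log p(\mathbf{x} \mid \lambda) \;\ge\; \mathbb{E}_{q(\boldsymbol{\theta})}\bigl[\log p(\mathbf{x}, \boldsymbol{\theta} \mid \lambda)\bigr] - \mathbb{E}_{q(\boldsymbol{\theta})}\bigl[\log q(\boldsymbol{\theta})\bigr]

Taking q(theta) to be Gaussian is what gives the Gaussian variational EM algorithm its name.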
Causes of Nonlinear Metrics in Item Response Theory Models and Implications for Educational Research
Xiangyi Liao – ProQuest LLC, 2024
Educational research outcomes frequently rely on an assumption that measurement metrics have interval-level properties. While most investigators know enough to be suspicious of interval-level claims, and in some cases even question their findings given such doubts, there is a lack of understanding regarding the measurement conditions that create…
Descriptors: Item Response Theory, Educational Research, Measurement, Evaluation Methods
Tong Wu; Stella Y. Kim; Carl Westine; Michelle Boyer – Journal of Educational Measurement, 2025
While significant attention has been given to test equating to ensure score comparability, limited research has explored equating methods for rater-mediated assessments, where human raters inherently introduce error. If not properly addressed, these errors can undermine score interchangeability and test validity. This study proposes an equating…
Descriptors: Item Response Theory, Evaluators, Error of Measurement, Test Validity
Madeline A. Schellman; Matthew J. Madison – Grantee Submission, 2024
Diagnostic classification models (DCMs) have grown in popularity as stakeholders increasingly desire actionable information related to students' skill competencies. Longitudinal DCMs offer a psychometric framework for providing estimates of students' proficiency status transitions over time. For both cross-sectional and longitudinal DCMs, it is…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
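The proficiency-status transitions mentioned here can be pictured, for a single binary attribute, as a first-order Markov transition matrix (notation assumed for illustration, not taken from the paper):

    \mathbf{T} = \begin{pmatrix} P(\alpha_{t+1}=0 \mid \alpha_t=0) & P(\alpha_{t+1}=1 \mid \alpha_t=0) \\ P(\alpha_{t+1}=0 \mid \alpha_t=1) & P(\alpha_{t+1}=1 \mid \alpha_t=1) \end{pmatrix}

The off-diagonal entries are the learning and forgetting probabilities that a longitudinal DCM estimates.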
Joakim Wallmark; James O. Ramsay; Juan Li; Marie Wiberg – Journal of Educational and Behavioral Statistics, 2024
Item response theory (IRT) models the relationship between the possible scores on a test item and a test taker's level on the latent trait that the item is intended to measure. In this study, we compare two models for tests with polytomously scored items: the optimal scoring (OS) model, a nonparametric IRT model based on the principles of…
Descriptors: Item Response Theory, Test Items, Models, Scoring
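The abstract's second model is cut off above, so nothing is assumed about it here. Purely as an example of the parametric polytomous IRT models that a nonparametric OS approach would be contrasted with, the graded response model specifies cumulative category probabilities:

    P(x_{ij} \ge k \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_{jk})\}}, \qquad
    P(x_{ij} = k \mid \theta_i) = P(x_{ij} \ge k \mid \theta_i) - P(x_{ij} \ge k+1 \mid \theta_i)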
Erik Forsberg; Anders Sjöberg – Measurement: Interdisciplinary Research and Perspectives, 2025
This paper reports a validation study based on descriptive multidimensional item response theory (DMIRT), implemented in the R package "D3mirt", using the ERS-C, an extended version of the Relevance subscale from the Moral Foundations Questionnaire that includes two new items for collectivism (17 items in total). Two latent models are…
Descriptors: Evaluation Methods, Programming Languages, Altruism, Collectivism
Markus T. Jansen; Ralf Schulze – Educational and Psychological Measurement, 2024
Thurstonian forced-choice modeling is considered to be a powerful new tool to estimate item and person parameters while simultaneously testing the model fit. This assessment approach is associated with the aim of reducing faking and other response tendencies that plague traditional self-report trait assessments. As a result of major recent…
Descriptors: Factor Analysis, Models, Item Analysis, Evaluation Methods
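As background for the Thurstonian modeling referenced above, a minimal sketch under standard Thurstone-style assumptions (the paper's exact parameterization may differ): each statement i in a forced-choice block has a latent utility t_i = \mu_i + \epsilon_i, and the probability of preferring statement i over statement j depends on the utility difference:

    P(i \succ j) = \Phi\!\left(\frac{\mu_i - \mu_j}{\sqrt{\sigma_i^2 + \sigma_j^2}}\right)

Modeling choices between statements rather than raw ratings is what the abstract credits with reducing faking and related response tendencies.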
Guo, Wenjing; Choi, Youn-Jeng – Educational and Psychological Measurement, 2023
Determining the number of dimensions is extremely important in applying item response theory (IRT) models to data. Traditional and revised parallel analyses have been proposed within the factor analysis framework, and both have shown some promise in assessing dimensionality. However, their performance in the IRT framework has not been…
Descriptors: Item Response Theory, Evaluation Methods, Factor Analysis, Guidelines
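A minimal Python sketch of classical (Horn) parallel analysis, to make the procedure under study concrete. The function name and defaults are illustrative; the revised variants the paper examines differ mainly in how the reference (random) data are generated, and for binary item responses the correlation step would typically use tetrachoric rather than Pearson correlations.

    import numpy as np

    def parallel_analysis(data, n_sims=100, quantile=0.95, seed=0):
        """Suggest a dimensionality: count observed eigenvalues that
        exceed the chosen quantile of eigenvalues from random data."""
        rng = np.random.default_rng(seed)
        n, p = data.shape
        # Eigenvalues of the observed correlation matrix, descending.
        obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        ref = np.empty((n_sims, p))
        for s in range(n_sims):
            noise = rng.standard_normal((n, p))
            ref[s] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
        threshold = np.quantile(ref, quantile, axis=0)
        return int(np.sum(obs_eig > threshold))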
Chenchen Ma; Jing Ouyang; Gongjun Xu – Grantee Submission, 2023
Cognitive Diagnosis Models (CDMs) are a special family of discrete latent variable models that are widely used in educational and psychological measurement. A key component of CDMs is the Q-matrix characterizing the dependence structure between the items and the latent attributes. Additionally, researchers also assume in many applications certain…
Descriptors: Psychological Evaluation, Clinical Diagnosis, Item Analysis, Algorithms
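To make the Q-matrix idea concrete, a minimal Python sketch using the DINA model, one common member of the CDM family; the Q-matrix, attribute profile, and slip/guess values below are hypothetical.

    import numpy as np

    # Hypothetical Q-matrix: 4 items x 3 attributes; Q[j, k] = 1 means
    # item j requires attribute k.
    Q = np.array([[1, 0, 0],
                  [0, 1, 0],
                  [1, 1, 0],
                  [0, 1, 1]])

    # One examinee's attribute-mastery profile.
    alpha = np.array([1, 1, 0])

    # DINA: the ideal response eta_j is 1 iff every required attribute
    # is mastered; slip (s) and guess (g) perturb the ideal response.
    eta = np.all(alpha >= Q, axis=1).astype(float)
    s, g = 0.1, 0.2
    p_correct = (1 - s) ** eta * g ** (1 - eta)
    print(p_correct)  # [0.9 0.9 0.9 0.2]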
Carmen Köhler; Lale Khorramdel; Artur Pokropek; Johannes Hartig – Journal of Educational Measurement, 2024
For assessment scales applied to different groups (e.g., students from different states; patients in different countries), multigroup differential item functioning (MG-DIF) needs to be evaluated in order to ensure that respondents with the same trait level but from different groups have equal response probabilities on a particular item. The…
Descriptors: Measures (Individuals), Test Bias, Models, Item Response Theory
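Formally, the condition described above, equal response probabilities for respondents with the same trait level regardless of group, is a conditional-independence statement (notation assumed here):

    P(x_{ij} = 1 \mid \theta_i, G_i = g) = P(x_{ij} = 1 \mid \theta_i) \quad \text{for all groups } g

An item for which this fails for some pair of groups exhibits MG-DIF.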
Manapat, Patrick D.; Edwards, Michael C. – Educational and Psychological Measurement, 2022
When fitting unidimensional item response theory (IRT) models, the population distribution of the latent trait (θ) is often assumed to be normally distributed. However, some psychological theories would suggest a nonnormal θ. For example, some clinical traits (e.g., alcoholism, depression) are believed to follow a positively skewed…
Descriptors: Robustness (Statistics), Computational Linguistics, Item Response Theory, Psychological Patterns
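A minimal Python sketch of the kind of setup the abstract describes: draw a positively skewed latent trait and generate 2PL responses from it. The skewing device (a standardized chi-square draw) and all parameter values are illustrative assumptions, not the paper's simulation design.

    import numpy as np

    rng = np.random.default_rng(42)
    n_persons, n_items = 1000, 20

    # Positively skewed theta: standardize a chi-square draw to mean 0, SD 1.
    df = 4
    theta = (rng.chisquare(df, n_persons) - df) / np.sqrt(2 * df)

    # Illustrative 2PL item parameters.
    a = rng.uniform(0.8, 2.0, n_items)   # discriminations
    b = rng.normal(0.0, 1.0, n_items)    # difficulties

    # 2PL probabilities and simulated binary responses.
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
    x = rng.binomial(1, p)

Fitting such data while (incorrectly) assuming a normal θ is what a robustness study of this kind evaluates.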
Yuanfang Liu; Mark H. C. Lai; Ben Kelcey – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance holds when a latent construct is measured in the same way across different levels of background variables (continuous or categorical) while controlling for the true value of that construct. Using Monte Carlo simulation, this paper compares the multiple indicators, multiple causes (MIMIC) model and MIMIC-interaction to a…
Descriptors: Classification, Accuracy, Error of Measurement, Correlation
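As background, a minimal sketch of the MIMIC setup named above, with notation assumed here rather than taken from the paper: the covariate z enters both the structural equation for the latent construct η and, when testing invariance, the indicators directly; MIMIC-interaction adds a z × η product term for loading (nonuniform) noninvariance:

    x_j = \lambda_j \eta + \beta_j z + \omega_j (z \cdot \eta) + \varepsilon_j, \qquad \eta = \gamma z + \zeta

A nonzero \beta_j flags uniform noninvariance for indicator j, a nonzero \omega_j flags nonuniform noninvariance, and setting \omega_j = 0 recovers the plain MIMIC model.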