Publication Date
In 2025: 1
Since 2024: 3
Since 2021 (last 5 years): 10
Since 2016 (last 10 years): 32
Since 2006 (last 20 years): 121
Descriptor
Item Response Theory: 179
Models: 146
Test Items: 39
Psychometrics: 38
Computation: 35
Evaluation Methods: 23
Simulation: 23
Mathematical Models: 21
Measurement Techniques: 21
Bayesian Statistics: 19
Computer Software: 19
Author
Ferrando, Pere J.: 7
Mislevy, Robert J.: 6
De Boeck, Paul: 5
Curran, Patrick J.: 3
Maris, Gunter: 3
Meijer, Rob R.: 3
Rabe-Hesketh, Sophia: 3
Revuelta, Javier: 3
Rupp, Andre A.: 3
Zumbo, Bruno D.: 3
Almond, Russell G.: 2
Publication Type
Reports - Descriptive: 179
Journal Articles: 162
Speeches/Meeting Papers: 10
Guides - Non-Classroom: 2
Opinion Papers: 2
Creative Works: 1
Reports - Evaluative: 1
Reports - Research: 1
Education Level
Higher Education: 6
Elementary Education: 5
Elementary Secondary Education: 4
Middle Schools: 4
Postsecondary Education: 4
Intermediate Grades: 3
Junior High Schools: 3
Secondary Education: 3
Grade 4: 2
Grade 6: 2
Grade 7: 2
Audience
Researchers: 5
Practitioners: 2
Students: 1
Teachers: 1
Location
Netherlands: 3
Denmark: 2
Belgium: 1
Canada: 1
China: 1
Georgia: 1
Idaho: 1
Illinois (Chicago): 1
Iran: 1
Massachusetts: 1
Missouri: 1
Laws, Policies, & Programs
Race to the Top: 1
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Applied Measurement in Education, 2024
A process is proposed to create the one-dimensional expected item characteristic curve (ICC) and test characteristic curve (TCC) for each trait in multidimensional forced-choice questionnaires based on the Rank-2PL (two-parameter logistic) item response theory models for forced-choice items with two or three statements. Some examples of ICC and…
Descriptors: Item Response Theory, Questionnaires, Measurement Techniques, Statistics
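For orientation, the conventional unidimensional 2PL item characteristic curve and the test characteristic curve built from it take the form below; this is the standard textbook formulation, not the Rank-2PL derivation given in the article:

P_j(\theta) = \frac{1}{1 + \exp[-a_j(\theta - b_j)]}, \qquad T(\theta) = \sum_j P_j(\theta)

where a_j and b_j are the discrimination and difficulty of item j.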
Jean-Paul Fox – Journal of Educational and Behavioral Statistics, 2025
Popular item response theory (IRT) models are considered complex, mainly due to the inclusion of a random factor variable (latent variable). The random factor variable gives rise to the incidental parameter problem, since the number of parameters grows as data from new persons are included. Therefore, IRT models require a specific estimation method…
Descriptors: Sample Size, Item Response Theory, Accuracy, Bayesian Statistics
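A common way to sidestep the incidental parameter problem is marginal maximum likelihood, which integrates the person parameter out of the likelihood; the standard form is sketched below for context (the article's own estimation approach may differ):

L(\boldsymbol{\xi}) = \prod_{i=1}^{N} \int \Big[ \prod_{j=1}^{J} P(x_{ij} \mid \theta, \boldsymbol{\xi}) \Big] \phi(\theta)\, d\theta

where \xi collects the item parameters and \phi is the assumed population density of the latent ability \theta.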
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Journal of Educational Measurement, 2024
This paper presents the item and test information functions of the Rank two-parameter logistic models (Rank-2PLM) for items with two (pair) and three (triplet) statements in forced-choice questionnaires. The Rank-2PLM model for pairs is the MUPP-2PLM (Multi-Unidimensional Pairwise Preference) and, for triplets, is the Triplet-2PLM. Fisher's…
Descriptors: Questionnaires, Test Items, Item Response Theory, Models
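For reference, under the ordinary unidimensional 2PL the Fisher item information and the test information are the standard expressions below (not the Rank-2PLM results derived in the paper):

I_j(\theta) = a_j^2\, P_j(\theta)\,[1 - P_j(\theta)], \qquad I(\theta) = \sum_j I_j(\theta)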
Kim, Yunsung; Sreechan; Piech, Chris; Thille, Candace – International Educational Data Mining Society, 2023
Dynamic Item Response Models extend the standard Item Response Theory (IRT) to capture temporal dynamics in learner ability. While these models have the potential to allow instructional systems to actively monitor the evolution of learner proficiency in real time, existing dynamic item response models rely on expensive inference algorithms that…
Descriptors: Item Response Theory, Accuracy, Inferences, Algorithms
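A minimal way to picture such temporal dynamics is a 2PL response model whose ability follows a random walk across time points; this is an illustrative state-space sketch, not the specific model proposed in the paper:

P(x_{ijt} = 1 \mid \theta_{it}) = \frac{1}{1 + \exp[-a_j(\theta_{it} - b_j)]}, \qquad \theta_{i,t+1} = \theta_{it} + \varepsilon_{it}, \quad \varepsilon_{it} \sim N(0, \sigma^2)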
Huang, Sijia; Luo, Jinwen; Cai, Li – Educational and Psychological Measurement, 2023
Random item effects item response theory (IRT) models, which treat both person and item effects as random, have received much attention for more than a decade. The random item effects approach has several advantages in many practical settings. The present study introduced an explanatory multidimensional random item effects rating scale model. The…
Descriptors: Rating Scales, Item Response Theory, Models, Test Items
Kim, Stella Y. – Educational Measurement: Issues and Practice, 2022
In this digital ITEMS module, Dr. Stella Kim provides an overview of multidimensional item response theory (MIRT) equating. Traditional unidimensional item response theory (IRT) equating methods impose the sometimes untenable restriction on data that only a single ability is assessed. This module discusses potential sources of multidimensionality…
Descriptors: Item Response Theory, Models, Equated Scores, Evaluation Methods
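The multidimensional models discussed in the module replace the single ability with a vector; a commonly used compensatory MIRT item response function is shown below for context (standard form, not quoted from the module):

P(x_j = 1 \mid \boldsymbol{\theta}) = \frac{1}{1 + \exp[-(\mathbf{a}_j^{\top} \boldsymbol{\theta} + d_j)]}

where \mathbf{a}_j is the vector of discriminations and d_j the intercept of item j.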
Gyamfi, Abraham; Acquaye, Rosemary – Acta Educationis Generalis, 2023
Introduction: Item response theory (IRT) has received much attention in the validation of assessment instruments because it allows students' ability to be estimated from any set of items. Item response theory allows the difficulty and discrimination levels of each item on the test to be estimated. In the framework of IRT, item characteristics are…
Descriptors: Item Response Theory, Models, Test Items, Difficulty Level
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate the constructs to be measured from response tendencies, specific constraints have to be imposed on the varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
Zhan, Peida; Jiao, Hong; Liao, Dandan; Li, Feiming – Journal of Educational and Behavioral Statistics, 2019
Providing diagnostic feedback about growth is crucial to formative decisions such as targeted remedial instructions or interventions. This article proposed a longitudinal higher-order diagnostic classification modeling approach for measuring growth. The new modeling approach is able to provide quantitative values of overall and individual growth…
Descriptors: Classification, Growth Models, Educational Diagnosis, Models
Ranger, Jochen; Kuhn, Jörg-Tobias; Wolgast, Anett – Journal of Educational Measurement, 2021
Van der Linden's hierarchical model for responses and response times can be used in order to infer the ability and mental speed of test takers from their responses and response times in an educational test. A standard approach for this is maximum likelihood estimation. In real-world applications, the data of some test takers might be partly…
Descriptors: Models, Reaction Time, Item Response Theory, Tests
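In van der Linden's hierarchical framework the responses follow an IRT model while the log response times are commonly modeled as normal around an item's time intensity minus the person's speed; a widely cited form of the response-time level is (standard notation, not quoted from the article):

\log t_{ij} \sim N\big(\beta_j - \tau_i,\ \alpha_j^{-2}\big)

with \tau_i the speed of person i and \beta_j, \alpha_j the time intensity and time discrimination of item j.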
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing
Leventhal, Brian; Ames, Allison – Educational Measurement: Issues and Practice, 2020
In this digital ITEMS module, Dr. Brian Leventhal and Dr. Allison Ames provide an overview of "Monte Carlo simulation studies" (MCSS) in "item response theory" (IRT). MCSS are utilized for a variety of reasons, one of the most compelling being that they can be used when analytic solutions are impractical or nonexistent because…
Descriptors: Item Response Theory, Monte Carlo Methods, Simulation, Test Items
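The core of such a simulation study is generating response data from an IRT model with known parameters and then checking how well an estimation routine recovers them. Below is a minimal sketch of the data-generation step under a 2PL, assuming NumPy and made-up parameter distributions; it is illustrative only and not code from the ITEMS module:

    # Generate dichotomous responses under a 2PL with known parameters (illustrative sketch)
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n_persons, n_items = 1000, 20

    theta = rng.normal(0.0, 1.0, size=n_persons)          # latent abilities
    a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)   # discriminations (assumed distribution)
    b = rng.normal(0.0, 1.0, size=n_items)                 # difficulties (assumed distribution)

    # 2PL probability of a correct response for every person-item pair
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

    # Simulate responses; an estimation routine would then be applied to this matrix
    responses = rng.binomial(1, p)
    print(responses.mean(axis=0))  # empirical proportion correct per item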
Raykov, Tenko; Marcoulides, George A.; Huber, Chuck – Measurement: Interdisciplinary Research and Perspectives, 2020
It is demonstrated that the popular three-parameter logistic model can lead to markedly inaccurate individual ability level estimates for mixture populations. A theoretically and empirically important setting is initially considered where (a) in one of two subpopulations (latent classes) the two-parameter logistic model holds for each item in a…
Descriptors: Item Response Theory, Models, Measurement Techniques, Item Analysis
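The three-parameter logistic model referred to adds a lower asymptote (pseudo-guessing) parameter c_j to the 2PL; in standard form:

P_j(\theta) = c_j + \frac{1 - c_j}{1 + \exp[-a_j(\theta - b_j)]}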
Matta, Tyler H.; Rutkowski, Leslie; Rutkowski, David; Liaw, Yuan-Ling – Large-scale Assessments in Education, 2018
This article provides an overview of the R package lsasim, designed to facilitate the generation of data that mimic a large-scale assessment context. The package features functions for simulating achievement data according to a number of common IRT models with known parameters. A clear advantage of lsasim over other simulation software is that…
Descriptors: Measurement, Data, Simulation, Item Response Theory
Peterson, Christina Hamme; Gischlar, Karen L.; Peterson, N. Andrew – Journal for Specialists in Group Work, 2017
Measures that accurately capture the phenomenon are critical to research and practice in group work. The vast majority of group-related measures were developed using the reflective measurement model rooted in classical test theory (CTT). Depending on the construct definition and the measure's purpose, the reflective model may not always be the…
Descriptors: Item Response Theory, Group Activities, Test Theory, Test Items
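The classical test theory basis of the reflective model is the decomposition of an observed score into a true score and error, with reliability defined as the true-score share of observed variance; these standard CTT identities are stated here only for context:

X = T + E, \qquad \rho_{XX'} = \sigma_T^2 / \sigma_X^2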