Publication Date
In 2025: 3
Since 2024: 6
Since 2021 (last 5 years): 24
Since 2016 (last 10 years): 65
Since 2006 (last 20 years): 134
Descriptor
Item Response Theory: 152
Computer Software: 147
Models: 54
Test Items: 43
Computation: 34
Foreign Countries: 29
Simulation: 28
Statistical Analysis: 28
Comparative Analysis: 27
Computer Assisted Testing: 26
Item Analysis: 26
Author
Wang, Wen-Chung: 8
DeMars, Christine E.: 5
Ames, Allison J.: 4
Ferrando, Pere J.: 4
Hambleton, Ronald K.: 4
Jin, Kuan-Yu: 4
Engelhard, George, Jr.: 3
Han, Kyung T.: 3
Luo, Yong: 3
Paek, Insu: 3
Raykov, Tenko: 3
Publication Type
Journal Articles: 152
Reports - Research: 78
Reports - Descriptive: 44
Reports - Evaluative: 25
Book/Product Reviews: 5
Guides - Non-Classroom: 1
Information Analyses: 1
Reference Materials -…: 1
Reports - General: 1
Cheng, Yiling – Measurement: Interdisciplinary Research and Perspectives, 2023
Computerized adaptive testing (CAT) offers an efficient and highly accurate method for estimating examinees' abilities. This article reviews the free version of the Concerto software for CAT, dividing the evaluation into three sections: software implementation, the Item Response Theory (IRT) features of CAT, and user experience. Overall,…
Descriptors: Computer Software, Computer Assisted Testing, Adaptive Testing, Item Response Theory
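The item-selection step at the heart of CAT engines such as the one reviewed above can be sketched as maximum-information selection under a 2PL model. This is a minimal illustrative sketch, not Concerto's actual API; the item bank and parameter values are made up.

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, item_bank, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *item_bank[i]))

# illustrative item bank of (discrimination a, difficulty b) pairs
bank = [(1.0, -1.0), (1.5, 0.0), (0.8, 1.2)]
next_item = select_next_item(0.1, bank, administered={0})
```

The adaptive loop then alternates this selection with an updated ability estimate after each response.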
Kuan-Yu Jin; Wai-Lok Siu – Journal of Educational Measurement, 2025
Educational tests often contain a cluster of items linked by a common stimulus (a "testlet"). In such a design, the dependencies induced among the items are called "testlet effects." In particular, the directional testlet effect (DTE) refers to a recursive influence whereby responses to earlier items can positively or negatively affect…
Descriptors: Models, Test Items, Educational Assessment, Scores
Paganin, Sally; Paciorek, Christopher J.; Wehrhahn, Claudia; Rodríguez, Abel; Rabe-Hesketh, Sophia; de Valpine, Perry – Journal of Educational and Behavioral Statistics, 2023
Item response theory (IRT) models typically rely on a normality assumption for subject-specific latent traits, which is often unrealistic in practice. Semiparametric extensions based on Dirichlet process mixtures (DPMs) offer a more flexible representation of the unknown distribution of the latent trait. However, the use of such models in the IRT…
Descriptors: Bayesian Statistics, Item Response Theory, Guidance, Evaluation Methods
Cole, Ki; Paek, Insu – Measurement: Interdisciplinary Research and Perspectives, 2022
SAS (Statistical Analysis System) is a widely used tool for data management and analysis across a variety of fields. Its item response theory procedure (PROC IRT) performs unidimensional and multidimensional item response theory (IRT) analysis for dichotomous and polytomous data. This review provides a summary of the features of PROC…
Descriptors: Item Response Theory, Computer Software, Item Analysis, Statistical Analysis
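For readers without SAS at hand, the dichotomous 2PL likelihood that a procedure like PROC IRT maximizes can be written out in a few lines of Python. The grid-search ability estimator below is a didactic stand-in, not SAS's actual optimizer.

```python
import math

def loglik(theta, responses, items):
    """Log-likelihood of a 0/1 response pattern under the 2PL model."""
    ll = 0.0
    for u, (a, b) in zip(responses, items):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        ll += u * math.log(p) + (1 - u) * math.log(1.0 - p)
    return ll

def theta_mle(responses, items):
    """Crude maximum-likelihood ability estimate by grid search."""
    grid = [g / 100.0 for g in range(-400, 401)]
    return max(grid, key=lambda t: loglik(t, responses, items))

# illustrative item parameters (a, b) and a response pattern
items = [(1.2, -0.5), (0.9, 0.3), (1.5, 1.0)]
est = theta_mle([1, 1, 0], items)
```

Real IRT software additionally estimates the item parameters themselves, typically by marginal maximum likelihood.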
Metsämuuronen, Jari – Practical Assessment, Research & Evaluation, 2022
This article discusses visual techniques for detecting test items that are optimal to select for the final compilation on the one hand and, on the other, for screening out items that would lower the compilation's quality. Some classic visual tools are discussed, first, in a practical manner in diagnosing the logical,…
Descriptors: Test Items, Item Analysis, Item Response Theory, Cutting Scores
Raykov, Tenko – Measurement: Interdisciplinary Research and Perspectives, 2023
This software review discusses the capabilities of Stata to conduct item response theory modeling. The commands needed for fitting the popular one-, two-, and three-parameter logistic models are initially discussed. The procedure for testing the discrimination parameter equality in the one-parameter model is then outlined. The commands for fitting…
Descriptors: Item Response Theory, Models, Comparative Analysis, Item Analysis
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
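The decomposition an IRTree model performs can be made concrete with one common pseudo-item coding for a 5-point Likert response: a midpoint node, a direction node, and an extremity node. This is an illustrative coding, not necessarily the specific tree used in the article.

```python
def irtree_code(response):
    """Map a 5-point Likert response (1..5) to three pseudo-items:
    midpoint (1 = chose the middle category), direction (1 = agree side),
    extremity (1 = endpoint). None means the node was not reached."""
    if response == 3:
        return (1, None, None)
    direction = 1 if response > 3 else 0
    extreme = 1 if response in (1, 5) else 0
    return (0, direction, extreme)
```

Each pseudo-item is then modeled with its own IRT submodel, which is how the substantive trait and response-style traits are separated.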
Ince Araci, F. Gul; Tan, Seref – International Journal of Assessment Tools in Education, 2022
Computerized Adaptive Testing (CAT) is a useful test technique that reduces the number of items that must be administered by selecting items matched to each individual's ability level. After CAT applications were first constructed based on unidimensional Item Response Theory (IRT), Multidimensional CAT (MCAT) applications have…
Descriptors: Adaptive Testing, Computer Assisted Testing, Simulation, Item Response Theory
An Analysis of Differential Bundle Functioning in Multidimensional Tests Using the SIBTEST Procedure
Özdogan, Didem; Kelecioglu, Hülya – International Journal of Assessment Tools in Education, 2022
This study aims to analyze differential bundle functioning in multidimensional tests, with the specific purpose of detecting this effect by varying the location of the DIF item in the test, the correlation between the dimensions, the sample size, and the ratio of reference to focal group size. The first 10 items of the test that is…
Descriptors: Correlation, Sample Size, Test Items, Item Analysis
Grimm, Kevin J.; Fine, Kimberly; Stegmann, Gabriela – International Journal of Behavioral Development, 2021
Modeling within-person change over time and between-person differences in change over time is a primary goal in prevention science. When modeling change in an observed score over time with multilevel or structural equation modeling approaches, each observed score counts toward the estimation of model parameters equally. However, observed scores…
Descriptors: Error of Measurement, Weighted Scores, Accuracy, Item Response Theory
Kyung-Mi O. – Language Testing in Asia, 2024
This study examines the efficacy of artificial intelligence (AI) in creating parallel test items compared to human-made ones. Two test forms were developed: one consisting of 20 existing human-made items and another with 20 new items generated with ChatGPT assistance. Expert reviews confirmed the content parallelism of the two test forms.…
Descriptors: Comparative Analysis, Artificial Intelligence, Computer Software, Test Items
Fuchimoto, Kazuma; Ishii, Takatoshi; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2022
Educational assessments often require uniform test forms, in which each form has equivalent measurement accuracy but a different set of items. For uniform test assembly, an important goal is increasing the number of assembled uniform tests. Although many automatic uniform test assembly methods exist, the maximum clique algorithm…
Descriptors: Simulation, Efficiency, Test Items, Educational Assessment
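The compatibility constraint behind uniform test assembly can be sketched in a few lines: two candidate forms are compatible when they share at most a fixed number of items, and assembly means finding a large mutually compatible set. The greedy loop below is a simple stand-in for the maximum-clique search the article discusses, with made-up forms.

```python
def compatible(form_a, form_b, max_overlap=0):
    """Two candidate forms are compatible if they share at most
    max_overlap items."""
    return len(set(form_a) & set(form_b)) <= max_overlap

def greedy_uniform_assembly(candidate_forms, max_overlap=0):
    """Greedily grow a set of mutually compatible forms (a heuristic;
    a true maximum clique may be larger)."""
    selected = []
    for form in candidate_forms:
        if all(compatible(form, s, max_overlap) for s in selected):
            selected.append(form)
    return selected

# illustrative candidate forms as tuples of item IDs
forms = [(1, 2, 3), (4, 5, 6), (1, 4, 7), (7, 8, 9)]
chosen = greedy_uniform_assembly(forms)
```

In the graph view, forms are vertices and compatibility is an edge, so a set of mutually compatible forms is exactly a clique.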
Kalkan, Ömür Kaya – Measurement: Interdisciplinary Research and Perspectives, 2022
The four-parameter logistic (4PL) Item Response Theory (IRT) model has recently been reconsidered in the literature due to the advances in the statistical modeling software and the recent developments in the estimation of the 4PL IRT model parameters. The current simulation study evaluated the performance of expectation-maximization (EM),…
Descriptors: Comparative Analysis, Sample Size, Test Length, Algorithms
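The 4PL item response function the study estimates is standard and can be written directly; the parameter values in the check below are illustrative.

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL item response function: a = discrimination, b = difficulty,
    c = lower (guessing) asymptote, d = upper (slipping) asymptote."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))
```

Estimating all four parameters per item is what makes the model demanding, which is why the study compares EM and other estimation algorithms.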
Roger Young; Emily Courtney; Alexander Kah; Mariah Wilkerson; Yi-Hsin Chen – Teaching of Psychology, 2025
Background: Multiple-choice item (MCI) assessments are burdensome for instructors to develop. Artificial intelligence (AI, e.g., ChatGPT) can streamline the process without sacrificing quality, and the quality of AI-generated MCIs is comparable to that of items written by human experts. However, whether the quality of AI-generated MCIs is equally good across various domain-
Descriptors: Item Response Theory, Multiple Choice Tests, Psychology, Textbooks
Raykov, Tenko; Pusic, Martin – Educational and Psychological Measurement, 2023
This note is concerned with evaluation of location parameters for polytomous items in multiple-component measuring instruments. A point and interval estimation procedure for these parameters is outlined that is developed within the framework of latent variable modeling. The method permits educational, behavioral, biomedical, and marketing…
Descriptors: Item Analysis, Measurement Techniques, Computer Software, Intervals
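For polytomous items, location (threshold) parameters enter category probabilities through cumulative curves; a graded response model is one common formulation (the note above may use a different latent variable parameterization, so treat this as an assumption).

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: category probabilities as differences of
    cumulative 2PL curves at the item's ordered location parameters."""
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# illustrative: one item with 4 categories (3 location parameters)
probs = grm_category_probs(0.0, 1.0, [-1.0, 0.0, 1.0])
```

Interval estimates for the thresholds, as the note outlines, can then be obtained from the latent variable model's fitted parameters and standard errors.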