Sen, Sedat; Cohen, Allan S. – Educational and Psychological Measurement, 2023
The purpose of this study was to examine the effects of different data conditions on item parameter recovery and classification accuracy of three dichotomous mixture item response theory (IRT) models: the Mix1PL, Mix2PL, and Mix3PL. Manipulated factors in the simulation included the sample size (11 different sample sizes from 100 to 5000), test…
Descriptors: Sample Size, Item Response Theory, Accuracy, Classification
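The three dichotomous models compared here are mixture versions of the standard 1-, 2-, and 3-parameter logistic IRT models. As a point of reference, the plain 3PL response function (of which the 1PL and 2PL are constrained special cases) can be sketched as below; the function name and parameter values are illustrative, not taken from the study:

```python
import math

def p_3pl(theta, a, b, c):
    """Probability of a correct response under the three-parameter
    logistic (3PL) IRT model: c + (1 - c) / (1 + exp(-a * (theta - b))),
    where a is discrimination, b is difficulty, and c is the
    pseudo-guessing lower asymptote."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the logistic part equals 0.5, so P = c + (1 - c) / 2.
print(round(p_3pl(0.0, a=1.2, b=0.0, c=0.2), 3))  # 0.6
```

A mixture IRT model additionally posits latent classes, each with its own set of item parameters, and mixes these class-conditional probabilities by class membership proportions.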
Yuanfang Liu; Mark H. C. Lai; Ben Kelcey – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance holds when a latent construct is measured in the same way across different levels of background variables (continuous or categorical) while controlling for the true value of that construct. Using Monte Carlo simulation, this paper compares the multiple indicators, multiple causes (MIMIC) model and MIMIC-interaction to a…
Descriptors: Classification, Accuracy, Error of Measurement, Correlation
Sedat Sen; Allan S. Cohen – Educational and Psychological Measurement, 2024
A Monte Carlo simulation study was conducted to compare fit indices used for detecting the correct latent class in three dichotomous mixture item response theory (IRT) models. Ten indices were considered: Akaike's information criterion (AIC), the corrected AIC (AICc), Bayesian information criterion (BIC), consistent AIC (CAIC), Draper's…
Descriptors: Goodness of Fit, Item Response Theory, Sample Size, Classification
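Several of the fit indices named in this abstract have standard closed forms in terms of the maximized log-likelihood, the number of free parameters k, and the sample size n. A minimal sketch of four of them follows (Draper's variant and the remaining indices are elided; these formulas are the textbook definitions, not code from the study):

```python
import math

def aic(log_lik, k):
    # Akaike's information criterion: -2 log L + 2k
    return -2.0 * log_lik + 2.0 * k

def aicc(log_lik, k, n):
    # Corrected AIC: AIC + 2k(k + 1) / (n - k - 1)
    return aic(log_lik, k) + 2.0 * k * (k + 1) / (n - k - 1)

def bic(log_lik, k, n):
    # Bayesian information criterion: -2 log L + k log n
    return -2.0 * log_lik + k * math.log(n)

def caic(log_lik, k, n):
    # Consistent AIC: -2 log L + k (log n + 1)
    return -2.0 * log_lik + k * (math.log(n) + 1.0)

# Lower is better: fit each candidate latent-class solution, compute an
# index from its log-likelihood, and retain the minimizing model.
print(aic(-500.0, 10), round(bic(-500.0, 10, 1000), 2))
```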
Liu, Yixing; Thompson, Marilyn S. – Journal of Experimental Education, 2022
A simulation study was conducted to explore the impact of differential item functioning (DIF) on general factor difference estimation for bifactor, ordinal data. Common analysis misspecifications in which the generated bifactor data with DIF were fitted using models with equality constraints on noninvariant item parameters were compared under data…
Descriptors: Comparative Analysis, Item Analysis, Sample Size, Error of Measurement
Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K. – Practical Assessment, Research & Evaluation, 2015
In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items, since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…
Descriptors: Item Response Theory, Monte Carlo Methods, Scaling, Test Items
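The scaling coefficients A and B referred to here define the standard linear linking transformation for 3PL item parameters. A minimal sketch, assuming the usual convention theta* = A·theta + B (the function name is illustrative):

```python
def rescale_item(a, b, c, A, B):
    """Place 3PL item parameters onto a new scale using linking
    coefficients A and B. Under theta* = A * theta + B:
        a* = a / A,  b* = A * b + B,  c* = c.
    The c-parameter is left unchanged, which is exactly why, as the
    abstract notes, A and B have no impact on the c estimates."""
    return a / A, A * b + B, c

a_new, b_new, c_new = rescale_item(a=1.5, b=0.4, c=0.2, A=1.1, B=-0.3)
print(round(a_new, 4), round(b_new, 4), c_new)  # 1.3636 0.14 0.2
```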
Koziol, Natalie A. – Applied Measurement in Education, 2016
Testlets, or groups of related items, are commonly included in educational assessments due to their many logistical and conceptual advantages. Despite their advantages, testlets introduce complications into the theory and practice of educational measurement. Responses to items within a testlet tend to be correlated even after controlling for…
Descriptors: Classification, Accuracy, Comparative Analysis, Models
Jiao, Hong; Kamata, Akihito; Wang, Shudong; Jin, Ying – Journal of Educational Measurement, 2012
The applications of item response theory (IRT) models assume local item independence and that examinees are independent of each other. When a representative sample for psychometric analysis is selected using a cluster sampling method in a testlet-based assessment, both local item dependence and local person dependence are likely to be induced.…
Descriptors: Item Response Theory, Test Items, Markov Processes, Monte Carlo Methods
Md Desa, Zairul Nor Deana – ProQuest LLC, 2012
In recent years, there has been increasing interest in estimating and improving subscore reliability. In this study, the multidimensional item response theory (MIRT) and the bi-factor model were combined to estimate subscores, to obtain subscores reliability, and subscores classification. Both the compensatory and partially compensatory MIRT…
Descriptors: Item Response Theory, Computation, Reliability, Classification
Thompson, Nathan A. – Practical Assessment, Research & Evaluation, 2011
Computerized classification testing (CCT) is an approach to designing tests with intelligent algorithms, similar to adaptive testing, but specifically designed for the purpose of classifying examinees into categories such as "pass" and "fail." Like adaptive testing for point estimation of ability, the key component is the…
Descriptors: Adaptive Testing, Computer Assisted Testing, Classification, Probability
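One common termination criterion in CCT of the kind this article surveys is the sequential probability ratio test (SPRT), which compares the likelihood of the response string at two ability points straddling the cutscore after each item. A minimal sketch under that assumption (all names, parameter values, and error rates here are illustrative, not taken from the article):

```python
import math

def p_3pl(theta, a, b, c):
    # 3PL response probability used to evaluate the likelihoods.
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def sprt_classify(responses, items, cut=0.0, delta=0.3,
                  alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for pass/fail CCT.
    Accumulates the log-likelihood ratio of theta = cut + delta vs.
    theta = cut - delta; classifies once a decision bound is crossed,
    otherwise asks for another item."""
    lower = math.log(beta / (1 - alpha))        # fail bound
    upper = math.log((1 - beta) / alpha)        # pass bound
    llr = 0.0
    for u, (a, b, c) in zip(responses, items):
        p_hi = p_3pl(cut + delta, a, b, c)
        p_lo = p_3pl(cut - delta, a, b, c)
        llr += math.log(p_hi / p_lo) if u == 1 else \
               math.log((1 - p_hi) / (1 - p_lo))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
    return "continue"  # item pool or test length exhausted

items = [(1.2, 0.0, 0.2)] * 20
print(sprt_classify([1] * 20, items))  # pass
```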
Verheyen, Steven; De Deyne, Simon; Dry, Matthew J.; Storms, Gert – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2011
A contrast category effect on categorization occurs when the decision to apply a category term to an entity not only involves a comparison between the entity and the target category but is also influenced by a comparison of the entity with 1 or more alternative categories from the same domain as the target. Establishing a contrast category effect…
Descriptors: Foreign Countries, Stimuli, Classification, Models
Thompson, Nathan A. – Educational and Psychological Measurement, 2009
Several alternatives for item selection algorithms based on item response theory in computerized classification testing (CCT) have been suggested, with no conclusive evidence on the substantial superiority of a single method. It is argued that the lack of sizable effect is because some of the methods actually assess items very similarly through…
Descriptors: Item Response Theory, Psychoeducational Methods, Cutting Scores, Simulation
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory
Yao, Lihua; Boughton, Keith A. – Applied Psychological Measurement, 2007
Several approaches to reporting subscale scores can be found in the literature. This research explores a multidimensional compensatory dichotomous and polytomous item response theory modeling approach for subscale score proficiency estimation, leading toward a more diagnostic solution. It also develops and explores the recovery of a Markov chain…
Descriptors: Psychometrics, Markov Processes, Classification, Item Response Theory
Lau, Che-Ming Allen; And Others – 1996
This study focused on the robustness of unidimensional item response theory (UIRT) models in computerized classification testing against violation of the unidimensionality assumption. The study addressed whether UIRT models remain acceptable under various testing conditions and dimensionality strengths. Monte Carlo simulation techniques were used…
Descriptors: Classification, Computer Assisted Testing, Educational Testing, Item Response Theory