Showing 1 to 15 of 48 results
Peer reviewed
Daniel P. Jurich; Matthew J. Madison – Educational Assessment, 2023
Diagnostic classification models (DCMs) are psychometric models that provide probabilistic classifications of examinees on a set of discrete latent attributes. When analyzing or constructing assessments scored by DCMs, understanding how each item influences attribute classifications can clarify the meaning of the measured constructs, facilitate…
Descriptors: Test Items, Models, Classification, Influences
Peer reviewed
Ting Wang; Keith Stelter; Thomas O’Neill; Nathaniel Hendrix; Andrew Bazemore; Kevin Rode; Warren P. Newton – Journal of Applied Testing Technology, 2025
Precise item categorisation is essential in aligning exam questions with content domains outlined in assessment blueprints. Traditional methods, such as manual classification or supervised machine learning, are often time-consuming, error-prone, or limited by the need for large training datasets. This study presents a novel approach using…
Descriptors: Test Items, Automation, Classification, Artificial Intelligence
He, Dan – ProQuest LLC, 2023
This dissertation examines the effectiveness of machine learning algorithms and feature engineering techniques for analyzing process data and predicting test performance. The study compares three classification approaches and identifies item-specific process features that are highly predictive of student performance. The findings suggest that…
Descriptors: Artificial Intelligence, Data Analysis, Algorithms, Classification
Hess, Jessica – ProQuest LLC, 2023
This study was conducted to further research into the impact of student-group item parameter drift (SIPD), referred to as subpopulation item parameter drift in previous research, on ability estimates and proficiency classification accuracy when occurring in the discrimination parameter of a two-parameter logistic (2PL) item response theory (IRT) model. Using Monte…
Descriptors: Test Items, Groups, Ability, Item Response Theory
Peer reviewed
Su, Kun; Henson, Robert A. – Journal of Educational and Behavioral Statistics, 2023
This article provides a process for carefully evaluating the suitability of a content domain for which diagnostic classification models (DCMs) could be applicable, then optimized steps for constructing a test blueprint when applying DCMs, and a real-life example illustrating this process. The content domains were carefully evaluated using a set of…
Descriptors: Classification, Models, Science Tests, Physics
Peer reviewed
Weese, James D.; Turner, Ronna C.; Ames, Allison; Crawford, Brandon; Liang, Xinya – Educational and Psychological Measurement, 2022
A simulation study was conducted to investigate the heuristics of the SIBTEST procedure and how it compares with ETS classification guidelines used with the Mantel-Haenszel procedure. Prior heuristics have been used for nearly 25 years, but they are based on a simulation study that was restricted due to computer limitations and that modeled item…
Descriptors: Test Bias, Heuristics, Classification, Statistical Analysis
Peer reviewed
Rios, Joseph – Applied Measurement in Education, 2022
To mitigate the deleterious effects of rapid guessing (RG) on ability estimates, several rescoring procedures have been proposed. Underlying many of these procedures is the assumption that RG is accurately identified. At present, there have been minimal investigations examining the utility of rescoring approaches when RG is misclassified, and…
Descriptors: Accuracy, Guessing (Tests), Scoring, Classification
Jing Ma – ProQuest LLC, 2024
This study investigated the impact of scoring polytomous items later on measurement precision, classification accuracy, and test security in mixed-format adaptive testing. Utilizing the shadow test approach, a simulation study was conducted across various test designs, test lengths, and numbers and locations of polytomous items. Results showed that while…
Descriptors: Scoring, Adaptive Testing, Test Items, Classification
Peer reviewed
PDF on ERIC
Demir, Seda – Journal of Educational Technology and Online Learning, 2022
The purpose of this research was to evaluate the effect of item pools and selection algorithms on computerized classification testing (CCT) performance in terms of several classification evaluation metrics. For this purpose, response patterns for 1,000 examinees were generated using an R package, along with eight item pools of 150, 300, 450, and 600 items…
Descriptors: Test Items, Item Banks, Mathematics, Computer Assisted Testing
Peer reviewed
Henninger, Mirka; Debelak, Rudolf; Strobl, Carolin – Educational and Psychological Measurement, 2023
To detect differential item functioning (DIF), Rasch trees search for optimal split-points in covariates and identify subgroups of respondents in a data-driven way. To determine whether and in which covariate a split should be performed, Rasch trees use statistical significance tests. Consequently, Rasch trees are more likely to label small DIF…
Descriptors: Item Response Theory, Test Items, Effect Size, Statistical Significance
Peer reviewed
Britt Hadar; Maayan Katzir; Sephi Pumpian; Tzur Karelitz; Nira Liberman – npj Science of Learning, 2023
Performance on standardized academic aptitude tests (AAT) can determine important life outcomes. However, it is not clear whether and which aspects of the content of test questions affect performance. We examined the effect of psychological distance embedded in test questions. In Study 1 (N = 41,209), we classified the content of existing AAT…
Descriptors: Academic Aptitude, Thinking Skills, Aptitude Tests, Standardized Tests
Peer reviewed
Alahmadi, Sarah; Jones, Andrew T.; Barry, Carol L.; Ibáñez, Beatriz – Applied Measurement in Education, 2023
Rasch common-item equating is often used in high-stakes testing to maintain equivalent passing standards across test administrations. If unaddressed, item parameter drift poses a major threat to the accuracy of Rasch common-item equating. We compared the performance of well-established and newly developed drift detection methods in small and large…
Descriptors: Equated Scores, Item Response Theory, Sample Size, Test Items
Peer reviewed
Becker, Kirk A.; Kao, Shu-chuan – Journal of Applied Testing Technology, 2022
Natural Language Processing (NLP) offers methods for understanding and quantifying the similarity between written documents. Within the testing industry these methods have been used for automatic item generation, automated scoring of text and speech, modeling item characteristics, automatic question answering, machine translation, and automated…
Descriptors: Item Banks, Natural Language Processing, Computer Assisted Testing, Scoring
Peer reviewed
PDF on ERIC
Owen Henkel; Hannah Horne-Robinson; Maria Dyshel; Greg Thompson; Ralph Abboud; Nabil Al Nahin Ch; Baptiste Moreau-Pernet; Kirk Vanacore – Journal of Learning Analytics, 2025
This paper introduces AMMORE, a new dataset of 53,000 math open-response question-answer pairs from Rori, a mathematics learning platform used by middle and high school students in several African countries. Using this dataset, we conducted two experiments to evaluate the use of large language models (LLM) for grading particularly challenging…
Descriptors: Learning Analytics, Learning Management Systems, Mathematics Instruction, Middle School Students
Peer reviewed
Kang, Hyeon-Ah; Han, Suhwa; Kim, Doyoung; Kao, Shu-Chuan – Educational and Psychological Measurement, 2022
The development of technology-enhanced innovative items calls for practical models that can describe polytomous testlet items. In this study, we evaluate four measurement models that can characterize polytomous items administered in testlets: (a) generalized partial credit model (GPCM), (b) testlet-as-a-polytomous-item model (TPIM), (c)…
Descriptors: Goodness of Fit, Item Response Theory, Test Items, Scoring