Sample Size and Item Parameter Estimation Precision When Utilizing the Masters' Partial Credit Model
Custer, Michael; Kim, Jongpil – Online Submission, 2023
This study uses an analysis of diminishing returns to examine the relationship between sample size and item parameter estimation precision under Masters' Partial Credit Model for polytomous items. Item data from the standardization of the Battelle Developmental Inventory, 3rd Edition were used. Each item was scored with a…
Descriptors: Sample Size, Item Response Theory, Test Items, Computation
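The category-response function of Masters' Partial Credit Model referenced above can be sketched directly; the step parameters below are illustrative, not values from the study.

```python
import math

def pcm_probabilities(theta, step_params):
    """Category probabilities under Masters' Partial Credit Model.

    theta: person ability on the logit scale.
    step_params: step difficulties delta_1..delta_m for an item
    scored 0..m (hypothetical values, for illustration only).
    """
    exponents = [0.0]  # score 0 corresponds to the empty sum
    for delta in step_params:
        exponents.append(exponents[-1] + (theta - delta))
    denom = sum(math.exp(e) for e in exponents)
    return [math.exp(e) / denom for e in exponents]

# A three-category item (scores 0, 1, 2) with made-up step difficulties.
probs = pcm_probabilities(theta=0.5, step_params=[-1.0, 1.0])
print([round(p, 3) for p in probs])  # category probabilities, summing to 1
```

Estimation precision, the study's focus, concerns how stably the `step_params` can be recovered from samples of responses generated by a function like this.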
Aiman Mohammad Freihat; Omar Saleh Bani Yassin – Educational Process: International Journal, 2025
Background/purpose: This study aimed to determine the accuracy with which multiple-choice test item parameters are estimated under item response theory models. Materials/methods: The researchers relied on measurement accuracy indicators, which express the absolute difference between the estimated and actual values of the…
Descriptors: Accuracy, Computation, Multiple Choice Tests, Test Items
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2022
A testlet comprises a set of items based on a common stimulus. When testlets are used in a test, the local independence assumption may be violated, and in that case it would not be appropriate to apply traditional item response theory models to tests that include testlets. When the testlet is discussed, one of the most…
Descriptors: Test Items, Test Theory, Models, Sample Size
Gyamfi, Abraham; Acquaye, Rosemary – Acta Educationis Generalis, 2023
Introduction: Item response theory (IRT) has received much attention in the validation of assessment instruments because it allows students' ability to be estimated from any set of items. It also allows the difficulty and discrimination level of each item on the test to be estimated. In the framework of IRT, item characteristics are…
Descriptors: Item Response Theory, Models, Test Items, Difficulty Level
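The difficulty and discrimination parameters this abstract refers to are usually introduced through the two-parameter logistic (2PL) model; a minimal sketch with hypothetical parameter values:

```python
import math

def icc_2pl(theta, a, b):
    """Two-parameter logistic item characteristic curve: probability
    of a correct response for ability theta, discrimination a, and
    difficulty b (all on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5; larger a makes the
# curve steeper, i.e., the item discriminates more sharply.
print(icc_2pl(theta=0.0, a=1.2, b=0.0))  # 0.5
```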
Sweeney, Sandra M.; Sinharay, Sandip; Johnson, Matthew S.; Steinhauer, Eric W. – Educational Measurement: Issues and Practice, 2022
The focus of this paper is on the empirical relationship between item difficulty and item discrimination. Two studies--an empirical investigation and a simulation study--were conducted to examine the association between item difficulty and item discrimination under classical test theory and item response theory (IRT), and the effects of the…
Descriptors: Correlation, Item Response Theory, Item Analysis, Difficulty Level
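On the classical-test-theory side of the comparison above, item difficulty is the proportion correct and discrimination is often the corrected item-total correlation; a self-contained sketch on toy 0/1 data (not data from the study):

```python
def classical_item_stats(responses):
    """CTT item statistics from a 0/1 response matrix
    (rows = examinees, columns = items): difficulty as the proportion
    correct, discrimination as the corrected item-total correlation."""
    n = len(responses)
    stats = []
    for j in range(len(responses[0])):
        item = [row[j] for row in responses]
        rest = [sum(row) - row[j] for row in responses]  # total without item j
        p = sum(item) / n
        mi, mr = sum(item) / n, sum(rest) / n
        cov = sum((x - mi) * (y - mr) for x, y in zip(item, rest)) / n
        vi = sum((x - mi) ** 2 for x in item) / n
        vr = sum((y - mr) ** 2 for y in rest) / n
        r = cov / (vi * vr) ** 0.5 if vi > 0 and vr > 0 else float("nan")
        stats.append((p, r))
    return stats

# Four examinees, three items (toy data).
stats = classical_item_stats([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]])
```

The paper's question is how these classical statistics co-vary with each other and with their IRT counterparts.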
Pham, Duy N.; Wells, Craig S.; Bauer, Malcolm I.; Wylie, E. Caroline; Monroe, Scott – Applied Measurement in Education, 2021
Assessments built on a theory of learning progressions are promising formative tools to support learning and teaching. The quality and usefulness of those assessments depend, in large part, on the validity of the theory-informed inferences about student learning made from the assessment results. In this study, we introduced an approach to address…
Descriptors: Formative Evaluation, Mathematics Instruction, Mathematics Achievement, Middle School Students
Eaton, Philip; Johnson, Keith; Barrett, Frank; Willoughby, Shannon – Physical Review Physics Education Research, 2019
For proper assessment selection, understanding the statistical similarities among assessments that measure the same, or very similar, topics is imperative. This study seeks to extend the comparative analysis between the Brief Electricity and Magnetism Assessment (BEMA) and the Conceptual Survey of Electricity and Magnetism (CSEM) presented by…
Descriptors: Test Theory, Item Response Theory, Comparative Analysis, Energy
Jin, Kuan-Yu; Siu, Wai-Lok; Huang, Xiaoting – Journal of Educational Measurement, 2022
Multiple-choice (MC) items are widely used in educational tests. Distractor analysis, an important procedure for checking the utility of response options within an MC item, can be readily implemented in the framework of item response theory (IRT). Although random guessing is a common behavior among test-takers answering MC items, none of the…
Descriptors: Guessing (Tests), Multiple Choice Tests, Item Response Theory, Attention
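The IRT-based distractor analysis is the authors' contribution; a classical analogue, tabulating option choices by total-score group, conveys the idea (option labels and scores below are made up):

```python
from collections import Counter

def distractor_table(choices, total_scores, n_groups=3):
    """For one MC item, the proportion choosing each option within
    low/middle/high total-score groups. A useful distractor draws
    low scorers; the keyed answer should dominate in the high group."""
    order = sorted(range(len(choices)), key=lambda i: total_scores[i])
    size = len(order) // n_groups
    table = []
    for g in range(n_groups):
        idx = order[g * size:(g + 1) * size] if g < n_groups - 1 else order[g * size:]
        table.append({opt: c / len(idx)
                      for opt, c in Counter(choices[i] for i in idx).items()})
    return table

# Nine examinees; "A" is the keyed answer in this toy example.
low, mid, high = distractor_table(
    ["B", "C", "B", "A", "B", "A", "A", "A", "A"], list(range(9)))
print(high)  # {'A': 1.0}
```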
Natalia Riapina – Business and Professional Communication Quarterly, 2024
This article presents a conceptual framework for integrating AI-enabled business communication in higher education. Drawing on established theories from business communication and educational technology, the framework provides comprehensive guidance for designing engaging learning experiences. It emphasizes the significance of social presence,…
Descriptors: Artificial Intelligence, Business Communication, Higher Education, Technology Uses in Education
Khong, Hou Keat; Kabilan, Muhammad Kamarul – Computer Assisted Language Learning, 2022
The notion of "Micro-Learning" (ML) has repeatedly been emphasized as a successful learning approach across different learning contexts. Despite this optimism, several studies lack a theoretical grounding for the adoption of ML, and thus miss a shared perspective within the education community. The scarce theoretical justification for…
Descriptors: Second Language Instruction, Cognitive Processes, Difficulty Level, Self Determination
Tang, Xiaodan; Karabatsos, George; Chen, Haiqin – Applied Measurement in Education, 2020
In applications of item response theory (IRT) models, it is known that empirical violations of the local independence (LI) assumption can significantly bias parameter estimates. To address this issue, we propose a threshold-autoregressive item response theory (TAR-IRT) model that additionally accounts for order dependence among the item responses…
Descriptors: Item Response Theory, Test Items, Models, Computation
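A common way to detect the local independence violations discussed above (though not the authors' TAR-IRT approach) is Yen's Q3 statistic: the correlation of model residuals for an item pair. A sketch under a simple Rasch model, with person and item parameters assumed known for illustration:

```python
import math

def rasch_p(theta, b):
    """Rasch model success probability for ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def q3(responses, thetas, difficulties, i, j):
    """Yen's Q3 for items i and j: the correlation between residuals
    (observed minus model-expected score). Values far from zero flag
    possible local dependence. In practice theta and b are estimated;
    here they are fixed, hypothetical values."""
    def residuals(k):
        return [responses[p][k] - rasch_p(thetas[p], difficulties[k])
                for p in range(len(responses))]
    x, y = residuals(i), residuals(j)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

# Four examinees, two items (toy data).
val = q3([[1, 0], [1, 1], [0, 0], [0, 1]],
         thetas=[0.5, 1.0, -1.0, 0.0], difficulties=[0.0, 0.5], i=0, j=1)
```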
van Schijndel, Marten; Linzen, Tal – Cognitive Science, 2021
The disambiguation of a syntactically ambiguous sentence in favor of a less preferred parse can lead to slower reading at the disambiguation point. This phenomenon, referred to as a garden-path effect, has motivated models in which readers initially maintain only a subset of the possible parses of the sentence, and subsequently require…
Descriptors: Syntax, Ambiguity (Semantics), Reading Processes, Linguistic Theory
Byung-Doh Oh – ProQuest LLC, 2024
Decades of psycholinguistics research have shown that human sentence processing is highly incremental and predictive. This has provided evidence for expectation-based theories of sentence processing, which posit that the processing difficulty of linguistic material is modulated by its probability in context. However, these theories do not make…
Descriptors: Language Processing, Computational Linguistics, Artificial Intelligence, Computer Software
Lozano, José H.; Revuelta, Javier – Applied Measurement in Education, 2021
The present study proposes a Bayesian approach for estimating and testing the operation-specific learning model, a variant of the linear logistic test model that allows for the measurement of the learning that occurs during a test as a result of the repeated use of the operations involved in the items. The advantages of using a Bayesian framework…
Descriptors: Bayesian Statistics, Computation, Learning, Testing
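The linear logistic test model underlying the study decomposes item difficulty into a weighted sum of operation difficulties; a minimal sketch with an illustrative Q-matrix row and eta values (not the authors' Bayesian estimation procedure):

```python
import math

def lltm_p(theta, q_row, etas):
    """Linear logistic test model: item difficulty is reconstructed as
    b = sum_k q_ik * eta_k, where q_ik counts how often operation k is
    used by item i and eta_k is that operation's difficulty. Success
    probability is then logistic(theta - b)."""
    b = sum(q * eta for q, eta in zip(q_row, etas))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# An item using operations 1 and 3 (hypothetical operation difficulties).
p = lltm_p(theta=0.8, q_row=[1, 0, 1], etas=[0.5, -0.2, 0.3])
print(round(p, 3))  # 0.5: theta equals the reconstructed difficulty b = 0.8
```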
Dhyaaldian, Safa Mohammed Abdulridah; Kadhim, Qasim Khlaif; Mutlak, Dhameer A.; Neamah, Nour Raheem; Kareem, Zaidoon Hussein; Hamad, Doaa A.; Tuama, Jassim Hassan; Qasim, Mohammed Saad – International Journal of Language Testing, 2022
A C-Test is a gap-filling test for measuring language competence in the first and second language. C-Tests are usually analyzed with polytomous Rasch models by considering each passage as a super-item or testlet. This strategy helps overcome the local dependence inherent in C-Test gaps. However, there is little research on the best polytomous…
Descriptors: Item Response Theory, Cloze Procedure, Reading Tests, Language Tests
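The super-item (testlet) strategy described above reduces, in its simplest form, to summing each passage's dichotomous gap scores into one polytomous score per passage, which a polytomous Rasch model then analyzes. A sketch on toy data:

```python
def superitem_scores(gap_scores):
    """Collapse each C-Test passage's 0/1 gap scores into a single
    polytomous super-item score: the number of correctly restored gaps.
    This sidesteps the local dependence among gaps within a passage."""
    return [sum(passage) for passage in gap_scores]

# One examinee, three passages with five gaps each (toy data).
print(superitem_scores([[1, 1, 0, 1, 1], [0, 0, 1, 1, 0], [1, 1, 1, 1, 1]]))
# → [4, 2, 5]
```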