Showing all 5 results
Peer reviewed
Tang, Xiaodan; Karabatsos, George; Chen, Haiqin – Applied Measurement in Education, 2020
In applications of item response theory (IRT) models, empirical violations of the local independence (LI) assumption are known to significantly bias parameter estimates. To address this issue, we propose a threshold-autoregressive item response theory (TAR-IRT) model that additionally accounts for order dependence among the item responses…
Descriptors: Item Response Theory, Test Items, Models, Computation
Peer reviewed
Lozano, José H.; Revuelta, Javier – Applied Measurement in Education, 2021
The present study proposes a Bayesian approach for estimating and testing the operation-specific learning model, a variant of the linear logistic test model that allows for the measurement of the learning that occurs during a test as a result of the repeated use of the operations involved in the items. The advantages of using a Bayesian framework…
Descriptors: Bayesian Statistics, Computation, Learning, Testing
Peer reviewed
Lim, Euijin; Lee, Won-Chan – Applied Measurement in Education, 2020
The purpose of this study is to address the necessity of subscore equating and to evaluate the performance of various equating methods for subtests. Assuming the random groups design and number-correct scoring, this paper analyzed real data and simulated data with four study factors, including test dimensionality, subtest length, form difference in…
Descriptors: Equated Scores, Test Length, Test Format, Difficulty Level
Peer reviewed
Traynor, Anne – Applied Measurement in Education, 2017
It has long been argued that U.S. states' differential performance on nationwide assessments may reflect differences in students' opportunity to learn the tested content that is primarily due to variation in curricular content standards, rather than in instructional quality or educational investment. To quantify the effect of differences in…
Descriptors: Test Items, Difficulty Level, State Standards, Academic Standards
Peer reviewed
Newman, Dianna L.; And Others – Applied Measurement in Education, 1988
The effect of using statistical and cognitive item difficulty to determine item order on multiple-choice tests was examined, using 120 undergraduate students. Students performed better when items were ordered by increasing cognitive difficulty rather than decreasing difficulty. The statistical ordering of difficulty had little effect on…
Descriptors: Cognitive Tests, Difficulty Level, Higher Education, Multiple Choice Tests