Showing all 5 results
Peer reviewed
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
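For orientation, a minimal sketch of one quantity this abstract describes: the conditional standard error of measurement (CSEM) of a composite raw score on a mixed-format test. Conditional on ability, item scores are independent, so the conditional variance of the summed score is the sum of item score variances. The 2PL and generalized partial credit (GPCM) parameters below are invented for illustration, not the article's data.

import math

def p_2pl(theta, a, b):
    # Probability of a correct response under the 2PL model.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def gpcm_probs(theta, a, steps):
    # Category probabilities (scores 0..m) under the GPCM;
    # steps are the step difficulties.
    cum = [0.0]
    for d in steps:
        cum.append(cum[-1] + a * (theta - d))
    exps = [math.exp(c) for c in cum]
    total = sum(exps)
    return [e / total for e in exps]

def csem(theta, mc_items, fr_items):
    # CSEM of the summed score: sum the conditional item variances.
    var = 0.0
    for a, b in mc_items:                # dichotomous: Var = p(1 - p)
        p = p_2pl(theta, a, b)
        var += p * (1.0 - p)
    for a, steps in fr_items:            # polytomous: Var = E[X^2] - E[X]^2
        probs = gpcm_probs(theta, a, steps)
        ex = sum(k * pk for k, pk in enumerate(probs))
        ex2 = sum(k * k * pk for k, pk in enumerate(probs))
        var += ex2 - ex * ex
    return math.sqrt(var)

# Hypothetical test: three multiple-choice items, one 3-category free-response item.
mc = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7)]
fr = [(1.0, [-0.3, 0.4])]
print(csem(0.0, mc, fr))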
Peer reviewed
Liu, Shuchang; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2018
This study applied on-the-fly assembled multistage adaptive testing to cognitive diagnosis (CD-OMST). Several module assembly methods for CD-OMST were proposed and compared in terms of measurement precision, test security, and constraint management. The module assembly methods in the study included the maximum priority index…
Descriptors: Adaptive Testing, Monte Carlo Methods, Computer Security, Clinical Diagnosis
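The maximum priority index (MPI) named in the abstract is a published item selection heuristic (Cheng and Chang): an item's information is down-weighted by the remaining slack of each constraint it touches. A minimal sketch, with invented pool values and constraint bookkeeping rather than the authors' implementation:

def priority_index(info, relevance, used, upper):
    # PI_j = I_j * prod_k f_k^{c_jk}, where f_k = (X_k - x_k) / X_k is the
    # remaining slack of constraint k and c_jk flags relevance of item j to k.
    pi = info
    for k, c in enumerate(relevance):
        if c:
            slack = (upper[k] - used[k]) / upper[k]
            pi *= max(slack, 0.0)
    return pi

# Hypothetical pool: (Fisher information at current theta, constraint flags).
pool = [(0.9, [1, 0]), (1.4, [1, 1]), (1.1, [0, 1])]
upper = [2, 1]          # at most 2 items from content area A, 1 from area B
used = [1, 1]           # items already assembled into the module
best = max(range(len(pool)), key=lambda j: priority_index(*pool[j], used, upper))
print("select item", best)

Because constraint B is already exhausted here, items relevant to it get a priority of zero, which is how the index enforces constraint management during assembly.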
Peer reviewed
Wang, Wenyi; Song, Lihong; Chen, Ping; Meng, Yaru; Ding, Shuliang – Journal of Educational Measurement, 2015
Classification consistency and accuracy are viewed as important indicators for evaluating the reliability and validity of classification results in cognitive diagnostic assessment (CDA). Pattern-level classification consistency and accuracy indices were introduced by Cui, Gierl, and Chang. However, the indices at the attribute level have not yet…
Descriptors: Classification, Reliability, Accuracy, Cognitive Tests
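To make the pattern-level versus attribute-level distinction concrete, here is a toy simulation of attribute-level classification accuracy, assuming a DINA model with an invented Q-matrix and uniform attribute prior; it is a sketch of the general idea, not the indices the article derives.

import itertools, math, random
random.seed(1)

Q = [[1, 0], [0, 1], [1, 1]]            # 3 items, 2 attributes (invented)
slip, guess = [0.1] * 3, [0.2] * 3
patterns = list(itertools.product([0, 1], repeat=2))

def p_correct(alpha, j):
    # DINA: correct with prob 1 - slip if all required attributes are
    # mastered, otherwise with the guessing probability.
    eta = all(a >= q for a, q in zip(alpha, Q[j]))
    return 1 - slip[j] if eta else guess[j]

def mle_pattern(x):
    # Maximum-likelihood attribute pattern given response vector x.
    def loglik(alpha):
        return sum(math.log(p_correct(alpha, j)) if x[j]
                   else math.log(1 - p_correct(alpha, j)) for j in range(len(x)))
    return max(patterns, key=loglik)

R = 20000
match = [0, 0]
for _ in range(R):
    alpha = random.choice(patterns)     # uniform prior over patterns
    x = [int(random.random() < p_correct(alpha, j)) for j in range(3)]
    est = mle_pattern(x)
    for k in range(2):
        match[k] += est[k] == alpha[k]
print([m / R for m in match])           # per-attribute classification accuracy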
Peer reviewed
Lee, Won-Chan – Journal of Educational Measurement, 2010
In this article, procedures are described for estimating single-administration classification consistency and accuracy indices for complex assessments using item response theory (IRT). This IRT approach was applied to real test data comprising dichotomous and polytomous items. Several different IRT model combinations were considered. Comparisons…
Descriptors: Classification, Item Response Theory, Comparative Analysis, Models
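A standard ingredient in IRT-based single-administration indices of this kind is the conditional distribution of the summed score given ability, computable for mixed dichotomous/polytomous items by a generalized Lord-Wingersky recursion. A sketch with placeholder category probabilities (not estimated parameters):

def summed_score_dist(item_cat_probs):
    # item_cat_probs[j][k] = P(score k on item j | theta).
    # Returns P(summed score = s | theta) for all s, by convolving items.
    dist = [1.0]                          # P(score 0) before any items
    for probs in item_cat_probs:
        new = [0.0] * (len(dist) + len(probs) - 1)
        for s, ps in enumerate(dist):
            for k, pk in enumerate(probs):
                new[s + k] += ps * pk
        dist = new
    return dist

# Two dichotomous items and one 3-category item at a fixed theta.
dist = summed_score_dist([[0.3, 0.7], [0.5, 0.5], [0.2, 0.5, 0.3]])
cut = 2                                   # hypothetical pass cut score
print("P(pass | theta) =", sum(dist[cut:]))

Integrating such conditional pass probabilities over the ability distribution is what yields accuracy and consistency estimates from a single administration.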
Peer reviewed
Hanson, Bradley A.; Brennan, Robert L. – Journal of Educational Measurement, 1990
Using several data sets, the relative performance of the beta binomial model and two more general strong true score models in estimating several indices of classification consistency is examined. It appears that the beta binomial model can provide inadequate fits to raw score distributions compared to more general models. (TJH)
Descriptors: Classification, Comparative Analysis, Equations (Mathematics), Estimation (Mathematics)
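For context on the model being evaluated, a minimal method-of-moments fit of the beta-binomial strong true score model to a raw score distribution; the scores are fabricated for illustration, not the article's data sets.

from math import lgamma, exp

def fit_beta_binomial(scores, n):
    # Moment estimates of (a, b) for a beta-binomial on 0..n.
    m = sum(scores) / len(scores)
    v = sum((x - m) ** 2 for x in scores) / len(scores)
    p = m / n
    r = v / (n * p * (1 - p))            # overdispersion ratio, must lie in (1, n)
    theta = (n - r) / (r - 1)
    return p * theta, (1 - p) * theta

def bb_pmf(x, n, a, b):
    # Beta-binomial probability via log-gamma for numerical stability:
    # C(n, x) * B(x + a, n - x + b) / B(a, b).
    logc = lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
    logb = lgamma(x + a) + lgamma(n - x + b) - lgamma(n + a + b)
    log0 = lgamma(a + b) - lgamma(a) - lgamma(b)
    return exp(logc + logb + log0)

scores = [3, 5, 7, 8, 8, 9, 10, 12, 14, 15]   # fabricated raw scores, n = 15
a, b = fit_beta_binomial(scores, 15)
fitted = [bb_pmf(x, 15, a, b) for x in range(16)]
print(round(a, 2), round(b, 2), round(sum(fitted), 3))  # pmf sums to 1

Comparing these fitted probabilities against observed score frequencies is the kind of fit assessment the abstract reports; where the two-parameter beta-binomial fits poorly, more general strong true score models can be substituted.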