Tenko Raykov; Christine DiStefano; Lisa Calvocoressi – Educational and Psychological Measurement, 2024
This note demonstrates that the widely used Bayesian Information Criterion (BIC) need not be generally viewed as a routinely dependable index for model selection when the bifactor and second-order factor models are examined as rival means for data description and explanation. To this end, we use an empirically relevant setting with…
Descriptors: Bayesian Statistics, Models, Decision Making, Comparative Analysis

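The criterion this note examines trades model fit against complexity: BIC = k·ln(n) − 2·ln(L̂), where k is the number of free parameters, n the sample size, and L̂ the maximized likelihood; the model with the lower BIC is preferred. A minimal sketch of such a comparison, using hypothetical log-likelihoods and parameter counts for the two rival factor models (the actual values in the note are not reproduced here):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: lower values indicate the preferred model."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Hypothetical fit statistics for two rival factor models of the same data set
bic_bifactor = bic(log_likelihood=-4210.5, n_params=45, n_obs=500)
bic_second_order = bic(log_likelihood=-4225.0, n_params=36, n_obs=500)

# Pick the model with the smaller BIC
preferred = min(("bifactor", bic_bifactor),
                ("second-order", bic_second_order),
                key=lambda t: t[1])[0]
print(preferred)  # → second-order
```

Note how the less-parameterized second-order model can win on BIC despite a worse likelihood, because the k·ln(n) penalty grows with model complexity; the cited note argues this penalty structure is exactly why BIC is not routinely dependable for the bifactor versus second-order comparison.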
Lin, Chuan-Ju; Chang, Hua-Hua – Educational and Psychological Measurement, 2019
For item selection in cognitive diagnostic computerized adaptive testing (CD-CAT), ideally, a single item selection index should be created to simultaneously regulate precision, exposure status, and attribute balancing. For this purpose, in this study, we first proposed an attribute-balanced item selection criterion, namely, the standardized…
Descriptors: Test Items, Selection Criteria, Computer Assisted Testing, Adaptive Testing

Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…
Descriptors: Item Response Theory, Models, Selection Criteria, Accuracy