Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 9
Descriptor
Adaptive Testing: 10
Computer Assisted Testing: 10
Test Items: 10
Selection: 9
Comparative Analysis: 5
Computation: 3
Item Banks: 3
Statistical Analysis: 3
Accuracy: 2
Classification: 2
Cognitive Tests: 2
Source
Educational and Psychological Measurement: 10
Author
Chang, Hua-Hua: 2
Cheng, Ying: 2
Lin, Chuan-Ju: 2
Diao, Qi: 1
Dodd, Barbara G.: 1
Douglas, Jeffrey: 1
Guo, Fanmin: 1
Hauser, Carl: 1
He, Wei: 1
Hembry, Ian: 1
Leroux, Audrey J.: 1
Publication Type
Journal Articles: 10
Reports - Research: 8
Reports - Evaluative: 2
Lin, Chuan-Ju; Chang, Hua-Hua – Educational and Psychological Measurement, 2019
For item selection in cognitive diagnostic computerized adaptive testing (CD-CAT), ideally, a single item selection index should be created to simultaneously regulate precision, exposure status, and attribute balancing. For this purpose, in this study, we first proposed an attribute-balanced item selection criterion, namely, the standardized…
Descriptors: Test Items, Selection Criteria, Computer Assisted Testing, Adaptive Testing
He, Wei; Diao, Qi; Hauser, Carl – Educational and Psychological Measurement, 2014
This study compared four item-selection procedures developed for use with severely constrained computerized adaptive tests (CATs). Severely constrained CATs refer to those adaptive tests that seek to meet a complex set of constraints that are often not conclusive to each other (i.e., an item may contribute to the satisfaction of several…
Descriptors: Comparative Analysis, Test Items, Selection, Computer Assisted Testing
Cheng, Ying; Patton, Jeffrey M.; Shao, Can – Educational and Psychological Measurement, 2015
a-Stratified computerized adaptive testing with b-blocking (AST), as an alternative to the widely used maximum Fisher information (MFI) item selection method, can effectively balance item pool usage while providing accurate latent trait estimates in computerized adaptive testing (CAT). However, previous comparisons of these methods have treated…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Banks
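The maximum Fisher information (MFI) criterion discussed in the entry above can be sketched briefly: at the current ability estimate, the item with the largest Fisher information is administered next. This is an illustrative sketch only, assuming a 2PL item pool; the function names and the toy pool are not from any of the studies listed here.

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def mfi_select(theta, pool, administered):
    """Return the index of the unadministered item with maximum information."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *pool[i]))

pool = [(0.5, -1.0), (1.2, 0.0), (2.0, 0.1), (0.8, 1.5)]  # (a, b) pairs
print(mfi_select(0.0, pool, set()))  # prints 2: the high-a item near theta
```

Because MFI always favors high-discrimination items near the current estimate, a few items dominate usage, which is the item-pool imbalance that a-stratified designs such as AST are meant to correct.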
Leroux, Audrey J.; Lopez, Myriam; Hembry, Ian; Dodd, Barbara G. – Educational and Psychological Measurement, 2013
This study compares the progressive-restricted standard error (PR-SE) exposure control procedure to three commonly used procedures in computerized adaptive testing, the randomesque, Sympson-Hetter (SH), and no exposure control methods. The performance of these four procedures is evaluated using the three-parameter logistic model under the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Comparative Analysis, Statistical Analysis
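The Sympson-Hetter (SH) exposure control named above works by screening the selected item through a probability filter: item i is actually administered with probability k_i, and on rejection the next-best candidate is tried. A minimal sketch, with a hypothetical interface; in operational SH the k_i values are calibrated through iterative simulation, which is omitted here.

```python
import random

def sh_select(ranked_items, k, rng=random.random):
    """Sympson-Hetter filter: walk candidates best-first; administer item i
    with probability k[i], otherwise fall through to the next-best item."""
    for i in ranked_items:
        if rng() <= k[i]:
            return i
    return ranked_items[-1]  # fallback: administer the last candidate

# ranked_items come from an information criterion (e.g., MFI); k holds the
# per-item exposure control parameters in [0, 1].
```

Items with k_i = 1 pass unchecked; lowering k_i caps how often an over-used item reaches examinees.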
Seo, Dong Gi; Weiss, David J. – Educational and Psychological Measurement, 2015
Most computerized adaptive tests (CATs) have been studied using the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CATs. This study investigated the accuracy, fidelity, and efficiency of a fully multidimensional CAT algorithm…
Descriptors: Computer Assisted Testing, Adaptive Testing, Accuracy, Fidelity
Wang, Chun – Educational and Psychological Measurement, 2013
Cognitive diagnostic computerized adaptive testing (CD-CAT) purports to combine the strengths of both CAT and cognitive diagnosis. Cognitive diagnosis models aim at classifying examinees into the correct mastery profile group so as to pinpoint the strengths and weakness of each examinee whereas CAT algorithms choose items to determine those…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
Lin, Chuan-Ju – Educational and Psychological Measurement, 2011
This study compares four item selection criteria for a two-category computerized classification testing: (1) Fisher information (FI), (2) Kullback-Leibler information (KLI), (3) weighted log-odds ratio (WLOR), and (4) mutual information (MI), with respect to the efficiency and accuracy of classification decision using the sequential probability…
Descriptors: Computer Assisted Testing, Adaptive Testing, Selection, Test Items
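The sequential probability ratio test (SPRT) referenced in the entry above classifies an examinee by accumulating a log-likelihood ratio between two ability points bracketing the cutscore and comparing it to Wald's thresholds. The sketch below assumes a 2PL model and standard Wald bounds; it illustrates the decision rule only, not the item selection criteria the study compares.

```python
import math

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_decision(responses, items, theta0, theta1, alpha=0.05, beta=0.05):
    """Two-category SPRT: mastery (theta1) vs. non-mastery (theta0).
    Returns 'master', 'non-master', or 'continue' (administer more items)."""
    log_lr = 0.0
    for u, (a, b) in zip(responses, items):
        p1, p0 = p_2pl(theta1, a, b), p_2pl(theta0, a, b)
        log_lr += math.log(p1 / p0) if u == 1 else math.log((1 - p1) / (1 - p0))
    upper = math.log((1 - beta) / alpha)   # accept mastery above this
    lower = math.log(beta / (1 - alpha))   # accept non-mastery below this
    if log_lr >= upper:
        return "master"
    if log_lr <= lower:
        return "non-master"
    return "continue"

items = [(1.5, 0.0)] * 20
print(sprt_decision([1] * 20, items, theta0=-0.5, theta1=0.5))  # prints master
```

The four criteria compared in the study (FI, KLI, WLOR, MI) differ in which item is chosen next while the test is in the "continue" state; the stopping rule above is shared.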
Wang, Wen-Chung; Liu, Chen-Wei – Educational and Psychological Measurement, 2011
The generalized graded unfolding model (GGUM) has been recently developed to describe item responses to Likert items (agree-disagree) in attitude measurement. In this study, the authors (a) developed two item selection methods in computerized classification testing under the GGUM, the current estimate/ability confidence interval method and the cut…
Descriptors: Computer Assisted Testing, Adaptive Testing, Classification, Item Response Theory
Cheng, Ying; Chang, Hua-Hua; Douglas, Jeffrey; Guo, Fanmin – Educational and Psychological Measurement, 2009
a-stratification is a method that utilizes items with small discrimination (a) parameters early in an exam and those with higher a values when more is learned about the ability parameter. It can achieve much better item usage than the maximum information criterion (MIC). To make a-stratification more practical and more widely applicable, a method…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
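The a-stratification design described above can be sketched in a few lines: the pool is sorted by the discrimination parameter a and partitioned into strata, low-a strata are used in early stages, and within a stratum the item whose difficulty b best matches the current ability estimate is chosen. This is a bare illustration under a 2PL (a, b) parameterization; it omits b-blocking and the content balancing that operational designs add.

```python
def stratify(pool, n_strata):
    """Sort the pool by discrimination a and split into equal-sized strata,
    low-a strata first (administered early in the test)."""
    ranked = sorted(range(len(pool)), key=lambda i: pool[i][0])
    size = len(ranked) // n_strata
    return [ranked[k * size:(k + 1) * size] for k in range(n_strata)]

def astrat_select(theta, pool, strata, stage, administered):
    """Within the current stratum, pick the unused item whose difficulty b
    is closest to the current ability estimate (b-matching)."""
    candidates = [i for i in strata[stage] if i not in administered]
    return min(candidates, key=lambda i: abs(pool[i][1] - theta))

pool = [(0.4, -1.0), (0.5, 0.2), (1.5, 0.0), (1.8, 1.0)]  # (a, b) pairs
strata = stratify(pool, 2)
print(astrat_select(0.0, pool, strata, 0, set()))  # prints 1: low-a, b near 0
```

Reserving high-a items for later stages, when the ability estimate is more stable, is what lets a-stratification spread item usage more evenly than MIC.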

Plake, Barbara S.; And Others – Educational and Psychological Measurement, 1995
No significant differences in performance on a self-adapted test or anxiety were found for college students (n=218) taking a self-adapted test who selected item difficulty without any prior information, inspected an item before selecting, or answered a typical item and received performance feedback. (SLD)
Descriptors: Achievement, Adaptive Testing, College Students, Computer Assisted Testing