Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 22 |
Descriptor
Test Items | 40 |
Computer Assisted Testing | 29 |
Adaptive Testing | 26 |
Simulation | 18 |
Selection | 17 |
Item Banks | 14 |
Item Response Theory | 11 |
Comparative Analysis | 10 |
Test Construction | 7 |
Reaction Time | 5 |
Statistical Analysis | 5 |
Source
Applied Psychological… | 9 |
Journal of Educational… | 6 |
ETS Research Report Series | 4 |
Journal of Educational and… | 4 |
Educational and Psychological… | 3 |
Psychometrika | 3 |
Applied Measurement in… | 1 |
Author
Chang, Hua-Hua | 40 |
Hau, Kit-Tai | 6 |
Leung, Chi-Keung | 4 |
Wang, Chun | 4 |
Zhang, Jinming | 4 |
Cheng, Ying | 3 |
Tao, Jian | 3 |
Yi, Qing | 3 |
Ying, Zhiliang | 3 |
Ali, Usama S. | 2 |
Deng, Hui | 2 |
Publication Type
Journal Articles | 30 |
Reports - Research | 27 |
Reports - Evaluative | 12 |
Speeches/Meeting Papers | 8 |
Reports - Descriptive | 1 |
Education Level
Higher Education | 1 |
Location
Canada | 1 |
United States | 1 |
Assessments and Surveys
Graduate Record Examinations | 1 |
Lin, Chuan-Ju; Chang, Hua-Hua – Educational and Psychological Measurement, 2019
For item selection in cognitive diagnostic computerized adaptive testing (CD-CAT), a single item selection index should ideally regulate precision, exposure status, and attribute balancing simultaneously. To this end, this study first proposes an attribute-balanced item selection criterion, namely, the standardized…
Descriptors: Test Items, Selection Criteria, Computer Assisted Testing, Adaptive Testing
Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2018
Despite common operationalization, the measurement efficiency of computerized adaptive testing should be assessed not only in terms of the number of items administered but also in terms of the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…
Descriptors: Computer Assisted Testing, Reaction Time, Item Response Theory, Test Items
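The information-per-time idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the authors' exact criterion: it assumes a two-parameter logistic (2PL) model and known expected response times, and selects the unadministered item with the highest Fisher information per expected second.

```python
import numpy as np

def fisher_info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_item(theta, a, b, expected_rt, administered):
    """Index of the unadministered item maximizing info per unit time."""
    rate = fisher_info_2pl(theta, a, b) / expected_rt
    rate[list(administered)] = -np.inf  # mask already-used items
    return int(np.argmax(rate))

# Toy pool of 4 items (discrimination a, difficulty b, expected seconds)
a = np.array([1.0, 1.5, 1.2, 0.8])
b = np.array([0.0, 0.5, -0.5, 0.0])
rt = np.array([30.0, 60.0, 20.0, 45.0])
best = select_item(theta=0.0, a=a, b=b, expected_rt=rt, administered=set())
```

Here the second item is the most informative at theta = 0, but the third item wins once information is divided by expected response time, which is exactly the trade-off the criterion formalizes.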
Kang, Hyeon-Ah; Zhang, Susu; Chang, Hua-Hua – Journal of Educational Measurement, 2017
The development of cognitive diagnostic-computerized adaptive testing (CD-CAT) has provided a new perspective for gaining information about examinees' mastery on a set of cognitive attributes. This study proposes a new item selection method within the framework of dual-objective CD-CAT that simultaneously addresses examinees' attribute mastery…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Test Items
Kang, Hyeon-Ah; Lu, Ying; Chang, Hua-Hua – Applied Measurement in Education, 2017
Increasing use of item pools in large-scale educational assessments calls for an appropriate scaling procedure to achieve a common metric among field-tested items. The present study examines scaling procedures for developing a new item pool under a spiraled block linking design. Three scaling procedures are considered: (a) concurrent…
Descriptors: Item Response Theory, Accuracy, Educational Assessment, Test Items
Ali, Usama S.; Chang, Hua-Hua; Anderson, Carolyn J. – ETS Research Report Series, 2015
Polytomous items are typically described by multiple category-related parameters; situations, however, arise in which a single index is needed to describe an item's location along a latent trait continuum. Situations in which a single index would be needed include item selection in computerized adaptive testing or test assembly. Therefore single…
Descriptors: Item Response Theory, Test Items, Computer Assisted Testing, Adaptive Testing
Guo, Rui; Zheng, Yi; Chang, Hua-Hua – Journal of Educational Measurement, 2015
An important assumption of item response theory is item parameter invariance. Sometimes, however, item parameters are not invariant across different test administrations due to factors other than sampling error; this phenomenon is termed item parameter drift. Several methods have been developed to detect drifted items. However, most of the…
Descriptors: Item Response Theory, Test Items, Evaluation Methods, Equated Scores
Meng, Xiang-Bin; Tao, Jian; Chang, Hua-Hua – Journal of Educational Measurement, 2015
The assumption of conditional independence between the responses and the response times (RTs) for a given person is common in RT modeling. However, when the speed of a test taker is not constant, this assumption will be violated. In this article we propose a conditional joint model for item responses and RTs, which incorporates a covariance…
Descriptors: Reaction Time, Test Items, Accuracy, Models
Cheng, Ying; Chen, Peihua; Qian, Jiahe; Chang, Hua-Hua – Applied Psychological Measurement, 2013
Differential item functioning (DIF) analysis is an important step in the data analysis of large-scale testing programs. Nowadays, many such programs endorse matrix sampling designs to reduce the load on examinees, such as the balanced incomplete block (BIB) design. These designs pose challenges to the traditional DIF analysis methods. For example,…
Descriptors: Test Bias, Equated Scores, Test Items, Effect Size
Tao, Jian; Shi, Ning-Zhong; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2012
For mixed-type tests composed of both dichotomous and polytomous items, polytomous items often yield more information than dichotomous ones. To reflect the difference between the two types of items, polytomous items are usually pre-assigned larger weights. We propose an item-weighted likelihood method to better assess examinees' ability…
Descriptors: Test Items, Weighted Scores, Maximum Likelihood Statistics, Statistical Bias
Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan – Educational and Psychological Measurement, 2012
Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
Descriptors: Test Items, Selection, Test Construction, Item Response Theory
Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong – Applied Psychological Measurement, 2012
For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…
Descriptors: Monte Carlo Methods, Computation, Item Response Theory, Weighted Scores
Fan, Zhewen; Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey – Journal of Educational and Behavioral Statistics, 2012
Traditional methods for item selection in computerized adaptive testing only focus on item information without taking into consideration the time required to answer an item. As a result, some examinees may receive a set of items that take a very long time to finish, and information is not accrued as efficiently as possible. The authors propose two…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Analysis
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A. – Journal of Educational and Behavioral Statistics, 2013
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current approaches to RT modeling mainly rely on parametric models, which have the…
Descriptors: Reaction Time, Computer Assisted Testing, Test Items, Accuracy
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may offer similar advantages, and verifying this hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
Chen, Ping; Xin, Tao; Wang, Chun; Chang, Hua-Hua – Psychometrika, 2012
Item replenishing is essential for item bank maintenance in cognitive diagnostic computerized adaptive testing (CD-CAT). In regular CAT, online calibration is commonly used to calibrate new items continuously. However, no published reference on online calibration for CD-CAT has previously been available. Thus, this study investigates the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Diagnostic Tests, Cognitive Tests