Showing all 14 results
Peer reviewed
Direct link
Choe, Edison M.; Kern, Justin L.; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2018
Despite common operationalization, the measurement efficiency of computerized adaptive testing should be assessed not only in terms of the number of items administered but also in terms of the time it takes to complete the test. To this end, a recent study introduced a novel item selection criterion that maximizes Fisher information per unit of expected response…
Descriptors: Computer Assisted Testing, Reaction Time, Item Response Theory, Test Items
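The criterion this abstract describes, maximizing Fisher information per unit of expected response time, can be sketched for a 2PL model with a lognormal response-time model. The function names, the fixed sigma, and the item parameters below are illustrative assumptions, not the published algorithm:

```python
import math

def fisher_info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def expected_time(beta, tau):
    """Expected response time under a lognormal model:
    E[T] = exp(beta - tau + 0.5 * sigma**2), where beta is item time
    intensity and tau is examinee speed; sigma fixed at 1 here."""
    sigma = 1.0
    return math.exp(beta - tau + 0.5 * sigma ** 2)

def select_item(theta, tau, items):
    """Pick the item maximizing information per expected second."""
    return max(items, key=lambda it: fisher_info_2pl(theta, it["a"], it["b"])
                                     / expected_time(it["beta"], tau))

items = [
    {"id": 1, "a": 1.8, "b": 0.0, "beta": 4.0},  # highly informative but slow
    {"id": 2, "a": 1.2, "b": 0.1, "beta": 2.5},  # less informative, much faster
]
best = select_item(theta=0.0, tau=0.0, items=items)  # the faster item wins
```

Note how the time-weighted criterion can prefer a less informative but faster item, which is the efficiency trade-off the abstract is pointing at.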
Peer reviewed
Direct link
Wang, Chun; Chang, Hua-Hua; Boughton, Keith A. – Applied Psychological Measurement, 2013
Multidimensional computerized adaptive testing (MCAT) is able to provide a vector of ability estimates for each examinee, which could be used to provide a more informative profile of an examinee's performance. The current literature on MCAT focuses on fixed-length tests, which can generate less accurate results for those examinees whose…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Item Banks
Peer reviewed
PDF on ERIC
Ali, Usama S.; Chang, Hua-Hua – ETS Research Report Series, 2014
Adaptive testing is advantageous in that it provides more efficient ability estimates with fewer items than linear testing does. Item-driven adaptive pretesting may also offer similar advantages, and verification of such a hypothesis about item calibration was the main objective of this study. A suitability index (SI) was introduced to adaptively…
Descriptors: Adaptive Testing, Simulation, Pretests Posttests, Test Items
Peer reviewed
Direct link
Cheng, Ying; Chang, Hua-Hua; Douglas, Jeffrey; Guo, Fanmin – Educational and Psychological Measurement, 2009
a-stratification is a method that utilizes items with small discrimination (a) parameters early in an exam and those with higher a values when more is learned about the ability parameter. It can achieve much better item usage than the maximum information criterion (MIC). To make a-stratification more practical and more widely applicable, a method…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Direct link
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua – Applied Psychological Measurement, 2008
Criteria have been proposed for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria resulted from theoretical derivations that assumed uniformly randomized item selection. This study investigated potential damage caused by organized item theft in computerized adaptive…
Descriptors: Test Items, Simulation, Item Analysis, Safety
Peer reviewed
Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang – Applied Psychological Measurement, 2001
Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Information based item selection methods in computerized adaptive tests (CATs) tend to choose the item that provides maximum information at an examinee's estimated trait level. As a result, these methods can yield extremely skewed item exposure distributions in which items with high "a" values may be overexposed, while those with low…
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
Deng, Hui; Chang, Hua-Hua – 2001
The purpose of this study was to compare a proposed revised a-stratified, or alpha-stratified, USTR method of test item selection with the original alpha-stratified multistage computerized adaptive testing approach (STR) and with maximum Fisher information selection (FSH), with respect to test efficiency and item pool usage, in simulated computerized…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1999
Proposes a new multistage adaptive-testing procedure that factors the discrimination parameter (alpha) into the item-selection process. Simulation studies indicate that the new strategy results in tests that are well-balanced, with respect to item exposure, and efficient. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
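The a-stratified procedure proposed in the entry above partitions the pool by discrimination and draws early items from low-a strata. A minimal sketch, in which the equal-sized strata and the difficulty-matching rule within a stratum are illustrative assumptions rather than the published design:

```python
def stratify(items, n_strata):
    """Sort the pool by discrimination (a) and split it into equal-sized
    strata, lowest-a stratum first."""
    ranked = sorted(items, key=lambda it: it["a"])
    size = len(ranked) // n_strata
    return [ranked[i * size:(i + 1) * size] for i in range(n_strata)]

def select_from_stratum(stratum, theta_hat, used):
    """Within the active stratum, pick the unused item whose difficulty (b)
    is closest to the current ability estimate."""
    pool = [it for it in stratum if it["id"] not in used]
    return min(pool, key=lambda it: abs(it["b"] - theta_hat))

# Hypothetical 20-item pool with increasing discrimination values.
items = [{"id": i, "a": 0.5 + 0.1 * i, "b": (i % 7) - 3.0} for i in range(20)]
strata = stratify(items, n_strata=4)  # the test proceeds stratum by stratum
first = select_from_stratum(strata[0], theta_hat=0.0, used=set())
```

Because early stages are restricted to low-a strata, the most discriminating items are held back until the ability estimate is more stable, which is what balances item exposure.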
Peer reviewed
PDF on ERIC
Yi, Qing; Zhang, Jinming; Chang, Hua-Hua – ETS Research Report Series, 2006
Chang and Zhang (2002, 2003) proposed several baseline criteria for assessing the severity of possible test security violations for computerized tests with high-stakes outcomes. However, these criteria were obtained from theoretical derivations that assumed uniformly randomized item selection. The current study investigated potential damage caused…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Computer Security
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Test security has often been a problem in computerized adaptive testing (CAT) because traditional item selection overexposes high-discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Hau, Kit-Tai; Wen, Jian-Bing; Chang, Hua-Hua – 2002
In the a-stratified method, a popular and efficient item exposure control strategy proposed by H. Chang (H. Chang and Z. Ying, 1999; K. Hau and H. Chang, 2001) for computerized adaptive testing (CAT), the item pool and item selection process have usually been divided into four strata and the corresponding four stages. In a series of simulation…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai – Educational and Psychological Measurement, 2003
Studied three stratification designs for computerized adaptive testing in conjunction with three well-developed content balancing methods. Simulation study results show substantial differences in item overlap rate and pool utilization among different methods. Recommends an optimal combination of stratification design and content balancing method.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Simulation
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1996
An item selection procedure for computerized adaptive testing based on average global information is proposed. Results from simulation studies comparing the approach with the usual maximum item information item selection indicate that the new method leads to improvement in terms of bias and mean squared error reduction under many circumstances.…
Descriptors: Adaptive Testing, Computer Assisted Testing, Error of Measurement, Item Response Theory
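The average global information criterion in the entry above can be sketched by integrating each item's Kullback-Leibler information over an interval around the current ability estimate, rather than evaluating Fisher information at a single point. The interval half-width `delta`, the midpoint-rule quadrature, and the 2PL parameterization are illustrative assumptions:

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def kl_item(theta0, theta, a, b):
    """KL divergence between the item's response distributions
    at theta0 and at theta."""
    p0, p = p2pl(theta0, a, b), p2pl(theta, a, b)
    return p0 * math.log(p0 / p) + (1 - p0) * math.log((1 - p0) / (1 - p))

def global_info(theta0, a, b, delta=1.0, n=200):
    """Integrate KL information over [theta0 - delta, theta0 + delta]
    with a simple midpoint rule."""
    h = 2 * delta / n
    return sum(kl_item(theta0, theta0 - delta + (i + 0.5) * h, a, b)
               for i in range(n)) * h

def select_item(theta0, items, delta=1.0):
    """Pick the item with maximum integrated (global) KL information."""
    return max(items, key=lambda it: global_info(theta0, it["a"], it["b"], delta))

items = [{"id": 1, "a": 1.5, "b": 0.0}, {"id": 2, "a": 0.8, "b": 0.0}]
best = select_item(0.0, items)
```

Averaging over an interval makes the criterion less sensitive to error in the interim ability estimate, which is where the bias and mean squared error gains reported in the abstract come from.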