Showing 1 to 15 of 29 results
Peer reviewed
Mao, Xiuzhen; Xin, Tao – Applied Psychological Measurement, 2013
The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…
Descriptors: Monte Carlo Methods, Cognitive Tests, Diagnostic Tests, Computer Assisted Testing
Peer reviewed
Hsu, Chia-Ling; Wang, Wen-Chung; Chen, Shu-Ying – Applied Psychological Measurement, 2013
Interest in developing computerized adaptive testing (CAT) under cognitive diagnosis models (CDMs) has increased recently. CAT algorithms that use a fixed-length termination rule frequently lead to different degrees of measurement precision for different examinees. Fixed precision, in which the examinees receive the same degree of measurement…
Descriptors: Computer Assisted Testing, Adaptive Testing, Cognitive Tests, Diagnostic Tests
Peer reviewed
Yao, Lihua – Applied Psychological Measurement, 2013
Through simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared using different stopping rules. Fixed item exposure rates are used for all the items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung – Applied Psychological Measurement, 2012
In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on these kinds of models were implemented, and their performance under a variety of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Simulation
Peer reviewed
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G. – Applied Psychological Measurement, 2012
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Descriptors: Item Response Theory, Models, Selection, Criteria
Peer reviewed
Kang, Taehoon; Cohen, Allan S.; Sung, Hyun-Jung – Applied Psychological Measurement, 2009
This study examines the utility of four indices for use in model selection with nested and nonnested polytomous item response theory (IRT) models: a cross-validation index and three information-based indices. Four commonly used polytomous IRT models are considered: the graded response model, the generalized partial credit model, the partial credit…
Descriptors: Item Response Theory, Models, Selection, Simulation
Peer reviewed
Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose – Applied Psychological Measurement, 2010
In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…
Descriptors: Test Items, Simulation, Adaptive Testing, Item Analysis
Peer reviewed
Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo – Applied Psychological Measurement, 2009
This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information coefficient (AIC), Bayesian information coefficient (BIC), deviance information coefficient (DIC), pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
Descriptors: Item Response Theory, Models, Selection, Methods
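Several entries above compare information-based model selection indices such as AIC and BIC. As a minimal sketch of how two of those indices are computed, the snippet below defines AIC and BIC and applies them to a hypothetical comparison of a generalized partial credit model (GPCM) against a partial credit model (PCM); the log-likelihoods and parameter counts are illustrative placeholders, not values from any of the studies listed.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: smaller is better."""
    return -2.0 * log_likelihood + 2.0 * n_params

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: penalizes each parameter by log(n)."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical fits of two polytomous IRT models to the same data set.
# The GPCM has more free parameters than the PCM; numbers are made up.
fits = {
    "GPCM": {"loglik": -5210.4, "k": 60},
    "PCM":  {"loglik": -5268.9, "k": 40},
}
n_examinees = 1000
for name, f in fits.items():
    print(name,
          round(aic(f["loglik"], f["k"]), 1),
          round(bic(f["loglik"], f["k"], n_examinees), 1))
```

Because BIC's penalty grows with sample size while AIC's does not, the two indices can disagree about which model to retain, which is one reason studies like those above examine several indices side by side.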
Peer reviewed
Zeng, Lingjia – Applied Psychological Measurement, 1995
The effects of different degrees of smoothing on results of equipercentile equating in random groups design using a postsmoothing method based on cubic splines were investigated, and a computer-based procedure was introduced for selecting a desirable degree of smoothing. Results suggest that no particular degree of smoothing was always optimal.…
Descriptors: Computer Simulation, Computer Software, Equated Scores, Research Methodology
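The entry above concerns postsmoothing of equipercentile equating results. As a minimal sketch of the unsmoothed equipercentile step that such smoothing is applied to, the snippet below maps each score on form X to the form-Y score with the same percentile rank; the cubic-spline postsmoothing itself is omitted, and the score frequencies are hypothetical.

```python
def percentile_ranks(freqs):
    """Percentile rank at each score point: cumulative proportion below
    the score plus half the proportion at the score, times 100."""
    n = sum(freqs)
    ranks, below = [], 0
    for f in freqs:
        ranks.append(100.0 * (below + 0.5 * f) / n)
        below += f
    return ranks

def equipercentile(freqs_x, freqs_y):
    """Map each form-X score to the form-Y score with the same percentile
    rank, interpolating linearly between Y score points (no smoothing)."""
    pr_x, pr_y = percentile_ranks(freqs_x), percentile_ranks(freqs_y)
    equated = []
    for p in pr_x:
        if p <= pr_y[0]:
            equated.append(0.0)
        elif p >= pr_y[-1]:
            equated.append(float(len(pr_y) - 1))
        else:
            # find the bracketing Y score points and interpolate between them
            j = max(i for i in range(len(pr_y)) if pr_y[i] <= p)
            equated.append(j + (p - pr_y[j]) / (pr_y[j + 1] - pr_y[j]))
    return equated

# Identical distributions should equate each score to itself.
freqs = [10, 20, 40, 20, 10]
print(equipercentile(freqs, freqs))
```

The raw equated function produced this way is typically irregular at sparsely observed scores, which is why postsmoothing methods (such as the cubic-spline approach studied above) are applied afterward.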
Peer reviewed
Chang, Hua-Hua; Qian, Jiahe; Yang, Zhiliang – Applied Psychological Measurement, 2001
Proposed a refinement, based on the stratification of items developed by D. Weiss (1973), of the computerized adaptive testing item selection procedure of H. Chang and Z. Ying (1999). Simulation studies using an item bank from the Graduate Record Examination show the benefits of the new procedure. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Selection, Simulation
Peer reviewed
Eggen, T. J. H. M. – Applied Psychological Measurement, 1999
Evaluates a method for item selection in adaptive testing that is based on Kullback-Leibler information (KLI) (T. Cover and J. Thomas, 1991). Simulation study results show that testing algorithms using KLI-based item selection perform better than or as well as those using Fisher information item selection. (SLD)
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Selection
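The entry above contrasts Kullback-Leibler (KL) information with Fisher information for adaptive item selection. As a minimal sketch of that contrast under a 2PL model, the snippet below computes both criteria for a tiny hypothetical item pool: Fisher information is evaluated at the provisional ability estimate, while the KL index integrates the divergence over an interval around it. The pool parameters, interval width, and grid size are all illustrative assumptions.

```python
import math

def p2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def kl_info(theta0, theta1, a, b):
    """KL divergence between the item's Bernoulli response
    distributions at theta0 versus theta1."""
    p0, p1 = p2pl(theta0, a, b), p2pl(theta1, a, b)
    return p0 * math.log(p0 / p1) + (1 - p0) * math.log((1 - p0) / (1 - p1))

def kl_index(theta_hat, a, b, delta=1.0, n_grid=41):
    """KL-based selection index: integrate the divergence over
    [theta_hat - delta, theta_hat + delta] with a crude rectangle rule."""
    lo = theta_hat - delta
    step = 2.0 * delta / (n_grid - 1)
    return step * sum(kl_info(theta_hat, lo + i * step, a, b)
                      for i in range(n_grid))

# Tiny hypothetical pool of (a, b) pairs; select the next item each way.
pool = [(1.2, -0.5), (0.8, 0.1), (1.6, 0.4)]
theta_hat = 0.3
best_fisher = max(pool, key=lambda ab: fisher_info(theta_hat, *ab))
best_kl = max(pool, key=lambda ab: kl_index(theta_hat, *ab))
```

Because the KL index averages over a neighborhood of the provisional estimate rather than a single point, it can be less sensitive to early estimation error, which is the intuition behind using it early in a CAT.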
Peer reviewed
Veldkamp, Bernard P. – Applied Psychological Measurement, 2002
Presents two mathematical programming approaches for the assembly of ability tests from item pools calibrated under a multidimensional item response theory model. Item selection is based on the Fisher information matrix. Illustrates the method through empirical examples for a two-dimensional mathematics item pool. (SLD)
Descriptors: Ability, Item Banks, Item Response Theory, Selection
Peer reviewed
Ackerman, Terry A.; Evans, John A. – Applied Psychological Measurement, 1994
The effect of the conditioning score on the results of differential item functioning (DIF) analysis was examined with simulated data. The study demonstrates that results of DIF that rely on a conditioning score can be quite different depending on the conditioning variable that is selected. (SLD)
Descriptors: Construct Validity, Identification, Item Bias, Selection
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Applied Psychological Measurement, 1999
Proposes a new multistage adaptive-testing procedure that factors the discrimination parameter (alpha) into the item-selection process. Simulation studies indicate that the new strategy results in tests that are well-balanced, with respect to item exposure, and efficient. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Banks, Selection
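The entry above describes a multistage procedure that brings the discrimination parameter into item selection. As a minimal sketch of an a-stratified design in that spirit, the snippet below sorts a hypothetical pool by discrimination, partitions it into strata (low-a strata used early in the test), and within the active stratum picks the unused item whose difficulty is closest to the current ability estimate. Pool contents and stratum count are illustrative assumptions.

```python
def build_strata(pool, n_strata):
    """Sort items by discrimination a and split into equal-sized strata,
    with low-a strata first (administered early in the test)."""
    ordered = sorted(pool, key=lambda item: item["a"])
    size = len(ordered) // n_strata
    return [ordered[i * size:(i + 1) * size] for i in range(n_strata)]

def select_item(stratum, theta_hat, used):
    """Within the active stratum, pick the unused item whose difficulty b
    lies closest to the current ability estimate."""
    candidates = [it for it in stratum if it["id"] not in used]
    return min(candidates, key=lambda it: abs(it["b"] - theta_hat))

# Tiny hypothetical pool; a and b values are illustrative only.
pool = [
    {"id": 1, "a": 0.5, "b": -1.0}, {"id": 2, "a": 0.7, "b": 0.2},
    {"id": 3, "a": 1.0, "b": -0.3}, {"id": 4, "a": 1.2, "b": 0.8},
    {"id": 5, "a": 1.6, "b": 0.1},  {"id": 6, "a": 2.0, "b": -0.5},
]
strata = build_strata(pool, 3)  # [low-a, mid-a, high-a]
first = select_item(strata[0], theta_hat=0.0, used=set())
```

Reserving high-discrimination items for later stages, when the ability estimate is more stable, is what balances item exposure across the pool.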
Peer reviewed
Pastor, Dena A.; Dodd, Barbara G.; Chang, Hua-Hua – Applied Psychological Measurement, 2002
Studied the impact of using five different exposure control algorithms in two sizes of item pool calibrated using the generalized partial credit model. Simulation results show that the a-stratified design, in comparison to a no-exposure control condition, could be used to reduce item exposure and overlap and increase pool use, while degrading…
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Item Banks