Showing 1 to 15 of 59 results
Peer reviewed
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi – Applied Psychological Measurement, 2013
Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Length, Ability
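The standard-error termination rule referenced in this abstract is straightforward to illustrate: under a two-parameter logistic (2PL) model, each administered item contributes Fisher information at the current ability estimate, and testing stops once the provisional standard error falls below a target. A minimal sketch, assuming a 2PL pool and an illustrative 0.30 cutoff (neither taken from the article):

    import numpy as np

    def item_information(theta, a, b):
        # 2PL Fisher information: I(theta) = a^2 * P(theta) * (1 - P(theta))
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    def standard_error(theta_hat, a_params, b_params):
        # SE(theta_hat) = 1 / sqrt(total test information at theta_hat)
        total = sum(item_information(theta_hat, a, b)
                    for a, b in zip(a_params, b_params))
        return 1.0 / np.sqrt(total)

    # Stop the variable-length CAT once the SE reaches the target:
    a_params, b_params = [1.2, 0.9, 1.5], [0.0, -0.4, 0.6]
    if standard_error(0.5, a_params, b_params) <= 0.30:
        print("terminate test")

Note that the SE depends on the item parameters through the information function, so the stopping point is itself sensitive to how the items are calibrated.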
Peer reviewed
Han, Kyung T. – Applied Psychological Measurement, 2013
Most computerized adaptive testing (CAT) programs do not allow test takers to review and change their responses, because doing so could seriously degrade measurement efficiency and make tests vulnerable to manipulative test-taking strategies. Several modified testing methods have been developed that provide restricted review options while…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Testing
Peer reviewed
DeMars, Christine E. – Applied Psychological Measurement, 2012
A testlet is a cluster of items that share a common passage, scenario, or other context. These items might measure something in common beyond the trait measured by the test as a whole; if so, the model for the item responses should allow for this testlet trait. But modeling testlet effects that are negligible makes the model unnecessarily…
Descriptors: Test Items, Item Response Theory, Comparative Analysis, Models
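For reference, the most common formalization of such a testlet trait is the two-parameter testlet model (Bradlow, Wainer, & Wang, 1999), which adds a person-specific testlet effect to the 2PL; the abstract does not say which variant was fitted, so take this as the generic form:

    P(X_{ij} = 1 \mid \theta_j, \gamma_{j d(i)})
        = \frac{\exp\!\left[a_i\left(\theta_j - b_i - \gamma_{j d(i)}\right)\right]}
               {1 + \exp\!\left[a_i\left(\theta_j - b_i - \gamma_{j d(i)}\right)\right]}

Here gamma_{j d(i)} is person j's effect for the testlet d(i) containing item i. When Var(gamma_d) = 0 the model collapses to the ordinary 2PL, which is exactly the boundary case at issue: modeling a testlet effect that is in fact negligible spends parameters on nothing.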
Peer reviewed
Yao, Lihua – Applied Psychological Measurement, 2013
Using simulated data, five multidimensional computerized adaptive testing (MCAT) selection procedures with varying test lengths are examined and compared under different stopping rules. Fixed item exposure rates are used for all items, and the Priority Index (PI) method is used for the content constraints. Two stopping rules, standard error…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Selection
Peer reviewed
Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung – Applied Psychological Measurement, 2012
In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on these kinds of models were implemented, and their performance under a variety of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Simulation
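The hierarchy these higher order models assume can be written compactly: each domain (first-order) trait loads on a single overall trait. A generic two-level form (notation illustrative; see, e.g., de la Torre & Song, 2009, for this kind of formulation):

    \theta_{jk} = \lambda_k\, \theta_j^{(2)} + \varepsilon_{jk},
    \qquad \varepsilon_{jk} \sim N\!\left(0,\; 1 - \lambda_k^{2}\right)

where theta_j^{(2)} is examinee j's higher order trait, theta_{jk} the kth first-order trait, and lambda_k its loading. CAT algorithms built on this structure can borrow strength across domains: responses to items in one domain update the higher order trait and, through the loadings, sharpen the estimates of the others.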
Peer reviewed
Seybert, Jacob; Stark, Stephen – Applied Psychological Measurement, 2012
A Monte Carlo study was conducted to examine the accuracy of differential item functioning (DIF) detection using the differential functioning of items and tests (DFIT) method. Specifically, the performance of DFIT was compared using "testwide" critical values suggested by Flowers, Oshima, and Raju, based on simulations involving large numbers of…
Descriptors: Test Bias, Monte Carlo Methods, Form Classes (Languages), Simulation
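For context, the DFIT index to which those critical values apply is the noncompensatory DIF (NCDIF) statistic of Raju, van der Linden, and Fleer (1995):

    \mathrm{NCDIF}_i = E_F\!\left[\left(P_{iF}(\theta) - P_{iR}(\theta)\right)^{2}\right]

where P_{iF} and P_{iR} are item i's response functions under focal- and reference-group parameter estimates, and the expectation is taken over the focal group's ability distribution. An item is flagged when NCDIF_i exceeds a critical value, so the choice between item-level and "testwide" cutoffs directly governs the Type I error and power that the simulation compares.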
Peer reviewed
Wang, Wen-Chung; Shih, Ching-Lin – Applied Psychological Measurement, 2010
Three multiple indicators-multiple causes (MIMIC) methods, namely, the standard MIMIC method (M-ST), the MIMIC method with scale purification (M-SP), and the MIMIC method with a pure anchor (M-PA), were developed to assess differential item functioning (DIF) in polytomous items. In a series of simulations, it appeared that all three methods…
Descriptors: Methods, Test Bias, Test Items, Error of Measurement
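In MIMIC-based DIF assessment, the latent response to each item is regressed on both the trait and a group indicator; a nonzero direct group effect signals uniform DIF. A generic single-item formulation (the three variants differ in which items' direct effects are constrained to zero, not in this core equation):

    y_i^{*} = \lambda_i \theta + \beta_i g + \varepsilon_i

where g codes group membership (reference vs. focal) and item i shows uniform DIF when beta_i differs from zero. Roughly, M-ST tests each beta_i with all remaining items as the anchor, M-SP iteratively purges DIF items from that anchor, and M-PA constrains beta to zero only for a designated DIF-free anchor.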
Peer reviewed
Finch, Holmes – Applied Psychological Measurement, 2011
Multidimensional item response theory (MIRT) model parameters can be estimated for the normal ogive model by unweighted least squares, as implemented in the normal-ogive harmonic analysis robust method (NOHARM) software. Previous simulation research has demonstrated that this approach yields accurate and efficient estimates of item…
Descriptors: Item Response Theory, Computation, Test Items, Simulation
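The model NOHARM fits is the multidimensional normal ogive,

    P(X_{ij} = 1 \mid \boldsymbol{\theta}_j) = \Phi\!\left(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j + d_i\right)

where Phi is the standard normal distribution function, a_i the vector of item slopes, and d_i an intercept. The unweighted least squares criterion is applied to the observed item proportions and pairwise joint proportions rather than to the full response-pattern likelihood, which keeps estimation cheap even for high-dimensional pools.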
Peer reviewed
Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K. – Applied Psychological Measurement, 2010
This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
Descriptors: Simulation, Adaptive Testing, Item Analysis, Item Response Theory
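Of the three rules, maximum Fisher information is the easiest to sketch: at each step, evaluate every unadministered item's information at the provisional ability estimate and administer the argmax. A minimal sketch for a standalone 2PL pool (pool values are illustrative; the study's versions operate on testlet-grouped items):

    import numpy as np

    def info_2pl(theta, a, b):
        # Fisher information of a 2PL item at ability theta
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    def select_max_info(theta_hat, pool, administered):
        # pool: list of (a, b); administered: indices already given
        candidates = [(info_2pl(theta_hat, a, b), idx)
                      for idx, (a, b) in enumerate(pool)
                      if idx not in administered]
        return max(candidates)[1]

    pool = [(1.0, -1.0), (1.4, 0.0), (0.8, 0.5), (1.2, 1.0)]
    print(select_max_info(0.2, pool, administered={1}))

The two Bayesian rules differ only in the scoring function: posterior weighted information integrates the item information over the current posterior for theta rather than evaluating it at a point estimate, and minimum expected posterior variance picks the item whose response is expected to shrink the posterior variance the most.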
Peer reviewed
Barrada, Juan Ramon; Olea, Julio; Ponsoda, Vicente; Abad, Francisco Jose – Applied Psychological Measurement, 2010
In a typical study comparing the relative efficiency of two item selection rules in computerized adaptive testing, the common result is that they simultaneously differ in accuracy and security, making it difficult to reach a conclusion on which is the more appropriate rule. This study proposes a strategy to conduct a global comparison of two or…
Descriptors: Test Items, Simulation, Adaptive Testing, Item Analysis
Peer reviewed
Wang, Wen-Chung; Jin, Kuan-Yu – Applied Psychological Measurement, 2010
In this study, the advantages of slope parameters, random weights, and latent regression are brought together for component and composite items: slope parameters and random weights are added to the standard item response model with internal restrictions on item difficulty, and the resulting model is formulated within a multilevel framework…
Descriptors: Test Items, Difficulty Level, Regression (Statistics), Generalization
Peer reviewed
Lopez Rivas, Gabriel E.; Stark, Stephen; Chernyshenko, Oleksandr S. – Applied Psychological Measurement, 2009
The purpose of this simulation study is to investigate the effects of anchor subtest composition on the accuracy of item response theory (IRT) likelihood ratio (LR) differential item functioning (DIF) detection (Thissen, Steinberg, & Wainer, 1988). Here, the IRT LR test was implemented with a free baseline approach wherein a baseline model was…
Descriptors: Simulation, Item Response Theory, Test Bias, Test Items
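The free-baseline LR approach fits a baseline model in which all items except the anchor subtest are free to differ across groups, then, for each studied item, a comparison model that additionally constrains that item's parameters to be equal; twice the log-likelihood difference is referred to a chi-square distribution. A minimal sketch of the decision step (log-likelihood values are placeholders; in practice they come from an IRT program):

    from scipy.stats import chi2

    def irt_lr_dif(loglik_constrained, loglik_free, df):
        # G^2 = -2 (lnL_constrained - lnL_free) ~ chi-square(df) under no DIF
        g2 = -2.0 * (loglik_constrained - loglik_free)
        return g2, chi2.sf(g2, df)

    # 2PL studied item: freeing a and b across groups gives df = 2
    g2, p = irt_lr_dif(-1520.8, -1516.2, df=2)
    print(f"G2 = {g2:.2f}, p = {p:.4f}")  # flag DIF if p < alpha

Because every such comparison relies on the anchor items being DIF-free to link the two groups' metrics, the composition of the anchor subtest is precisely what drives the detection accuracy the study simulates.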
Peer reviewed
Furlow, Carolyn F.; Ross, Terris Raiford; Gagne, Phill – Applied Psychological Measurement, 2009
Douglas, Roussos, and Stout introduced the concept of differential bundle functioning (DBF) for identifying the underlying causes of differential item functioning (DIF). In this study, the reference group was simulated to have higher mean ability than the focal group on a nuisance dimension, resulting in DIF for each of the multidimensional items…
Descriptors: Test Bias, Test Items, Reference Groups, Simulation
Peer reviewed
Emons, Wilco H. M. – Applied Psychological Measurement, 2009
For valid decision making, it is essential, both to the person being measured and to the person or organization requesting the measurement, that observed scores adequately represent the underlying trait. This study deals with person-fit analysis of polytomous item scores to detect unusual patterns of sum scores on subsets of items. This…
Descriptors: Personality Theories, Personality Measures, Scores, Test Items
Peer reviewed
Choi, Seung W.; Swartz, Richard J. – Applied Psychological Measurement, 2009
Item selection is a core component in computerized adaptive testing (CAT). Several studies have evaluated new and classical selection methods; however, the few that have applied such methods to the use of polytomous items have reported conflicting results. To clarify these discrepancies and further investigate selection method properties, six…
Descriptors: Adaptive Testing, Item Analysis, Comparative Analysis, Test Items