Showing all 11 results
Peer reviewed
Zhang, Jinming – Psychometrika, 2013
In some popular test designs (including computerized adaptive testing and multistage testing), many item pairs are not administered to any test takers, which may result in some complications during dimensionality analyses. In this paper, a modified DETECT index is proposed in order to perform dimensionality analyses for response data from such…
Descriptors: Adaptive Testing, Simulation, Computer Assisted Testing, Test Reliability
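For orientation, the standard DETECT index that the proposed modification builds on is, up to a scaling constant,

  D(P) = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} \delta_{ij}(P) \, \hat{C}_{ij},

where \hat{C}_{ij} is the estimated covariance of items i and j conditional on the remaining test score (or estimated ability), and \delta_{ij}(P) equals +1 when partition P places items i and j in the same cluster and -1 otherwise. The paper's adjustment for item pairs that are never administered together is not reproduced here.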
Peer reviewed
Yao, Lihua – Psychometrika, 2012
Multidimensional computer adaptive testing (MCAT) can provide higher precision and reliability or reduce test length when compared with unidimensional CAT or with the paper-and-pencil test. This study compared five item selection procedures in the MCAT framework for both domain scores and overall scores through simulation by varying the structure…
Descriptors: Item Banks, Test Length, Simulation, Adaptive Testing
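The abstract does not list the five procedures; as an illustration only, the sketch below implements one rule that commonly appears in such comparisons, D-optimality under a multidimensional 2PL model (select the unused item that maximizes the determinant of the accumulated Fisher information matrix). All names here are hypothetical, and this is not presented as the paper's design.

import numpy as np

def m2pl_prob(theta, a, b):
    # Multidimensional 2PL: P(correct) = logistic(a'theta - b)
    return 1.0 / (1.0 + np.exp(-(a @ theta - b)))

def item_info(theta, a, b):
    # One item's Fisher information matrix contribution at ability theta
    p = m2pl_prob(theta, a, b)
    return p * (1.0 - p) * np.outer(a, a)

def select_item_d_optimal(theta_hat, a_bank, b_bank, administered):
    # D-optimality: among unused items, pick the one that maximizes the
    # determinant of the information accumulated so far plus its own.
    acc = np.eye(len(theta_hat)) * 1e-6  # small ridge so early determinants are defined
    for i in administered:
        acc = acc + item_info(theta_hat, a_bank[i], b_bank[i])
    best_item, best_det = None, -np.inf
    for j in range(len(b_bank)):
        if j in administered:
            continue
        det = np.linalg.det(acc + item_info(theta_hat, a_bank[j], b_bank[j]))
        if det > best_det:
            best_item, best_det = j, det
    return best_item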
Peer reviewed
Chang, Yuan-chin Ivan; Lu, Hung-Yi – Psychometrika, 2010
Item calibration is an essential issue in modern item response theory-based psychological and educational testing. With the popularity of computerized adaptive testing, methods to efficiently calibrate new items have become more important than they were when paper-and-pencil test administration was the norm. There are many calibration…
Descriptors: Test Items, Educational Testing, Adaptive Testing, Measurement
Peer reviewed
Cheng, Ying – Psychometrika, 2009
Computerized adaptive testing (CAT) is a mode of testing which enables more efficient and accurate recovery of one or more latent traits. Traditionally, CAT is built upon Item Response Theory (IRT) models that assume unidimensionality. However, the problem of how to build CAT upon latent class models (LCM) has not been investigated until recently,…
Descriptors: Simulation, Adaptive Testing, Heuristics, Scientific Concepts
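The abstract does not spell out the heuristics studied; purely as a sketch of how item selection can work when CAT is built on a latent class model (and not as the paper's method), the code below picks the item that minimizes the expected Shannon entropy of the posterior over latent classes. All names are hypothetical.

import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(prior, p_correct_by_class):
    # prior: current posterior over latent classes, shape (K,)
    # p_correct_by_class: P(correct | class k) for one item, shape (K,)
    expected = 0.0
    for p_response in (p_correct_by_class, 1.0 - p_correct_by_class):
        marginal = np.sum(prior * p_response)          # P(this response)
        if marginal > 0:
            posterior = prior * p_response / marginal  # Bayes update
            expected += marginal * entropy(posterior)
    return expected

def select_item_min_entropy(prior, bank, administered):
    # bank: array of shape (n_items, K) with P(correct | class) per item.
    # Pick the unused item with the smallest expected posterior entropy.
    scores = [np.inf if i in administered
              else expected_posterior_entropy(prior, bank[i])
              for i in range(len(bank))]
    return int(np.argmin(scores))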
Peer reviewed
Wang, Tianyou; Zhang, Jiawei – Psychometrika, 2006
This paper deals with optimal partitioning of limited testing time in order to achieve maximum total test score. Nonlinear optimization theory was used to analyze this problem. A general case using a generic item response model is first presented. A special case that applies a response time model proposed by Wang and Hanson (2005) is also…
Descriptors: Reaction Time, Testing, Scores, Item Response Theory
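Schematically, and without reproducing the paper's derivations, the allocation problem can be stated as a nonlinear program: with P_i(\theta, t_i) the probability of answering item i correctly given ability \theta and time t_i spent on it, and T the total testing time,

  \max_{t_1, \dots, t_n} \; \sum_{i=1}^{n} P_i(\theta, t_i)
  \quad \text{subject to} \quad \sum_{i=1}^{n} t_i \le T, \quad t_i \ge 0.

The exact objective and constraints analyzed in the paper may differ in detail.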
Peer reviewed
Xu, Xueli; Douglas, Jeff – Psychometrika, 2006
Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using…
Descriptors: Simulation, Nonparametric Statistics, Item Analysis, Item Response Theory
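Nonparametric item response models of the kind referenced here typically estimate each item characteristic curve by kernel smoothing of observed responses against provisional ability estimates. The sketch below shows a minimal Nadaraya-Watson version; it is illustrative only and not necessarily the estimator used in the paper.

import numpy as np

def kernel_icc(theta_grid, theta_hat, responses, bandwidth=0.3):
    # Nadaraya-Watson estimate of P(correct | theta) for one item,
    # from provisional abilities theta_hat and 0/1 responses.
    icc = np.empty(len(theta_grid))
    for g, t in enumerate(theta_grid):
        w = np.exp(-0.5 * ((theta_hat - t) / bandwidth) ** 2)  # Gaussian kernel weights
        icc[g] = np.sum(w * responses) / np.sum(w)
    return icc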
Peer reviewed
Andrich, David – Psychometrika, 1995
This book discusses adapting pencil-and-paper tests to computerized testing. Mention is made of models for graded responses to items and of possibilities beyond pencil-and-paper tests, but the book is essentially about dichotomously scored test items. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Scores
Peer reviewed
Armstrong, Ronald D.; And Others – Psychometrika, 1992
A method is presented and illustrated for simultaneously generating multiple tests with similar characteristics from an item bank using binary programming techniques. The parallel tests are created to match an existing seed test, item for item, and to match user-supplied taxonomic specifications. (SLD)
Descriptors: Algorithms, Arithmetic, Computer Assisted Testing, Equations (Mathematics)
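One generic binary-programming formulation of item-for-item matching (an illustration of the technique named above, not necessarily the paper's exact model) uses decision variables x_{ij} \in \{0, 1\} indicating that bank item j is selected to match seed-test item i:

  \min \sum_{i} \sum_{j} d_{ij} x_{ij}
  \quad \text{s.t.} \quad \sum_{j} x_{ij} = 1 \;\; \forall i, \qquad \sum_{i} x_{ij} \le 1 \;\; \forall j,

together with user-supplied taxonomic constraints on how many selected items fall in each content category, where d_{ij} measures the distance between the IRT parameters (or information functions) of seed item i and bank item j.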
Peer reviewed
Samejima, Fumiko – Psychometrika, 1994
Using the constant information model, constant amounts of test information, and a finite interval of ability, simulated data were produced for 8 ability levels and 20 different test lengths. Analyses suggest that it is desirable to consider modifying test information functions when they are used as measures of accuracy in ability estimation. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Peer reviewed
Jones, Douglas H.; Jin, Zhiying – Psychometrika, 1994
Replenishing item pools for on-line ability testing requires innovative and efficient data collection. A method is proposed to collect test item calibration data in an on-line testing environment sequentially using locally D-optimum designs, thereby achieving high Fisher information for the item parameters. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Data Collection
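Schematically, a locally D-optimum sequential design for calibrating a new item chooses the next design point (ability level) to maximize the determinant of the accumulated information matrix for the item-parameter vector \eta, evaluated at the current estimate \hat{\eta}:

  \theta_{k+1} = \arg\max_{\theta} \; \det\!\Big( \sum_{m=1}^{k} I(\hat{\eta}; \theta_m) + I(\hat{\eta}; \theta) \Big),

where I(\eta; \theta) is the single-response Fisher information matrix for the item parameters from an examinee at ability \theta. This is a generic statement of the criterion, not the paper's specific algorithm.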
Peer reviewed
Segall, Daniel O. – Psychometrika, 1996
Maximum likelihood and Bayesian procedures are presented for item selection and scoring of multidimensional adaptive tests. A demonstration with simulated response data illustrates that multidimensional adaptive testing can provide equal or higher reliabilities with fewer items than are required in one-dimensional adaptive testing. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Equations (Mathematics)
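As commonly summarized, the Bayesian item-selection rule in this line of work picks the candidate item whose administration maximizes the determinant of the posterior information matrix of the ability vector:

  j^{*} = \arg\max_{j \notin S} \; \det\!\Big( \Phi^{-1} + \sum_{i \in S} I_i(\hat{\theta}) + I_j(\hat{\theta}) \Big),

where \Phi is the prior covariance matrix of \theta, S is the set of items already administered, and I_i(\hat{\theta}) is the Fisher information matrix of item i at the provisional ability estimate. Readers should consult the article for the exact form used there.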