Showing all 8 results
Peer reviewed
Chang, Yuan-chin Ivan; Lu, Hung-Yi – Psychometrika, 2010
Item calibration is an essential issue in modern item response theory-based psychological and educational testing. With the popularity of computerized adaptive testing, methods to efficiently calibrate new items have become more important than they were when paper-and-pencil test administration was the norm. There are many calibration…
Descriptors: Test Items, Educational Testing, Adaptive Testing, Measurement
Peer reviewed
Chang, Hua-Hua; Ying, Zhiliang – Psychometrika, 2008
It has been widely reported that in computerized adaptive testing some examinees may receive much lower scores than they normally would if an alternative paper-and-pencil version were given. The main purpose of this investigation is to quantitatively reveal the cause of this underestimation phenomenon. The logistic models, including the 1PL, 2PL, and…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computation, Test Items
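As a rough illustration of the logistic models and the standard item-selection rule these CAT studies refer to, the sketch below shows a 2PL item response function and maximum-information item selection. The pool, parameter values, and function names are hypothetical, not code from the paper.

```python
import numpy as np

def prob_2pl(theta, a, b):
    """2PL item response function: P(correct | theta) with discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: a^2 * P * (1 - P)."""
    p = prob_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, a, b, administered):
    """Maximum-information rule: pick the unused item most informative at theta_hat."""
    info = item_information(theta_hat, a, b)
    info[list(administered)] = -np.inf   # mask items already given
    return int(np.argmax(info))

# Hypothetical five-item pool
a = np.array([1.2, 0.8, 1.5, 1.0, 2.0])   # discriminations
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])  # difficulties
print(select_next_item(theta_hat=0.3, a=a, b=b, administered={2}))
```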
Peer reviewed
van der Linden, Wim J. – Psychometrika, 1998
This paper suggests several item selection criteria for adaptive testing that are all based on the use of the true posterior. Some of the ability estimators produced by these criteria are discussed and empirically criticized. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
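One criterion of the kind discussed here is posterior-weighted information combined with an EAP ability estimate. The grid-based sketch below is only an illustration under that assumption, with hypothetical item parameters and helper names, not the paper's implementation.

```python
import numpy as np

theta_grid = np.linspace(-4, 4, 161)          # quadrature grid for ability
prior = np.exp(-0.5 * theta_grid**2)          # standard-normal prior (unnormalized)

def prob_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def posterior(responses, a, b):
    """Grid posterior over theta given 0/1 responses to items with parameters a_i, b_i."""
    like = np.ones_like(theta_grid)
    for u, ai, bi in zip(responses, a, b):
        p = prob_2pl(theta_grid, ai, bi)
        like *= p**u * (1 - p)**(1 - u)
    post = prior * like
    return post / post.sum()

def posterior_weighted_info(post, a, b):
    """Fisher information of each candidate item, averaged over the posterior for theta."""
    return np.array([
        (ai**2 * prob_2pl(theta_grid, ai, bi) * (1 - prob_2pl(theta_grid, ai, bi)) * post).sum()
        for ai, bi in zip(a, b)
    ])

# Usage: EAP ability estimate and next-item choice after two responses
a_pool = np.array([1.0, 1.4, 0.7, 2.0])
b_pool = np.array([-0.5, 0.2, 1.0, 0.8])
post = posterior([1, 0], a_pool[:2], b_pool[:2])
eap = (theta_grid * post).sum()
next_item = 2 + int(np.argmax(posterior_weighted_info(post, a_pool[2:], b_pool[2:])))
print(eap, next_item)
```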
Peer reviewed
Macready, George B.; Dayton, C. Mitchell – Psychometrika, 1992
An adaptive testing algorithm is presented based on an alternative modeling framework, and its effectiveness is investigated in a simulation based on real data. The algorithm uses a latent class modeling framework in which assessed latent attributes are assumed to be categorical variables. (SLD)
Descriptors: Adaptive Testing, Algorithms, Bayesian Statistics, Classification
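A minimal sketch in the spirit of the latent class framework described above: the posterior over categorical classes is updated by Bayes' rule, and the next item is chosen to minimize expected posterior entropy. The decision rule, probabilities, and names here are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def class_posterior(prior, correct_probs, responses, items):
    """Posterior over latent classes after 0/1 responses to the given items.
    correct_probs[k, j] = P(correct on item j | class k)."""
    post = prior.astype(float).copy()
    for u, j in zip(responses, items):
        p = correct_probs[:, j]
        post *= p**u * (1 - p)**(1 - u)
    return post / post.sum()

def next_item_min_entropy(post, correct_probs, administered):
    """Choose the unused item whose response is expected to shrink class uncertainty most."""
    def entropy(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()
    best, best_h = None, np.inf
    for j in range(correct_probs.shape[1]):
        if j in administered:
            continue
        p_correct = (post * correct_probs[:, j]).sum()
        post_if_right = post * correct_probs[:, j]
        post_if_wrong = post * (1 - correct_probs[:, j])
        h = (p_correct * entropy(post_if_right / post_if_right.sum())
             + (1 - p_correct) * entropy(post_if_wrong / post_if_wrong.sum()))
        if h < best_h:
            best, best_h = j, h
    return best

# Two latent classes (e.g., master / non-master) and three hypothetical items
prior = np.array([0.5, 0.5])
cp = np.array([[0.9, 0.8, 0.7],    # P(correct | master)
               [0.3, 0.2, 0.4]])   # P(correct | non-master)
post = class_posterior(prior, cp, responses=[1], items=[0])
print(next_item_min_entropy(post, cp, administered={0}))
```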
Peer reviewed
Armstrong, Ronald D.; And Others – Psychometrika, 1992
A method is presented and illustrated for simultaneously generating multiple tests with similar characteristics from the item bank by using binary programming techniques. The parallel tests are created to match an existing seed test item-for-item and to match user-supplied taxonomic specifications. (SLD)
Descriptors: Algorithms, Arithmetic, Computer Assisted Testing, Equations (Mathematics)
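The binary programming formulation itself is not given in the abstract; the sketch below only illustrates the idea of item-for-item matching to a seed test with a simple greedy heuristic over hypothetical item data. A real implementation would pose this jointly as a 0-1 program and call an integer-programming solver.

```python
import numpy as np

def assemble_parallel_form(seed, pool, used=frozenset()):
    """Greedy item-for-item matching: for each seed item, pick the unused pool item
    with the same content code whose (a, b) parameters are closest.
    Illustrative only; a binary-programming approach solves the selection jointly."""
    chosen = []
    used = set(used)
    for s in seed:
        candidates = [j for j, p in enumerate(pool)
                      if p["content"] == s["content"] and j not in used]
        j_best = min(candidates,
                     key=lambda j: abs(pool[j]["a"] - s["a"]) + abs(pool[j]["b"] - s["b"]))
        used.add(j_best)
        chosen.append(j_best)
    return chosen

# Hypothetical seed test and item bank
seed = [{"a": 1.2, "b": 0.0, "content": "arith"},
        {"a": 0.9, "b": 1.0, "content": "algebra"}]
pool = [{"a": 1.1, "b": 0.1, "content": "arith"},
        {"a": 1.3, "b": -0.2, "content": "arith"},
        {"a": 0.8, "b": 1.1, "content": "algebra"},
        {"a": 1.0, "b": 0.9, "content": "algebra"}]
print(assemble_parallel_form(seed, pool))   # [0, 2]
```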
Peer reviewed
Samejima, Fumiko – Psychometrika, 1994
Using the constant information model, constant amounts of test information, and a finite interval of ability, simulated data were produced for 8 ability levels and 20 test lengths (numbers of items). Analyses suggest that it is desirable to consider modifying test information functions when they are used to measure accuracy in ability estimation. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Peer reviewed
Jones, Douglas H.; Jin, Zhiying – Psychometrika, 1994
Replenishing item pools for on-line ability testing requires innovative and efficient data collection. A method is proposed to collect test item calibration data in an on-line testing environment sequentially using locally D-optimum designs, thereby achieving high Fisher information for the item parameters. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Data Collection
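To illustrate what locally D-optimal sequential calibration can look like for a 2PL item, the sketch below accumulates the item-parameter information matrix and, at each step, picks the examinee ability that maximizes its determinant. Parameter values and function names are hypothetical assumptions, not the authors' procedure.

```python
import numpy as np

def item_param_info(theta, a, b):
    """Fisher information matrix for the (a, b) parameters of a 2PL item,
    contributed by a single response observed at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    g = np.array([theta - b, -a])      # gradient of the logit a*(theta - b) w.r.t. (a, b)
    return p * (1 - p) * np.outer(g, g)

def next_calibration_theta(M, a_hat, b_hat, candidate_thetas):
    """Locally D-optimal choice: the examinee ability that most increases det(M)."""
    dets = [np.linalg.det(M + item_param_info(t, a_hat, b_hat)) for t in candidate_thetas]
    return candidate_thetas[int(np.argmax(dets))]

# Usage: start from a small ridge so M is invertible, then pick abilities sequentially
M = 1e-3 * np.eye(2)
a_hat, b_hat = 1.0, 0.5                # provisional estimates for the new item
for _ in range(3):
    t = next_calibration_theta(M, a_hat, b_hat, np.linspace(-3, 3, 25))
    M = M + item_param_info(t, a_hat, b_hat)
print(np.linalg.det(M))
```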
Peer reviewed
Segall, Daniel O. – Psychometrika, 1996
Maximum likelihood and Bayesian procedures are presented for item selection and scoring of multidimensional adaptive tests. A demonstration with simulated response data illustrates that multidimensional adaptive testing can provide equal or higher reliabilities with fewer items than are required in one-dimensional adaptive testing. (SLD)
Descriptors: Adaptive Testing, Bayesian Statistics, Computer Assisted Testing, Equations (Mathematics)
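A sketch of Bayesian item selection for a multidimensional 2PL model: choose the item that maximizes the determinant of the accumulated (prior precision plus item) information matrix at the current ability estimate. The pool, parameters, and helper names are assumptions for illustration, not code from the paper.

```python
import numpy as np

def mirt_item_info(a_vec, p):
    """Information-matrix contribution of a multidimensional 2PL item:
    p * (1 - p) * a a^T, where a is the vector of discriminations."""
    return p * (1 - p) * np.outer(a_vec, a_vec)

def select_item_multidim(theta_hat, A, d, administered, prior_cov):
    """Pick the unused item maximizing det(prior precision + administered info + candidate info),
    evaluated at the current multidimensional ability estimate theta_hat."""
    precision = np.linalg.inv(prior_cov)
    for j in administered:
        p = 1.0 / (1.0 + np.exp(-(A[j] @ theta_hat + d[j])))
        precision = precision + mirt_item_info(A[j], p)
    best, best_det = None, -np.inf
    for j in range(len(d)):
        if j in administered:
            continue
        p = 1.0 / (1.0 + np.exp(-(A[j] @ theta_hat + d[j])))
        det = np.linalg.det(precision + mirt_item_info(A[j], p))
        if det > best_det:
            best, best_det = j, det
    return best

# Two-dimensional ability, four hypothetical items (rows of A are discrimination vectors)
A = np.array([[1.2, 0.1], [0.2, 1.3], [0.8, 0.8], [1.5, 0.0]])
d = np.array([0.0, -0.5, 0.3, 1.0])
print(select_item_multidim(np.zeros(2), A, d, administered={0}, prior_cov=np.eye(2)))
```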