Showing 1 to 15 of 28 results
Peer reviewed
Jyun-Hong Chen; Hsiu-Yi Chao – Journal of Educational and Behavioral Statistics, 2024
To solve the attenuation paradox in computerized adaptive testing (CAT), this study proposes an item selection method, the integer programming approach based on real-time test data (IPRD), to improve test efficiency. The IPRD method turns information regarding the ability distribution of the population from real-time test data into feasible test…
Descriptors: Data Use, Computer Assisted Testing, Adaptive Testing, Design
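The snippet above is truncated, so the following is only a generic sketch of integer-programming item selection (shadow-test style), not the IPRD method itself: maximize Fisher information at the current ability estimate subject to simple content constraints, using PuLP. The item parameters, content areas, and constraint values are all assumptions for illustration.

    # Generic integer-programming item selection sketch (NOT the IPRD method):
    # maximize 2PL Fisher information at the provisional theta subject to a
    # test-length and content-balance constraint. All numbers are made up.
    import numpy as np
    import pulp

    rng = np.random.default_rng(0)
    n_items = 50
    a = rng.uniform(0.8, 2.0, n_items)       # 2PL discriminations (assumed)
    b = rng.normal(0.0, 1.0, n_items)        # 2PL difficulties (assumed)
    content = rng.integers(0, 3, n_items)    # three content areas (assumed)
    theta_hat = 0.4                          # current provisional ability
    test_len = 10

    p = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    info = a**2 * p * (1.0 - p)              # Fisher information at theta_hat

    prob = pulp.LpProblem("shadow_test", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_items)]
    prob += pulp.lpSum(float(info[i]) * x[i] for i in range(n_items))   # objective
    prob += pulp.lpSum(x) == test_len                                   # test length
    for c in range(3):                                                  # content balance
        prob += pulp.lpSum(x[i] for i in range(n_items) if content[i] == c) >= 3
    prob.solve(pulp.PULP_CBC_CMD(msg=False))

    selected = [i for i in range(n_items) if x[i].value() > 0.5]
    # Administer the most informative item from the optimal shadow test.
    print(sorted(selected, key=lambda i: -info[i])[:1])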
Peer reviewed
Tan, Qingrong; Cai, Yan; Luo, Fen; Tu, Dongbo – Journal of Educational and Behavioral Statistics, 2023
To improve the calibration accuracy and calibration efficiency of cognitive diagnostic computerized adaptive testing (CD-CAT) for new items and, ultimately, contribute to the widespread application of CD-CAT in practice, the current article proposed a Gini-based online calibration method that can simultaneously calibrate the Q-matrix and item…
Descriptors: Cognitive Tests, Computer Assisted Testing, Adaptive Testing, Accuracy
Peer reviewed
Wang, Shiyu; Xiao, Houping; Cohen, Allan – Journal of Educational and Behavioral Statistics, 2021
An adaptive weight estimation approach is proposed to provide robust latent ability estimation in computerized adaptive testing (CAT) with response revision. This approach assigns different weights to each distinct response to the same item when response revision is allowed in CAT. Two types of weight estimation procedures, nonfunctional and…
Descriptors: Computer Assisted Testing, Adaptive Testing, Computation, Robustness (Statistics)
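As an illustration only (not the authors' nonfunctional or functional weighting procedures), the sketch below estimates theta from a weighted 2PL log-likelihood in which an original and a revised response to the same item both contribute, each with its own weight. Items, responses, and weights are assumptions.

    # Illustrative sketch: weighted ability estimation when one item was revised.
    # The second item appears twice, once with the original and once with the
    # revised response; the weights are made up for illustration.
    import numpy as np
    from scipy.optimize import minimize_scalar

    a = np.array([1.2, 0.9, 0.9, 1.5, 1.1])   # item 2 listed twice (original/revised)
    b = np.array([-0.5, 0.3, 0.3, 0.0, 1.0])
    x = np.array([1, 0, 1, 1, 1])             # original answer 0, revised to 1
    w = np.array([1.0, 0.3, 0.7, 1.0, 1.0])   # weights for the two versions (assumed)

    def neg_wloglik(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -np.sum(w * (x * np.log(p) + (1 - x) * np.log(1 - p)))

    theta_hat = minimize_scalar(neg_wloglik, bounds=(-4, 4), method="bounded").x
    print(round(theta_hat, 3))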
Peer reviewed
Kang, Hyeon-Ah; Zheng, Yi; Chang, Hua-Hua – Journal of Educational and Behavioral Statistics, 2020
With the widespread use of computers in modern assessment, online calibration has become increasingly popular as a way of replenishing an item pool. The present study discusses online calibration strategies for a joint model of responses and response times. The study proposes likelihood inference methods for item parameter estimation and evaluates…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Reaction Time
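A minimal sketch of the response side of online calibration (the joint response-time component discussed in the abstract is omitted here): treat the provisional theta estimates of the examinees who saw a new item as fixed and maximize the 2PL likelihood over that item's parameters. All numbers are simulated for illustration.

    # Response-side online calibration sketch: MLE of (a, b) for one new item
    # given fixed provisional ability estimates. Simulated data only.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    thetas = rng.normal(0, 1, 500)              # provisional ability estimates
    a_true, b_true = 1.3, 0.2                   # generating parameters (assumed)
    p = 1 / (1 + np.exp(-a_true * (thetas - b_true)))
    y = rng.binomial(1, p)                      # responses to the new item

    def neg_loglik(params):
        a, b = params
        pr = np.clip(1 / (1 + np.exp(-a * (thetas - b))), 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(pr) + (1 - y) * np.log(1 - pr))

    res = minimize(neg_loglik, x0=[1.0, 0.0], method="Nelder-Mead")
    print(res.x)   # calibrated (a, b) for the new item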
Peer reviewed
Jewsbury, Paul A.; van Rijn, Peter W. – Journal of Educational and Behavioral Statistics, 2020
In large-scale educational assessment data consistent with a simple-structure multidimensional item response theory (MIRT) model, where every item measures only one latent variable, separate unidimensional item response theory (UIRT) models for each latent variable are often calibrated for practical reasons. While this approach can be valid for…
Descriptors: Item Response Theory, Computation, Test Items, Adaptive Testing
Peer reviewed
Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2016
Meijer and van Krimpen-Stoop noted that the number of person-fit statistics (PFSs) that have been designed for computerized adaptive tests (CATs) is relatively modest. This article partially addresses that concern by suggesting three new PFSs for CATs. The statistics are based on tests for a change point and can be used to detect an abrupt change…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Response Theory, Goodness of Fit
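The following is a minimal sketch of a change-point person-fit check of the kind the abstract describes, not the article's exact statistics: for each split point, compare the fit of a single theta for the whole response string against separate thetas before and after the split, and flag the examinee when the maximum likelihood-ratio statistic is large. Items and responses are simulated.

    # Change-point person-fit sketch: likelihood-ratio statistic maximized over
    # candidate split points, on simulated responses from an examinee whose
    # ability drops halfway through the test.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(2)
    n = 30
    a = rng.uniform(0.8, 1.8, n)
    b = rng.normal(0, 1, n)
    theta_seq = np.r_[np.full(15, 1.0), np.full(15, -1.0)]   # abrupt ability change
    x = rng.binomial(1, 1 / (1 + np.exp(-a * (theta_seq - b))))

    def max_loglik(a_, b_, x_):
        def nll(t):
            p = np.clip(1 / (1 + np.exp(-a_ * (t - b_))), 1e-9, 1 - 1e-9)
            return -np.sum(x_ * np.log(p) + (1 - x_) * np.log(1 - p))
        return -minimize_scalar(nll, bounds=(-4, 4), method="bounded").fun

    ll_full = max_loglik(a, b, x)
    lr = [2 * (max_loglik(a[:k], b[:k], x[:k]) + max_loglik(a[k:], b[k:], x[k:]) - ll_full)
          for k in range(5, n - 5)]
    print(max(lr))   # large values suggest an abrupt change in performance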
Peer reviewed
Chen, Ping – Journal of Educational and Behavioral Statistics, 2017
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Descriptors: Test Items, Item Response Theory, Test Construction, Adaptive Testing
Peer reviewed
Nydick, Steven W. – Journal of Educational and Behavioral Statistics, 2014
The sequential probability ratio test (SPRT) is a common method for terminating item response theory (IRT)-based adaptive classification tests. To decide whether a classification test should stop, the SPRT compares a simple log-likelihood ratio, based on the classification bound separating two categories, to prespecified critical values. As has…
Descriptors: Probability, Item Response Theory, Models, Classification
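A minimal sketch of the SPRT termination rule the abstract describes: the log-likelihood ratio of the responses at two points straddling the cut score (theta_cut plus or minus delta) is compared with Wald's critical values. The item parameters, delta, and error rates below are illustrative, not taken from the article.

    # SPRT termination sketch for an IRT-based classification test.
    import numpy as np

    def loglik(theta, a, b, x):
        p = 1 / (1 + np.exp(-a * (theta - b)))
        return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    def sprt_decision(a, b, x, theta_cut=0.0, delta=0.3, alpha=0.05, beta=0.05):
        # log-likelihood ratio at the two hypothesized abilities around the cut
        llr = loglik(theta_cut + delta, a, b, x) - loglik(theta_cut - delta, a, b, x)
        upper = np.log((1 - beta) / alpha)     # Wald's critical values
        lower = np.log(beta / (1 - alpha))
        if llr >= upper:
            return "pass"
        if llr <= lower:
            return "fail"
        return "continue testing"

    a = np.array([1.1, 1.4, 0.9, 1.6])
    b = np.array([0.1, -0.2, 0.4, 0.0])
    x = np.array([1, 1, 1, 1])
    print(sprt_decision(a, b, x))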
Peer reviewed
Thissen, David – Journal of Educational and Behavioral Statistics, 2016
David Thissen, a professor in the Department of Psychology and Neuroscience, Quantitative Program at the University of North Carolina, has consulted and served on technical advisory committees for assessment programs that use item response theory (IRT) over the past couple of decades. He has come to the conclusion that there are usually two purposes…
Descriptors: Item Response Theory, Test Construction, Testing Problems, Student Evaluation
Peer reviewed
Wang, Chun – Journal of Educational and Behavioral Statistics, 2014
Many latent traits in social sciences display a hierarchical structure, such as intelligence, cognitive ability, or personality. Usually a second-order factor is linearly related to a group of first-order factors (also called domain abilities in cognitive ability measures), and the first-order factors directly govern the actual item responses.…
Descriptors: Measurement, Accuracy, Item Response Theory, Adaptive Testing
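The small simulation below illustrates the hierarchical structure the abstract describes: a second-order (general) factor is linearly related to first-order domain abilities, and each domain ability drives the 2PL responses of its own items. The loadings and item parameters are assumptions, not values from the article.

    # Higher-order factor simulation: general factor -> domain abilities -> responses.
    import numpy as np

    rng = np.random.default_rng(3)
    n_persons, n_domains, items_per_domain = 1000, 3, 10

    lam = np.array([0.8, 0.7, 0.9])                       # second-order loadings (assumed)
    theta_g = rng.normal(0, 1, n_persons)                 # general (second-order) factor
    eps = rng.normal(0, np.sqrt(1 - lam**2), (n_persons, n_domains))
    theta_d = theta_g[:, None] * lam + eps                # first-order domain abilities

    a = rng.uniform(0.8, 1.8, (n_domains, items_per_domain))
    b = rng.normal(0, 1, (n_domains, items_per_domain))
    p = 1 / (1 + np.exp(-a * (theta_d[:, :, None] - b)))  # person x domain x item
    x = rng.binomial(1, p)
    print(x.shape)   # (1000, 3, 10) simulated item responses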
Peer reviewed
Fan, Zhewen; Wang, Chun; Chang, Hua-Hua; Douglas, Jeffrey – Journal of Educational and Behavioral Statistics, 2012
Traditional methods for item selection in computerized adaptive testing only focus on item information without taking into consideration the time required to answer an item. As a result, some examinees may receive a set of items that take a very long time to finish, and information is not accrued as efficiently as possible. The authors propose two…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Item Analysis
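To illustrate the general idea of weighing information against testing time (not necessarily the authors' exact criteria, which are truncated above), the sketch below ranks candidate items by Fisher information per expected second, with the expected response time taken from a lognormal response-time model. All parameters are made up.

    # Information-per-time item selection sketch under an assumed lognormal RT model.
    import numpy as np

    rng = np.random.default_rng(4)
    n_items = 40
    a = rng.uniform(0.8, 2.0, n_items)       # 2PL discrimination
    b = rng.normal(0, 1, n_items)            # 2PL difficulty
    beta = rng.normal(4.0, 0.3, n_items)     # item time intensity, log-seconds (assumed)
    alpha = rng.uniform(1.5, 2.5, n_items)   # item time precision (assumed)
    theta_hat, tau_hat = 0.2, 0.0            # provisional ability and speed

    p = 1 / (1 + np.exp(-a * (theta_hat - b)))
    info = a**2 * p * (1 - p)                                 # Fisher information
    # E[T] under a lognormal RT model: exp(beta - tau + 1/(2*alpha^2))
    expected_time = np.exp(beta - tau_hat + 1 / (2 * alpha**2))

    best = int(np.argmax(info / expected_time))               # most information per second
    print(best, info[best], expected_time[best])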
Peer reviewed
Doong, Shing H. – Journal of Educational and Behavioral Statistics, 2009
The purpose of this study is to investigate a functional relation between item exposure parameters (IEPs) and item parameters (IPs) over parallel pools. This functional relation is approximated by a well-known tool in machine learning. Let P and Q be parallel item pools and suppose IEPs for P have been obtained via a Sympson and Hetter-type…
Descriptors: Item Banks, Adaptive Testing, Computer Assisted Testing, Simulation
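A hedged sketch of the idea in the abstract: learn a regression from item parameters to Sympson-Hetter-type exposure parameters on pool P, then predict exposure parameters for the parallel pool Q without rerunning the full simulation. The data are simulated and the specific machine-learning tool used in the article may differ from the one shown.

    # Mapping item parameters (a, b, c) to exposure parameters with a regressor.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(5)
    n_p, n_q = 300, 300
    X_p = np.column_stack([rng.uniform(0.5, 2.0, n_p),    # a
                           rng.normal(0, 1, n_p),          # b
                           rng.uniform(0.0, 0.25, n_p)])   # c
    # Stand-in for exposure parameters obtained from a Sympson-Hetter run on P.
    k_p = np.clip(1.2 - 0.4 * X_p[:, 0] + 0.1 * rng.normal(size=n_p), 0.05, 1.0)

    model = GradientBoostingRegressor().fit(X_p, k_p)

    X_q = np.column_stack([rng.uniform(0.5, 2.0, n_q),
                           rng.normal(0, 1, n_q),
                           rng.uniform(0.0, 0.25, n_q)])
    k_q_pred = np.clip(model.predict(X_q), 0.0, 1.0)        # predicted exposure parameters for Q
    print(k_q_pred[:5])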
Peer reviewed
Finkelman, Matthew – Journal of Educational and Behavioral Statistics, 2008
Sequential mastery testing (SMT) has been researched as an efficient alternative to paper-and-pencil testing for pass/fail examinations. One popular method for determining when to cease examination in SMT is the truncated sequential probability ratio test (TSPRT). This article introduces the application of stochastic curtailment in SMT to shorten…
Descriptors: Mastery Tests, Sequential Approach, Computer Assisted Testing, Adaptive Testing
Peer reviewed
Passos, Valeria Lima; Berger, Martijn P. F.; Tan, Frans E. S. – Journal of Educational and Behavioral Statistics, 2008
During the early stage of computerized adaptive testing (CAT), item selection criteria based on Fisher's information often produce less stable latent trait estimates than the Kullback-Leibler global information criterion. Robustness against early stage instability has been reported for the D-optimality criterion in a polytomous CAT with the…
Descriptors: Computer Assisted Testing, Adaptive Testing, Evaluation Criteria, Item Analysis
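For context, the brief sketch below contrasts the two criteria named above for a 2PL pool: Fisher information evaluated at the provisional theta versus a Kullback-Leibler index integrated over an interval around it (crude quadrature). The item parameters and integration interval are illustrative.

    # Fisher information vs. a Kullback-Leibler item selection index (2PL).
    import numpy as np

    rng = np.random.default_rng(6)
    a = rng.uniform(0.8, 2.0, 25)
    b = rng.normal(0, 1, 25)
    theta0, delta = 0.0, 1.0          # provisional estimate and KL half-width (assumed)

    def p2pl(theta):
        return 1 / (1 + np.exp(-a * (theta - b)))

    p0 = p2pl(theta0)
    fisher = a**2 * p0 * (1 - p0)

    grid = np.linspace(theta0 - delta, theta0 + delta, 61)
    kl_index = np.zeros_like(a)
    for t in grid:
        pt = p2pl(t)
        kl = p0 * np.log(p0 / pt) + (1 - p0) * np.log((1 - p0) / (1 - pt))
        kl_index += kl * (grid[1] - grid[0])      # quadrature of the KL index

    print(int(np.argmax(fisher)), int(np.argmax(kl_index)))   # may pick different items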
Peer reviewed
Bradlow, Eric T.; Weiss, Robert E. – Journal of Educational and Behavioral Statistics, 2001
Compares four methods that map outlier statistics to a familiar probability scale (a "P" value). Explores these methods in the context of computerized adaptive test data from a 1995 nationally administered computerized examination for professionals in the medical industry. (SLD)
Descriptors: Adaptive Testing, Computer Assisted Testing, Probability, Test Construction