Showing 1 to 15 of 65 results
Peer reviewed
Cheng, Philip E.; Liou, Michelle – Applied Psychological Measurement, 2000
Reviewed methods of estimating theta suitable for computerized adaptive testing (CAT) and discussed the differences between Fisher and Kullback-Leibler information criteria for selecting items. Examined the accuracy of different CAT algorithms using samples from the National Assessment of Educational Progress. Results show when correcting for…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
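The two item-selection criteria contrasted in this abstract, Fisher information and Kullback-Leibler information, can be sketched for the dichotomous 2PL model (a simplifying assumption; the function names and the toy item pool are illustrative, not taken from the study):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response for an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Fisher item information at theta for a 2PL item: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def kl_info(theta0, theta, a, b):
    """Kullback-Leibler divergence between the item-response
    distributions at theta0 (current estimate) and theta."""
    p0, p1 = p_2pl(theta0, a, b), p_2pl(theta, a, b)
    return p0 * math.log(p0 / p1) + (1 - p0) * math.log((1 - p0) / (1 - p1))

# Maximum-information selection: pick the pool item that is most
# informative at the current theta estimate (illustrative pool).
pool = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4)]   # (a, b) pairs
theta_hat = 0.2
best = max(pool, key=lambda ab: fisher_info(theta_hat, *ab))
```

Fisher information is local (evaluated at the current estimate), whereas the K-L criterion measures how well an item separates the current estimate from alternative theta values, which is why the two can favor different items early in a CAT when the estimate is still unstable.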
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 1999
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. Several person-fit statistics for detecting nonfitting score patterns for paper-and-pencil tests have been proposed. In the context of computerized adaptive tests (CAT), the use of person-fit analysis has hardly been…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Item Response Theory
Huang, Chi-Yu; Kalohn, John C.; Lin, Chuan-Ju; Spray, Judith – 2000
Item pools supporting computer-based tests are not always completely calibrated. Occasionally, only a small subset of the items in the pool may have actual calibrations, while the remainder of the items may only have classical item statistics (e.g., "p"-values, point-biserial correlation coefficients, or biserial correlation…
Descriptors: Classification, Computer Assisted Testing, Estimation (Mathematics), Item Banks
Peer reviewed
Divgi, D. R. – Applied Psychological Measurement, 1989
Two methods for estimating the reliability of a computerized adaptive test (CAT) without using item response theory are presented. The data consist of CAT and paper-and-pencil scores from identical or equivalent samples, and scores for all examinees on one or more covariates, using the Armed Services Vocational Aptitude Battery. (TJH)
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Predictive Validity
Peer reviewed
van der Linden, Wim J.; Glas, Cees A. W. – Applied Measurement in Education, 2000
Performed a simulation study demonstrating the dramatic impact that capitalization on item-parameter estimation errors has on ability estimation in adaptive testing. Discusses four strategies for minimizing the likelihood of such capitalization in computerized adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed
Chen, Shu-Ying; Ankenmann, Robert D.; Chang, Hua-Hua – Applied Psychological Measurement, 2000
Compared five item selection rules with respect to the efficiency and precision of trait (theta) estimation at the early stages of computerized adaptive testing (CAT). The Fisher interval information, Fisher information with a posterior distribution, Kullback-Leibler information, and Kullback-Leibler information with a posterior distribution…
Descriptors: Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics), Selection
Peer reviewed
Reise, Steve P.; Yu, Jiayuan – Journal of Educational Measurement, 1990
Parameter recovery in the graded-response model was investigated using the MULTILOG computer program under default conditions. Results from 36 simulated data sets suggest that at least 500 examinees are needed to achieve adequate calibration under the graded model. Sample size had little influence on recovery of the true ability parameter. (SLD)
Descriptors: Computer Assisted Testing, Computer Simulation, Computer Software, Estimation (Mathematics)
Peer reviewed
van der Linden, Wim J. – Psychometrika, 1998
This paper suggests several item selection criteria for adaptive testing that are all based on the use of the true posterior. Some of the ability estimators produced by these criteria are discussed and empirically criticized. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Peer reviewed
Chen, Ssu-Kuang; Hou, Liling; Dodd, Barbara G. – Educational and Psychological Measurement, 1998
A simulation study was conducted to investigate the application of expected a posteriori (EAP) trait estimation in computerized adaptive tests (CAT) based on the partial credit model and compare it with maximum likelihood estimation (MLE). Results show the conditions under which EAP and MLE provide relatively accurate estimation in CAT. (SLD)
Descriptors: Adaptive Testing, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
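The EAP-versus-MLE comparison in this abstract can be illustrated with a minimal EAP sketch. The study uses the partial credit model; for brevity this sketch assumes dichotomous 2PL items (a simplification), with a standard normal prior and grid quadrature:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def likelihood(theta, items, responses):
    """Likelihood of a 0/1 response pattern under the 2PL model."""
    L = 1.0
    for (a, b), u in zip(items, responses):
        p = p_2pl(theta, a, b)
        L *= p if u == 1 else (1.0 - p)
    return L

def eap(items, responses, n_points=61, lo=-4.0, hi=4.0):
    """Expected a posteriori (EAP) estimate: the posterior mean of
    theta over an evenly spaced grid, with a standard normal prior."""
    grid = [lo + i * (hi - lo) / (n_points - 1) for i in range(n_points)]
    prior = [math.exp(-t * t / 2.0) for t in grid]
    post = [likelihood(t, items, responses) * w for t, w in zip(grid, prior)]
    return sum(t * w for t, w in zip(grid, post)) / sum(post)
```

One practical distinction the comparison turns on: EAP always returns a finite estimate, even for all-correct or all-incorrect patterns early in a CAT, where the MLE diverges; the price is shrinkage toward the prior mean.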
Jiang, Hai – 1999
The purpose of this paper is to describe the techniques used in establishing the concordance tables between the Test of English as a Foreign Language (TOEFL) paper-and-pencil (P&P) and computer-based testing (CBT) section and total reported score scales. Listening, reading, and composite structure and essay scores plus a total score are…
Descriptors: Computer Assisted Testing, English (Second Language), Estimation (Mathematics), Scaling
Peer reviewed
Wang, Tianyou; Vispoel, Walter P. – Journal of Educational Measurement, 1998
Used simulations of computerized adaptive tests to evaluate results yielded by four commonly used ability estimation methods: maximum likelihood estimation (MLE) and three Bayesian approaches. Results show clear distinctions between MLE and Bayesian methods. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
Peer reviewed
Zwick, Rebecca; And Others – Journal of Educational Measurement, 1995
In a simulation study of ability and differential item functioning (DIF) estimation in computerized adaptive tests, Rasch-based DIF statistics were highly correlated with generating DIF, but DIF statistics tended to be slightly smaller than in the three-parameter logistic model analyses. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Computer Simulation
Tang, K. Linda – 1996
The average Kullback-Leibler (K-L) information index (H. Chang and Z. Ying, in press) is a newly proposed statistic in Computerized Adaptive Testing (CAT) item selection based on the global information function. The objectives of this study were to improve understanding of the K-L index with various parameters and to compare the performance of the…
Descriptors: Ability, Adaptive Testing, Comparative Analysis, Computer Assisted Testing
Peer reviewed
Du, Yi; And Others – Applied Measurement in Education, 1993
A new computerized mastery test is described that builds on the Lewis and Sheehan (1990) sequential-testlet procedure but uses fuzzy set decision theory to determine stopping rules and the Rasch model to calibrate items and estimate abilities. Differences between fuzzy set and Bayesian methods are illustrated through an example. (SLD)
Descriptors: Bayesian Statistics, Comparative Analysis, Computer Assisted Testing, Estimation (Mathematics)
Peer reviewed
Nicewander, W. Alan; Thomasson, Gary L. – Applied Psychological Measurement, 1999
Derives three reliability estimates for the Bayes modal estimate (BME) and the maximum-likelihood estimate (MLE) of theta in computerized adaptive tests (CATs). Computes the three reliability estimates and the true reliabilities of both BME and MLE for seven simulated CATs. Results show the true reliabilities for BME and MLE to be nearly identical…
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
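In simulation studies like the one this abstract summarizes, the "true" reliability of an estimator is commonly computed as the squared correlation between the generating thetas and their estimates. A minimal sketch of that computation (the function and data are illustrative, not from the study):

```python
def reliability(true_thetas, estimates):
    """'True' reliability in a simulation: squared Pearson correlation
    between the generating theta values and their estimates."""
    n = len(true_thetas)
    mt = sum(true_thetas) / n
    me = sum(estimates) / n
    cov = sum((t - mt) * (e - me)
              for t, e in zip(true_thetas, estimates)) / n
    vt = sum((t - mt) ** 2 for t in true_thetas) / n
    ve = sum((e - me) ** 2 for e in estimates) / n
    return cov * cov / (vt * ve)
```

Because the squared correlation is invariant to linear transformations of the estimates, BME and MLE can show nearly identical reliabilities (as reported here) even though BME estimates are shrunken toward the prior mean.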