Showing 1 to 15 of 43 results
Peer reviewed
PDF on ERIC
Sahin, Alper; Weiss, David J. – Educational Sciences: Theory and Practice, 2015
This study aimed to investigate the effects of calibration sample size and item bank size on examinee ability estimation in computerized adaptive testing (CAT). For this purpose, a 500-item bank pre-calibrated using the three-parameter logistic model with 10,000 examinees was simulated. Calibration samples of varying sizes (150, 250, 350, 500,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Sample Size, Item Banks
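The three-parameter logistic (3PL) model used to pre-calibrate the simulated bank gives the probability of a correct response as a function of ability (theta) and the item's discrimination (a), difficulty (b), and guessing (c) parameters. A minimal sketch; the scaling constant 1.7 and the example parameter values are illustrative choices, not taken from the study:

```python
import math

def p_3pl(theta, a, b, c):
    """3PL response probability:
    P(correct) = c + (1 - c) / (1 + exp(-1.7 * a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

# An able examinee (theta = 2) on an easy item (b = -1) with some
# guessing (c = 0.2) answers correctly with high probability.
print(round(p_3pl(theta=2.0, a=1.0, b=-1.0, c=0.2), 3))  # → 0.995
```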
Peer reviewed
Direct link
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi – Applied Psychological Measurement, 2013
Variable-length computerized adaptive testing (VL-CAT) allows both items and test length to be "tailored" to examinees, thereby achieving the measurement goal (e.g., scoring precision or classification) with as few items as possible. Several popular test termination rules depend on the standard error of the ability estimate, which in turn depends…
Descriptors: Adaptive Testing, Computer Assisted Testing, Test Length, Ability
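The standard-error termination rules the abstract refers to can be sketched as follows: the standard error of the ability estimate is the reciprocal square root of the accumulated test information, and testing stops once it falls below a target. The 0.3 threshold and 30-item cap here are illustrative values, not the article's:

```python
import math

def se_from_information(item_infos):
    """Standard error of the ability estimate from accumulated Fisher
    information: SE = 1 / sqrt(sum of item informations)."""
    return 1.0 / math.sqrt(sum(item_infos))

def should_stop(item_infos, se_threshold=0.3, max_items=30):
    """Variable-length termination rule: stop once the standard error
    drops below the threshold or the item cap is reached."""
    return len(item_infos) >= max_items or se_from_information(item_infos) <= se_threshold

# After 12 items each contributing 1.0 unit of information,
# SE = 1/sqrt(12) ≈ 0.289, below the 0.3 threshold.
print(should_stop([1.0] * 12))  # → True
```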
Peer reviewed
Direct link
Wang, Chun; Chang, Hua-Hua; Boughton, Keith A. – Applied Psychological Measurement, 2013
Multidimensional computerized adaptive testing (MCAT) is able to provide a vector of ability estimates for each examinee, which could be used to build a more informative profile of an examinee's performance. The current literature on MCAT focuses on fixed-length tests, which can generate less accurate results for those examinees whose…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Item Banks
Peer reviewed
Direct link
Schmitt, T. A.; Sass, D. A.; Sullivan, J. R.; Walker, C. M. – International Journal of Testing, 2010
Imposed time limits on computer adaptive tests (CATs) can result in examinees having difficulty completing all items, thus compromising the validity and reliability of ability estimates. In this study, the effects of speededness were explored in a simulated CAT environment by varying examinee response patterns to end-of-test items. Expectedly,…
Descriptors: Monte Carlo Methods, Simulation, Computer Assisted Testing, Adaptive Testing
Stocking, Martha L.; Lewis, Charles – 1995
The interest in the application of large-scale adaptive testing for secure tests has served to focus attention on issues that arise when theoretical advances are made operational. Many such issues have more to do with changes in testing conditions than with testing paradigms. One…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Peer reviewed
Veerkamp, Wim J. J. – Journal of Educational and Behavioral Statistics, 2000
Showed how Taylor approximation can be used to generate a linear approximation to a logistic item characteristic curve and a linear ability estimator. Demonstrated how, for a specific simulation, this could result in the special case of a Robbins-Monro item selection procedure for adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Selection
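A Robbins-Monro procedure of the kind the abstract alludes to is a stochastic approximation with a shrinking step size. This toy version, which is only an illustration of the idea and not the article's derivation, nudges the ability estimate up after a correct response and down after an incorrect one:

```python
def robbins_monro_update(theta, response, k, step0=1.0):
    """One Robbins-Monro step: the 1/k step size shrinks as responses
    accumulate, so the sequence of estimates settles down."""
    step = step0 / k
    return theta + step if response == 1 else theta - step

theta = 0.0
for k, resp in enumerate([1, 1, 0, 1, 0], start=1):
    theta = robbins_monro_update(theta, resp, k)
# theta moved up on the three correct responses and partially
# back down on the two incorrect ones.
```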
Reese, Lynda M.; Schnipke, Deborah L.; Luebke, Stephen W. – 1999
Most large-scale testing programs adopting computerized adaptive testing (CAT) must maintain extensive content requirements, but content constraints in CAT can compromise the precision and efficiency that could be achieved by a pure maximum-information adaptive testing algorithm. This…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Simulation
Peer reviewed
van der Linden, Wim J.; Glas, Cees A. W. – Applied Measurement in Education, 2000
Performed a simulation study to demonstrate the dramatic impact that capitalization on item-parameter estimation errors has on ability estimation in adaptive testing. Discusses four different strategies to minimize the likelihood of capitalization in computerized adaptive testing. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Wen, Jian-Bing; Chang, Hua-Hua; Hau, Kit-Tai – 2000
Test security has often been a problem in computerized adaptive testing (CAT) because the traditional wisdom of item selection overly exposes high discrimination items. The a-stratified (STR) design advocated by H. Chang and his collaborators, which uses items of less discrimination in earlier stages of testing, has been shown to be very…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
Hau, Kit-Tai; Wen, Jian-Bing; Chang, Hua-Hua – 2002
In the a-stratified method, a popular and efficient item exposure control strategy proposed by H. Chang (H. Chang and Z. Ying, 1999; K. Hau and H. Chang, 2001) for computerized adaptive testing (CAT), the item pool and the item selection process have usually been divided into four strata and the corresponding four stages. In a series of simulation…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
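A minimal sketch of the a-stratified design described in these two entries, assuming a simple dict representation of items; the field names and the closest-difficulty rule within a stratum are illustrative assumptions, not details from the studies:

```python
def build_strata(item_bank, n_strata=4):
    """Sort the bank by discrimination (a) and split it into n_strata
    blocks, lowest-a items first, matching the four-stratum/four-stage
    layout described above."""
    ranked = sorted(item_bank, key=lambda item: item["a"])
    size = len(ranked) // n_strata
    return [ranked[i * size:(i + 1) * size] for i in range(n_strata)]

def pick_item(strata, stage, theta):
    """Within the stratum for the current stage, administer the item
    whose difficulty b is closest to the current ability estimate."""
    return min(strata[stage], key=lambda item: abs(item["b"] - theta))
```

Because early stages draw only from the low-a strata, high-discrimination items are held back for later stages, which is how the design curbs their exposure.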
Peer reviewed
van der Linden, Wim J.; Reese, Lynda M. – Applied Psychological Measurement, 1998
Proposes a model for constrained computerized adaptive testing in which the information in the test at the trait level (theta) estimate is maximized subject to a number of possible constraints on the content of the test. Test assembly relies on a linear-programming approach. Illustrates the approach through simulation with items from the Law…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Estimation (Mathematics)
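The article's test assembly uses linear programming; as a much simpler stand-in that conveys the idea of maximizing information subject to content constraints, here is a greedy selector. The 2PL information formula is standard, but the dict fields, quota scheme, and greedy rule are assumptions for illustration, not the paper's method:

```python
import math

def fisher_info_2pl(item, theta):
    """2PL Fisher information at theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-item["a"] * (theta - item["b"])))
    return item["a"] ** 2 * p * (1.0 - p)

def select_item(pool, theta, content_counts, content_limits):
    """Greedy stand-in for linear-programming assembly: among items
    whose content area still has quota left, pick the one that is
    most informative at the current theta estimate."""
    eligible = [it for it in pool
                if content_counts.get(it["area"], 0) < content_limits[it["area"]]]
    return max(eligible, key=lambda it: fisher_info_2pl(it, theta))
```

Unlike the linear-programming formulation, a greedy rule can paint itself into a corner near the end of the test; that gap is precisely why shadow-test/LP methods exist.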
Peer reviewed
Walker, Cindy M.; Beretvas, S. Natasha; Ackerman, Terry – Applied Measurement in Education, 2001
Conducted a simulation study of differential item functioning (DIF) to compare power and Type I error rates for two conditions: using an examinee's ability estimate as the conditioning variable in the CATSIB program, either with or without CATSIB's regression correction. Discusses implications of findings for DIF detection. (SLD)
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Item Bias
Peer reviewed
Wang, Tianyou; Vispoel, Walter P. – Journal of Educational Measurement, 1998
Used simulations of computerized adaptive tests to evaluate results yielded by four commonly used ability estimation methods: maximum likelihood estimation (MLE) and three Bayesian approaches. Results show clear distinctions between MLE and Bayesian methods. (SLD)
Descriptors: Ability, Adaptive Testing, Bayesian Statistics, Computer Assisted Testing
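One of the distinctions the study points to: MLE has no finite estimate for all-correct or all-incorrect response patterns, whereas Bayesian estimators such as the expected a posteriori (EAP) mean stay finite because of the prior. A sketch of EAP under a Rasch model, which is a simplification of the models compared in the study; the grid bounds and standard normal prior are illustrative choices:

```python
import math

def p_correct(theta, b):
    """Rasch response probability."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def likelihood(theta, difficulties, responses):
    """Likelihood of the observed 0/1 response pattern at theta."""
    L = 1.0
    for b, u in zip(difficulties, responses):
        p = p_correct(theta, b)
        L *= p if u == 1 else (1.0 - p)
    return L

def eap_estimate(difficulties, responses, lo=-4.0, hi=4.0, n=161):
    """EAP estimate: posterior mean of theta over a grid, with a
    standard normal prior. Finite even for a perfect response
    pattern, unlike maximum likelihood."""
    step = (hi - lo) / (n - 1)
    num = den = 0.0
    for i in range(n):
        theta = lo + i * step
        w = likelihood(theta, difficulties, responses) * math.exp(-0.5 * theta * theta)
        num += theta * w
        den += w
    return num / den
```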
Parshall, Cynthia G.; Davey, Tim; Nering, Mike L. – 1998
When items are selected during a computerized adaptive test (CAT) solely with regard to their measurement properties, it is commonly found that certain items are administered to nearly every examinee, and that a small number of the available items will account for a large proportion of the item administrations. This presents a clear security risk…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Efficiency
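The skew this entry describes can be quantified with item exposure rates: the fraction of examinees who receive each item. A small sketch; the item IDs and toy administration data are made up:

```python
from collections import Counter

def exposure_rates(administered_forms, n_examinees):
    """Exposure rate of each item: administrations / examinees.
    Rates near 1.0 mark items that almost everyone sees, the
    security risk described above."""
    counts = Counter(item for form in administered_forms for item in form)
    return {item: counts[item] / n_examinees for item in counts}

forms = [["i1", "i2"], ["i1", "i3"], ["i1", "i2"]]
rates = exposure_rates(forms, n_examinees=3)
# "i1" was administered to every examinee: exposure rate 1.0
```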
Weissman, Alexander – 2003
This study investigated the efficiency of item selection in a computerized adaptive test (CAT), where efficiency was defined in terms of the accumulated test information at an examinee's true ability level. A simulation methodology compared the efficiency of 2 item selection procedures with 5 ability estimation procedures for CATs of 5, 10, 15,…
Descriptors: Ability, Adaptive Testing, Computer Assisted Testing, Maximum Likelihood Statistics