Showing 1 to 15 of 37 results
Peer reviewed
Peabody, Michael R. – Measurement: Interdisciplinary Research and Perspectives, 2023
Many organizations utilize some form of automation in the test assembly process, either fully algorithmic or heuristic. However, one issue with heuristic models is that when the test assembly problem changes, the entire model may need to be re-conceptualized and recoded. In contrast, mixed-integer programming (MIP) is a mathematical…
Descriptors: Programming Languages, Algorithms, Heuristics, Mathematical Models
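For readers unfamiliar with the formulation mentioned in this abstract, the sketch below shows the 0/1 item-selection problem that MIP-based assembly optimizes: pick a fixed number of items, satisfy content constraints, and maximize total information. The item pool, content minimums, and information values are hypothetical, and exhaustive search stands in for a real MIP solver so the example runs with only the standard library.

    # Toy 0/1 test-assembly problem: choose `test_length` items, meet a
    # per-content-area minimum, and maximize summed item information.
    # A MIP solver would handle this same formulation at realistic scale.
    from itertools import combinations

    # Hypothetical pool: (item_id, content_area, information_at_target_theta)
    pool = [
        (1, "algebra",  0.62), (2, "algebra",  0.48), (3, "geometry", 0.55),
        (4, "geometry", 0.33), (5, "numbers",  0.71), (6, "numbers",  0.29),
    ]
    test_length = 4
    min_per_area = {"algebra": 1, "geometry": 1, "numbers": 1}

    def feasible(items):
        counts = {}
        for _, area, _ in items:
            counts[area] = counts.get(area, 0) + 1
        return all(counts.get(a, 0) >= m for a, m in min_per_area.items())

    best = max(
        (c for c in combinations(pool, test_length) if feasible(c)),
        key=lambda c: sum(info for _, _, info in c),
    )
    print("selected items:", [item_id for item_id, _, _ in best])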
Yunxiao Chen; Xiaoou Li; Jingchen Liu; Gongjun Xu; Zhiliang Ying – Grantee Submission, 2017
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items to scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as hierarchical clustering, the K-means method, and latent-class…
Descriptors: Item Analysis, Classification, Graphs, Test Items
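A minimal K-means sketch of the kind of item clustering this abstract refers to, assuming items are represented by small hypothetical response-profile vectors; classical item clustering in practice would use richer data and similarity measures.

    # Group items into k clusters by the similarity of their response profiles.
    import random

    # Hypothetical profiles: one proportion-correct value per examinee subgroup.
    items = {
        "item1": [0.80, 0.75, 0.70], "item2": [0.78, 0.72, 0.69],
        "item3": [0.40, 0.35, 0.30], "item4": [0.42, 0.37, 0.33],
    }

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def kmeans(data, k, iters=20, seed=0):
        random.seed(seed)
        centers = random.sample(list(data.values()), k)
        for _ in range(iters):
            clusters = {i: [] for i in range(k)}
            for name, vec in data.items():
                nearest = min(range(k), key=lambda i: dist2(vec, centers[i]))
                clusters[nearest].append(name)
            for i, names in clusters.items():
                if names:
                    vecs = [data[n] for n in names]
                    centers[i] = [sum(col) / len(vecs) for col in zip(*vecs)]
        return clusters

    print(kmeans(items, k=2))   # e.g. {0: ['item1', 'item2'], 1: ['item3', 'item4']}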
Reid-Green, Keith S. – 1995
Some of the test questions for the National Council of Architectural Registration Boards deal with the site, including drainage, regrading, and the like. Some questions are most easily scored by examining contours, but others, such as water flow questions, are best scored from a grid in which each element is assigned its average elevation. This…
Descriptors: Algorithms, Architecture, Licensing Examinations (Professions), Test Construction
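A sketch of the grid representation this abstract describes, in which each grid cell is assigned the average of the elevations at its four corners before automated checks (such as water-flow direction) are applied; the elevations are hypothetical.

    # Assign each grid cell the average elevation of its four corner points.
    corner_elev = [
        [10.0, 10.5, 11.0],
        [ 9.5, 10.0, 10.5],
        [ 9.0,  9.5, 10.0],
    ]

    cell_elev = [
        [
            (corner_elev[r][c] + corner_elev[r][c + 1]
             + corner_elev[r + 1][c] + corner_elev[r + 1][c + 1]) / 4.0
            for c in range(len(corner_elev[0]) - 1)
        ]
        for r in range(len(corner_elev) - 1)
    ]
    print(cell_elev)   # 2x2 grid of cell-average elevations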
Jiang, Hai; Tang, K. Linda – 1998
This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA), to improve on the use of the Newton-Raphson procedure. The advantage of using a global optimization procedure like the GA is that this kind of procedure is not easily affected by local optima and…
Descriptors: Algorithms, Item Response Theory, Mathematical Models, Simulation
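A sketch of the local Newton-Raphson step that this abstract contrasts with the GA, applied to calibrating a single Rasch item difficulty with examinee abilities treated as known; the data are invented for illustration.

    # Newton-Raphson calibration of one Rasch item difficulty, abilities known.
    import math

    abilities = [-1.5, -0.5, 0.0, 0.5, 1.0, 2.0]   # known thetas
    responses = [0, 0, 1, 0, 1, 1]                   # 0/1 scores on the item

    def prob(theta, b):
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    b = 0.0                                          # starting value
    for _ in range(10):
        p = [prob(t, b) for t in abilities]
        gradient = sum(pi - x for pi, x in zip(p, responses))   # d logL / db
        curvature = -sum(pi * (1 - pi) for pi in p)             # d2 logL / db2
        b -= gradient / curvature                               # Newton step
    print(f"estimated difficulty: {b:.3f}")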
Peer reviewed
Zeng, Lingjia – Applied Psychological Measurement, 1997
Proposes a marginal Bayesian estimation procedure to improve item parameter estimates for the three-parameter logistic model. Computer simulation suggests that implementing the marginal Bayesian estimation algorithm with four-parameter beta prior distributions and then updating the priors with empirical means of updated intermediate estimates can…
Descriptors: Algorithms, Bayesian Statistics, Estimation (Mathematics), Statistical Distributions
Stocking, Martha L.; Lewis, Charles – 1995
The interest in the application of large-scale adaptive testing for secure tests has served to focus attention on issues that arise when theoretical advances are made operational. Many such issues have more to do with changes in testing conditions than with testing paradigms. One…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Peer reviewed
Millman, Jason; Westman, Ronald S. – Journal of Educational Measurement, 1989
Five approaches to writing test items with computer assistance are described. A model of knowledge using a set of structures and a system for implementing the scheme are outlined. The approaches include the author-supplied approach, replacement-set procedures, computer-supplied prototype items, subject-matter mapping, and discourse analysis. (TJH)
Descriptors: Achievement Tests, Algorithms, Computer Assisted Testing, Test Construction
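A sketch of the replacement-set procedure named in this abstract: a fixed item prototype with slots, each filled from a set of allowed replacements, so that many parallel items can be generated automatically. The template, replacement sets, and answer rule are hypothetical.

    # Generate parallel items by filling a template's slots from replacement sets.
    import itertools
    import random

    template = "A train travels {distance} km in {hours} hours. What is its average speed in km/h?"
    replacement_sets = {
        "distance": [120, 180, 240, 300],
        "hours": [2, 3, 4],
    }

    def generate_items(template, sets):
        keys = list(sets)
        for values in itertools.product(*(sets[k] for k in keys)):
            slots = dict(zip(keys, values))
            stem = template.format(**slots)
            answer = slots["distance"] / slots["hours"]   # key for this template
            yield stem, answer

    items = list(generate_items(template, replacement_sets))
    print(random.choice(items))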
Peer reviewed
Eggen, T. J. H. M. – Applied Psychological Measurement, 1999
Evaluates a method for item selection in adaptive testing that is based on Kullback-Leibler information (KLI) (T. Cover and J. Thomas, 1991). Simulation study results show that testing algorithms using KLI-based item selection perform better than or as well as those using Fisher information item selection. (SLD)
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Selection
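A sketch of item-level Kullback-Leibler information for a 2PL item, of the kind this abstract evaluates. Operational KLI selection typically integrates this quantity over an interval around the provisional ability estimate; the single-point version and the item parameters below are only illustrative.

    # Pick the item with the largest KL information between the current
    # ability estimate and a nearby alternative ability value.
    import math

    def p_correct(theta, a, b):
        """2PL response probability."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def kl_info(theta_hat, theta_alt, a, b):
        p0 = p_correct(theta_hat, a, b)
        p1 = p_correct(theta_alt, a, b)
        return p0 * math.log(p0 / p1) + (1 - p0) * math.log((1 - p0) / (1 - p1))

    # Hypothetical pool: (item_id, discrimination a, difficulty b)
    pool = [("i1", 1.2, -0.5), ("i2", 0.8, 0.0), ("i3", 1.5, 0.4)]
    theta_hat = 0.2
    best = max(pool, key=lambda it: kl_info(theta_hat, theta_hat + 1.0, it[1], it[2]))
    print("next item:", best[0])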
Chang, Shun-Wen; Ansley, Timothy N.; Lin, Sieh-Hwa – 2000
This study examined the effectiveness of the Sympson and Hetter conditional procedure (SHC), a modification of the Sympson and Hetter (1985) algorithm, in controlling the exposure rates of items in a computerized adaptive testing (CAT) environment. The properties of the procedure were compared with those of the Davey and Parshall (1995) and the…
Descriptors: Adaptive Testing, Algorithms, Computer Assisted Testing, Item Banks
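A sketch of the basic Sympson and Hetter exposure-control lottery that the conditional procedure in this abstract modifies: the selection algorithm proposes items in order of desirability, but each proposed item is actually administered only with probability equal to its exposure-control parameter. The k values below are hypothetical; in practice they come from the iterative simulations described by Sympson and Hetter (1985).

    # Administer the first proposed item that passes its exposure-control lottery.
    import random

    # Candidates ranked by information at the current ability estimate,
    # paired with hypothetical exposure-control parameters k in [0, 1].
    ranked_candidates = [("i7", 0.35), ("i2", 0.60), ("i9", 0.95), ("i4", 1.00)]

    def administer_next(candidates, rng=random):
        for item_id, k in candidates:
            if rng.random() <= k:          # item passes the lottery
                return item_id
        return candidates[-1][0]           # fall back to the last candidate

    print("administered:", administer_next(ranked_candidates))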
Stocking, Martha L.; And Others – 1991
A previously developed method of automatically selecting items for inclusion in a test, subject to constraints on item content and statistical properties, is applied to real data. Two tests are first assembled by experts in test construction who normally assemble such tests on a routine basis. Using the same pool of items and constraints articulated…
Descriptors: Algorithms, Automation, Coding, Computer Assisted Testing
Longford, Nicholas T. – 1994
This study is a critical evaluation of the roles for coding and scoring of missing responses to multiple-choice items in educational tests. The focus is on tests in which the test-takers have little or no motivation; in such tests, omitting and not reaching (as classified by the currently adopted operational rules) are quite frequent. Data from the…
Descriptors: Algorithms, Classification, Coding, Models
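A sketch of one common operational rule for separating omitted from not-reached responses, of the general kind this abstract evaluates: blanks after an examinee's last answered item are coded "not reached", blanks before it are coded "omitted". The rule and response string are illustrative, not the study's own.

    # Code blank responses as "omitted" or "not reached" by position.
    def code_missing(responses):
        """responses: list of '1', '0', or ' ' (blank) in presentation order."""
        answered = [i for i, r in enumerate(responses) if r in ("0", "1")]
        last_answered = answered[-1] if answered else -1
        coded = []
        for i, r in enumerate(responses):
            if r in ("0", "1"):
                coded.append(r)
            elif i < last_answered:
                coded.append("omitted")
            else:
                coded.append("not reached")
        return coded

    print(code_missing(["1", " ", "0", "1", " ", " "]))
    # ['1', 'omitted', '0', '1', 'not reached', 'not reached']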
Peer reviewed
Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
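A sketch of maximum-information item selection under the 2PL model, the second of the two algorithms this abstract compares: at each step the unused item with the largest Fisher information at the current ability estimate is chosen. Item parameters are hypothetical.

    # Select the unadministered item with maximum Fisher information at theta_hat.
    import math

    def fisher_info(theta, a, b):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        return a * a * p * (1 - p)

    pool = {"i1": (1.3, -0.8), "i2": (0.9, 0.1), "i3": (1.6, 0.6)}
    administered = {"i1"}
    theta_hat = 0.3

    candidates = {k: v for k, v in pool.items() if k not in administered}
    next_item = max(candidates, key=lambda k: fisher_info(theta_hat, *candidates[k]))
    print("next item:", next_item)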
Stocking, Martha L.; And Others – 1991
This paper presents a new heuristic approach to interactive test assembly that is called the successive item replacement algorithm. This approach builds on the work of W. J. van der Linden (1987) and W. J. van der Linden and E. Boekkooi-Timminga (1989) in which methods of mathematical optimization are combined with item response theory to…
Descriptors: Algorithms, Automation, Computer Selection, Heuristics
Peer reviewed
Stocking, Martha L.; And Others – Applied Psychological Measurement, 1993
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
Descriptors: Algorithms, Automation, Comparative Testing, Computer Assisted Testing
Gershon, Richard; Bergstrom, Betty – 1995
When examinees are allowed to review responses on an adaptive test, can they "cheat" the adaptive algorithm in order to take an easier test and improve their performance? Theoretically, deliberately answering items incorrectly will lower the examinee ability estimate and easy test items will be administered. If review is then allowed,…
Descriptors: Adaptive Testing, Algorithms, Cheating, Computer Assisted Testing