Showing 1 to 15 of 34 results
Yunxiao Chen; Xiaoou Li; Jingchen Liu; Gongjun Xu; Zhiliang Ying – Grantee Submission, 2017
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items to scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as hierarchical clustering, the K-means method, and latent-class…
Descriptors: Item Analysis, Classification, Graphs, Test Items
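The clustering methods named in the Chen et al. abstract are standard ones; as an illustration only, the sketch below applies K-means to inter-item correlation profiles to group simulated items into scales. The data, the number of clusters, and the use of correlation profiles as features are assumptions for the example, not details from the paper.

```python
# Illustration only (not the paper's method): group items into scales by
# K-means clustering of their inter-item correlation profiles.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(500, 20))   # 500 simulated examinees x 20 binary items

item_profiles = np.corrcoef(responses.T)          # row i: item i's correlations with every item
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(item_profiles)

for k in range(3):
    print(f"scale {k}: items {np.where(labels == k)[0].tolist()}")
```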
Reid-Green, Keith S. – 1995
Some of the test questions for the National Council of Architectural Registration Boards deal with the site, including drainage, regrading, and the like. Some questions are most easily scored by examining contours, but others, such as water flow questions, are best scored from a grid in which each element is assigned its average elevation. This…
Descriptors: Algorithms, Architecture, Licensing Examinations (Professions), Test Construction
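The Reid-Green abstract mentions scoring some questions from a grid in which each element is assigned its average elevation. A minimal sketch of one plausible reading, averaging the four corner elevations that bound each grid cell, follows; the averaging rule and the numbers are hypothetical, not taken from the report.

```python
# Hypothetical sketch: assign each grid cell the average elevation of the four
# grid intersections that bound it (not necessarily the report's exact rule).
import numpy as np

corner_elev = np.array([[10.0, 10.5, 11.0],
                        [ 9.5, 10.0, 10.5],
                        [ 9.0,  9.5, 10.0]])   # elevations at grid intersections

cell_elev = (corner_elev[:-1, :-1] + corner_elev[:-1, 1:] +
             corner_elev[1:, :-1] + corner_elev[1:, 1:]) / 4.0
print(cell_elev)                                # 2 x 2 grid of per-cell average elevations
```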
Peer reviewed
Millman, Jason; Westman, Ronald S. – Journal of Educational Measurement, 1989
Five approaches to writing test items with computer assistance are described. A model of knowledge using a set of structures and a system for implementing the scheme are outlined. The approaches include the author-supplied approach, replacement-set procedures, computer-supplied prototype items, subject-matter mapping, and discourse analysis. (TJH)
Descriptors: Achievement Tests, Algorithms, Computer Assisted Testing, Test Construction
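Of the five approaches Millman and Westman list, replacement-set procedures are the most mechanical: slots in an item shell are filled from predefined sets of values. A minimal sketch under that reading follows; the template, the replacement sets, and the scoring key are hypothetical examples, not from the article.

```python
# Minimal sketch of a replacement-set item generator; the template and the
# replacement sets are hypothetical examples.
import itertools

template = "If a train travels {speed} km/h for {hours} hours, how far does it go?"
replacement_sets = {"speed": [60, 80, 120], "hours": [2, 3]}

slots = list(replacement_sets)
for combo in itertools.product(*(replacement_sets[s] for s in slots)):
    values = dict(zip(slots, combo))
    stem = template.format(**values)
    key = values["speed"] * values["hours"]     # distance = speed x time
    print(f"{stem}  (key: {key} km)")
```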
Peer reviewed
van der Linden, Wim J.; Boekkooi-Timminga, Ellen – Psychometrika, 1989
A maximin model for test design based on item response theory is proposed. Only the relative shape of the target test information function is specified; it serves as a constraint subject to which a linear programming algorithm maximizes the test information. The model is illustrated, and alternative models are discussed. (TJH)
Descriptors: Algorithms, Latent Trait Theory, Linear Programing, Mathematical Models
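A common way to write down a maximin model of this kind (it may differ in detail from the authors' formulation) is to maximize a scalar y subject to the test information at each ability point reaching at least r_k * y, where the r_k encode the relative target shape, plus a test-length constraint. The sketch below solves the LP relaxation on simulated item information; in practice the decision variables are 0/1 and the problem is solved as an integer program.

```python
# Sketch of a maximin test-design model as an LP relaxation (simulated data;
# not necessarily the authors' exact formulation): maximize y subject to
# sum_i I_i(theta_k) * x_i >= r_k * y at each ability point, with fixed length n.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_items, n_points, n_len = 100, 3, 20
I = rng.uniform(0.1, 1.0, size=(n_items, n_points))  # item information at 3 ability points
r = np.array([1.0, 2.0, 1.0])                        # relative shape of the target

c = np.zeros(n_items + 1); c[-1] = -1.0              # minimize -y, i.e. maximize y
A_ub = np.hstack([-I.T, r.reshape(-1, 1)])           # -I^T x + r*y <= 0
b_ub = np.zeros(n_points)
A_eq = np.hstack([np.ones((1, n_items)), np.zeros((1, 1))])   # sum x_i = n_len
b_eq = np.array([float(n_len)])
bounds = [(0, 1)] * n_items + [(0, None)]            # relaxation of x_i in {0, 1}

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("attained maximin level y:", round(-res.fun, 3))
```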
van der Linden, Wim J.; Adema, Jos J. – 1997
An algorithm for the assembly of multiple test forms is proposed in which the multiple-form problem is reduced to a series of computationally less intensive two-form problems. At each step one form is assembled to its true specifications; the other form is a dummy assembled only to maintain a balance between the quality of the current form and the…
Descriptors: Algorithms, Foreign Countries, Higher Education, Linear Programming
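The reduction described in the van der Linden and Adema abstract is structural: at each step one real form is assembled from the remaining pool alongside a throwaway dummy form that exists only to keep the remaining pool balanced. The skeleton below mimics that loop with a deliberately trivial stand-in (alternating a ranked list between the real and dummy forms); it shows the bookkeeping, not the authors' assembly algorithm.

```python
# Structural skeleton of the series-of-two-form-problems idea; the two-form
# "assembly" here is a trivial greedy stand-in, not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(2)
info = rng.uniform(0.1, 1.0, 200)        # item information at a reference ability (simulated)
pool = set(range(200))
n_forms, form_len = 4, 10
forms = []

for _ in range(n_forms):
    ranked = sorted(pool, key=lambda i: -info[i])
    real = ranked[0:2 * form_len:2]      # real form: assembled to its specifications
    dummy = ranked[1:2 * form_len:2]     # dummy form: only balances quality against the pool
    forms.append(real)
    pool -= set(real)                    # only the real form is removed from the pool

for f, items in enumerate(forms):
    print(f"form {f}: mean information {info[items].mean():.3f}")
```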
Stocking, Martha L.; And Others – 1991
A previously developed method of automatically selecting items for inclusion in a test subject to constraints on item content and statistical properties is applied to real data. Two tests are first assembled by experts in test construction who normally assemble such tests on a routine basis. Using the same pool of items and constraints articulated…
Descriptors: Algorithms, Automation, Coding, Computer Assisted Testing
Longford, Nicholas T. – 1994
This study is a critical evaluation of the roles for coding and scoring of missing responses to multiple-choice items in educational tests. The focus is on tests in which the test-takers have little or no motivation; in such tests, omitting and not reaching (as classified by the currently adopted operational rules) are quite frequent. Data from the…
Descriptors: Algorithms, Classification, Coding, Models
Peer reviewed
Schnipke, Deborah L.; Green, Bert F. – Journal of Educational Measurement, 1995
Two item selection algorithms, one based on maximal differentiation between examinees and one based on item response theory and maximum information for each examinee, were compared in simulated linear and adaptive tests of cognitive ability. Adaptive tests based on maximum information were clearly superior. (SLD)
Descriptors: Adaptive Testing, Algorithms, Comparative Analysis, Item Response Theory
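For the maximum-information rule in the Schnipke and Green comparison, the selection step itself is simple; the sketch below implements it under a 2PL model with simulated parameters (the 2PL choice and the parameter values are assumptions, not the study's actual setup).

```python
# Minimal sketch of maximum-information item selection under a 2PL model
# (the model and parameters are assumptions, not the study's actual setup).
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of 2PL items at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

rng = np.random.default_rng(3)
a = rng.uniform(0.5, 2.0, 300)           # discriminations
b = rng.normal(0.0, 1.0, 300)            # difficulties
administered = {17, 42}                  # items this examinee has already seen
theta_hat = 0.4                          # current ability estimate

info = info_2pl(theta_hat, a, b)
info[list(administered)] = -np.inf       # never reselect an administered item
print("next item:", int(np.argmax(info)))
```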
van der Linden, Wim J. – 1997
In constrained adaptive testing, the numbers of constraints needed to control the content of the tests can easily run into the hundreds. Proper initialization of the algorithm becomes a requirement because the presence of large numbers of constraints slows down the convergence of the ability estimator. In this paper, an empirical initialization of…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
Stocking, Martha L.; And Others – 1991
This paper presents a new heuristic approach to interactive test assembly that is called the successive item replacement algorithm. This approach builds on the work of W. J. van der Linden (1987) and W. J. van der Linden and E. Boekkooi-Timminga (1989) in which methods of mathematical optimization are combined with item response theory to…
Descriptors: Algorithms, Automation, Computer Selection, Heuristics
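The abstract names the successive item replacement algorithm but the snippet cuts off before the details. As an illustration of replacement-style heuristics generally (not the authors' algorithm), the sketch below repeatedly swaps one in-test item for one pool item whenever the swap moves total information closer to a hypothetical target.

```python
# Generic replacement-heuristic sketch (not the authors' algorithm): swap one
# in-test item for one pool item whenever the swap closes the gap to a target.
import numpy as np

rng = np.random.default_rng(4)
info = rng.uniform(0.1, 1.0, 150)        # item information at a reference ability (simulated)
test = list(range(20))                   # starting test
pool = [i for i in range(150) if i not in test]
target = 14.0                            # hypothetical target information

def gap(items):
    return abs(info[items].sum() - target)

improved = True
while improved:
    improved = False
    for ti, t in enumerate(test):
        for pi, p in enumerate(pool):
            trial = test.copy(); trial[ti] = p
            if gap(trial) < gap(test):
                test[ti], pool[pi] = p, t    # commit the swap and rescan
                improved = True
                break
        if improved:
            break

print("final information:", round(float(info[test].sum()), 3), "target:", target)
```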
Peer reviewed
Berger, Martijn P. F. – Applied Psychological Measurement, 1994
This paper focuses on similarities of optimal design of fixed-form tests, adaptive tests, and testlets within the framework of the general theory of optimal designs. A sequential design procedure is proposed that uses these similarities to obtain consistent estimates for the trait level distribution. (SLD)
Descriptors: Achievement Tests, Adaptive Testing, Algorithms, Estimation (Mathematics)
Peer reviewed
Stocking, Martha L.; And Others – Applied Psychological Measurement, 1993
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
Descriptors: Algorithms, Automation, Comparative Testing, Computer Assisted Testing
Davey, Tim; Parshall, Cynthia G. – 1995
Although computerized adaptive tests acquire their efficiency by successively selecting items that provide optimal measurement at each examinee's estimated level of ability, operational testing programs will typically consider additional factors in item selection. In practice, items are generally selected with regard to at least three, often…
Descriptors: Ability, Adaptive Testing, Algorithms, Computer Assisted Testing
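The snippet truncates before listing the additional selection factors the authors have in mind, so the sketch below is purely illustrative: it combines item information at the current ability estimate with one hypothetical extra factor, an exposure-control penalty, via a weighted score. Neither the factor nor the weighting is taken from the paper.

```python
# Illustration only: weight item information against one hypothetical extra
# factor (an exposure penalty); not the authors' selection rule.
import numpy as np

rng = np.random.default_rng(5)
info_at_theta = rng.uniform(0.1, 1.0, 200)   # information at the current ability estimate
exposure_rate = rng.uniform(0.0, 0.6, 200)   # fraction of past examinees who saw each item
penalty_weight = 0.5                         # hypothetical trade-off weight

score = info_at_theta - penalty_weight * exposure_rate
print("selected item:", int(np.argmax(score)))
```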
Ackerman, Terry A. – 1991
This paper examines the effect of using unidimensional item response theory (IRT) item parameter estimates of multidimensional items to create weakly parallel test forms using target information curves. To date, all computer-based algorithms that have been devised to create parallel test forms assume that the items are unidimensional. This paper…
Descriptors: Algorithms, Equations (Mathematics), Estimation (Mathematics), Item Response Theory
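Building weakly parallel forms from target information curves, as the Ackerman abstract describes, amounts to checking each assembled form's test information function against the target across the ability scale. The sketch below does that check for a unidimensional 2PL form with simulated parameters and a hypothetical target curve; treating the items as unidimensional is exactly the assumption the paper examines.

```python
# Hedged sketch: compare a form's unidimensional 2PL test information function
# to a target curve. Parameters and the target are simulated, not from the paper.
import numpy as np

def test_info(theta, a, b):
    """Total 2PL test information at each ability value in theta."""
    z = a[None, :] * (theta[:, None] - b[None, :])
    p = 1.0 / (1.0 + np.exp(-z))
    return (a[None, :] ** 2 * p * (1.0 - p)).sum(axis=1)

rng = np.random.default_rng(6)
a = rng.uniform(0.5, 2.0, 30)                # discriminations for a 30-item form
b = rng.normal(0.0, 1.0, 30)                 # difficulties
theta = np.linspace(-3, 3, 13)               # ability grid

target = 6.0 * np.exp(-0.5 * theta ** 2)     # hypothetical target information curve
observed = test_info(theta, a, b)
print("max deviation from target:", round(float(np.abs(observed - target).max()), 3))
```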
Peer reviewed
Armstrong, R. D.; And Others – Applied Psychological Measurement, 1996
When the network-flow algorithm (NFA) and the average growth approximation algorithm (AGAA) were used for automated test assembly with American College Test and Armed Services Vocational Aptitude Battery item banks, results indicated that reasonable error in item parameters is not harmful for test assembly using NFA or AGAA. (SLD)
Descriptors: Algorithms, Aptitude Tests, College Entrance Examinations, Computer Assisted Testing