Belov, Dmitry I.; Armstrong, Ronald D. – Educational and Psychological Measurement, 2009
The recent literature on computerized adaptive testing (CAT) has developed methods for creating CAT item pools from a large master pool. Each CAT pool is designed as a set of nonoverlapping forms reflecting the skill levels of an assumed population of test takers. This article presents a Monte Carlo method to obtain these CAT pools and discusses…
Descriptors: Computer Assisted Testing, Adaptive Testing, Item Banks, Test Items
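The abstract describes drawing nonoverlapping CAT item pools from a large master pool by Monte Carlo sampling. A minimal sketch of that idea, assuming a hypothetical master pool stored as a list of item dictionaries and a simple feasibility check standing in for the real pool-design constraints:

```python
import random

def draw_cat_pools(master_pool, n_pools, pool_size, is_feasible, n_trials=10000):
    """Monte Carlo search for nonoverlapping CAT pools drawn from a master pool.

    master_pool : list of item dicts (hypothetical format, e.g. {"id": ..., "b": ...})
    is_feasible : callable that checks a candidate pool against content constraints
    """
    best = None
    for _ in range(n_trials):
        items = master_pool[:]
        random.shuffle(items)
        # Partition the shuffled pool into disjoint candidate CAT pools.
        pools = [items[i * pool_size:(i + 1) * pool_size] for i in range(n_pools)]
        if all(is_feasible(p) for p in pools):
            best = pools
            break
    return best  # None if no feasible partition was found within n_trials draws

# Toy usage: 200 items with difficulties; require each pool's mean difficulty near 0.
master = [{"id": i, "b": random.gauss(0, 1)} for i in range(200)]
ok = lambda pool: abs(sum(it["b"] for it in pool) / len(pool)) < 0.2
pools = draw_cat_pools(master, n_pools=4, pool_size=40, is_feasible=ok)
```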

Armstrong, Ronald D.; Jones, Douglas H.; Wang, Zhaobo – Journal of Educational and Behavioral Statistics, 1998
Generating a test from an item bank using a criterion based on classical test theory parameters poses considerable problems. A mathematical model is formulated that maximizes the reliability coefficient alpha, subject to logical constraints on the choice of items. Theorems ensuring appropriate application of the Lagrangian relaxation techniques are…
Descriptors: Item Banks, Mathematical Models, Reliability, Test Construction
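The model selects items so as to maximize coefficient alpha under constraints. As a rough illustration only (not the paper's Lagrangian relaxation formulation), a sketch that computes alpha for a candidate item set and grows a test greedily, assuming a hypothetical examinee-by-item response matrix:

```python
import numpy as np

def coefficient_alpha(responses):
    """Cronbach's alpha for a persons x items score matrix."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def greedy_alpha_test(responses, test_length):
    """Greedily add the item that most increases alpha (illustrative heuristic,
    not the paper's exact optimization with logical constraints)."""
    n_items = responses.shape[1]
    selected = []
    while len(selected) < test_length:
        best_item, best_alpha = None, -np.inf
        for j in range(n_items):
            if j in selected:
                continue
            trial = selected + [j]
            # Alpha is undefined for a single item, so the first pick is arbitrary here.
            a = coefficient_alpha(responses[:, trial]) if len(trial) >= 2 else 0.0
            if a > best_alpha:
                best_item, best_alpha = j, a
        selected.append(best_item)
    return selected

# Toy usage with simulated 0/1 responses for 500 examinees on a 60-item bank.
rng = np.random.default_rng(0)
ability = rng.normal(size=(500, 1))
difficulty = rng.normal(size=(1, 60))
data = (rng.random((500, 60)) < 1 / (1 + np.exp(-(ability - difficulty)))).astype(float)
picked = greedy_alpha_test(data, test_length=20)
```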

Armstrong, Ronald D.; Jones, Douglas H.; Kunce, Charles S. – Applied Psychological Measurement, 1998
Investigated the use of mathematical programming techniques to generate parallel test forms with passages and items based on item-response theory (IRT) using the Fundamentals of Engineering Examination. Generated four parallel test forms from the item bank of almost 1,100 items. Comparison with human-generated forms supports the mathematical…
Descriptors: Engineering, Item Banks, Item Response Theory, Test Construction
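A common way to make IRT-based forms "parallel" is to match their test information functions. A small sketch under that assumption, using a round-robin greedy heuristic on a hypothetical Rasch item bank; the study itself used mathematical programming over passages and items, which this simplification does not reproduce:

```python
import numpy as np

def item_information(b, theta, a=1.0):
    """Fisher information of a 1PL/2PL item across an ability grid."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def assemble_parallel_forms(bank_b, n_forms, form_length, theta_grid):
    """At each step, give each form the unused item that best covers the gap
    between its information curve and the forms' average (illustrative only)."""
    info = np.array([item_information(b, theta_grid) for b in bank_b])  # items x grid
    unused = set(range(len(bank_b)))
    forms = [[] for _ in range(n_forms)]
    form_info = np.zeros((n_forms, len(theta_grid)))
    for _ in range(form_length):
        for f in range(n_forms):
            target = form_info.mean(axis=0)      # forms chase one another's average
            deficit = target - form_info[f]
            best = max(unused, key=lambda j: float(info[j] @ deficit))
            forms[f].append(best)
            unused.remove(best)
            form_info[f] += info[best]
    return forms

# Toy usage: four 25-item forms from a 1,100-item bank of Rasch difficulties.
rng = np.random.default_rng(1)
bank = rng.normal(size=1100)
grid = np.linspace(-3, 3, 13)
forms = assemble_parallel_forms(bank, n_forms=4, form_length=25, theta_grid=grid)
```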

Armstrong, Ronald D.; And Others – Psychometrika, 1992
A method is presented and illustrated for simultaneously generating multiple tests with similar characteristics from an item bank by using binary programming techniques. The parallel tests are created to match an existing seed test item-for-item and to match user-supplied taxonomic specifications. (SLD)
Descriptors: Algorithms, Arithmetic, Computer Assisted Testing, Equations (Mathematics)
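The 1992 method matches a seed test item-for-item under taxonomic specifications using binary programming. A simplified stand-in for that idea, posed as an assignment problem with scipy; the original is a binary program with richer side constraints, and the data below are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_seed_test(seed, bank, seed_tax, bank_tax):
    """Pick one bank item per seed item, minimizing total difficulty mismatch
    while forbidding matches across taxonomic categories (a simplified stand-in
    for the binary program described in the abstract).

    seed, bank         : arrays of item difficulties
    seed_tax, bank_tax : taxonomic category labels per item
    """
    cost = np.abs(seed[:, None] - bank[None, :])
    # A large penalty effectively rules out matches across taxonomic categories.
    cost[np.not_equal.outer(seed_tax, bank_tax)] = 1e6
    rows, cols = linear_sum_assignment(cost)
    return cols  # bank indices matched to each seed item, in seed order

# Toy usage: a 10-item seed test matched against a 50-item bank.
rng = np.random.default_rng(2)
seed_b, bank_b = rng.normal(size=10), rng.normal(size=50)
seed_cat = rng.integers(0, 3, size=10)
bank_cat = rng.integers(0, 3, size=50)
matched = match_seed_test(seed_b, bank_b, seed_cat, bank_cat)
```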

Armstrong, Ronald D.; And Others – Journal of Educational Statistics, 1994
A network-flow model is formulated for constructing parallel tests based on classical test theory, using test reliability as the criterion. Practitioners can specify a target test-difficulty distribution over item-difficulty values as well as test-composition requirements. An empirical study illustrates the reliability of generated tests. (SLD)
Descriptors: Algorithms, Computer Assisted Testing, Difficulty Level, Item Banks
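The 1994 paper casts parallel-form construction as a network-flow problem. A toy version of that framing, assuming networkx is available and reducing the problem to routing items from difficulty bins into forms; the paper's model also encodes reliability and test-composition requirements, which are omitted here:

```python
import networkx as nx

def build_forms_by_flow(item_bins, n_forms, quota):
    """Route items from difficulty bins into forms with a min-cost flow
    (a toy version of the network-flow formulation in the abstract).

    item_bins : dict item_id -> difficulty bin label
    quota     : dict bin label -> number of items each form needs from that bin
    """
    form_length = sum(quota.values())
    G = nx.DiGraph()
    G.add_node("s", demand=-n_forms * form_length)
    for f in range(n_forms):
        G.add_node(("form", f), demand=form_length)
        for b, q in quota.items():
            # Per-form, per-bin slot node caps how many items of that difficulty a form gets.
            G.add_edge(("slot", f, b), ("form", f), capacity=q, weight=0)
    for item, b in item_bins.items():
        G.add_edge("s", ("item", item), capacity=1, weight=0)
        for f in range(n_forms):
            G.add_edge(("item", item), ("slot", f, b), capacity=1, weight=0)
    # Zero weights make this a pure feasibility flow; weights could encode preferences.
    flow = nx.min_cost_flow(G)  # raises NetworkXUnfeasible if the quotas cannot be met
    forms = [[] for _ in range(n_forms)]
    for item, b in item_bins.items():
        for f in range(n_forms):
            if flow[("item", item)].get(("slot", f, b), 0) == 1:
                forms[f].append(item)
    return forms

# Toy usage: 60 items in three difficulty bins, two 15-item forms, 5 items per bin.
bins = {i: ["easy", "medium", "hard"][i % 3] for i in range(60)}
forms = build_forms_by_flow(bins, n_forms=2, quota={"easy": 5, "medium": 5, "hard": 5})
```
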

Belov, Dmitry I.; Armstrong, Ronald D. – Applied Psychological Measurement, 2005
A new test assembly algorithm based on a Monte Carlo random search is presented in this article. A major advantage of the Monte Carlo test assembly over other approaches (integer programming or enumerative heuristics) is that it performs a uniform sampling from the item pool, which provides every feasible item combination (test) with an equal…
Descriptors: Item Banks, Computer Assisted Testing, Monte Carlo Methods, Evaluation Methods
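The key claim is that Monte Carlo assembly samples uniformly from the item pool, so every feasible item combination is equally likely to be returned. A minimal sketch of that rejection-sampling idea, with a hypothetical feasibility check standing in for the real constraint set:

```python
import random

def monte_carlo_assemble(pool, test_length, is_feasible, max_draws=100000):
    """Draw fixed-length item combinations uniformly at random and keep the first
    feasible one. Because every combination is equally likely to be drawn, every
    feasible test has the same probability of being returned (the property the
    abstract highlights); the constraints here are placeholders, not the paper's."""
    item_ids = list(pool)
    for _ in range(max_draws):
        candidate = random.sample(item_ids, test_length)
        if is_feasible(candidate, pool):
            return candidate
    return None  # no feasible test found within the draw budget

# Toy usage: 300-item pool with difficulties; require mean difficulty in [-0.1, 0.1].
pool = {i: random.gauss(0, 1) for i in range(300)}
def feasible(items, pool):
    mean_b = sum(pool[i] for i in items) / len(items)
    return -0.1 <= mean_b <= 0.1
test = monte_carlo_assemble(pool, test_length=30, is_feasible=feasible)
```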