Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 2
  Since 2016 (last 10 years): 3
  Since 2006 (last 20 years): 5

Descriptor
  Automation: 11
  Computer Assisted Testing: 11
  Test Construction: 7
  Item Banks: 4
  Adaptive Testing: 3
  Scores: 3
  Scoring: 3
  Test Items: 3
  Thinking Skills: 3
  Comparative Analysis: 2
  Item Response Theory: 2
Source
  Journal of Educational…: 11

Author
  Luecht, Richard M.: 2
  van der Linden, Wim J.: 2
  Bennett, Randy Elliot: 1
  Cai, Yan: 1
  Casabianca, Jodi M.: 1
  Chao, Szu-Fu: 1
  Choi, Ikkyu: 1
  Clauser, Brian E.: 1
  Clyman, Stephen G.: 1
  Diao, Qi: 1
  Donoghue, John R.: 1
Publication Type
  Journal Articles: 11
  Reports - Descriptive: 5
  Reports - Research: 3
  Reports - Evaluative: 2
  Book/Product Reviews: 1
  Speeches/Meeting Papers: 1
Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item response theory to model rater effects offers an alternative to standard performance metrics for rater monitoring and diagnosis. To fit such models, however, the ratings data must be sufficiently connected to allow estimation of the rater effects. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
Xu, Lingling; Wang, Shiyu; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2021
Designing a multidimensional multistage test (M-MST) based on a multidimensional item response theory (MIRT) model is critical to making full use of the advantages of both MST and MIRT in implementing multidimensional assessments. This study proposed two types of automated test assembly (ATA) algorithms and one set of routing rules that can facilitate…
Descriptors: Item Response Theory, Adaptive Testing, Automation, Test Construction
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step in the typical process of developing educational and psychological tests is to place the selected test items into a formatted test form. This step involves grouping and ordering the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
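The MIP-based assembly described in the two abstracts above can be sketched in miniature: choose items from a bank so as to maximize summed information at a cut score, subject to a form-length and a content constraint. The item bank, information values, and constraint below are invented for illustration, and a brute-force search over subsets stands in for an actual MIP solver.

```python
from itertools import combinations

# Hypothetical 6-item bank: (item id, Fisher information at the cut score, content area).
bank = [
    ("i1", 0.42, "algebra"), ("i2", 0.35, "algebra"), ("i3", 0.50, "geometry"),
    ("i4", 0.28, "geometry"), ("i5", 0.61, "algebra"), ("i6", 0.33, "geometry"),
]

def assemble(bank, form_length=3, min_geometry=1):
    """Pick the form maximizing summed information subject to a content
    constraint -- a brute-force stand-in for the 0-1 MIP
    max sum(I_i * x_i)  s.t.  sum(x_i) = n  and content bounds."""
    best, best_info = None, -1.0
    for form in combinations(bank, form_length):
        if sum(1 for _, _, area in form if area == "geometry") < min_geometry:
            continue
        info = sum(i for _, i, _ in form)
        if info > best_info:
            best, best_info = form, info
    return [item_id for item_id, _, _ in best], best_info

form, info = assemble(bank)
print(form, round(info, 2))  # → ['i1', 'i3', 'i5'] 1.53
```

A real solver handles banks of thousands of items with many simultaneous constraints; the exhaustive loop here is only feasible because the toy bank is tiny.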
Finkelman, Matthew; Kim, Wonsuk; Roussos, Louis A. – Journal of Educational Measurement, 2009
Much recent psychometric literature has focused on cognitive diagnosis models (CDMs), a promising class of instruments used to measure the strengths and weaknesses of examinees. This article introduces a genetic algorithm to perform automated test assembly alongside CDMs. The algorithm is flexible in that it can be applied whether the goal is to…
Descriptors: Identification, Genetics, Test Construction, Mathematics
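A genetic algorithm for test assembly, as named in the abstract above, can be sketched as follows. This is not the authors' algorithm: the fitness function here is a toy scalar per item standing in for the CDM-based objectives the article targets, and all numbers are invented.

```python
import random

random.seed(0)  # deterministic demo run

# Toy bank of 12 items with one invented "quality" value per item.
BANK_INFO = [0.3, 0.7, 0.5, 0.2, 0.9, 0.4, 0.6, 0.8, 0.1, 0.55, 0.35, 0.65]
FORM_LEN, POP_SIZE, GENERATIONS = 4, 20, 60

def fitness(form):
    return sum(BANK_INFO[i] for i in form)

def random_form():
    return tuple(sorted(random.sample(range(len(BANK_INFO)), FORM_LEN)))

def crossover(a, b):
    # Child draws its items from the union of the parents' items.
    pool = list(set(a) | set(b))
    return tuple(sorted(random.sample(pool, FORM_LEN)))

def mutate(form):
    # Swap one item in the form for one outside it.
    form = list(form)
    outside = [i for i in range(len(BANK_INFO)) if i not in form]
    form[random.randrange(FORM_LEN)] = random.choice(outside)
    return tuple(sorted(form))

population = [random_form() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[:4]  # elitism: best forms survive unchanged
    children = []
    while len(children) < POP_SIZE - len(elite):
        a, b = random.sample(population[:10], 2)  # select among the fitter half
        child = crossover(a, b)
        if random.random() < 0.3:
            child = mutate(child)
        children.append(child)
    population = elite + children

best = max(population, key=fitness)
```

The flexibility the abstract mentions comes from the fitness function: swapping in a different objective (or adding penalty terms for violated constraints) changes what the search optimizes without changing the algorithm.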

Weiner, John A.; Gibson, Wade M. – Journal of Educational Measurement, 1998
Describes a procedure for automated test-form assembly based on classical test theory (CTT). The procedure uses stratified random content sampling and test-form preequating to ensure both content and psychometric equivalence in generating virtually unlimited parallel forms. Extends the usefulness of CTT in automated test construction. (Author/SLD)
Descriptors: Automation, Computer Assisted Testing, Equated Scores, Psychometrics
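The stratified random-sampling idea in the abstract above can be illustrated in a few lines: partition the bank by content area, then draw a fixed number of items from each stratum for every form. The bank, strata, and blueprint counts are hypothetical; the sketch assumes items within a stratum are CTT-equivalent, which is what makes the randomly drawn forms parallel.

```python
import random

random.seed(1)  # reproducible demo

# Hypothetical bank stratified by content area. Within each stratum the
# items are assumed to have similar CTT statistics, so random sampling
# per stratum yields content- and psychometrically-parallel forms.
BANK = {
    "vocabulary": [f"v{i}" for i in range(1, 21)],
    "reading":    [f"r{i}" for i in range(1, 21)],
    "grammar":    [f"g{i}" for i in range(1, 21)],
}
BLUEPRINT = {"vocabulary": 3, "reading": 4, "grammar": 3}  # items per form

def build_form(bank, blueprint):
    """Draw one random parallel form: k items from each content stratum."""
    form = []
    for area, k in blueprint.items():
        form.extend(random.sample(bank[area], k))
    return form

form_a = build_form(BANK, BLUEPRINT)
form_b = build_form(BANK, BLUEPRINT)
```

Because every form follows the same blueprint, content equivalence holds by construction; the preequating step the abstract mentions would then align the score scales across forms.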

Luecht, Richard M.; Nungester, Ronald J. – Journal of Educational Measurement, 1998
Describes an integrated approach to test development and administration called computer-adaptive sequential testing (CAST). CAST incorporates adaptive testing methods with automated test assembly. Describes the CAST framework and demonstrates several applications using a medical-licensure example. (SLD)
Descriptors: Adaptive Testing, Automation, Computer Assisted Testing, Licensing Examinations (Professions)

Stocking, Martha L.; Jirele, Thomas; Lewis, Charles; Swanson, Len – Journal of Educational Measurement, 1998
Constructed a pool of items from operational mathematics tests to investigate the feasibility of using automated test assembly (ATA) methods to simultaneously moderate possibly irrelevant differences between the performance of women and men and of African-American and White test takers. Discusses the usefulness of ATA. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests

Luecht, Richard M. – Journal of Educational Measurement, 1998
Comments on the application of a proposed automated test assembly (ATA) approach to the problem of reducing potential performance differentials among population subgroups and points out some pitfalls. Presents a rejoinder by M. Stocking and others. (SLD)
Descriptors: Automation, Computer Assisted Testing, Item Banks, Mathematics Tests

Clauser, Brian E.; Margolis, Melissa J.; Clyman, Stephen G.; Ross, Linette P. – Journal of Educational Measurement, 1997
Research on automated scoring is extended by comparing alternative automated systems for scoring a computer simulation of physicians' patient management skills. A regression-based system is more highly correlated with experts' evaluations than a system that uses complex rules to map performances into score levels, but both approaches are feasible.…
Descriptors: Algorithms, Automation, Comparative Analysis, Computer Assisted Testing

Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing