Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 2
Since 2016 (last 10 years) | 5
Since 2006 (last 20 years) | 6
Descriptor
Automation | 13
Test Items | 13
Test Construction | 8
Computer Assisted Testing | 7
Item Banks | 5
Algorithms | 3
Item Response Theory | 3
Scoring | 3
Artificial Intelligence | 2
Heuristics | 2
Models | 2
Source
Applied Psychological Measurement | 2
Journal of Educational Measurement | 2
Contemporary Educational Technology | 1
Educational Measurement: Issues and Practice | 1
IEEE Transactions on Learning Technologies | 1
Journal of Applied Testing Technology | 1
Author
Stocking, Martha L. | 3
Bennett, Randy Elliot | 2
Basu, Anupam | 1
Becker, Benjamin | 1
Bickel, Lisa | 1
Brunnert, Kim | 1
Cole, Brian S. | 1
Das, Syaamantak | 1
Debeer, Dries | 1
Geerlings, Hanneke | 1
Glas, Cees A. W. | 1
Publication Type
Reports - Evaluative | 13
Journal Articles | 8
Speeches/Meeting Papers | 3
Education Level
Elementary Secondary Education | 1
Assessments and Surveys
National Assessment of… | 1
Becker, Benjamin; Weirich, Sebastian; Goldhammer, Frank; Debeer, Dries – Journal of Educational Measurement, 2023
When designing or modifying a test, an important challenge is controlling its speededness. To achieve this, van der Linden (2011a, 2011b) proposed using a lognormal response time model, more specifically the two-parameter lognormal model, and automated test assembly (ATA) via mixed integer linear programming. However, this approach has a severe…
Descriptors: Test Construction, Automation, Models, Test Items
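For context, the two-parameter lognormal model referenced above treats the log response time of test taker j on item i as normally distributed. A standard statement of the density (van der Linden's parameterization; the notation here is ours) is

$$ f(t_{ij};\,\tau_j,\alpha_i,\beta_i) = \frac{\alpha_i}{t_{ij}\sqrt{2\pi}} \exp\!\left( -\frac{\alpha_i^{2}}{2}\bigl[\ln t_{ij} - (\beta_i - \tau_j)\bigr]^{2} \right), $$

where tau_j is the speed of person j, beta_i the time intensity of item i, and alpha_i a discrimination parameter. Speededness can then be controlled in ATA because the expected time on item i at a reference speed tau, exp(beta_i - tau + 1/(2*alpha_i^2)), enters a total-time constraint linearly in the binary item-selection variables.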
Qiao, Chen; Hu, Xiao – IEEE Transactions on Learning Technologies, 2023
Free text answers to short questions can reflect students' mastery of concepts and their relationships relevant to learning objectives. However, automating the assessment of free text answers has been challenging due to the complexity of natural language. Existing studies often predict the scores of free text answers in a "black box"…
Descriptors: Computer Assisted Testing, Automation, Test Items, Semantics
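As a generic illustration of transparent (non-"black box") free-text scoring, and not the method of the study above, a baseline can score an answer by its lexical similarity to a reference answer. Everything in this sketch (function name, example answers) is our invention:

```python
# A minimal, transparent free-text scoring baseline: TF-IDF cosine
# similarity between a student answer and a reference answer.
# Generic sketch only, not the model from the cited study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def score_answer(student_answer: str, reference_answer: str) -> float:
    """Return a similarity score in [0, 1]; higher means closer to the key."""
    tfidf = TfidfVectorizer().fit_transform([reference_answer, student_answer])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

print(score_answer(
    "Photosynthesis converts light energy into chemical energy.",
    "Plants use light energy to make chemical energy via photosynthesis.",
))
```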
Das, Syaamantak; Mandal, Shyamal Kumar Das; Basu, Anupam – Contemporary Educational Technology, 2020
Identifying the cognitive learning complexity of assessment questions is an essential task in education, as it helps both the teacher and the learner discover the thinking process required to answer a given question. Bloom's Taxonomy cognitive levels are considered a benchmark standard for the classification of cognitive…
Descriptors: Classification, Difficulty Level, Test Items, Identification
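A naive illustration of the classification task described above is mapping a question stem to a Bloom's Taxonomy level by its verbs. This is a toy heuristic, not the cited study's model, and the verb lists are illustrative rather than exhaustive:

```python
# Naive Bloom's-level tagger based on stem verbs. A real system would
# use richer NLP features and a trained classifier; this only shows
# what "cognitive level classification" of a question means.
BLOOM_VERBS = {
    "Remember":   {"define", "list", "name", "recall", "state"},
    "Understand": {"explain", "summarize", "describe", "classify"},
    "Apply":      {"apply", "solve", "compute", "demonstrate"},
    "Analyze":    {"analyze", "compare", "differentiate", "examine"},
    "Evaluate":   {"evaluate", "justify", "critique", "assess"},
    "Create":     {"design", "create", "formulate", "construct"},
}

def bloom_level(question: str) -> str:
    words = set(question.lower().split())
    for level, verbs in BLOOM_VERBS.items():
        if words & verbs:
            return level
    return "Unknown"

print(bloom_level("Compare mitosis and meiosis."))  # -> Analyze
```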
Cole, Brian S.; Lima-Walton, Elia; Brunnert, Kim; Vesey, Winona Burt; Raha, Kaushik – Journal of Applied Testing Technology, 2020
Automatic item generation can rapidly generate large volumes of exam items, but this creates challenges for assembling exams that aim to include syntactically diverse items. First, we demonstrate a diminishing marginal syntactic return for automatic item generation using a saturation detection approach. This analysis can help users of automatic…
Descriptors: Artificial Intelligence, Automation, Test Construction, Test Items
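The "diminishing marginal syntactic return" above can be made concrete by tracking how many previously unseen patterns each new batch of generated items contributes. In this sketch, token bigrams stand in for real syntactic features (our simplification, not the paper's measure):

```python
# Sketch of saturation detection for automatic item generation:
# generation stops paying off once new batches add few unseen patterns.
def bigrams(item: str) -> set[tuple[str, str]]:
    tokens = item.lower().split()
    return set(zip(tokens, tokens[1:]))

def novelty_curve(items: list[str], batch_size: int = 100) -> list[float]:
    seen: set[tuple[str, str]] = set()
    rates = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        batch_patterns = set().union(*(bigrams(i) for i in batch))
        new = batch_patterns - seen
        rates.append(len(new) / max(len(batch_patterns), 1))
        seen |= batch_patterns
    return rates  # a rate falling toward 0 signals syntactic saturation
```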
Kosh, Audra E.; Simpson, Mary Ann; Bickel, Lisa; Kellogg, Mark; Sanford-Moore, Ellie – Educational Measurement: Issues and Practice, 2019
Automatic item generation (AIG)--a means of leveraging technology to create large quantities of items--requires a minimum number of items to offset the sizable upfront investment (i.e., model development and technology deployment) before achieving cost savings. In this cost-benefit analysis, we estimated the cost of each step of AIG and manual…
Descriptors: Cost Effectiveness, Automation, Test Items, Mathematics Tests
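The break-even logic behind such a cost-benefit analysis is straightforward: with upfront investment F and per-item costs c_manual > c_aig, AIG pays off once item volume exceeds F / (c_manual - c_aig). A sketch with invented figures (not the study's estimates):

```python
# Break-even item count for automatic item generation (AIG).
# All dollar figures below are hypothetical illustrations.
def breakeven_items(upfront: float, cost_manual: float, cost_aig: float) -> float:
    """Items needed before AIG's upfront investment is recovered."""
    if cost_aig >= cost_manual:
        raise ValueError("AIG only breaks even if its per-item cost is lower.")
    return upfront / (cost_manual - cost_aig)

# e.g., $50,000 model development, $300/item manual vs. $50/item AIG:
print(breakeven_items(50_000, 300, 50))  # -> 200.0 items
```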
Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W. – Applied Psychological Measurement, 2013
Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…
Descriptors: Test Construction, Test Items, Item Banks, Automation
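Case (a) above is the classic automated test assembly problem. In its standard 0-1 programming form (notation ours), one maximizes test information at a target ability level subject to length and content constraints:

$$ \max_{x}\ \sum_{i=1}^{I} I_i(\theta_0)\,x_i \quad\text{subject to}\quad \sum_{i=1}^{I} x_i = n, \qquad \sum_{i\in V_c} x_i \le b_c \ \text{ for each content category } c, \qquad x_i\in\{0,1\}, $$

where I_i(theta_0) is the Fisher information of item i at the target ability theta_0, n is the test length, and V_c are content categories with upper bounds b_c.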
Stocking, Martha L.; And Others – 1991
A previously developed method of automatically selecting items for inclusion in a test subject to constraints on item content and statistical properties is applied to real data. Two tests are first assembled by experts in test construction who normally assemble such tests on a routine basis. Using the same pool of items and constraints articulated…
Descriptors: Algorithms, Automation, Coding, Computer Assisted Testing
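A minimal sketch of constrained, automated item selection of this general kind, phrased as a 0-1 program and solved with the open-source PuLP library; the item pool, information values, and constraints are toy data, not the study's:

```python
# Automated test assembly as a 0-1 integer program (toy example).
# Maximizes summed item information at a target ability, subject to
# a fixed test length and a per-content-area cap.
import pulp

pool = [  # (item_id, content_area, information_at_theta0)
    ("i1", "algebra", 0.62), ("i2", "algebra", 0.48),
    ("i3", "geometry", 0.55), ("i4", "geometry", 0.40),
    ("i5", "number", 0.70), ("i6", "number", 0.35),
]
TEST_LENGTH, MAX_PER_AREA = 3, 1

prob = pulp.LpProblem("test_assembly", pulp.LpMaximize)
x = {iid: pulp.LpVariable(iid, cat="Binary") for iid, _, _ in pool}

prob += pulp.lpSum(info * x[iid] for iid, _, info in pool)  # objective
prob += pulp.lpSum(x.values()) == TEST_LENGTH               # test length
for area in {a for _, a, _ in pool}:                        # content caps
    prob += pulp.lpSum(x[iid] for iid, a, _ in pool if a == area) <= MAX_PER_AREA

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([iid for iid in x if x[iid].value() == 1])  # -> ['i1', 'i3', 'i5']
```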
Lee, William M.; And Others – 1989
Projects to develop an automated item banking and test development system have been undertaken on several occasions at the Air Force Human Resources Laboratory (AFHRL) throughout the past 10 years. Such a system permits the construction of tests in far less time and with a higher degree of accuracy than earlier test construction procedures. This…
Descriptors: Automation, Computer Assisted Testing, Item Banks, Item Response Theory
Kaplan, Randy M.; Bennett, Randy Elliot – 1994
This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were…
Descriptors: Automation, Computer Assisted Testing, Correlation, Higher Education
Stocking, Martha L.; And Others – 1991
This paper presents a new heuristic approach to interactive test assembly that is called the successive item replacement algorithm. This approach builds on the work of W. J. van der Linden (1987) and W. J. van der Linden and E. Boekkooi-Timminga (1989) in which methods of mathematical optimization are combined with item response theory to…
Descriptors: Algorithms, Automation, Computer Selection, Heuristics
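A schematic of the replacement idea (our reconstruction of the general greedy pattern, not Stocking's exact algorithm): start from a feasible test and repeatedly swap in a pool item that improves the objective while preserving feasibility, stopping when no such swap exists.

```python
# First-improvement item-replacement loop for interactive test assembly.
# `test` and `pool` are sets of item ids; `objective` scores a test
# (e.g., via IRT information) and `feasible` checks the constraints.
def improve_test(test, pool, objective, feasible):
    improved = True
    while improved:
        improved = False
        for out_item in list(test):
            for in_item in pool - test:
                candidate = (test - {out_item}) | {in_item}
                if feasible(candidate) and objective(candidate) > objective(test):
                    test, improved = candidate, True
                    break
            if improved:
                break
    return test
```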

Stocking, Martha L.; And Others – Applied Psychological Measurement, 1993
A method of automatically selecting items for inclusion in a test with constraints on item content and statistical properties was applied to real data. Tests constructed manually from the same data and constraints were compared to tests constructed automatically. Results show areas in which automated assembly can improve test construction. (SLD)
Descriptors: Algorithms, Automation, Comparative Testing, Computer Assisted Testing

Bennett, Randy Elliot; Steffen, Manfred; Singley, Mark Kevin; Morley, Mary; Jacquemin, Daniel – Journal of Educational Measurement, 1997
Scoring accuracy and item functioning were studied for an open-ended response type test in which correct answers can take many different surface forms. Results with 1,864 graduate school applicants showed automated scoring to approximate the accuracy of multiple-choice scoring. Items functioned similarly to other item types being considered. (SLD)
Descriptors: Adaptive Testing, Automation, College Applicants, Computer Assisted Testing
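The "many different surface forms" problem above is what symbolic equivalence checking addresses; a modern sketch with SymPy (a library that postdates the study, used here only to illustrate the idea, not the study's scoring engine):

```python
# Scoring an open-ended math response by symbolic equivalence:
# algebraically equal surface forms all earn credit.
from sympy import simplify
from sympy.parsing.sympy_parser import parse_expr

def is_equivalent(response: str, key: str) -> bool:
    return simplify(parse_expr(response) - parse_expr(key)) == 0

print(is_equivalent("(x - 1)*(x + 1)", "x**2 - 1"))  # True
print(is_equivalent("2*x/2", "x"))                   # True
print(is_equivalent("x + 1", "x - 1"))               # False
```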
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests
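As a toy illustration of automatic figural response scoring (our sketch, not the paper's procedure): represent the examinee's marks and the key as sets of grid cells and credit their overlap.

```python
# Toy scorer for a figural response item: the examinee marks grid
# cells and credit depends on overlap with the key (Jaccard overlap,
# which also penalizes extra marks). Illustrative only.
def score_figural(marked: set[tuple[int, int]], key: set[tuple[int, int]]) -> float:
    """Fraction correct in [0, 1]."""
    return len(marked & key) / len(marked | key) if (marked or key) else 1.0

key = {(0, 0), (0, 1), (1, 1)}
print(score_figural({(0, 0), (0, 1), (1, 1)}, key))  # 1.0
print(score_figural({(0, 0), (2, 2)}, key))          # 0.25
```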