Publication Date
  In 2025: 0
  Since 2024: 1
  Since 2021 (last 5 years): 5
  Since 2016 (last 10 years): 7
  Since 2006 (last 20 years): 12
Descriptor
  Automation: 14
  Computer Assisted Testing: 14
  Test Format: 14
  Test Items: 6
  Scoring: 4
  Test Construction: 4
  Foreign Countries: 3
  Item Banks: 3
  Programming: 3
  Adaptive Testing: 2
  Comparative Analysis: 2
Author
  van der Linden, Wim J.: 3
  Diao, Qi: 2
  Sayin, Ayfer: 1
  Carr, Nathan T.: 1
  Casabianca, Jodi M.: 1
  Chang, Hua-Hua: 1
  Chatzigiannakou, Maria: 1
  Figueira, Álvaro: 1
  Gao, Xiaohong: 1
  Gierl, Mark J.: 1
  Kirschner, Larissa: 1
Publication Type
  Journal Articles: 11
  Reports - Descriptive: 6
  Reports - Research: 6
  Information Analyses: 1
  Reports - Evaluative: 1
  Speeches/Meeting Papers: 1
Education Level
  Elementary Education: 2
  Grade 4: 1
  Higher Education: 1
  Intermediate Grades: 1
  Postsecondary Education: 1
Audience
  Researchers: 1
Assessments and Surveys
  National Assessment of…: 1
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, three challenges stand in the way of their broad-scale adoption: a technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
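The scoring step such SAG models automate can be illustrated with a minimal similarity-based sketch; the tokenization, reference answers, and acceptance threshold below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of similarity-based short-answer grading: a response
# earns credit if it is lexically close enough to a reference answer.
# Real SAG models use trained classifiers or embeddings instead.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def grade(answer: str, references: list[str], threshold: float = 0.6) -> int:
    """Award 1 point if the answer matches any reference closely enough."""
    student = Counter(answer.lower().split())
    best = max(cosine(student, Counter(ref.lower().split()))
               for ref in references)
    return int(best >= threshold)

print(grade("water boils at 100 degrees celsius",
            ["at sea level water boils at 100 degrees celsius"]))  # -> 1
```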
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
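One practice such guidelines commonly cover is monitoring agreement between automated and human scores. A minimal sketch of a standard agreement statistic, quadratic weighted kappa, follows; the 0-3 score scale and the paired ratings are made-up data, not material from the report.

```python
# Quadratic weighted kappa: chance-corrected agreement between two
# raters (here, human vs. automated scores on a 0-3 scale).
def quadratic_weighted_kappa(human, machine, num_levels):
    n = len(human)
    observed = [[0.0] * num_levels for _ in range(num_levels)]
    for h, m in zip(human, machine):
        observed[h][m] += 1 / n
    # Marginal distributions of each rater's scores.
    hist_h = [sum(row) for row in observed]
    hist_m = [sum(observed[i][j] for i in range(num_levels))
              for j in range(num_levels)]
    num = den = 0.0
    for i in range(num_levels):
        for j in range(num_levels):
            w = (i - j) ** 2 / (num_levels - 1) ** 2  # quadratic penalty
            num += w * observed[i][j]
            den += w * hist_h[i] * hist_m[j]
    return 1 - num / den

human = [0, 1, 2, 3, 2, 1]
machine = [0, 1, 2, 2, 2, 1]  # one discrepant score
print(round(quadratic_weighted_kappa(human, machine, num_levels=4), 3))
```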
Paiva, José Carlos; Leal, José Paulo; Figueira, Álvaro – ACM Transactions on Computing Education, 2022
Practical programming competencies are critical to success in computer science (CS) education and to fresh graduates' readiness for the job market. Acquiring the required level of skill is a long journey of discovery, trial and error, and optimization across a broad range of programming activities that learners must perform themselves. It is not…
Descriptors: Automation, Computer Assisted Testing, Student Evaluation, Computer Science Education
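The core of such automated assessment environments is typically test-based grading of submissions. A minimal sketch follows, with an invented exercise and test cases; real graders also sandbox and time-limit the code they run.

```python
# Minimal sketch of unit-test-style grading of a programming exercise:
# run the submitted function on (args, expected) pairs and report a score.
def run_tests(submission, test_cases):
    passed, feedback = 0, []
    for args, expected in test_cases:
        try:
            result = submission(*args)
            ok = result == expected
        except Exception as exc:  # a crash on a test case counts as a failure
            ok, result = False, f"raised {exc!r}"
        passed += ok
        status = "ok" if ok else f"expected {expected}"
        feedback.append(f"input {args} -> {result} ({status})")
    return passed / len(test_cases), feedback

def student_factorial(n):  # a sample submission
    return 1 if n <= 1 else n * student_factorial(n - 1)

score, report = run_tests(student_factorial, [((0,), 1), ((1,), 1), ((5,), 120)])
print(f"score: {score:.0%}")
print("\n".join(report))
```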
Ayfer Sayin; Sabiha Bozdag; Mark J. Gierl – International Journal of Assessment Tools in Education, 2023
The purpose of this study is to generate non-verbal items for a visual reasoning test using template-based automatic item generation (AIG). The research method followed the three stages of template-based AIG. An item from the 2016 4th-grade entrance exam of the Science and Art Center (known as BILSEM) was chosen as the…
Descriptors: Test Items, Test Format, Nonverbal Tests, Visual Measures
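Template-based AIG instantiates one parent template with systematically varied values to yield a family of items. A minimal sketch follows, using an arithmetic template because the paper's figural templates do not render in plain text; every value is illustrative.

```python
# Minimal sketch of template-based automatic item generation (AIG):
# one parent template plus value ranges yields a family of items,
# each with a model-derived key and distractors.
from itertools import product

TEMPLATE = "A train travels {speed} km/h for {hours} hours. How far does it go?"

def generate_items(speeds, hours_list):
    items = []
    for speed, hours in product(speeds, hours_list):
        key = speed * hours  # correct answer from the cognitive model
        distractors = [speed + hours, key + 10, abs(key - speed)]
        items.append({"stem": TEMPLATE.format(speed=speed, hours=hours),
                      "key": key, "distractors": distractors})
    return items

for item in generate_items(speeds=[60, 80], hours_list=[2, 3]):
    print(item["stem"], "->", item["key"])
```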
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts generally, and reading comprehension specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Li, Jie; van der Linden, Wim J. – Journal of Educational Measurement, 2018
The final step of the typical process of developing educational and psychological tests is to place the selected test items into a formatted test form. This step involves grouping and ordering the items to meet a variety of formatting constraints. As this activity tends to be time-intensive, the use of mixed-integer programming (MIP) has been…
Descriptors: Programming, Automation, Test Items, Test Format
Papasalouros, Andreas; Chatzigiannakou, Maria – International Association for Development of the Information Society, 2018
Automating the production of questions for assessment and self-assessment has recently become an active field of study. The use of Semantic Web technologies has certain advantages over other methods for question generation and is thus one of the most important lines of research for this problem. The aim of this paper is to provide an overview of…
Descriptors: Computer Assisted Testing, Web 2.0 Technologies, Test Format, Multiple Choice Tests
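In the triple-based style of question generation such overviews survey, a fact becomes a stem and distractors are drawn from objects of the same predicate. A minimal sketch with a made-up knowledge base (real systems operate on ontologies, not Python lists):

```python
# Minimal sketch of multiple-choice question generation from
# Semantic-Web-style (subject, predicate, object) triples.
import random

TRIPLES = [
    ("Paris", "is the capital of", "France"),
    ("Rome", "is the capital of", "Italy"),
    ("Madrid", "is the capital of", "Spain"),
    ("Lisbon", "is the capital of", "Portugal"),
]

def make_mcq(triple, triples, n_distractors=3):
    subject, predicate, answer = triple
    # Distractors: objects of the same predicate, so all options are plausible.
    pool = [o for s, p, o in triples if p == predicate and o != answer]
    options = random.sample(pool, min(n_distractors, len(pool))) + [answer]
    random.shuffle(options)
    return {"stem": f"{subject} {predicate} which country?",
            "options": options, "key": answer}

q = make_mcq(TRIPLES[0], TRIPLES)
print(q["stem"], q["options"], "answer:", q["key"])
```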
Diao, Qi; van der Linden, Wim J. – Applied Psychological Measurement, 2013
Automated test assembly uses the methodology of mixed integer programming to select an optimal set of items from an item bank. Automated test-form generation uses the same methodology to optimally order the items and format the test form. From an optimization point of view, production of fully formatted test forms directly from the item pool using…
Descriptors: Automation, Test Construction, Test Format, Item Banks
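The selection step this abstract describes maps naturally onto a mixed-integer program. A minimal sketch using the open-source PuLP modeler (pip install pulp) follows; the five-item bank, form length, word limit, and difficulty target are made-up numbers, not values from the paper.

```python
# Minimal sketch of automated test assembly (ATA) as a mixed-integer
# program: pick a fixed number of items whose total difficulty is as
# close as possible to a target, subject to a word-count limit.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

bank = {  # item id -> (difficulty, word count)
    "i1": (0.2, 80), "i2": (0.5, 120), "i3": (0.5, 60),
    "i4": (0.8, 150), "i5": (0.6, 90),
}
TEST_LENGTH, MAX_WORDS, TARGET_DIFF_SUM = 3, 300, 1.5

pick = {i: LpVariable(f"pick_{i}", cat="Binary") for i in bank}
dev = LpVariable("dev", lowBound=0)  # |total difficulty - target|

prob = LpProblem("test_assembly", LpMinimize)
prob += dev  # objective: hit the difficulty target as closely as possible
prob += lpSum(pick.values()) == TEST_LENGTH
prob += lpSum(bank[i][1] * pick[i] for i in bank) <= MAX_WORDS
diff_sum = lpSum(bank[i][0] * pick[i] for i in bank)
prob += diff_sum - TARGET_DIFF_SUM <= dev
prob += TARGET_DIFF_SUM - diff_sum <= dev
prob.solve()

print([i for i in bank if value(pick[i]) == 1], "deviation:", value(dev))
```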
Yarnell, Jordy B.; Pfeiffer, Steven I. – Journal of Psychoeducational Assessment, 2015
The present study examined the psychometric equivalence of administering a computer-based version of the Gifted Rating Scale (GRS) compared with the traditional paper-and-pencil GRS-School Form (GRS-S). The GRS-S is a teacher-completed rating scale used in gifted assessment. The GRS-Electronic Form provides an alternative method of administering…
Descriptors: Gifted, Psychometrics, Rating Scales, Computer Assisted Testing
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua – ACT, Inc., 2012
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
Descriptors: Adaptive Testing, Heuristics, Accuracy, Item Banks
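The adaptivity of an MST lives in its routing rule: every examinee takes a common first-stage module, and the second-stage module is chosen from the first-stage score. A minimal two-stage sketch follows; the modules, cut score, and number-correct routing rule are invented for illustration.

```python
# Minimal sketch of two-stage multistage-test (MST) routing: all
# examinees take the routing module, then branch to an easier or
# harder second-stage module based on their number-correct score.
MODULES = {
    "routing":     ["r1", "r2", "r3", "r4"],  # medium difficulty
    "stage2_easy": ["e1", "e2", "e3", "e4"],
    "stage2_hard": ["h1", "h2", "h3", "h4"],
}
CUT_SCORE = 3  # route to the hard module with 3+ correct out of 4

def administer(answer_item) -> dict:
    """answer_item(item_id) -> bool; returns the examinee's path and scores."""
    stage1 = sum(answer_item(i) for i in MODULES["routing"])
    branch = "stage2_hard" if stage1 >= CUT_SCORE else "stage2_easy"
    stage2 = sum(answer_item(i) for i in MODULES[branch])
    return {"stage1_score": stage1, "branch": branch, "stage2_score": stage2}

# A simulated strong examinee who answers every item correctly:
print(administer(lambda item: True))
```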
van der Linden, Wim J.; Diao, Qi – Journal of Educational Measurement, 2011
In automated test assembly (ATA), the methodology of mixed-integer programming is used to select test items from an item bank to meet the specifications for a desired test form and optimize its measurement accuracy. The same methodology can be used to automate the formatting of the set of selected items into the actual test form. Three different…
Descriptors: Test Items, Test Format, Test Construction, Item Banks
Carr, Nathan T.; Xi, Xiaoming – Language Assessment Quarterly, 2010
This article examines how the use of automated scoring procedures for short-answer reading tasks can affect the constructs being assessed. In particular, it highlights ways in which the development of scoring algorithms intended to apply the criteria used by human raters can lead test developers to reexamine and even refine the constructs they…
Descriptors: Scoring, Automation, Reading Tests, Test Format
Schofield, J.; Vincent, A. T. – Programmed Learning and Educational Technology, 1986
Describes a project at Chorleywood College (England) which examined the possibility of automating the transcription of braille examination responses for marking by examination boards. Discussion covers the braille code, requirements of the transcription system, an analysis of mock examination papers which led to system modification, and the final…
Descriptors: Automation, Braille, Computer Assisted Testing, Foreign Countries
Martinez, Michael E.; And Others – 1990
Large-scale testing is dominated by the multiple-choice question format. Widespread use of the format is due, in part, to the ease with which multiple-choice items can be scored automatically. This paper examines automatic scoring procedures for an alternative item type: figural response. Figural response items call for the completion or…
Descriptors: Automation, Computer Assisted Testing, Educational Technology, Multiple Choice Tests
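What makes figural response items machine-scorable is that the response is a structured object rather than free text. A minimal sketch: the examinee marks grid cells, and the marked set is compared with a scoring key for partial credit. The key, response, and credit rule here are invented, not taken from the paper.

```python
# Minimal sketch of automatic scoring for a figural response item:
# compare the set of grid cells an examinee marks against a key,
# giving partial credit and penalizing stray marks.
def score_figural(marked: set, key: set) -> float:
    hits = len(marked & key)         # correctly marked cells
    false_marks = len(marked - key)  # stray marks
    raw = max(hits - false_marks, 0)
    return raw / len(key)            # proportion of the key credited

key = {(0, 1), (1, 1), (2, 1)}       # cells that should be shaded
response = {(0, 1), (1, 1), (2, 2)}  # one miss, one stray mark
print(score_figural(response, key))  # -> 0.333...
```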