Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are graded automatically, without human raters. Many AES models, based on manually designed features or on various deep neural network (DNN) architectures, have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
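The direction the abstract gestures at, combining several scoring models rather than relying on one, can be sketched with a weighted ensemble; the model outputs and weights below are hypothetical.

```python
# Minimal sketch of ensembling several AES models' essay scores
# (hypothetical model outputs; real systems would combine feature-based
# and DNN-based scorers, with weights learned from validation data).

def ensemble_score(model_scores, weights=None):
    """Weighted average of per-model essay scores."""
    if weights is None:
        weights = [1.0] * len(model_scores)
    total = sum(w * s for w, s in zip(weights, model_scores))
    return total / sum(weights)

# Three hypothetical AES models score the same essay on a 0-6 scale.
print(ensemble_score([4.0, 5.0, 4.5]))             # unweighted mean: 4.5
print(ensemble_score([4.0, 5.0, 4.5], [2, 1, 1]))  # trust model 1 more
```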
Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item response theory to model rater effects provides an alternative to standard performance metrics for rater monitoring and diagnosis. To fit such models, however, the ratings data must be sufficiently connected to estimate rater effects. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
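"Sufficiently connected" here means every rater can be linked to every other rater through shared responses; a minimal sketch of that check, using union-find on a hypothetical rater-by-response design:

```python
# Sketch: check whether a rating design is connected, i.e. all raters are
# linked through shared responses (the design below is hypothetical).

def connected_components(ratings):
    """ratings: iterable of (rater, response) pairs. Returns the number of
    connected components in the bipartite rater-response graph."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for rater, resp in ratings:
        union(("rater", rater), ("resp", resp))
    return len({find(x) for x in parent})

# Raters A and B share essay 1, but rater C only rated essay 3: two
# components, so C's severity cannot be placed on the common scale.
design = [("A", 1), ("B", 1), ("B", 2), ("C", 3)]
print(connected_components(design))  # 2
```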
Xu, Lingling; Wang, Shiyu; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2021
Designing a multidimensional multistage adaptive test (M-MST) based on a multidimensional item response theory (MIRT) model is critical to making full use of the advantages of both MST and MIRT in implementing multidimensional assessments. This study proposed two types of automated test assembly (ATA) algorithms and one set of routing rules that can facilitate…
Descriptors: Item Response Theory, Adaptive Testing, Automation, Test Construction
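The objective such ATA algorithms optimize can be illustrated with a deliberately simplified greedy heuristic: pick the items with the highest 2PL Fisher information at a target ability. A real assembly model would add content constraints and solve a mixed-integer program; the item bank below is invented.

```python
import math

# Sketch: assemble a fixed-length test by greedily maximizing 2PL Fisher
# information at a target ability level (real ATA adds content constraints
# and solves a mixed-integer program instead of a greedy pass).

def info_2pl(a, b, theta):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def assemble(bank, length, theta=0.0):
    """bank: list of (item_id, a, b). Pick the `length` most informative items."""
    ranked = sorted(bank, key=lambda it: info_2pl(it[1], it[2], theta),
                    reverse=True)
    return [item_id for item_id, _, _ in ranked[:length]]

bank = [("i1", 1.8, 0.0), ("i2", 0.6, 0.1), ("i3", 1.2, -0.2),
        ("i4", 2.0, 1.5), ("i5", 1.5, 0.05)]
print(assemble(bank, 3))  # ['i1', 'i5', 'i3']
```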
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed, known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention because they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
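Template-based generation, one common automatic item generation approach, can be sketched for a vocabulary item; the stem template, word pool, and definition below are invented for illustration.

```python
import random

# Toy sketch of template-based automatic item generation for a vocabulary
# test: one stem template plus distractors sampled from a word pool
# (the template, definition, and pool are invented for illustration).

def generate_item(target, definition, pool, n_options=4, seed=0):
    rng = random.Random(seed)
    distractors = rng.sample([w for w in pool if w != target], n_options - 1)
    options = distractors + [target]
    rng.shuffle(options)
    return {
        "stem": f"Which word means '{definition}'?",
        "options": options,
        "key": options.index(target),
    }

pool = ["arid", "verbose", "candid", "frugal", "opaque"]
item = generate_item("frugal", "careful with money", pool)
print(item["stem"])
print(item["options"], "answer:", item["options"][item["key"]])
```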
Conejo, Ricardo; Guzmán, Eduardo; Trella, Monica – International Journal of Artificial Intelligence in Education, 2016
This article describes the evolution and current state of the domain-independent Siette assessment environment. Siette supports different assessment methods--including classical test theory, item response theory, and computer adaptive testing--and integrates them with multidimensional student models used by intelligent educational systems.…
Descriptors: Automation, Student Evaluation, Intelligent Tutoring Systems, Item Banks
Veldkamp, Bernard P.; Matteucci, Mariagiulia; de Jong, Martijn G. – Applied Psychological Measurement, 2013
Item response theory parameters have to be estimated, and the estimation process leaves uncertainty in them. In most large-scale testing programs, the parameters are stored in item banks, and automated test assembly algorithms are applied to assemble operational test forms. These algorithms treat item parameters as fixed values,…
Descriptors: Test Construction, Test Items, Item Banks, Automation
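One way to stop treating estimates as fixed values is to score each item by a pessimistic quantile of its information over draws of its parameters. The sketch below simulates such draws from normal posteriors (the standard errors are invented); the cited work instead embeds the uncertainty directly in the assembly model.

```python
import math
import random

# Sketch: rank items by a pessimistic (5th-percentile) Fisher information
# over simulated draws of their 2PL parameters, rather than by the point
# estimates alone (draws and standard errors are simulated for illustration).

def info_2pl(a, b, theta=0.0):
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def robust_info(a_hat, b_hat, se_a, se_b, n_draws=2000, q=0.05, seed=1):
    rng = random.Random(seed)
    draws = sorted(
        info_2pl(rng.gauss(a_hat, se_a), rng.gauss(b_hat, se_b))
        for _ in range(n_draws)
    )
    return draws[int(q * n_draws)]  # lower q-quantile of information

# A precisely calibrated item can beat a nominally better but noisy one.
print(robust_info(1.4, 0.0, se_a=0.05, se_b=0.05))
print(robust_info(1.6, 0.0, se_a=0.60, se_b=0.80))
```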
Zheng, Yi; Nozawa, Yuki; Gao, Xiaohong; Chang, Hua-Hua – ACT, Inc., 2012
Multistage adaptive tests (MSTs) have gained increasing popularity in recent years. MST is a balanced compromise between linear test forms (i.e., paper-and-pencil testing and computer-based testing) and traditional item-level computer-adaptive testing (CAT). It combines the advantages of both. On one hand, MST is adaptive (and therefore more…
Descriptors: Adaptive Testing, Heuristics, Accuracy, Item Banks
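The adaptivity MST provides comes from between-stage routing; a number-correct routing rule can be sketched as follows (the cut scores and module labels are hypothetical).

```python
# Sketch of number-correct routing in a two-stage multistage test:
# after a routing module, examinees branch to an easy, medium, or hard
# second-stage module (cut scores and module names are hypothetical).

def route(num_correct, cuts=(3, 6)):
    """Route on the raw score from, say, an 8-item routing module."""
    low, high = cuts
    if num_correct < low:
        return "easy"
    if num_correct < high:
        return "medium"
    return "hard"

for score in (2, 4, 7):
    print(score, "->", route(score))  # 2 -> easy, 4 -> medium, 7 -> hard
```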
Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M. – IEEE Transactions on Learning Technologies, 2013
Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…
Descriptors: Computer Assisted Testing, Test Construction, Student Evaluation, Programming
Lee, William M.; And Others – 1989
Projects to develop an automated item banking and test development system have been undertaken on several occasions at the Air Force Human Resources Laboratory (AFHRL) throughout the past 10 years. Such a system permits the construction of tests in far less time and with a higher degree of accuracy than earlier test construction procedures. This…
Descriptors: Automation, Computer Assisted Testing, Item Banks, Item Response Theory