Publication Date
In 2025 | 2 |
Since 2024 | 7 |
Since 2021 (last 5 years) | 14 |
Since 2016 (last 10 years) | 15 |
Since 2006 (last 20 years) | 24 |
Descriptor
Automation | 33 |
Computer Assisted Testing | 33 |
Test Construction | 33 |
Test Items | 18 |
Item Banks | 9 |
Foreign Countries | 8 |
Item Response Theory | 6 |
Computer Software | 5 |
Mathematics Tests | 5 |
Scoring | 5 |
Test Validity | 5 |
Author
Stocking, Martha L. | 3 |
Diao, Qi | 2 |
Gierl, Mark J. | 2 |
Guher Gorgun | 2 |
Luecht, Richard M. | 2 |
Okan Bulut | 2 |
van der Linden, Wim J. | 2 |
Aaron McVay | 1 |
Abhishek Dasgupta | 1 |
Ahmadi, Alireza | 1 |
Ananthasubramaniam, Anand | 1 |
Publication Type
Journal Articles | 27 |
Reports - Research | 16 |
Reports - Descriptive | 10 |
Reports - Evaluative | 4 |
Speeches/Meeting Papers | 3 |
Book/Product Reviews | 1 |
Dissertations/Theses -… | 1 |
Information Analyses | 1 |
Tests/Questionnaires | 1 |
Assessments and Surveys
Graduate Record Examinations | 1 |
International English… | 1 |
National Assessment of… | 1 |
Test of English for… | 1 |
Trends in International… | 1 |
Po-Chun Huang; Ying-Hong Chan; Ching-Yu Yang; Hung-Yuan Chen; Yao-Chung Fan – IEEE Transactions on Learning Technologies, 2024
The question generation (QG) task plays a crucial role in adaptive learning. While significant advances in QG performance have been reported, existing QG studies are still far from practical usage. One point that needs strengthening is the generation of question groups, which remains untouched. For forming a question group, intrafactors…
Descriptors: Automation, Test Items, Computer Assisted Testing, Test Construction
Guher Gorgun; Okan Bulut – Education and Information Technologies, 2024
In light of the widespread adoption of technology-enhanced learning and assessment platforms, there is a growing demand for innovative, high-quality, and diverse assessment questions. Automatic Question Generation (AQG) has emerged as a valuable solution, enabling educators and assessment developers to efficiently produce a large volume of test…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation
Jonathan Seiden – Annenberg Institute for School Reform at Brown University, 2025
Direct assessments of early childhood development (ECD) are a cornerstone of research in developmental psychology and are increasingly used to evaluate programs and policies in lower- and middle-income countries. Despite strong psychometric properties, these assessments are too expensive and time consuming for use in large-scale monitoring or…
Descriptors: Young Children, Child Development, Performance Based Assessment, Developmental Psychology
Dongkwang Shin; Jang Ho Lee – ELT Journal, 2024
Although automated item generation has gained a considerable amount of attention in a variety of fields, it is still a relatively new technology in ELT contexts. Therefore, the present article aims to provide an accessible introduction to this powerful resource for language teachers based on a review of the available research. Particularly, it…
Descriptors: Language Tests, Artificial Intelligence, Test Items, Automation
Falcão, Filipe; Pereira, Daniela Marques; Gonçalves, Nuno; De Champlain, Andre; Costa, Patrício; Pêgo, José Miguel – Advances in Health Sciences Education, 2023
Automatic Item Generation (AIG) refers to the process of using cognitive models to generate test items with computer modules. It is a new but rapidly evolving research area in which cognitive and psychometric theory are combined into a digital framework. However, assessment of the item quality, usability, and validity of AIG relative to traditional…
Descriptors: Computer Assisted Testing, Test Construction, Test Items, Automation
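The entries above describe AIG as instantiating items from cognitive models via computer modules. A minimal sketch of the template-based flavor of this idea, where the "cognitive model" is reduced to an item template plus constrained value ranges (the template, numbers, and whole-number constraint here are all illustrative, not taken from any cited study):

```python
# Hypothetical template-based automatic item generation (AIG) sketch.
# A single stem template is instantiated over constrained value ranges.
import itertools

TEMPLATE = "A train travels {d} km in {t} hours. What is its average speed in km/h?"

def generate_items(distances, times):
    """Instantiate the template for every valid (distance, time) pair."""
    items = []
    for d, t in itertools.product(distances, times):
        if d % t == 0:                # constraint: keep answer keys whole numbers
            stem = TEMPLATE.format(d=d, t=t)
            items.append({"stem": stem, "key": d // t})
    return items

for item in generate_items(distances=[120, 150, 180], times=[2, 3]):
    print(item["stem"], "->", item["key"])
```

Real AIG systems replace the single template with a cognitive model that enumerates many surface forms and distractor sets, but the generate-then-filter structure is the same.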
Guher Gorgun; Okan Bulut – Educational Measurement: Issues and Practice, 2025
Automatic item generation can supply many items instantly and efficiently to assessment and learning environments. Yet the evaluation of item quality remains a bottleneck for deploying generated items in learning and assessment settings. In this study, we investigated the utility of using large language models, specifically Llama 3-8B, for…
Descriptors: Artificial Intelligence, Quality Control, Technology Uses in Education, Automation
Das, Bidyut; Majumder, Mukta; Phadikar, Santanu; Sekh, Arif Ahmed – Research and Practice in Technology Enhanced Learning, 2021
Learning through the internet has become popular, enabling learners to study anything, anytime, anywhere from web resources. Assessment is a key component of any learning system: an assessment system can identify learners' self-study gaps and improve their learning progress. Manual question generation takes much time and labor.…
Descriptors: Automation, Test Items, Test Construction, Computer Assisted Testing
Xu, Lingling; Wang, Shiyu; Cai, Yan; Tu, Dongbo – Journal of Educational Measurement, 2021
Designing a multidimensional multistage test (M-MST) based on a multidimensional item response theory (MIRT) model is critical to making full use of the advantages of both MST and MIRT in implementing multidimensional assessments. This study proposed two types of automated test assembly (ATA) algorithms and one set of routing rules that can facilitate…
Descriptors: Item Response Theory, Adaptive Testing, Automation, Test Construction
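The entry above mentions routing rules for multistage tests (MST). As a toy illustration only, one common choice is number-correct routing after the first-stage module; the cutoffs below are invented and the cited study's actual rules and MIRT machinery are not reproduced:

```python
# Toy number-correct routing rule for a three-module second stage of an MST.
# Cutoffs are hypothetical, chosen purely for illustration.
def route(num_correct, cutoffs=(3, 7)):
    """Route an examinee to the easy, medium, or hard second-stage module."""
    low, high = cutoffs
    if num_correct < low:
        return "easy"
    if num_correct <= high:
        return "medium"
    return "hard"

print(route(2), route(5), route(9))  # one examinee per band
```

In operational MSTs the cutoffs are derived from the information functions of the assembled modules rather than set by hand.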
Aaron McVay – ProQuest LLC, 2021
As assessments move toward computerized testing and continuously available testing, the need for rapid assembly of forms is increasing. The objective of this study was to investigate variability in assembled forms through the lens of first- and second-order equity properties of equating, by examining three factors and their interactions. Two…
Descriptors: Automation, Computer Assisted Testing, Test Items, Reaction Time
Charles Hulme; Joshua McGrane; Mihaela Duta; Gillian West; Denise Cripps; Abhishek Dasgupta; Sarah Hearne; Rachel Gardner; Margaret Snowling – Language, Speech, and Hearing Services in Schools, 2024
Purpose: Oral language skills provide a critical foundation for formal education and especially for the development of children's literacy (reading and spelling) skills. It is therefore important for teachers to be able to assess children's language skills, especially if they are concerned about their learning. We report the development and…
Descriptors: Automation, Language Tests, Standardized Tests, Test Construction
Shin, Jinnie; Gierl, Mark J. – International Journal of Testing, 2022
Over the last five years, tremendous strides have been made in advancing the AIG methodology required to produce items in diverse content areas. However, the one content area where enormous problems remain unsolved is language arts, generally, and reading comprehension, more specifically. While reading comprehension test items can be created using…
Descriptors: Reading Comprehension, Test Construction, Test Items, Natural Language Processing
Giada Spaccapanico Proietti; Mariagiulia Matteucci; Stefania Mignani; Bernard P. Veldkamp – Journal of Educational and Behavioral Statistics, 2024
Classical automated test assembly (ATA) methods assume fixed and known coefficients for the constraints and the objective function. This assumption does not hold for estimates of item response theory parameters, which are crucial elements in classical test assembly models. To account for uncertainty in ATA, we propose a chance-constrained…
Descriptors: Automation, Computer Assisted Testing, Ambiguity (Context), Item Response Theory
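The entry above motivates accounting for parameter uncertainty in ATA. A heavily simplified sketch of the idea: discount each item's 2PL Fisher information by a margin proportional to its standard error before selecting, so that items with uncertain estimates are penalized. Greedy selection stands in for the integer programming used in the ATA literature, and the pool, standard errors, and z-value below are all made up:

```python
# Uncertainty-aware greedy test assembly sketch (not the cited study's model).
import math

def p2pl(theta, a, b):
    """2PL response probability at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def assemble(items, theta, length, z=1.64):
    """Pick `length` items maximizing information minus an uncertainty margin."""
    scored = []
    for it in items:
        margin = z * it["se"]                    # penalty for uncertain estimates
        scored.append((info(theta, it["a"], it["b"]) - margin, it["id"]))
    scored.sort(reverse=True)
    return [item_id for _, item_id in scored[:length]]

pool = [
    {"id": 1, "a": 1.2, "b": 0.0, "se": 0.05},
    {"id": 2, "a": 1.8, "b": 0.1, "se": 0.40},   # most informative, but very uncertain
    {"id": 3, "a": 1.0, "b": -0.2, "se": 0.02},
]
print(assemble(pool, theta=0.0, length=2))
```

Note how item 2, despite having the highest point-estimate information at theta = 0, is dropped once its large standard error is priced in; a genuine chance-constrained formulation would instead impose probabilistic guarantees on the assembled form's information.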
Rafatbakhsh, Elaheh; Ahmadi, Alireza; Moloodi, Amirsaeid; Mehrpour, Saeed – Educational Measurement: Issues and Practice, 2021
Test development is a crucial yet difficult and time-consuming part of any educational system, and the task often falls entirely on teachers. Automatic item generation systems have recently drawn attention, as they can reduce this burden and make test development more convenient. Such systems have been developed to generate items for vocabulary,…
Descriptors: Test Construction, Test Items, Computer Assisted Testing, Multiple Choice Tests
Bateson, Gordon – International Journal of Computer-Assisted Language Learning and Teaching, 2021
As a result of the Japanese Ministry of Education's recent edict that students' written and spoken English should be assessed in university entrance exams, there is an urgent need for tools to help teachers and students prepare for these exams. Although some commercial tools already exist, they are generally expensive and inflexible. To address…
Descriptors: Test Construction, Computer Assisted Testing, Internet, Writing Tests
Rupp, André A. – Applied Measurement in Education, 2018
This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…
Descriptors: Design, Automation, Scoring, Test Scoring Machines