Publication Date
| In 2026 | 0 |
| Since 2025 | 17 |
Descriptor
| Computer Assisted Testing | 17 |
| Scoring | 13 |
| Artificial Intelligence | 9 |
| Automation | 5 |
| Foreign Countries | 5 |
| Scoring Rubrics | 5 |
| Accuracy | 4 |
| Evaluation Methods | 4 |
| Grading | 4 |
| Natural Language Processing | 4 |
| Computer Software | 3 |
Author
| Alex J. Mechaber | 1 |
| Ana Sánchez-Bello | 1 |
| Andrea Fernández-Sánchez | 1 |
| Ann Arthur | 1 |
| Arnon Hershkovitz | 1 |
| Bhashithe Abeysinghe | 1 |
| Brian E. Clauser | 1 |
| Celeste Combrinck | 1 |
| Chen Qiu | 1 |
| Chi-Yu Huang | 1 |
| Congning Ni | 1 |
Publication Type
| Reports - Research | 17 |
| Journal Articles | 16 |
Education Level
| Higher Education | 5 |
| Postsecondary Education | 5 |
| Secondary Education | 4 |
| High Schools | 2 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Elementary Education | 1 |
| Grade 8 | 1 |
| Grade 9 | 1 |
Location
| Greece | 1 |
| Iran | 1 |
| South Africa | 1 |
| Spain | 1 |
Assessments and Surveys
| National Assessment of… | 2 |
| ACT Assessment | 1 |
| Foreign Language Classroom… | 1 |
| Program for International… | 1 |
| Test of English as a Foreign… | 1 |
| Torrance Tests of Creative… | 1 |
Mingfeng Xue; Yunting Liu; Xingyao Xiao; Mark Wilson – Journal of Educational Measurement, 2025
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A…
Descriptors: Computer Assisted Testing, Prompting, Educational Assessment, Automation
Ikkyu Choi; Matthew S. Johnson – Journal of Educational Measurement, 2025
Automated scoring systems provide multiple benefits but also pose challenges, notably potential bias. Various methods exist to evaluate these algorithms and their outputs for bias. Upon detecting bias, the next logical step is to investigate its cause, often by examining feature distributions. Recently, Johnson and McCaffrey proposed an…
Descriptors: Prediction, Bias, Automation, Scoring
Luyang Fang; Gyeonggeon Lee; Xiaoming Zhai – Journal of Educational Measurement, 2025
Machine learning-based automatic scoring faces challenges with imbalanced student responses across scoring categories. To address this, we introduce a novel text data augmentation framework that leverages GPT-4, a generative large language model, specifically tailored for imbalanced datasets in automatic scoring. Our experimental dataset consisted…
Descriptors: Computer Assisted Testing, Artificial Intelligence, Automation, Scoring
Selcuk Acar; Peter Organisciak; Denis Dumas – Journal of Creative Behavior, 2025
In this three-study investigation, we applied various approaches to score drawings created in response to both Form A and Form B of the Torrance Tests of Creative Thinking-Figural (broadly TTCT-F) as well as the Multi-Trial Creative Ideation task (MTCI). We focused on TTCT-F in Study 1, and utilizing a random forest classifier, we achieved 79% and…
Descriptors: Scoring, Computer Assisted Testing, Models, Correlation
Peter Baldwin; Victoria Yaneva; Kai North; Le An Ha; Yiyun Zhou; Alex J. Mechaber; Brian E. Clauser – Journal of Educational Measurement, 2025
Recent developments in the use of large language models have led to substantial improvements in the accuracy of content-based automated scoring of free-text responses. The reported accuracy levels suggest that automated systems could have widespread applicability in assessment. However, before they are used in operational testing, other aspects of…
Descriptors: Artificial Intelligence, Scoring, Computational Linguistics, Accuracy
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially allowing for the scoring of these items cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Jonas Flodén – British Educational Research Journal, 2025
This study compares how the generative AI (GenAI) large language model (LLM) ChatGPT performs in grading university exams compared to human teachers. Aspects investigated include consistency, large discrepancies and length of answer. Implications for higher education, including the role of teachers and ethics, are also discussed. Three…
Descriptors: College Faculty, Artificial Intelligence, Comparative Testing, Scoring
Mathias Benedek; Roger E. Beaty – Journal of Creative Behavior, 2025
The PISA assessment 2022 of creative thinking was a moonshot effort that introduced significant advancements over existing creativity tests, including a broad range of domains (written, visual, social, and scientific), implementation in many languages, and sophisticated scoring methods. PISA 2022 demonstrated the general feasibility of assessing…
Descriptors: Creative Thinking, Creativity, Creativity Tests, Scoring
Wallace N. Pinto Jr.; Jinnie Shin – Journal of Educational Measurement, 2025
In recent years, the application of explainability techniques to automated essay scoring and automated short-answer grading (ASAG) models, particularly those based on transformer architectures, has gained significant attention. However, the reliability and consistency of these techniques remain underexplored. This study systematically investigates…
Descriptors: Automation, Grading, Computer Assisted Testing, Scoring
Wen Xin Zhang; John J. H. Lin; Ying-Shao Hsu – Journal of Computer Assisted Learning, 2025
Background Study: Assessing learners' inquiry-based skills is challenging as social, political, and technological dimensions must be considered. The advanced development of artificial intelligence (AI) makes it possible to address these challenges and shape the next generation of science education. Objectives: The present study evaluated the SSI…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Inquiry, Active Learning
Eran Hadas; Arnon Hershkovitz – Journal of Learning Analytics, 2025
Creativity is an imperative skill for today's learners, one that has important contributions to issues of inclusion and equity in education. Therefore, assessing creativity is of major importance in educational contexts. However, scoring creativity based on traditional tools suffers from subjectivity and is heavily time- and labour-consuming. This…
Descriptors: Creativity, Evaluation Methods, Computer Assisted Testing, Artificial Intelligence
Dongmei Li; Shalini Kapoor; Ann Arthur; Chi-Yu Huang; YoungWoo Cho; Chen Qiu; Hongling Wang – ACT Education Corp., 2025
Starting in April 2025, ACT will introduce enhanced forms of the ACT® test for national online testing, with a full rollout to all paper and online test takers in national, state and district, and international test administrations by Spring 2026. ACT introduced major updates by changing the test lengths and testing times, providing more time per…
Descriptors: College Entrance Examinations, Testing, Change, Scoring
Andrea Fernández-Sánchez; Juan José Lorenzo-Castiñeiras; Ana Sánchez-Bello – European Journal of Education, 2025
The advent of artificial intelligence (AI) technologies heralds a transformative era in education. This study investigates the integration of AI tools in developing educational assessment rubrics within the 'Curriculum Design Development and Evaluation' course at the University of A Coruña during the 2023-2024 academic year. Employing an…
Descriptors: Foreign Countries, Higher Education, Artificial Intelligence, Technology Integration
Celeste Combrinck; Nelé Loubser – Discover Education, 2025
Written assignments for large classes pose a far more significant challenge in the age of the GenAI revolution. Suggestions such as oral exams and formative assessments are not always feasible with many students in a class. Therefore, we conducted a study in South Africa and involved 280 Honors students to explore the usefulness of Turnitin's AI…
Descriptors: Foreign Countries, Artificial Intelligence, Large Group Instruction, Alternative Assessment
Georgios Zacharis; Stamatios Papadakis – Educational Process: International Journal, 2025
Background/purpose: Generative artificial intelligence (GenAI) is often promoted as a transformative tool for assessment, yet evidence of its validity compared to human raters remains limited. This study examined whether an AI-based rater could be used interchangeably with trained faculty in scoring complex coursework. Materials/methods:…
Descriptors: Artificial Intelligence, Technology Uses in Education, Computer Assisted Testing, Grading
