Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 1 |
| Since 2025 | 35 |
| Since 2022 (last 5 years) | 35 |
| Since 2017 (last 10 years) | 35 |
| Since 2007 (last 20 years) | 35 |
Author
| Author | Count |
| --- | --- |
| Chenglu Li | 2 |
| Danielle S. McNamara | 2 |
| Hang Li | 2 |
| Jiliang Tang | 2 |
| Joseph Krajcik | 2 |
| Kaiqi Yang | 2 |
| Langdon Holmes | 2 |
| Mihai Dascalu | 2 |
| Scott Crossley | 2 |
| Wanli Xing | 2 |
| Wesley Morris | 2 |
Publication Type
| Type | Count |
| --- | --- |
| Reports - Research | 32 |
| Journal Articles | 26 |
| Speeches/Meeting Papers | 8 |
| Tests/Questionnaires | 2 |
| Books | 1 |
| Collected Works - General | 1 |
| Information Analyses | 1 |
| Reports - Descriptive | 1 |
Education Level
| Level | Count |
| --- | --- |
| Higher Education | 11 |
| Postsecondary Education | 11 |
| Secondary Education | 4 |
| Junior High Schools | 2 |
| Middle Schools | 2 |
| Elementary Education | 1 |
| Elementary Secondary Education | 1 |
| Grade 4 | 1 |
| High Schools | 1 |
| Intermediate Grades | 1 |
Audience
| Audience | Count |
| --- | --- |
| Researchers | 1 |
| Teachers | 1 |
Location
| Location | Count |
| --- | --- |
| Germany | 1 |
| New Zealand | 1 |
| South Africa | 1 |
| Turkey | 1 |
Laws, Policies, & Programs
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| International English… | 1 |
| National Assessment of… | 1 |
Christopher Adamson – New Directions for Teaching and Learning, 2025
This chapter responds to the recent crisis surrounding developments in large language models (LLMs) and generative AI with a relational view of education informed by the emerging world-centered approach to education and a synthesis of personalist character formation with feminist care ethics. It proposes that the instinct to manage student use of…
Descriptors: Artificial Intelligence, Natural Language Processing, Automation, Feminism
Exploring Fairness and Explainability in LLM-Generated Support for Online Learning Discussion Forums
Zifeng Liu; Wanli Xing; Xinyue Jiao; Chenglu Li – Journal of Learning Analytics, 2025
Large language models (LLMs) hold significant potential to enhance online learning by automating responses to learner queries and offering personalized, scalable support. However, concerns about bias in LLM-generated responses present challenges to their ethical and equitable use in educational settings. This study explores fairness and…
Descriptors: Artificial Intelligence, Natural Language Processing, Electronic Learning, Automation
Kangkang Li; Chengyang Qian; Xianmin Yang – Education and Information Technologies, 2025
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant as it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning outcomes. However, the methods of aggregating students' evaluations of SGC face the…
Descriptors: Student Developed Materials, Educational Quality, Automation, Artificial Intelligence
Mohammad Arif Ul Alam; Geeta Verma; Eumie Jhong; Justin Barber; Ashis Kumer Biswas – International Educational Data Mining Society, 2025
The growing demand for microcredentials in education and workforce development necessitates scalable, accurate, and fair assessment systems for both soft and hard skills based on students' lived experience narratives. Existing approaches struggle with the complexities of hierarchical credentialing and the mitigation of algorithmic bias related to…
Descriptors: Microcredentials, Sex, Ethnicity, Artificial Intelligence
Harpreet Auby; Namrata Shivagunde; Vijeta Deshpande; Anna Rumshisky; Milo D. Koretsky – Journal of Engineering Education, 2025
Background: Analyzing student short-answer written justifications to conceptually challenging questions has proven helpful to understand student thinking and improve conceptual understanding. However, qualitative analyses are limited by the burden of analyzing large amounts of text. Purpose: We apply dense and sparse Large Language Models (LLMs)…
Descriptors: Student Evaluation, Thinking Skills, Test Format, Cognitive Processes
Luyang Fang; Gyeonggeon Lee; Xiaoming Zhai – Journal of Educational Measurement, 2025
Machine learning-based automatic scoring faces challenges with imbalanced student responses across scoring categories. To address this, we introduce a novel text data augmentation framework, tailored to imbalanced datasets in automatic scoring, that leverages GPT-4, a generative large language model. Our experimental dataset consisted…
Descriptors: Computer Assisted Testing, Artificial Intelligence, Automation, Scoring
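A minimal sketch of the general idea behind LLM-based augmentation for an imbalanced scoring dataset follows; it is not the framework evaluated in the study above. The `paraphrase_with_llm` helper is a hypothetical stand-in for a call to a generative model such as GPT-4, and the balancing target (upsampling every score category to the size of the largest one) is an assumption made only for illustration.

```python
# Sketch: balance score categories by asking a generative LLM to paraphrase
# responses in under-represented categories. `paraphrase_with_llm` is a
# hypothetical placeholder, not a specific vendor API.
from collections import Counter
from typing import Callable


def paraphrase_with_llm(response: str, n: int) -> list[str]:
    """Hypothetical helper: ask a generative LLM for n paraphrases of a response."""
    raise NotImplementedError("Wire this to whichever LLM API you use.")


def augment_minority_classes(
    responses: list[str],
    scores: list[int],
    paraphrase: Callable[[str, int], list[str]] = paraphrase_with_llm,
) -> tuple[list[str], list[int]]:
    """Upsample under-represented score categories with LLM paraphrases."""
    counts = Counter(scores)
    target = max(counts.values())  # bring every category up to the largest one
    aug_responses, aug_scores = list(responses), list(scores)
    for score, count in counts.items():
        deficit = target - count
        if deficit <= 0:
            continue
        originals = [r for r, s in zip(responses, scores) if s == score]
        for i in range(deficit):
            seed = originals[i % len(originals)]  # cycle through real examples
            aug_responses.extend(paraphrase(seed, 1))
            aug_scores.append(score)
    return aug_responses, aug_scores
```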
Michel C. Desmarais; Arman Bakhtiari; Ovide Bertrand Kuichua Kandem; Samira Chiny Folefack Temfack; Chahé Nerguizian – International Educational Data Mining Society, 2025
We propose a novel method for automated short answer grading (ASAG) designed for practical use in real-world settings. The method combines LLM embedding similarity with a nonlinear regression function, enabling accurate prediction from a small number of expert-graded responses. In this use case, a grader manually assesses a few responses, while…
Descriptors: Grading, Automation, Artificial Intelligence, Natural Language Processing
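The entry above describes combining LLM embedding similarity with a nonlinear regression fit on a small number of expert-graded responses. The sketch below illustrates that general recipe under stated assumptions: `embed_texts` is a hypothetical placeholder for any sentence/LLM embedding model, a single reference answer provides the similarity feature, and scikit-learn's GradientBoostingRegressor stands in for the unspecified nonlinear regressor.

```python
# Sketch: few-shot automated short answer grading (ASAG) from embedding
# similarity plus a nonlinear regressor fit on a handful of graded answers.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


def embed_texts(texts: list[str]) -> np.ndarray:
    """Hypothetical helper: return one embedding vector per text."""
    raise NotImplementedError("Plug in any sentence/LLM embedding model here.")


def cosine_to_reference(answers: list[str], reference: str) -> np.ndarray:
    """Cosine similarity between each student answer and the reference answer."""
    vecs = embed_texts(answers + [reference])
    a, ref = vecs[:-1], vecs[-1]
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    ref = ref / np.linalg.norm(ref)
    return a @ ref  # shape: (n_answers,)


def fit_grader(graded_answers: list[str], grades: list[float], reference: str):
    """Fit a nonlinear map from similarity to grade on expert-graded answers."""
    sims = cosine_to_reference(graded_answers, reference).reshape(-1, 1)
    return GradientBoostingRegressor().fit(sims, grades)


def predict_grades(model, answers: list[str], reference: str) -> np.ndarray:
    sims = cosine_to_reference(answers, reference).reshape(-1, 1)
    return model.predict(sims)
```

In a workflow like the one the abstract hints at, the regressor could simply be refit each time the grader adds a few more manually scored responses.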
Zifeng Liu; Wanli Xing; Chenglu Li; Fan Zhang; Hai Li; Victor Minces – Journal of Learning Analytics, 2025
Creativity is a vital skill in science, technology, engineering, and mathematics (STEM)-related education, fostering innovation and problem-solving. Traditionally, creativity assessments relied on human evaluations, such as the consensual assessment technique (CAT), which are resource-intensive, time-consuming, and often subjective. Recent…
Descriptors: Creativity, Elementary School Students, Artificial Intelligence, Man Machine Systems
Mingfeng Xue; Yunting Liu; Xingyao Xiao; Mark Wilson – Journal of Educational Measurement, 2025
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A…
Descriptors: Computer Assisted Testing, Prompting, Educational Assessment, Automation
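As a rough illustration of what an automatic-prompt-engineering loop for scoring can look like (not the APE framework evaluated in the study above), the sketch below searches over hypothetical LLM-generated prompt variants and keeps the one whose scores agree best with human labels, using quadratic-weighted kappa as the selection metric. Both helper functions are placeholders.

```python
# Sketch: pick the scoring prompt whose LLM scores best match human scores.
from sklearn.metrics import cohen_kappa_score


def llm_score(prompt: str, response: str) -> int:
    """Hypothetical helper: have an LLM score one response under a given prompt."""
    raise NotImplementedError


def propose_prompt_variants(seed_prompt: str, n: int) -> list[str]:
    """Hypothetical helper: have an LLM rewrite the scoring prompt n ways."""
    raise NotImplementedError


def search_best_prompt(seed_prompt: str, responses: list[str],
                       human_scores: list[int], rounds: int = 3,
                       beam: int = 5) -> str:
    best_prompt, best_kappa = seed_prompt, -1.0
    for _ in range(rounds):
        for candidate in propose_prompt_variants(best_prompt, beam):
            preds = [llm_score(candidate, r) for r in responses]
            kappa = cohen_kappa_score(human_scores, preds, weights="quadratic")
            if kappa > best_kappa:  # keep the best-agreeing prompt so far
                best_prompt, best_kappa = candidate, kappa
    return best_prompt
```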
Ngoc My Bui; Jessie S. Barrot – Education and Information Technologies, 2025
With the generative artificial intelligence (AI) tool's remarkable capabilities in understanding and generating meaningful content, intriguing questions have been raised about its potential as an automated essay scoring (AES) system. One such tool is ChatGPT, which is capable of scoring any written work based on predefined criteria. However,…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Automation
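Since the entry above concerns prompting a chat model to score written work against predefined criteria, a minimal sketch of rubric-based prompting follows. The rubric text and the `chat_completion` helper are illustrative assumptions, not the criteria or prompts used in the study.

```python
# Sketch: criterion-based essay scoring by prompting a chat LLM with a rubric.
RUBRIC = """Score the essay from 1-5 on each criterion:
1. Task fulfilment  2. Organization  3. Grammar and mechanics  4. Vocabulary
Return one line per criterion as 'criterion: score', then an overall score."""


def chat_completion(system: str, user: str) -> str:
    """Hypothetical helper wrapping whichever chat-LLM API is available."""
    raise NotImplementedError


def score_essay(essay: str) -> str:
    """Return the model's rubric-based scores for one essay."""
    return chat_completion(system=RUBRIC, user=essay)
```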
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2025
Automated multiple-choice question (MCQ) generation is valuable for scalable assessment and enhanced learning experiences. However, existing MCQ generation methods face challenges in ensuring plausible distractors and maintaining answer consistency. This paper introduces a method for MCQ generation that integrates reasoning-based explanations…
Descriptors: Automation, Computer Assisted Testing, Multiple Choice Tests, Natural Language Processing
Wesley Morris; Langdon Holmes; Joon Suh Choi; Scott Crossley – International Journal of Artificial Intelligence in Education, 2025
Recent developments in the field of artificial intelligence allow for improved performance in the automated assessment of extended response items in mathematics, potentially allowing for the scoring of these items cheaply and at scale. This study details the grand prize-winning approach to developing large language models (LLMs) to automatically…
Descriptors: Automation, Computer Assisted Testing, Mathematics Tests, Scoring
Alex Goslen; Yeo Jin Kim; Jonathan Rowe; James Lester – International Journal of Artificial Intelligence in Education, 2025
The development of large language models offers new possibilities for enhancing adaptive scaffolding of student learning in game-based learning environments. In this work, we present a novel framework for automatic plan generation that utilizes text-based representations of students' actions within a game-based learning environment, Crystal…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Game Based Learning
Smitha S. Kumar; Michael A. Lones; Manuel Maarek; Hind Zantout – ACM Transactions on Computing Education, 2025
Programming demands a variety of cognitive skills, and mastering these competencies is essential for success in computer science education. The importance of formative feedback is well acknowledged in programming education, and thus, a diverse range of techniques has been proposed to generate and enhance formative feedback for programming…
Descriptors: Automation, Computer Science Education, Programming, Feedback (Response)
Abubakir Siedahmed; Jaclyn Ocumpaugh; Zelda Ferris; Dinesh Kodwani; Eamon Worden; Neil Heffernan – International Educational Data Mining Society, 2025
Recent advances in AI have opened the door for the automated scoring of open-ended math problems, which were previously much more difficult to assess at scale. However, we know that biases still remain in some of these algorithms. For example, recent research on the automated scoring of student essays has shown that certain varieties of English…
Descriptors: Artificial Intelligence, Automation, Scoring, Mathematics Tests