Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Harpreet Auby; Namrata Shivagunde; Vijeta Deshpande; Anna Rumshisky; Milo D. Koretsky – Journal of Engineering Education, 2025
Background: Analyzing student short-answer written justifications to conceptually challenging questions has proven helpful for understanding student thinking and improving conceptual understanding. However, qualitative analyses are limited by the burden of analyzing large amounts of text. Purpose: We apply dense and sparse Large Language Models (LLMs)…
Descriptors: Student Evaluation, Thinking Skills, Test Format, Cognitive Processes
Peer reviewed
Direct link
Luyang Fang; Gyeonggeon Lee; Xiaoming Zhai – Journal of Educational Measurement, 2025
Machine learning-based automatic scoring faces challenges with imbalanced student responses across scoring categories. To address this, we introduce a novel text data augmentation framework that leverages GPT-4, a generative large language model, specifically tailored for imbalanced datasets in automatic scoring. Our experimental dataset consisted…
Descriptors: Computer Assisted Testing, Artificial Intelligence, Automation, Scoring
Peer reviewed
Direct link
Mingfeng Xue; Yunting Liu; Xingyao Xiao; Mark Wilson – Journal of Educational Measurement, 2025
Prompts play a crucial role in eliciting accurate outputs from large language models (LLMs). This study examines the effectiveness of an automatic prompt engineering (APE) framework for automatic scoring in educational measurement. We collected constructed-response data from 930 students across 11 items and used human scores as the true labels. A…
Descriptors: Computer Assisted Testing, Prompting, Educational Assessment, Automation
Peer reviewed
Direct link
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
Peer reviewed
Direct link
Mohsin Murtaza; Chi-Tsun Cheng; Mohammad Fard; John Zeleznikow – International Journal of Artificial Intelligence in Education, 2025
As modern vehicles integrate increasingly sophisticated Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicle (AV) functions, conventional user manuals may no longer be the most effective medium for conveying knowledge to drivers. This research analysed conventional paper- and video-based instructional methods versus a…
Descriptors: Educational Change, Driver Education, Motor Vehicles, Natural Language Processing
Peer reviewed
Direct link
Zaki, Nazar; Turaev, Sherzod; Shuaib, Khaled; Krishnan, Anusuya; Mohamed, Elfadil – Education and Information Technologies, 2023
Quality control and assurance play a fundamental role within higher education contexts. One means by which quality control can be performed is by mapping course learning outcomes (CLOs) to program learning outcomes (PLOs). This paper describes a system by which this mapping process can be automated and validated. The proposed AI-based…
Descriptors: Program Evaluation, Outcomes of Education, Natural Language Processing, Higher Education
Peer reviewed
PDF on ERIC Download full text
Yucheng Chu; Peng He; Hang Li; Haoyu Han; Kaiqi Yang; Yu Xue; Tingting Li; Yasemin Copur-Gencturk; Joseph Krajcik; Jiliang Tang – International Educational Data Mining Society, 2025
Short answer assessment is a vital component of science education, allowing evaluation of students' complex three-dimensional understanding. Large language models (LLMs) that possess human-like ability in linguistic tasks are increasingly popular in assisting human graders to reduce their workload. However, LLMs' limitations in domain knowledge…
Descriptors: Artificial Intelligence, Science Education, Technology Uses in Education, Natural Language Processing
Peer reviewed
Direct link
Jie Yang; Ehsan Latif; Yuze He; Xiaoming Zhai – Journal of Science Education and Technology, 2025
The development of explanations for scientific phenomena is crucial in science assessment. However, the scoring of students' written explanations is a challenging and resource-intensive process. Large language models (LLMs) have demonstrated the potential to address these challenges, particularly when the explanations are written in English, an…
Descriptors: Artificial Intelligence, Technology Uses in Education, Automation, Scoring
Peer reviewed
Direct link
Suna-Seyma Uçar; Itziar Aldabe; Nora Aranberri; Ana Arruarte – International Journal of Artificial Intelligence in Education, 2024
Current student-centred, multilingual, active teaching methodologies require that teachers have continuous access to texts that are adequate in terms of topic and language competence. However, the task of finding appropriate materials is arduous and time-consuming for teachers. To build on automatic readability assessment research that could help…
Descriptors: Artificial Intelligence, Technology Uses in Education, Automation, Readability
Peer reviewed
Direct link
Jing Zhang; Qiaoyun Liao; Lipei Li; Jingyi Luo – Journal of Educational Computing Research, 2026
Natural Language Processing (NLP) has emerged as a transformative tool for EFL speaking instruction. However, prior research lacks robust empirical investigations into how distinct NLP tools independently enhance adaptability, accuracy, and fluency--particularly through controlled, large-scale interventions. Most studies focus on short-term…
Descriptors: Artificial Intelligence, Natural Language Processing, English (Second Language), Second Language Instruction
Peer reviewed
Direct link
Leydi Johana Chaparro-Moreno; Hugo Gonzalez Villasanti; Laura M. Justice; Jing Sun; Mary Beth Schmitt – Journal of Speech, Language, and Hearing Research, 2024
Purpose: This study examines the accuracy of Interaction Detection in Early Childhood Settings (IDEAS), a program that automatically transcribes audio files and estimates linguistic units relevant to speech-language therapy, including part-of-speech units that represent features of language complexity, such as adjectives and coordinating…
Descriptors: Speech Language Pathology, Allied Health Personnel, Speech Therapy, Children
Peer reviewed
Direct link
Urrutia, Felipe; Araya, Roberto – Journal of Educational Computing Research, 2024
Written answers to open-ended questions can have a higher long-term effect on learning than multiple-choice questions. However, it is critical that teachers review the answers immediately and ask students to redo those that are incoherent. This can be a difficult and time-consuming task for teachers. A possible solution is to automate the detection…
Descriptors: Elementary School Students, Grade 4, Elementary School Mathematics, Mathematics Tests
Peer reviewed
Direct link
Somers, Rick; Cunningham-Nelson, Samuel; Boles, Wageeh – Australasian Journal of Educational Technology, 2021
In this study, we applied natural language processing (NLP) techniques, within an educational environment, to evaluate their usefulness for automated assessment of students' conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students' conceptual understanding, which is often…
Descriptors: Natural Language Processing, Student Evaluation, Automation, Feedback (Response)
Peer reviewed
PDF on ERIC Download full text
Wan, Qian; Crossley, Scott; Banawan, Michelle; Balyan, Renu; Tian, Yu; McNamara, Danielle; Allen, Laura – International Educational Data Mining Society, 2021
The current study explores the ability to predict argumentative claims in structurally annotated student essays to gain insights into the role of argumentation structure in the quality of persuasive writing. Our annotation scheme specified six types of argumentative components based on the well-established Toulmin model of argumentation. We…
Descriptors: Essays, Persuasive Discourse, Automation, Identification
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence