Publication Date
  In 2025: 1
  Since 2024: 9
  Since 2021 (last 5 years): 18
  Since 2016 (last 10 years): 28
  Since 2006 (last 20 years): 34
Descriptor
  Automation: 34
  Models: 34
  Natural Language Processing: 34
  Artificial Intelligence: 19
  Scoring: 10
  Classification: 9
  Feedback (Response): 9
  Prediction: 9
  Evaluation Methods: 8
  Foreign Countries: 8
  Data Analysis: 7
Publication Type
  Reports - Research: 22
  Journal Articles: 17
  Speeches/Meeting Papers: 7
  Collected Works - Proceedings: 6
  Dissertations/Theses -…: 2
  Reports - Descriptive: 2
  Reports - Evaluative: 2
  Numerical/Quantitative Data: 1
Audience
  Researchers: 1
Location
  Australia: 3
  Brazil: 3
  Netherlands: 3
  Denmark: 2
  Israel: 2
  Pennsylvania: 2
  Spain: 2
  Asia: 1
  China: 1
  Connecticut: 1
  Czech Republic: 1
Assessments and Surveys
  Graduate Record Examinations: 1
  Massachusetts Comprehensive…: 1
Kangkang Li; Chengyang Qian; Xianmin Yang – Education and Information Technologies, 2025
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant as it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning outcomes. However, the methods of aggregating students' evaluations of SGC face the…
Descriptors: Student Developed Materials, Educational Quality, Automation, Artificial Intelligence
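A generic illustration of the aggregation problem this abstract raises: combining many students' ratings of the same artifacts while discounting unreliable raters. The weighting scheme below is a common baseline, not the aggregation method the paper proposes.

```python
# Toy sketch of one common way to aggregate peer ratings of student-generated
# content: weight each rater by agreement with the group consensus.
# This is a generic baseline, NOT the paper's method.
import numpy as np

ratings = np.array([        # rows: raters, columns: items (1-5 scale)
    [4, 3, 5, 2],
    [4, 4, 5, 2],
    [1, 5, 1, 5],           # outlier rater
])
consensus = ratings.mean(axis=0)
# Rater weight: inverse of mean absolute deviation from the consensus
weights = 1.0 / (np.abs(ratings - consensus).mean(axis=1) + 1e-9)
weights /= weights.sum()
weighted_scores = weights @ ratings
print(weighted_scores)      # the outlier rater contributes less
```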
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
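The filtering step this abstract points to can be sketched with off-the-shelf sentence embeddings: candidate distractors that sit too close to the correct answer in embedding space are likely alternative correct answers and get dropped. The model name and threshold here are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch only: drop candidate distractors that are nearly
# synonymous with the correct answer. Model and threshold are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def filter_distractors(correct_answer, candidates, max_similarity=0.8):
    """Keep only candidates that are clearly distinct from the correct answer."""
    answer_emb = model.encode(correct_answer, convert_to_tensor=True)
    kept = []
    for cand in candidates:
        cand_emb = model.encode(cand, convert_to_tensor=True)
        sim = util.cos_sim(answer_emb, cand_emb).item()
        if sim < max_similarity:  # too similar -> likely an alternative correct answer
            kept.append(cand)
    return kept

print(filter_distractors(
    "Photosynthesis converts light energy into chemical energy.",
    ["Photosynthesis turns light into chemical energy.",    # near-duplicate, dropped
     "Photosynthesis converts sound energy into heat."]))   # plausible distractor, kept
```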
Rebeckah K. Fussell; Emily M. Stump; N. G. Holmes – Physical Review Physics Education Research, 2024
Physics education researchers are interested in using the tools of machine learning and natural language processing to make quantitative claims from natural language and text data, such as open-ended responses to survey questions. The aspiration is that this form of machine coding may be more efficient and consistent than human coding, allowing…
Descriptors: Physics, Educational Researchers, Artificial Intelligence, Natural Language Processing
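As a rough illustration of the "machine coding" workflow this abstract describes: a supervised text classifier is trained on human-coded responses, and machine-human consistency is then checked with Cohen's kappa. The TF-IDF baseline below is an assumption for illustration, not the authors' pipeline.

```python
# Toy sketch: machine coding of open-ended survey responses, with a
# machine-human consistency check. Data and pipeline are illustrative.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

responses = [
    "the force makes the cart speed up",
    "energy is conserved in the collision",
    "friction slows the block down",
    "kinetic energy turns into heat",
]
human_codes = ["forces", "energy", "forces", "energy"]

coder = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
coder.fit(responses, human_codes)
machine_codes = coder.predict(responses)  # toy data reused for illustration

# Agreement between machine and human coding (1.0 = perfect agreement)
print(cohen_kappa_score(human_codes, machine_codes))
```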
Bulut, Okan; Yildirim-Erbasli, Seyma Nur – International Journal of Assessment Tools in Education, 2022
Reading comprehension is one of the essential skills for students as they make a transition from learning to read to reading to learn. Over the last decade, the increased use of digital learning materials for promoting literacy skills (e.g., oral fluency and reading comprehension) in K-12 classrooms has been a boon for teachers. However, instant…
Descriptors: Reading Comprehension, Natural Language Processing, Artificial Intelligence, Automation
Lixiang Yan; Lele Sha; Linxuan Zhao; Yuheng Li; Roberto Martinez-Maldonado; Guanliang Chen; Xinyu Li; Yueqiao Jin; Dragan Gašević – British Journal of Educational Technology, 2024
Educational technology innovations leveraging large language models (LLMs) have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (e.g., question generation, feedback provision, and essay grading), there are…
Descriptors: Educational Technology, Artificial Intelligence, Natural Language Processing, Educational Innovation
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored methodologies for enhancing the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Matsuda, Noboru; Wood, Jesse; Shrivastava, Raj; Shimmei, Machi; Bier, Norman – Journal of Educational Data Mining, 2022
A model that maps the requisite skills, or knowledge components, to the contents of an online course is necessary to implement many adaptive learning technologies. However, developing a skill model and tagging courseware contents with individual skills can be expensive and error prone. We propose a technology to automatically identify latent…
Descriptors: Skills, Models, Identification, Courseware
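One generic way to surface latent components from courseware text, loosely in the spirit of the latent-skill identification described above, is non-negative matrix factorization over a TF-IDF matrix. This sketch is purely illustrative and does not reproduce the authors' technique.

```python
# Hypothetical illustration: discover latent components ("skills") from
# courseware text with NMF. Not the paper's actual skill-model method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

pages = [
    "solve linear equations by isolating the variable",
    "graph a line from its slope and intercept",
    "isolate x in a one-step equation",
    "find the slope between two points",
]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(pages)

nmf = NMF(n_components=2, random_state=0)
page_skill = nmf.fit_transform(X)          # page-by-latent-component loadings
terms = tfidf.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top = [terms[i] for i in comp.argsort()[-3:][::-1]]
    print(f"latent component {k}: {top}")  # e.g., equation-solving vs. graphing
```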
Condor, Aubrey; Litster, Max; Pardos, Zachary – International Educational Data Mining Society, 2021
We explore how different components of an Automatic Short Answer Grading (ASAG) model affect the model's ability to generalize to questions outside of those used for training. For supervised automatic grading models, human ratings are primarily used as ground truth labels. Producing such ratings can be resource heavy, as subject matter experts…
Descriptors: Automation, Grading, Test Items, Generalization
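The generalization test this abstract describes can be mimicked on toy data: train a supervised grader on answers to some questions, then evaluate it on a question that never appeared in training. Everything below (data, features, classifier) is an assumption for illustration.

```python
# Sketch of ASAG generalization to unseen questions. Toy data and a simple
# TF-IDF baseline; not the model components the authors compare.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [("q1", "gravity pulls the ball down", 1),
         ("q1", "the ball is blue", 0),
         ("q2", "plants make food from sunlight", 1),
         ("q2", "plants eat soil", 0)]
test  = [("q3", "friction converts motion into heat", 1),  # unseen question
         ("q3", "heat is a liquid", 0)]

grader = make_pipeline(TfidfVectorizer(), LogisticRegression())
grader.fit([ans for _, ans, _ in train], [y for _, _, y in train])

# Accuracy on a question outside the training set
print(grader.score([ans for _, ans, _ in test], [y for _, _, y in test]))
```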
Hunkoog Jho; Minsu Ha – Journal of Baltic Science Education, 2024
This study examined the performance of generative artificial intelligence in extracting argumentation elements from text. The researchers developed a web-based framework to provide automated assessment and feedback using a large language model, ChatGPT. The results produced by ChatGPT were compared to human experts across…
Descriptors: Feedback (Response), Artificial Intelligence, Persuasive Discourse, Models
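A hedged sketch of the kind of LLM call such a framework might make to extract argumentation elements; the prompt and model name are illustrative assumptions, and the study's exact setup is not shown in the abstract. The extracted labels could then be compared with expert labels, mirroring the human-comparison step mentioned above.

```python
# Illustrative LLM call for labeling argumentation elements.
# Prompt wording and model choice are assumptions, not the study's setup.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

student_text = ("Plants grow taller near the window because they get more "
                "light, and our bean plants by the window grew 4 cm more.")

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; the study used ChatGPT
    messages=[
        {"role": "system",
         "content": "Label each sentence of the student's text as CLAIM, "
                    "EVIDENCE, or REASONING. Return one label per sentence."},
        {"role": "user", "content": student_text},
    ],
)
print(resp.choices[0].message.content)
```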
Morrison, Ryan – Online Submission, 2022
Large Language Models (LLMs) -- powerful algorithms that can generate and transform text -- are set to disrupt language learning education and text-based assessments, as they allow for the automated production of text that can meet certain outcomes of many traditional assessments, such as essays. While there is no way to definitively identify text created by this…
Descriptors: Models, Mathematics, Automation, Natural Language Processing
Seyedahmad Rahimi; Justice T. Walker; Lin Lin-Lipsmeyer; Jinnie Shin – Creativity Research Journal, 2024
Digital sandbox games such as "Minecraft" can be used to assess and support creativity. Doing so, however, requires an understanding of what is deemed creative in this game context. One approach is to understand how Minecrafters describe creativity in their communities, and how much those descriptions overlap with the established…
Descriptors: Creativity, Video Games, Computer Games, Evaluation Methods
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
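For reference, a confirmatory factor analysis of the kind this abstract describes fits observed essay features to a small number of latent dimensions. In standard, generic notation (not taken from the paper), the measurement model is:

```latex
% Generic CFA measurement model (standard notation, not from the paper):
%   x            observed essay features
%   \xi          latent dimensions (e.g., writing quality, register, genre)
%   \Lambda      factor loadings
%   \Phi         latent covariance;  \Theta_{\delta}  unique variances
x = \Lambda \xi + \delta, \qquad
\operatorname{Cov}(x) = \Lambda \Phi \Lambda^{\top} + \Theta_{\delta}
```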
Peter Organisciak; Selcuk Acar; Denis Dumas; Kelly Berthiaume – Grantee Submission, 2023
Automated scoring for divergent thinking (DT) seeks to overcome a key obstacle to creativity measurement: the effort, cost, and reliability of scoring open-ended tests. For a common test of DT, the Alternate Uses Task (AUT), the primary automated approach casts the problem as a semantic distance between a prompt and the resulting idea in a text…
Descriptors: Automation, Computer Assisted Testing, Scoring, Creative Thinking
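The semantic-distance formulation this abstract names is easy to sketch: originality is proxied by the embedding distance between the AUT prompt and the response. The embedding model below is an assumption, not the system the authors evaluated.

```python
# Minimal sketch of semantic-distance scoring for Alternate Uses Task (AUT)
# responses: larger prompt-response distance serves as an originality proxy.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def semantic_distance(prompt, response):
    emb = model.encode([prompt, response], convert_to_tensor=True)
    return 1.0 - util.cos_sim(emb[0], emb[1]).item()  # larger = more original

print(semantic_distance("brick", "build a wall"))           # common use, small distance
print(semantic_distance("brick", "grind it into pigment"))  # remote use, larger distance
```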
Wai Tong Chor; Kam Meng Goh; Li Li Lim; Kin Yun Lum; Tsung Heng Chiew – Education and Information Technologies, 2024
Programme outcomes are broad statements of the knowledge, skills, and competencies that students should be able to demonstrate upon graduation from a programme, while the Educational Taxonomy classifies learning objectives into different domains. The precise mapping of course outcomes to programme outcomes and the educational taxonomy…
Descriptors: Artificial Intelligence, Engineering Education, Taxonomy, Educational Objectives
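A minimal sketch of automated outcome-to-taxonomy mapping using zero-shot classification; the label set (Bloom-style levels) and model are assumptions for illustration, not the authors' system.

```python
# Illustrative zero-shot mapping of a course outcome onto taxonomy levels.
# Labels and model are assumptions, not the authors' trained system.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

outcome = "Design a control circuit that meets given safety constraints."
levels = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

result = classifier(outcome, candidate_labels=levels)
print(result["labels"][0])  # highest-scoring taxonomy level
```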
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
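One common ingredient of automated summary scoring, sketched under the assumption of off-the-shelf sentence embeddings (not the authors' model): the similarity between the summary and its longer source text, which a scorer can combine with other features before mapping onto a Likert-style scale.

```python
# Illustrative feature for summary scoring: embedding similarity between a
# summary and its (longer) source text. Model choice is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

source = ("Photosynthesis is the process by which green plants use sunlight, "
          "water, and carbon dioxide to produce glucose and oxygen.")
summary = "Plants turn sunlight, water, and CO2 into sugar and oxygen."

emb = model.encode([source, summary], convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]).item())  # one feature among several a scorer would use
```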