| Publication Date | Results |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 15 |
| Since 2022 (last 5 years) | 41 |
| Since 2017 (last 10 years) | 51 |
| Since 2007 (last 20 years) | 51 |
| Author | Results |
| --- | --- |
| Chris Piech | 2 |
| Hang Li | 2 |
| Jiliang Tang | 2 |
| Joseph Krajcik | 2 |
| Kaiqi Yang | 2 |
| Yasemin Copur-Gencturk | 2 |
| Yucheng Chu | 2 |
| Abdulkadir Kara | 1 |
| Abubakir Siedahmed | 1 |
| Akinbowale Natheniel Babatunde | 1 |
| Alexander López-Padrón | 1 |
| Publication Type | Results |
| --- | --- |
| Journal Articles | 39 |
| Reports - Research | 32 |
| Speeches/Meeting Papers | 9 |
| Information Analyses | 8 |
| Reports - Descriptive | 6 |
| Reports - Evaluative | 3 |
| Dissertations/Theses -… | 2 |
| Tests/Questionnaires | 2 |
| Education Level | Results |
| --- | --- |
| Higher Education | 16 |
| Postsecondary Education | 16 |
| Secondary Education | 5 |
| Junior High Schools | 3 |
| Middle Schools | 3 |
| Grade 12 | 1 |
| High Schools | 1 |
| Audience | Results |
| --- | --- |
| Teachers | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| Test of English as a Foreign… | 1 |
Zhu, Xinhua; Wu, Han; Zhang, Lanfang – IEEE Transactions on Learning Technologies, 2022
Automatic short-answer grading (ASAG) is a key component of intelligent tutoring systems. Deep learning offers an advanced, end-to-end approach to recognizing textual entailment. However, deep learning methods for ASAG remain challenging for two main reasons: (1) high-precision scoring…
Descriptors: Intelligent Tutoring Systems, Grading, Automation, Models
Ishaya Gambo; Faith-Jane Abegunde; Omobola Gambo; Roseline Oluwaseun Ogundokun; Akinbowale Natheniel Babatunde; Cheng-Chi Lee – Education and Information Technologies, 2025
The current educational system relies heavily on manual grading, posing challenges such as delayed feedback and grading inaccuracies. Automated grading tools (AGTs) offer solutions but come with limitations. To address this, "GRAD-AI" is introduced, an advanced AGT that combines automation with teacher involvement for precise grading,…
Descriptors: Automation, Grading, Artificial Intelligence, Computer Assisted Testing
Juliette Woodrow; Sanmi Koyejo; Chris Piech – International Educational Data Mining Society, 2025
High-quality feedback requires understanding of a student's work, insights into what concepts would help them improve, and language that matches the preferences of the specific teaching team. While Large Language Models (LLMs) can generate coherent feedback, adapting these responses to align with specific teacher preferences remains an open…
Descriptors: Feedback (Response), Artificial Intelligence, Teacher Attitudes, Preferences
Marcus Messer; Neil C. C. Brown; Michael Kölling; Miaojing Shi – ACM Transactions on Computing Education, 2024
We conducted a systematic literature review on automated grading and feedback tools for programming education. We analysed 121 research papers from 2017 to 2021 inclusive and categorised them based on skills assessed, approach, language paradigm, degree of automation, and evaluation techniques. Most papers assess the correctness of assignments in…
Descriptors: Automation, Grading, Feedback (Response), Programming
Yunsung Kim; Jadon Geathers; Chris Piech – International Educational Data Mining Society, 2024
"Stochastic programs," which are programs that produce probabilistic output, are a pivotal paradigm in various areas of CS education from introductory programming to machine learning and data science. Despite their importance, the problem of automatically grading such programs remains surprisingly unexplored. In this paper, we formalize…
Descriptors: Grading, Automation, Accuracy, Programming
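The grading problem Kim, Geathers, and Piech describe concerns programs whose output is random, so a single run cannot be checked against a fixed expected value. As a rough illustration only (not the paper's formalization), the sketch below compares the empirical output distributions of a reference implementation and a hypothetical student submission with a two-sample Kolmogorov-Smirnov test; the example programs, sample size, and choice of test are all assumptions.

```python
# Hedged sketch: grade a stochastic program by comparing its empirical output
# distribution to a reference implementation's (illustrative, not the paper's method).
import random
from scipy.stats import ks_2samp

def reference_program() -> int:
    """Reference: sum of two fair six-sided dice."""
    return random.randint(1, 6) + random.randint(1, 6)

def student_program() -> int:
    """Hypothetical submission: subtly wrong dice range (0-6)."""
    return random.randint(0, 6) + random.randint(0, 6)

n = 2000  # illustrative sample size
ref_samples = [reference_program() for _ in range(n)]
stu_samples = [student_program() for _ in range(n)]

# A small p-value suggests the two output distributions differ,
# flagging the submission for closer review.
stat, p_value = ks_2samp(ref_samples, stu_samples)
print(f"KS statistic {stat:.3f}, p-value {p_value:.4f}")
```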
Michel C. Desmarais; Arman Bakhtiari; Ovide Bertrand Kuichua Kandem; Samira Chiny Folefack Temfack; Chahé Nerguizian – International Educational Data Mining Society, 2025
We propose a novel method for automated short answer grading (ASAG) designed for practical use in real-world settings. The method combines LLM embedding similarity with a nonlinear regression function, enabling accurate prediction from a small number of expert-graded responses. In this use case, a grader manually assesses a few responses, while…
Descriptors: Grading, Automation, Artificial Intelligence, Natural Language Processing
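The pipeline Desmarais et al. outline pairs LLM embedding similarity with a nonlinear regression fit on a handful of expert-graded responses. The sketch below is a minimal, hedged illustration of that general shape, not the authors' implementation: `embed()` is a stand-in for any sentence-embedding model, the toy graded answers are invented, and isotonic regression is one possible choice of nonlinear (monotone) mapping from similarity to grade.

```python
# Hedged sketch: similarity-to-grade mapping learned from a few expert-graded answers.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def embed(text: str) -> np.ndarray:
    """Placeholder embedding (assumption): hashed character counts, L2-normalised."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(hash(ch) + i) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Vectors are already unit-length, so the dot product is the cosine similarity.
    return float(a @ b)

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
# A few expert-graded responses (invented) provide similarity -> grade training pairs.
graded = [
    ("Plants turn light into chemical energy stored as glucose.", 1.0),
    ("Plants use sunlight to make food.", 0.7),
    ("Plants breathe oxygen at night.", 0.1),
]

ref_vec = embed(reference)
sims = np.array([cosine(embed(ans), ref_vec) for ans, _ in graded])
grades = np.array([g for _, g in graded])

# Nonlinear, monotone mapping from similarity to grade.
reg = IsotonicRegression(out_of_bounds="clip").fit(sims, grades)

new_answer = "Light energy is converted into glucose by plants."
predicted = reg.predict([cosine(embed(new_answer), ref_vec)])[0]
print(f"predicted grade: {predicted:.2f}")
```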
Fatima Abu Deeb; Timothy Hickey – Computer Science Education, 2024
Background and Context: Auto-graders are praised by novice students learning to program, as they provide them with automatic feedback about their problem-solving process. However, some students often make random changes when they have errors in their code, without engaging in deliberate thinking about the cause of the error. Objective: To…
Descriptors: Reflection, Automation, Grading, Novices
Zirou Lin; Hanbing Yan; Li Zhao – Journal of Computer Assisted Learning, 2024
Background: Peer assessment has played an important role in large-scale online learning, as it helps improve the effectiveness of learners' online learning. However, with the emergence of numerical grades and textual feedback generated by peers, it is necessary to assess the reliability of this large volume of peer assessment data and then develop…
Descriptors: Peer Evaluation, Automation, Grading, Models
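Lin, Yan, and Zhao's abstract turns on deciding how trustworthy a collection of peer-assigned grades is before relying on it. As a hedged illustration of that idea only (not the authors' reliability model), the sketch below flags submissions whose peer grades disagree beyond a dispersion threshold; the data and threshold are invented.

```python
# Hedged sketch: flag submissions whose peer grades disagree too much to be trusted.
import statistics

peer_grades = {
    "submission_a": [82, 85, 80, 84],
    "submission_b": [40, 95, 70, 55],  # high disagreement among peers
}

THRESHOLD = 10.0  # illustrative maximum acceptable standard deviation of peer grades

for submission, grades in peer_grades.items():
    spread = statistics.stdev(grades)
    status = "reliable" if spread <= THRESHOLD else "needs instructor review"
    print(f"{submission}: mean {statistics.mean(grades):.1f}, "
          f"spread {spread:.1f} -> {status}")
```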
Smitha S. Kumar; Michael A. Lones; Manuel Maarek; Hind Zantout – ACM Transactions on Computing Education, 2025
Programming demands a variety of cognitive skills, and mastering these competencies is essential for success in computer science education. The importance of formative feedback is well acknowledged in programming education, and thus, a diverse range of techniques has been proposed to generate and enhance formative feedback for programming…
Descriptors: Automation, Computer Science Education, Programming, Feedback (Response)
Abdulkadir Kara; Eda Saka Simsek; Serkan Yildirim – Asian Journal of Distance Education, 2024
Evaluation is an essential component of the learning process when discerning learning situations. Assessing natural language responses, like short answers, takes time and effort. Artificial intelligence and natural language processing advancements have led to more studies on automatically grading short answers. In this review, we systematically…
Descriptors: Automation, Natural Language Processing, Artificial Intelligence, Grading
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges to their broad-scale adoption: a technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
Leila Ouahrani; Djamal Bennouar – International Journal of Artificial Intelligence in Education, 2024
We consider the reference-based approach to Automatic Short Answer Grading (ASAG), which scores a student's constructed textual answer by comparing it to a teacher-provided reference answer. The reference answer does not cover the variety of student answers, as it contains only specific examples of correct answers. Considering other language…
Descriptors: Grading, Automation, Answer Keys, Tests
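Ouahrani and Bennouar's starting point is that a single teacher-provided reference answer cannot cover the variety of correct student phrasings. One common way to illustrate the issue is to score a student answer against several reference variants and keep the best match; the sketch below does so with TF-IDF cosine similarity as a stand-in representation, which is an assumption rather than the paper's method.

```python
# Hedged sketch of reference-based ASAG with multiple reference variants
# (illustrates the coverage problem the abstract raises, not the paper's approach).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

references = [
    "Evaporation is the process by which liquid water becomes water vapour.",
    "Liquid water changes into gas when it evaporates.",
]
student_answer = "Water turns into vapour when it is heated."

# Fit a shared TF-IDF space over the references and the student answer.
vectorizer = TfidfVectorizer().fit(references + [student_answer])
ref_vecs = vectorizer.transform(references)
ans_vec = vectorizer.transform([student_answer])

# Score against every reference variant and keep the best match, so a correct
# answer phrased differently is not penalised by one narrow reference.
score = cosine_similarity(ans_vec, ref_vecs).max()
print(f"similarity score: {score:.2f}")
```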
Putnikovic, Marko; Jovanovic, Jelena – IEEE Transactions on Learning Technologies, 2023
Automatic grading of short answers is an important task in computer-assisted assessment (CAA). Recently, embeddings, as semantic-rich textual representations, have been increasingly used to represent short answers and predict the grade. Despite the recent trend of applying embeddings in automatic short answer grading (ASAG), there are no…
Descriptors: Automation, Computer Assisted Testing, Grading, Natural Language Processing
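Putnikovic and Jovanovic survey how embedding representations of short answers are used to predict grades. As a hedged illustration of the kind of comparison such a survey motivates (not the study's protocol), the sketch below scores two simple text representations by cross-validated grading error on an invented toy set.

```python
# Hedged sketch: compare two answer representations by cross-validated grading error.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

answers = [
    "Force equals mass times acceleration.",
    "F is the product of m and a.",
    "Force is mass divided by acceleration.",
    "Acceleration times mass gives the force.",
    "Force equals mass plus acceleration.",
    "Mass multiplied by acceleration equals force.",
]
grades = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 1.0])  # invented toy labels

for name, vec in [("tf-idf", TfidfVectorizer()), ("counts", CountVectorizer())]:
    X = vec.fit_transform(answers)
    # Mean squared error under 3-fold cross-validation; lower is better.
    mse = -cross_val_score(Ridge(), X, grades, cv=3,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: mean CV error {mse:.3f}")
```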
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous works on ASAG mainly use nonneural or neural methods. However, nonneural methods depend on handcrafted features and are limited by inflexibility and high cost, while neural methods ignore global word cooccurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs
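Tan et al. fault purely neural ASAG models for ignoring global word co-occurrence across the corpus, which is the signal graph-based approaches build on. The sketch below shows one conventional way to collect that signal, a sliding-window co-occurrence count over a toy corpus; the window size and corpus are illustrative assumptions, and the resulting weighted edges are merely the kind of input a graph model could consume.

```python
# Hedged sketch: build a global word co-occurrence graph with a sliding window.
from collections import Counter
from itertools import combinations

corpus = [
    "plants convert light energy into chemical energy",
    "chemical energy is stored in glucose",
    "light energy drives photosynthesis in plants",
]

WINDOW = 3  # illustrative window size
cooccurrence = Counter()
for sentence in corpus:
    tokens = sentence.split()
    for i in range(len(tokens)):
        # Every unordered word pair inside the window adds one unit of edge weight.
        for a, b in combinations(tokens[i:i + WINDOW], 2):
            if a != b:
                cooccurrence[tuple(sorted((a, b)))] += 1

# The weighted edges can seed a graph over the corpus vocabulary.
for (a, b), w in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {w}")
```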
Abubakir Siedahmed; Jaclyn Ocumpaugh; Zelda Ferris; Dinesh Kodwani; Eamon Worden; Neil Heffernan – International Educational Data Mining Society, 2025
Recent advances in AI have opened the door for the automated scoring of open-ended math problems, which were previously much more difficult to assess at scale. However, we know that biases still remain in some of these algorithms. For example, recent research on the automated scoring of student essays has shown that certain varieties of English…
Descriptors: Artificial Intelligence, Automation, Scoring, Mathematics Tests

