Publication Date
In 2025: 0
Since 2024: 3
Since 2021 (last 5 years): 6
Since 2016 (last 10 years): 6
Since 2006 (last 20 years): 6
Descriptor
Algorithms: 6
Automation: 6
Scoring: 4
Artificial Intelligence: 3
Computer Assisted Testing: 3
Feedback (Response): 3
Classification: 2
Computer Interfaces: 2
Essays: 2
Formative Evaluation: 2
Natural Language Processing: 2
Source
International Journal of Artificial Intelligence in Education: 6
Author
Alexandron, Giora: 1
Ariely, Moriah: 1
Bakker, Arthur: 1
Bamdev, Pakhi: 1
Boels, Lonneke: 1
Garcia Moreno-Esteva, Enrique: 1
Grover, Manraj Singh: 1
Hama, Mika: 1
Kirschner, Larissa: 1
McNamara, Danielle S.: 1
Paraschiv, Ionut: 1
Publication Type
Journal Articles: 6
Reports - Research: 6
Education Level
Elementary Education: 1
Grade 7: 1
Grade 8: 1
Junior High Schools: 1
Middle Schools: 1
Secondary Education: 1
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges to their broad-scale adoption: a technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
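To make the grading task concrete, here is a minimal, hypothetical sketch of similarity-based short-answer grading, not the authors' model: scikit-learn TF-IDF vectors and a cosine-similarity threshold stand in for a trained SAG system, and the threshold value is an arbitrary assumption.

# Hypothetical similarity-based short-answer grader (illustration only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade_short_answer(student_answer, reference_answer, threshold=0.5):
    # Accept the answer if it is lexically close enough to the reference.
    vectors = TfidfVectorizer().fit_transform([reference_answer, student_answer])
    similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
    return similarity >= threshold

print(grade_short_answer(
    "Evaporation turns liquid water into vapor.",
    "Liquid water becomes water vapor through evaporation."))

A real SAG model would be trained on graded answers; the threshold here merely illustrates the accept/reject decision.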
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied Natural Language Processing problem in education. Solutions range from handcrafted linguistic features to large Transformer-based models, implying significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
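The paper's AutoML pipeline is not reproduced here; as a rough stand-in for automated model and feature selection, the sketch below uses scikit-learn's GridSearchCV over a TF-IDF plus ridge-regression pipeline on toy essays and scores (all data and parameter choices are illustrative).

# Stand-in for AutoML-style search over an essay-scoring pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

essays = ["First toy essay about dogs.", "Second toy essay about cats."] * 10
scores = [3.0, 4.5] * 10  # invented holistic scores

pipeline = Pipeline([("tfidf", TfidfVectorizer()), ("model", Ridge())])
search = GridSearchCV(
    pipeline,
    param_grid={"tfidf__ngram_range": [(1, 1), (1, 2)],
                "model__alpha": [0.1, 1.0, 10.0]},
    cv=3)
search.fit(essays, scores)
print(search.best_params_)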
Ariely, Moriah; Nazaretsky, Tanya; Alexandron, Giora – International Journal of Artificial Intelligence in Education, 2023
Machine learning algorithms that automatically score scientific explanations can be used to measure students' conceptual understanding, identify gaps in their reasoning, and provide them with timely and individualized feedback. This paper presents the results of a study that uses Hebrew NLP to automatically score student explanations in Biology…
Descriptors: Artificial Intelligence, Algorithms, Natural Language Processing, Hebrew
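As an illustration only (the study's Hebrew NLP models are more sophisticated than this), scoring an explanation against a rubric can be sketched as detecting cue phrases for each rubric element; the rubric elements and cues below are invented.

# Toy rubric-element detector for explanation scoring (invented rubric).
RUBRIC = {
    "mechanism": ["diffusion", "concentration gradient"],
    "causal link": ["because", "due to"],
}

def score_explanation(text):
    # Mark each rubric element present if any of its cue phrases appears.
    lowered = text.lower()
    return {element: any(cue in lowered for cue in cues)
            for element, cues in RUBRIC.items()}

print(score_explanation(
    "Oxygen moves into the cell by diffusion because of the gradient."))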
Lonneke Boels; Enrique Garcia Moreno-Esteva; Arthur Bakker; Paul Drijvers – International Journal of Artificial Intelligence in Education, 2024
As a first step toward automatic feedback based on students' strategies for solving histogram tasks, we investigated how strategy recognition can be automated based on students' gazes. A previous study showed how students' task-specific strategies can be inferred from their gazes. The research question addressed in the present article is how data…
Descriptors: Eye Movements, Learning Strategies, Problem Solving, Automation
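As a hedged sketch of the general approach rather than the study's method, strategy recognition from gazes can be framed as supervised classification over aggregated gaze features; the feature names and strategy labels below are assumptions, and the data are random.

# Toy gaze-based strategy classifier (features and labels are invented).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Assumed features per task: [fixations_on_axes, fixations_on_bars, mean_dwell_ms]
X = rng.random((40, 3))
y = rng.integers(0, 2, 40)  # 0 = "count bars", 1 = "read axis" (illustrative)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))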
Bamdev, Pakhi; Grover, Manraj Singh; Singla, Yaman Kumar; Vafaee, Payman; Hama, Mika; Shah, Rajiv Ratn – International Journal of Artificial Intelligence in Education, 2023
English proficiency assessments have become a necessary metric for filtering and selecting prospective candidates for both academia and industry. With the rise in demand for such assessments, it has become increasingly necessary to produce automated, human-interpretable results to prevent inconsistencies and ensure meaningful feedback to the…
Descriptors: Language Proficiency, Automation, Scoring, Speech Tests
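Human-interpretable results often start from transparent features; the sketch below computes a few fluency measures from hypothetical word-level ASR timestamps. The feature set is an assumption for illustration, not the paper's.

# Illustrative fluency features from assumed word-level timings.
def fluency_features(words, timestamps):
    # timestamps: one (start_s, end_s) pair per word.
    duration = timestamps[-1][1] - timestamps[0][0]
    pauses = [b[0] - a[1] for a, b in zip(timestamps, timestamps[1:])]
    return {
        "speech_rate_wpm": 60 * len(words) / duration,
        "long_pauses": sum(p > 0.5 for p in pauses),
        "mean_pause_s": sum(pauses) / len(pauses),
    }

print(fluency_features(["testing", "is", "fun"],
                       [(0.0, 0.4), (0.6, 0.7), (1.4, 1.8)]))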
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
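The abstract names Python's NLTK for the sentence-level randomization; a plausible reconstruction of such a script (the study's actual code is not shown here) splits each essay into sentences and shuffles their order.

# Plausible reconstruction of sentence-level essay randomization with NLTK.
import random
import nltk

nltk.download("punkt", quiet=True)  # newer NLTK versions may need "punkt_tab"

def randomize_sentences(essay, seed=0):
    sentences = nltk.sent_tokenize(essay)
    random.Random(seed).shuffle(sentences)
    return " ".join(sentences)

essay = "Dogs make great pets. They are loyal. They need daily exercise."
print(randomize_sentences(essay, seed=1))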