Publication Date
  In 2025: 0
  Since 2024: 3
  Since 2021 (last 5 years): 5
  Since 2016 (last 10 years): 6
  Since 2006 (last 20 years): 7
Descriptor
  Computer Assisted Testing: 7
  Scoring: 7
  Artificial Intelligence: 4
  Automation: 4
  Essays: 3
  Algorithms: 2
  College Students: 2
  Concept Formation: 2
  Feedback (Response): 2
  Formative Evaluation: 2
  Introductory Courses: 2
Source
  International Journal of…: 7
Author
  Clayton Cohn: 1
  Danielle S. McNamara: 1
  Harris, Amy E.: 1
  Ionut Paraschiv: 1
  Keith Cochran: 1
  Kodeswaran, Balaji: 1
  Lauterbach, Mark: 1
  Lottridge, Susan: 1
  Mihai Dascalu: 1
  Myers, Matthew C.: 1
  Noriko Tomuro: 1
Publication Type
  Journal Articles: 7
  Reports - Research: 5
  Reports - Descriptive: 1
  Reports - Evaluative: 1
Education Level
  Higher Education: 3
  Elementary Education: 1
  Grade 7: 1
  Grade 8: 1
  Junior High Schools: 1
  Middle Schools: 1
  Postsecondary Education: 1
  Secondary Education: 1
Location
  Spain: 1
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions range from handcrafted linguistic features to large Transformer-based models, requiring significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
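The AutoML pipeline itself is not described in the truncated abstract above. As a rough illustration of the idea of automating feature and model selection for essay scoring, the sketch below uses TF-IDF features, ridge regression, and scikit-learn's GridSearchCV; these are assumed stand-ins, not the pipeline introduced by Ruseti et al. (2024).

```python
# Minimal sketch of AutoML-style search for essay scoring.
# TF-IDF + ridge regression + GridSearchCV are illustrative stand-ins,
# NOT the pipeline introduced by Ruseti et al. (2024).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Placeholder essays and human scores.
essays = [
    "The author makes a compelling argument supported by clear evidence.",
    "I think school should start later because students are tired.",
    "Technology helps people, but it also causes problems sometimes.",
    "This essay will discuss the many reasons why reading matters.",
]
scores = [4.0, 2.0, 3.0, 1.0]

pipeline = Pipeline([
    ("features", TfidfVectorizer()),
    ("model", Ridge()),
])

# The "automated" part: search over feature and model settings
# instead of hand-tuning them.
search = GridSearchCV(
    pipeline,
    param_grid={
        "features__ngram_range": [(1, 1), (1, 2)],
        "model__alpha": [0.1, 1.0, 10.0],
    },
    cv=2,  # tiny folds only because the placeholder data set is tiny
    scoring="neg_mean_squared_error",
)
search.fit(essays, scores)
print(search.best_params_)
```

A realistic pipeline would also consider neural models and typically evaluate with quadratic weighted kappa, the standard agreement metric in AES, rather than mean squared error.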
Rebecka Weegar; Peter Idestam-Almquist – International Journal of Artificial Intelligence in Education, 2024
Machine learning methods can be used to reduce the manual workload in exam grading, making it possible for teachers to spend more time on other tasks. However, fully eliminating manual work is not yet possible even with very accurate automated grading, as any grading mistakes could have significant consequences for…
Descriptors: Grading, Computer Assisted Testing, Introductory Courses, Computer Science Education
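The abstract argues that even accurate automated grading cannot fully remove the human from the loop. One common way to operationalize that idea, sketched below with an assumed classifier, threshold, and data rather than the authors' actual setup, is to auto-accept only predictions the model is confident about and route the rest to a teacher.

```python
# Sketch of human-in-the-loop exam grading: accept only confident predictions.
# Classifier, threshold, and data are illustrative assumptions, not the setup
# used by Weegar and Idestam-Almquist (2024).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_answers = [
    "a stack is last in, first out",
    "a queue removes elements in the order they were added",
    "i do not remember",
    "it is a thing that stores stuff",
]
train_grades = ["pass", "pass", "fail", "fail"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_answers, train_grades)

CONFIDENCE_THRESHOLD = 0.8  # hypothetical cut-off

def grade(answer: str) -> str:
    """Return the model's grade, or defer to a human when the model is unsure."""
    probabilities = clf.predict_proba([answer])[0]
    if probabilities.max() >= CONFIDENCE_THRESHOLD:
        return clf.predict([answer])[0]
    return "needs manual grading"

print(grade("a stack follows last in, first out order"))
```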
Ormerod, Christopher; Lottridge, Susan; Harris, Amy E.; Patel, Milan; van Wamelen, Paul; Kodeswaran, Balaji; Woolf, Sharon; Young, Mackenzie – International Journal of Artificial Intelligence in Education, 2023
We introduce a short-answer scoring engine made up of an ensemble of deep neural networks and a Latent Semantic Analysis-based model to score short constructed responses for a large suite of questions from a national assessment program. We evaluate the performance of the engine and show that it achieves above-human-level performance on a…
Descriptors: Computer Assisted Testing, Scoring, Artificial Intelligence, Semantics
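As a rough illustration of the Latent Semantic Analysis component of such an engine, the sketch below projects responses into a low-rank semantic space and scores them against reference answers. The data, dimensionality, and nearest-reference scoring rule are assumptions, not the engine described by Ormerod et al. (2023).

```python
# Sketch of an LSA-based component for short-answer scoring: compare a
# response to scored reference responses in a low-rank semantic space.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

reference_answers = [
    "photosynthesis converts light energy into chemical energy",
    "plants use sunlight to make glucose from carbon dioxide and water",
    "the sun is hot",
]
reference_scores = np.array([2, 2, 0])  # placeholder rubric scores

lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
ref_vectors = lsa.fit_transform(reference_answers)

def lsa_score(response: str) -> float:
    """Score a response with the rubric score of its nearest reference in LSA space."""
    vec = lsa.transform([response])
    similarities = cosine_similarity(vec, ref_vectors)[0]
    return float(reference_scores[similarities.argmax()])

# In an ensemble, this score would be combined (for example, averaged)
# with the outputs of one or more neural models.
print(lsa_score("plants turn sunlight into chemical energy"))
```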
Keith Cochran; Clayton Cohn; Peter Hastings; Noriko Tomuro; Simon Hughes – International Journal of Artificial Intelligence in Education, 2024
To succeed in the information age, students need to learn to communicate their understanding of complex topics effectively. This is reflected in both educational standards and standardized tests. To improve their writing ability for highly structured domains like scientific explanations, students need feedback that accurately reflects the…
Descriptors: Science Process Skills, Scientific Literacy, Scientific Concepts, Concept Formation
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence-level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
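The abstract states that sentences were shuffled with a script written using Python's NLTK module; the exact script is not shown, so the following is an assumed reconstruction of the general procedure.

```python
# Sketch of sentence-level randomization of an essay with NLTK.
# An assumed reconstruction, not the script used by Myers and Wilson (2023).
import random

import nltk
from nltk.tokenize import sent_tokenize

# Sentence tokenizer models (newer NLTK versions use "punkt_tab").
nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)

def randomize_sentences(essay: str, seed=None) -> str:
    """Return the essay with its sentences in shuffled order."""
    sentences = sent_tokenize(essay)
    rng = random.Random(seed)
    rng.shuffle(sentences)
    return " ".join(sentences)

essay = ("School uniforms should be optional. They limit self-expression. "
         "However, some argue they reduce peer pressure.")
# The study randomized each essay 30 times; here we generate 3 variants.
for seed in range(3):
    print(randomize_sentences(essay, seed=seed))
```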
Perin, Dolores; Lauterbach, Mark – International Journal of Artificial Intelligence in Education, 2018
The problem of poor writing skills at the postsecondary level is a large and troubling one. This study investigated the writing skills of low-skilled adults attending college developmental education courses by determining whether variables from an automated scoring system were predictive of human scores on writing quality rubrics. The human-scored…
Descriptors: College Students, Writing Evaluation, Writing Skills, Developmental Studies Programs
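A minimal sketch of the underlying analysis, regressing human rubric scores on automated text features, is shown below; the feature set and data are invented for illustration and are not the variables analyzed by Perin and Lauterbach (2018).

```python
# Sketch of testing whether automated text features predict human rubric
# scores. Features and data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical automated features per essay:
# [word count, type-token ratio, mean sentence length]
automated_features = np.array([
    [150, 0.55, 12.0],
    [320, 0.62, 15.5],
    [90,  0.48,  9.8],
    [410, 0.70, 17.2],
    [230, 0.58, 13.1],
    [180, 0.51, 11.4],
])
human_rubric_scores = np.array([2.0, 3.5, 1.5, 4.0, 3.0, 2.5])  # placeholder ratings

model = LinearRegression().fit(automated_features, human_rubric_scores)
r_squared = model.score(automated_features, human_rubric_scores)
print(f"R^2 of automated features against human scores: {r_squared:.2f}")
```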
Perez-Marin, Diana; Pascual-Nieto, Ismael – International Journal of Artificial Intelligence in Education, 2010
A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…
Descriptors: Concept Mapping, Knowledge Representation, Models, Universities
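The abstract defines a student conceptual model as a set of interconnected concepts, each with an estimate of how well students use it. A minimal sketch of such a structure appears below; the field names, mastery scale, and threshold are assumptions, not the representation used by Perez-Marin and Pascual-Nieto (2010).

```python
# Sketch of a student conceptual model: concepts with estimation values,
# linked to one another as in a concept map.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    mastery: float                      # assumed estimation value in [0, 1]
    related: set[str] = field(default_factory=set)

@dataclass
class ConceptualModel:
    concepts: dict[str, Concept] = field(default_factory=dict)

    def add_concept(self, name: str, mastery: float) -> None:
        self.concepts[name] = Concept(name, mastery)

    def link(self, a: str, b: str) -> None:
        """Connect two concepts in the map (undirected edge)."""
        self.concepts[a].related.add(b)
        self.concepts[b].related.add(a)

    def weak_concepts(self, threshold: float = 0.5) -> list[str]:
        """Concepts the student or group appears to use poorly."""
        return [c.name for c in self.concepts.values() if c.mastery < threshold]

# The same structure can model one student or aggregate a group of students.
model = ConceptualModel()
model.add_concept("variable", 0.9)
model.add_concept("loop", 0.4)
model.link("variable", "loop")
print(model.weak_concepts())  # ['loop']
```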