Lottridge, Susan; Woolf, Sherri; Young, Mackenzie; Jafari, Amir; Ormerod, Chris – Journal of Computer Assisted Learning, 2023
Background: Deep learning methods, where models do not use explicit features and instead rely on implicit features estimated during model training, suffer from an explainability problem. In text classification, saliency maps that reflect the importance of words in prediction are one approach toward explainability. However, little is known about…
Descriptors: Documentation, Learning Strategies, Models, Prediction
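The word-saliency idea described above can be illustrated without any deep-learning machinery. Below is a minimal occlusion-based sketch, not the authors' method: each word's importance is estimated as the drop in a classifier's score when that word is removed. `toy_score` is a hypothetical stand-in for a real model's class probability.

```python
# Occlusion-based word saliency: a word's importance is the drop in the
# classifier's score when that word is removed from the input.

def toy_score(words):
    # Hypothetical stand-in for a model's class probability:
    # fraction of words that are "positive" keywords.
    positive = {"good", "great", "excellent"}
    return sum(1 for w in words if w in positive) / max(len(words), 1)

def saliency_map(words):
    base = toy_score(words)
    return {w: base - toy_score(words[:i] + words[i + 1:])
            for i, w in enumerate(words)}

words = "the movie was great".split()
saliency = saliency_map(words)
# The word driving the prediction receives the highest saliency.
most_salient = max(saliency, key=saliency.get)
```

Gradient-based saliency maps used with deep models follow the same intuition but compute the sensitivity analytically rather than by re-scoring occluded inputs.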
Benjamin Goecke; Paul V. DiStefano; Wolfgang Aschauer; Kurt Haim; Roger Beaty; Boris Forthmann – Journal of Creative Behavior, 2024
Automated scoring is a current hot topic in creativity research. However, most research has focused on the English language and popular verbal creative thinking tasks, such as the alternate uses task. Therefore, in this study, we present a large language model approach for automated scoring of a scientific creative thinking task that assesses…
Descriptors: Creativity, Creative Thinking, Scoring, Automation
Anderson Pinheiro Cavalcanti; Rafael Ferreira Mello; Dragan Gašević; Fred Freitas – International Journal of Artificial Intelligence in Education, 2024
Educational feedback is a crucial factor in the student's learning journey, as through it, students are able to identify their areas of deficiency and improve self-regulation. However, the literature shows that this is an area of great dissatisfaction, especially in higher education. Providing effective feedback becomes an increasingly…
Descriptors: Prediction, Feedback (Response), Artificial Intelligence, Automation
Alexandra C. Salem; Robert C. Gale; Mikala Fleegle; Gerasimos Fergadiotis; Steven Bedrick – Journal of Speech, Language, and Hearing Research, 2023
Purpose: To date, there are no automated tools for the identification and fine-grained classification of paraphasias within discourse, the production of which is the hallmark characteristic of most people with aphasia (PWA). In this work, we fine-tune a large language model (LLM) to automatically predict paraphasia targets in Cinderella story…
Descriptors: Aphasia, Prediction, Story Telling, Oral Language
Ulrike Padó; Yunus Eryilmaz; Larissa Kirschner – International Journal of Artificial Intelligence in Education, 2024
Short-Answer Grading (SAG) is a time-consuming task for teachers that automated SAG models have long promised to make easier. However, there are three challenges for their broad-scale adoption: A technical challenge regarding the need for high-quality models, which is exacerbated for languages with fewer resources than English; a usability…
Descriptors: Grading, Automation, Test Format, Computer Assisted Testing
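As a toy illustration of short-answer grading (a similarity baseline, not the models discussed above), lexical overlap with a reference answer can be mapped onto a point scale; `grade` and `full_marks` are hypothetical names:

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def grade(reference, answer, full_marks=2):
    # Map lexical overlap with the reference answer onto a 0..full_marks scale.
    sim = cosine(Counter(reference.lower().split()),
                 Counter(answer.lower().split()))
    return round(sim * full_marks)

reference = "water boils at 100 degrees celsius"
print(grade(reference, "Water boils at 100 degrees Celsius"))  # -> 2
print(grade(reference, "the sky is blue"))                     # -> 0
```

Production SAG models replace the bag-of-words vectors with learned representations, which is precisely where the resource gap between English and other languages noted above becomes an issue.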
Ren, Ping; Yang, Liu; Luo, Fang – Education and Information Technologies, 2023
Student feedback is crucial for evaluating the performance of teachers and the quality of teaching. Free-form text comments obtained from open-ended questions are seldom analyzed comprehensively since they are difficult to interpret and score compared to standardized rating scales. To solve this problem, the present study employed aspect-level…
Descriptors: Student Attitudes, Student Evaluation of Teacher Performance, Feedback (Response), Prediction
Tan, Hongye; Wang, Chong; Duan, Qinglong; Lu, Yu; Zhang, Hu; Li, Ru – Interactive Learning Environments, 2023
Automatic short answer grading (ASAG) is a challenging task that aims to predict a score for a given student response. Previous works on ASAG mainly use non-neural or neural methods. However, the former depends on handcrafted features and is limited by its inflexibility and high cost, and the latter ignores global word co-occurrence in a corpus and…
Descriptors: Automation, Grading, Computer Assisted Testing, Graphs
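The "global word co-occurrence" signal the abstract mentions is typically captured as a graph over the corpus vocabulary. A minimal sketch, assuming a window-based co-occurrence definition (the paper's exact construction may differ):

```python
from collections import defaultdict

def cooccurrence_graph(sentences, window=2):
    # Undirected weighted graph: edge weight = number of times two distinct
    # words appear within `window` positions of each other.
    graph = defaultdict(int)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                if w != words[j]:
                    graph[tuple(sorted((w, words[j])))] += 1
    return dict(graph)

g = cooccurrence_graph(["short answer grading",
                        "automatic short answer grading"])
# ("answer", "short") co-occurs in both sentences, so its edge weight is 2.
```

Graph neural methods then propagate information along these weighted edges, letting a response's score depend on corpus-wide word statistics rather than on the response alone.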
Lemantara, Julianto; Hariadi, Bambang; Sunarto, M. J. Dewiyani; Amelia, Tan; Sagirani, Tri – IEEE Transactions on Learning Technologies, 2023
A quick and effective learning assessment is needed to evaluate the learning process. Many tools currently offer automatic assessment for subjective and objective questions; however, there is no such free tool that provides plagiarism detection among students for subjective questions in a learning management system (LMS). This article aims to…
Descriptors: Students, Cheating, Prediction, Essays
Uto, Masaki; Aomi, Itsuki; Tsutsumi, Emiko; Ueno, Maomi – IEEE Transactions on Learning Technologies, 2023
In automated essay scoring (AES), essays are automatically graded without human raters. Many AES models based on various manually designed features or various architectures of deep neural networks (DNNs) have been proposed over the past few decades. Each AES model has unique advantages and characteristics. Therefore, rather than using a single-AES…
Descriptors: Prediction, Scores, Computer Assisted Testing, Scoring
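Combining multiple AES models, as proposed above, reduces in its simplest form to a weighted average of each model's predicted score. A sketch with hypothetical component scorers (the paper's actual combination scheme may be more sophisticated):

```python
def ensemble_score(essay, models, weights=None):
    # Weighted average of several scorers' predictions for one essay.
    # Each model is any callable mapping an essay string to a numeric score.
    if weights is None:
        weights = [1.0] * len(models)
    return sum(w * m(essay) for m, w in zip(models, weights)) / sum(weights)

# Hypothetical component scorers standing in for feature-based and DNN models.
length_model = lambda e: min(len(e.split()) / 10, 5)     # rewards length
vocab_model = lambda e: min(len(set(e.split())) / 8, 5)  # rewards vocabulary

essay = "a " * 40  # 40 words, only 1 unique word
print(ensemble_score(essay, [length_model, vocab_model]))  # -> 2.0625
```

The appeal of ensembling is exactly what the abstract notes: each component model has different strengths, and averaging tends to cancel their individual biases.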
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
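When human raters do code documents, agreement is commonly checked with a chance-corrected statistic such as Cohen's kappa. A minimal implementation of that standard measure (not necessarily the procedure used in the study above):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Chance-corrected agreement between two raters' categorical codes.
    # Assumes equal-length code lists and imperfect expected agreement.
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters coding four documents into categories 0/1.
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1]))  # -> 0.5
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why scaling up beyond small hand-coded samples, as the study proposes, requires demonstrating that automated codes track the human ones.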
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Buczak, Philip; Huang, He; Forthmann, Boris; Doebler, Philipp – Journal of Creative Behavior, 2023
Traditionally, researchers employ human raters for scoring responses to creative thinking tasks. Apart from the associated costs, this approach entails two potential risks. First, human raters can be subjective in their scoring behavior (inter-rater variance). Second, individual raters are prone to inconsistent scoring patterns…
Descriptors: Computer Assisted Testing, Scoring, Automation, Creative Thinking
Shin, Jinnie; Gierl, Mark J. – Journal of Applied Testing Technology, 2022
Automated Essay Scoring (AES) technologies provide innovative solutions to score the written essays with a much shorter time span and at a fraction of the current cost. Traditionally, AES emphasized the importance of capturing the "coherence" of writing because abundant evidence indicated the connection between coherence and the overall…
Descriptors: Computer Assisted Testing, Scoring, Essays, Automation
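Coherence, as emphasized in AES work like the entry above, can be crudely proxied by lexical overlap between consecutive sentences. A minimal Jaccard-based sketch (real systems use far richer representations than word sets):

```python
def coherence(sentences):
    # Mean Jaccard overlap between consecutive sentences' word sets,
    # a crude proxy for the local coherence of an essay.
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    sets = [set(s.lower().split()) for s in sentences]
    pairs = list(zip(sets, sets[1:]))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0

print(coherence(["the cat sat down", "the cat then ran off"]))
```

Essays whose adjacent sentences share more vocabulary score higher on this proxy; neural AES models instead learn coherence signals from sentence embeddings and discourse structure.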
Lishan Zhang; Linyu Deng; Sixv Zhang; Ling Chen – IEEE Transactions on Learning Technologies, 2024
With the popularity of online one-to-one tutoring, there are emerging concerns about the quality and effectiveness of this kind of tutoring. Although some evaluation methods are available, they rely heavily on manual coding by experts, which is too costly. Therefore, using machine learning to predict instruction quality automatically…
Descriptors: Automation, Classification, Artificial Intelligence, Tutoring