Lottridge, Susan; Woolf, Sherri; Young, Mackenzie; Jafari, Amir; Ormerod, Chris – Journal of Computer Assisted Learning, 2023
Background: Deep learning methods, where models do not use explicit features and instead rely on implicit features estimated during model training, suffer from an explainability problem. In text classification, saliency maps that reflect the importance of words in prediction are one approach toward explainability. However, little is known about…
Descriptors: Documentation, Learning Strategies, Models, Prediction
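Saliency maps of the kind the abstract describes assign each word an importance for the model's prediction. A minimal, model-agnostic sketch using occlusion (deleting one word at a time and measuring the score drop) rather than the gradient-based methods a deep model would use; the lexicon-based `score` function is invented purely for illustration:

```python
def occlusion_saliency(words, score_fn):
    """Word-level saliency by occlusion: the importance of each word is
    the drop in the classifier's score when that word is removed."""
    base = score_fn(words)
    return [base - score_fn(words[:i] + words[i + 1:])
            for i in range(len(words))]

# Toy classifier: fraction of words in a 'positive' lexicon (illustrative only)
POSITIVE = {"great", "good"}
def toy_score(ws):
    return sum(w in POSITIVE for w in ws) / max(len(ws), 1)
```

With `toy_score`, `occlusion_saliency(["great", "movie"], toy_score)` marks "great" as positively important and "movie" as not, which is the kind of per-word attribution a saliency map visualizes.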
Abbas, Mohsin; van Rosmalen, Peter; Kalz, Marco – IEEE Transactions on Learning Technologies, 2023
For predicting and improving the quality of essays, text analytic metrics (surface, syntactic, morphological, and semantic features) can be used to provide formative feedback to the students in higher education. In this study, the goal was to identify a sufficient number of features that exhibit a fair proxy of the scores given by the human raters…
Descriptors: Feedback (Response), Automation, Essays, Scoring
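The surface-level metrics this entry mentions are simple counts and averages over the essay text. A minimal sketch of such features (the feature names here are generic examples, not the study's actual feature set, which also covers syntactic, morphological, and semantic levels):

```python
import re

def surface_features(essay: str) -> dict:
    """Compute simple surface-level text metrics for an essay."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay)
    return {
        "word_count": len(words),
        "sentence_count": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        # Type-token ratio: a crude proxy for lexical diversity
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }
```

Features like these are cheap to compute, which is what makes them attractive as proxies for human scores in formative-feedback settings.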
McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
Jonathan K. Foster; Peter Youngs; Rachel van Aswegen; Samarth Singh; Ginger S. Watson; Scott T. Acton – Journal of Learning Analytics, 2024
Despite a tremendous increase in the use of video for conducting research in classrooms as well as preparing and evaluating teachers, there remain notable challenges to using classroom videos at scale, including time and financial costs. Recent advances in artificial intelligence could make the process of analyzing, scoring, and cataloguing videos…
Descriptors: Learning Analytics, Automation, Classification, Artificial Intelligence
Xiner Liu; Andres Felipe Zambrano; Ryan S. Baker; Amanda Barany; Jaclyn Ocumpaugh; Jiayi Zhang; Maciej Pankiewicz; Nidhi Nasiar; Zhanlan Wei – Journal of Learning Analytics, 2025
This study explores the potential of the large language model GPT-4 as an automated tool for qualitative data analysis by educational researchers, exploring which techniques are most successful for different types of constructs. Specifically, we assess three different prompt engineering strategies -- Zero-shot, Few-shot, and Few-shot with…
Descriptors: Coding, Artificial Intelligence, Automation, Data Analysis
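The prompting strategies compared in the abstract differ mainly in how the prompt is assembled before it is sent to the model. A schematic sketch of that difference (the codebook, excerpts, and prompt wording are invented for illustration; the study's actual prompts are not reproduced here):

```python
def build_prompt(text, codebook, examples=None):
    """Assemble a qualitative-coding prompt for an LLM.

    Zero-shot: only the codebook and the excerpt to code.
    Few-shot: the same, plus labeled examples before the target excerpt.
    """
    lines = ["Assign one code from the codebook to the final excerpt.",
             "Codebook: " + "; ".join(f"{c}: {d}" for c, d in codebook.items())]
    for excerpt, code in (examples or []):
        lines.append(f"Excerpt: {excerpt}\nCode: {code}")
    lines.append(f"Excerpt: {text}\nCode:")
    return "\n".join(lines)

codebook = {"CONF": "expresses confusion", "ENG": "expresses engagement"}
zero_shot = build_prompt("I don't get this at all.", codebook)
few_shot = build_prompt("I don't get this at all.", codebook,
                        examples=[("This is fun!", "ENG")])
```

The few-shot variant simply prepends labeled examples, which is why the two strategies can be compared on identical data with identical codebooks.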
Fromm, Davida; Katta, Saketh; Paccione, Mason; Hecht, Sophia; Greenhouse, Joel; MacWhinney, Brian; Schnur, Tatiana T. – Journal of Speech, Language, and Hearing Research, 2021
Purpose: Analysis of connected speech in the field of adult neurogenic communication disorders is essential for research and clinical purposes, yet time and expertise are often cited as limiting factors. The purpose of this project was to create and evaluate an automated program to score and compute the measures from the Quantitative Production…
Descriptors: Speech, Automation, Statistical Analysis, Adults
Mahr, Tristan J.; Berisha, Visar; Kawabata, Kan; Liss, Julie; Hustad, Katherine C. – Journal of Speech, Language, and Hearing Research, 2021
Purpose: Acoustic measurement of speech sounds requires first segmenting the speech signal into relevant units (words, phones, etc.). Manual segmentation is cumbersome and time consuming. Forced-alignment algorithms automate this process by aligning a transcript and a speech sample. We compared the phoneme-level alignment performance of five…
Descriptors: Speech, Young Children, Automation, Phonemes
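Comparisons of forced aligners like the one described here are commonly summarized by the discrepancy between automatic and manually placed phoneme boundaries. A minimal sketch of one such summary statistic, mean absolute boundary error (the timestamps are made up; the study's actual evaluation metrics may differ):

```python
def mean_boundary_error(auto_boundaries, manual_boundaries):
    """Mean absolute difference (in seconds) between corresponding
    phoneme boundary times from an automatic and a manual alignment.
    Assumes both lists cover the same phoneme sequence in order."""
    if len(auto_boundaries) != len(manual_boundaries):
        raise ValueError("alignments must cover the same phonemes")
    diffs = [abs(a - m) for a, m in zip(auto_boundaries, manual_boundaries)]
    return sum(diffs) / len(diffs)

# Illustrative boundary times for a four-phoneme word
auto = [0.10, 0.24, 0.40, 0.61]
manual = [0.12, 0.22, 0.40, 0.58]
```

A lower mean error means the aligner's boundaries sit closer to the human annotator's, which is the basis for ranking aligners against one another.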
Kumar, Vivekanandan S.; Boulanger, David – International Journal of Artificial Intelligence in Education, 2021
This article investigates the feasibility of using automated scoring methods to evaluate the quality of student-written essays. In 2012, Kaggle hosted an Automated Student Assessment Prize contest to find effective solutions to automated testing and grading. This article: a) analyzes the datasets from the contest -- which contained hand-graded…
Descriptors: Automation, Scoring, Essays, Writing Evaluation
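The Kaggle ASAP contest mentioned in this entry evaluated automated scorers by their agreement with human raters using quadratic weighted kappa, which penalizes large score disagreements more heavily than small ones. A self-contained sketch of that metric for integer scores:

```python
def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Quadratic weighted kappa between two integer score vectors.
    1.0 is perfect agreement; 0.0 is chance-level agreement."""
    n = max_score - min_score + 1
    # Observed matrix of score pairs
    obs = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        obs[a - min_score][b - min_score] += 1
    total = len(rater_a)
    # Marginal histograms give the expected matrix under independence
    hist_a = [sum(row) for row in obs]
    hist_b = [sum(obs[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2  # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / total
            num += w * obs[i][j]
            den += w * expected
    return 1.0 - num / den
```

For a hand-graded dataset like the contest's, the kappa between the automated scores and the human scores is what the competing systems were ranked on.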
Franz Holzknecht; Sandrine Tornay; Alessia Battisti; Aaron Olaf Batty; Katja Tissi; Tobias Haug; Sarah Ebling – Language Assessment Quarterly, 2024
Although automated spoken language assessment is rapidly growing, such systems have not been widely developed for signed languages. This study provides validity evidence for an automated web application that was developed to assess and give feedback on handshape and hand movement of L2 learners' Swiss German Sign Language signs. The study shows…
Descriptors: Sign Language, Vocabulary Development, Educational Assessment, Automation