Zirou Lin; Hanbing Yan; Li Zhao – Journal of Computer Assisted Learning, 2024
Background: Peer assessment has played an important role in large-scale online learning, as it helps promote the effectiveness of learners' online learning. However, with the emergence of numerical grades and textual feedback generated by peers, it is necessary to detect the reliability of the large amount of peer assessment data, and then develop…
Descriptors: Peer Evaluation, Automation, Grading, Models
Ngoc My Bui; Jessie S. Barrot – Education and Information Technologies, 2025
With the generative artificial intelligence (AI) tool's remarkable capabilities in understanding and generating meaningful content, intriguing questions have been raised about its potential as an automated essay scoring (AES) system. One such tool is ChatGPT, which is capable of scoring any written work based on predefined criteria. However,…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Automation
Luke Strickland; Simon Farrell; Micah K. Wilson; Jack Hutchinson; Shayne Loft – Cognitive Research: Principles and Implications, 2024
In a range of settings, human operators make decisions with the assistance of automation, the reliability of which can vary depending upon context. Currently, the processes by which humans track the level of reliability of automation are unclear. In the current study, we test cognitive models of learning that could potentially explain how humans…
Descriptors: Automation, Reliability, Man Machine Systems, Learning Processes
Monika Lohani; Joel M. Cooper; Amy S. McDonnell; Gus G. Erickson; Trent G. Simmons; Amanda E. Carriero; Kaedyn W. Crabtree; David L. Strayer – Cognitive Research: Principles and Implications, 2024
The reliability of cognitive demand measures in controlled laboratory settings is well-documented; however, limited research has directly established their stability under real-life and high-stakes conditions, such as operating automated technology on actual highways. Partially automated vehicles have advanced to become an everyday mode of…
Descriptors: Cognitive Processes, Difficulty Level, Automation, Psychophysiology
Tingting Li; Kevin Haudek; Joseph Krajcik – Journal of Science Education and Technology, 2025
Scientific modeling is a vital educational practice that helps students apply scientific knowledge to real-world phenomena. Despite advances in AI, challenges in accurately assessing such models persist, primarily due to the complexity of cognitive constructs and data imbalances in educational settings. This study addresses these challenges by…
Descriptors: Artificial Intelligence, Scientific Concepts, Models, Automation
Doewes, Afrizal; Pechenizkiy, Mykola – International Educational Data Mining Society, 2021
Scoring essays is generally an exhausting and time-consuming task for teachers. Automated Essay Scoring (AES) facilitates the scoring process to be faster and more consistent. The most logical way to assess the performance of an automated scorer is by measuring the score agreement with the human raters. However, we provide empirical evidence that…
Descriptors: Man Machine Systems, Automation, Computer Assisted Testing, Scoring
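The agreement measure referred to in the entry above — score agreement between an automated scorer and human raters — is most often reported as quadratic weighted kappa in AES work. A minimal self-contained sketch (the function name, score range 0-6, and toy data are illustrative assumptions, not taken from the paper):

```python
from collections import Counter

def quadratic_weighted_kappa(human, machine, min_score=0, max_score=6):
    """Quadratic weighted kappa between two raters' integer scores.

    Illustrative helper; the 0-6 range mirrors common essay rubrics
    but is an assumption, not a detail from the cited study.
    """
    n = len(human)
    k = max_score - min_score + 1
    # Observed joint distribution of (human, machine) score pairs
    obs = [[0.0] * k for _ in range(k)]
    for h, m in zip(human, machine):
        obs[h - min_score][m - min_score] += 1.0 / n
    # Expected joint distribution from the two marginal distributions
    hm = Counter(h - min_score for h in human)
    mm = Counter(m - min_score for m in machine)
    exp = [[hm[i] * mm[j] / (n * n) for j in range(k)] for i in range(k)]
    # Quadratic disagreement weights: 0 on the diagonal, 1 at max distance
    w = [[(i - j) ** 2 / (k - 1) ** 2 for j in range(k)] for i in range(k)]
    num = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w[i][j] * exp[i][j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Perfect agreement yields kappa = 1.0
print(quadratic_weighted_kappa([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # → 1.0
```

The quadratic weighting penalizes large score disagreements more heavily than adjacent-score disagreements, which is why the Doewes and Pechenizkiy entry's caution applies: high agreement on this metric can still mask systematic scoring differences.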
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
Maio, Shannon; Dumas, Denis; Organisciak, Peter; Runco, Mark – Creativity Research Journal, 2020
In recognition of the capability of text-mining models to quantify aspects of language use, some creativity researchers have adopted text-mining models as a mechanism to objectively and efficiently score the Originality of open-ended responses to verbal divergent thinking tasks. With the increasing use of text-mining models in divergent thinking…
Descriptors: Creative Thinking, Scores, Reliability, Data Analysis
Héctor J. Pijeira-Díaz; Shashank Subramanya; Janneke van de Pol; Anique de Bruin – Journal of Computer Assisted Learning, 2024
Background: When learning causal relations, completing causal diagrams enhances students' comprehension judgements to some extent. To potentially boost this effect, advances in natural language processing (NLP) enable real-time formative feedback based on the automated assessment of students' diagrams, which can involve the correctness of both the…
Descriptors: Learning Analytics, Automation, Student Evaluation, Causal Models
Fox, Carly B.; Israelsen-Augenstein, Megan; Jones, Sharad; Gillam, Sandra Laing – Journal of Speech, Language, and Hearing Research, 2021
Purpose: This study examined the accuracy and potential clinical utility of two expedited transcription methods for narrative language samples elicited from school-age children (7;5-11;10 [years;months]) with developmental language disorder. Transcription methods included real-time transcription produced by speech-language pathologists (SLPs) and…
Descriptors: Transcripts (Written Records), Child Language, Narration, Language Impairments
Ullmann, Thomas Daniel – International Journal of Artificial Intelligence in Education, 2019
Reflective writing is an important educational practice to train reflective thinking. Currently, researchers must manually analyze these writings, limiting practice and research because the analysis is time and resource consuming. This study evaluates whether machine learning can be used to automate this manual analysis. The study investigates…
Descriptors: Reflection, Writing (Composition), Writing Evaluation, Automation
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
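Generalizability theory, used in the entry above, decomposes score variance into components for persons, raters, and their interaction. The study applies multivariate generalizability theory; the sketch below covers only the simplest one-facet, fully crossed persons × raters design, with hypothetical names and toy data:

```python
def g_study(scores):
    """Variance components for a fully crossed persons x raters design.

    `scores[p][r]` is rater r's score for person p. Illustrative
    one-facet G-study only; not the multivariate model in the paper.
    """
    n_p = len(scores)
    n_r = len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]
    # Sums of squares for persons, raters, and the residual interaction
    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((scores[p][r] - grand) ** 2
                 for p in range(n_p) for r in range(n_r))
    ss_pr = ss_tot - ss_p - ss_r
    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))
    # Expected-mean-square estimates, truncated at zero
    var_pr = ms_pr                           # interaction + error (confounded)
    var_p = max((ms_p - ms_pr) / n_r, 0.0)   # true person variance
    var_r = max((ms_r - ms_pr) / n_p, 0.0)   # rater severity variance
    # Generalizability coefficient for the mean over n_r raters
    g_coef = var_p / (var_p + var_pr / n_r) if var_p else 0.0
    return var_p, var_r, var_pr, g_coef

# Two raters in perfect agreement across three examinees
print(g_study([[1, 1], [2, 2], [3, 3]]))
```

Comparing the rater-variance and interaction components for human raters versus an AES engine is the basic mechanism by which such a design shows where each scoring method contributes error.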
Richard Correnti; Lindsay Clare Matsumura; Elaine Lin Wang; Diane Litman; Haoran Zhang – Grantee Submission, 2022
Recent reviews of automated writing evaluation systems indicate lack of uniformity in the purpose, design, and assessment of such systems. Our work lies at the nexus of critical themes arising from these reviews. We describe our work on eRevise, an automated writing evaluation system focused on elementary students' text-based evidence-use. eRevise…
Descriptors: Writing Evaluation, Feedback (Response), Automation, Elementary School Students
Rieger, Tobias; Heilmann, Lydia; Manzey, Dietrich – Cognitive Research: Principles and Implications, 2021
Visual inspection of luggage using X-ray technology at airports is a time-sensitive task that is often supported by automated systems to increase performance and reduce workload. The present study evaluated how time pressure and automation support influence visual search behavior and performance in a simulated luggage screening task. Moreover, we…
Descriptors: Time Management, Travel, Air Transportation, Task Analysis
Song, Yi; Deane, Paul; Beigman Klebanov, Beata – ETS Research Report Series, 2017
This project focuses on laying the foundations for automated analysis of argumentation schemes, supporting identification and classification of the arguments being made in a text, for the purpose of scoring the quality of written analyses of arguments. We developed annotation protocols for 20 argument prompts from a college-level test under the…
Descriptors: Scoring, Automation, Persuasive Discourse, Documentation