Publication Date
  In 2025: 2
  Since 2024: 10
  Since 2021 (last 5 years): 18
  Since 2016 (last 10 years): 27
  Since 2006 (last 20 years): 34
Publication Type
  Reports - Research: 26
  Journal Articles: 20
  Speeches/Meeting Papers: 7
  Collected Works - Proceedings: 3
  Reports - Evaluative: 3
  Books: 1
  Collected Works - General: 1
  Reports - Descriptive: 1
  Tests/Questionnaires: 1
Education Level
  Higher Education: 11
  Postsecondary Education: 10
  Secondary Education: 9
  High Schools: 7
  Middle Schools: 5
  Elementary Education: 4
  Early Childhood Education: 3
  Grade 5: 3
  Junior High Schools: 3
  Grade 3: 2
  Grade 4: 2
Location
  Arizona (Phoenix): 2
  Brazil: 2
  Hong Kong: 2
  California (Long Beach): 1
  Canada: 1
  China: 1
  India: 1
  Mississippi: 1
  Thailand: 1
  Uruguay: 1
  Wisconsin (Madison): 1
Assessments and Surveys
  Gates MacGinitie Reading Tests: 3
  International English…: 1
  Test of English as a Foreign…: 1
  Writing Apprehension Test: 1
Ngoc My Bui; Jessie S. Barrot – Education and Information Technologies, 2025
With generative artificial intelligence (AI) tools' remarkable capabilities in understanding and generating meaningful content, intriguing questions have been raised about their potential as automated essay scoring (AES) systems. One such tool is ChatGPT, which can score any written work against predefined criteria. However,…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Automation
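The rubric-based scoring the abstract describes can be prototyped in a few lines. The sketch below assumes an OpenAI-style chat completions API; the model name, rubric wording, and prompt are illustrative stand-ins, not the configuration evaluated in the study.
```python
# Illustrative only: ask a chat model for rubric-based scores on one essay.
# Model name, rubric, and prompt are assumptions, not the study's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "Score the essay from 1 (poor) to 6 (excellent) on each criterion: "
    "task response, coherence and cohesion, lexical resource, grammar. "
    "Return one line per criterion."
)

def score_essay(essay_text: str, model: str = "gpt-4o-mini") -> str:
    """Return the model's raw rubric-based rating for one essay."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce run-to-run variation in the scores
        messages=[
            {"role": "system", "content": "You are an essay rater. " + RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

print(score_essay("Some people believe that technology makes us less social..."))
```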
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
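The entry above contrasts handcrafted features, Transformer models, and an AutoML pipeline. As a rough illustration of automated model selection (not the authors' pipeline), a scikit-learn grid search over feature and model choices could look like the following; the essays, scores, and search space are toy assumptions.
```python
# Toy sketch of automated feature/model selection for essay scoring.
# Not the authors' AutoML pipeline; data and search space are invented.
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

essays = [
    "This essay argues that homework should be limited.",
    "Recycling programs benefit local communities in many ways.",
    "School uniforms reduce distraction and peer pressure.",
    "Public transport investment lowers traffic and pollution.",
]
scores = [2.0, 4.0, 3.0, 5.0]  # invented holistic scores

pipeline = Pipeline([
    ("features", TfidfVectorizer()),
    ("model", Ridge()),
])

# The grid stands in for the feature/model space an AutoML system would search.
param_grid = [
    {"features__ngram_range": [(1, 1), (1, 2)], "model": [Ridge(alpha=1.0)]},
    {"features__ngram_range": [(1, 1)], "model": [RandomForestRegressor(n_estimators=50)]},
]

search = GridSearchCV(pipeline, param_grid, cv=2, scoring="neg_mean_squared_error")
search.fit(essays, scores)
print(search.best_params_, search.best_score_)
```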
Xinming Chen; Ziqian Zhou; Malila Prado – International Journal of Assessment Tools in Education, 2025
This study explores the efficacy of ChatGPT-3.5, an AI chatbot, used as an Automatic Essay Scoring (AES) system and feedback provider for IELTS essay preparation. It investigates the alignment between scores given by ChatGPT-3.5 and those assigned by official IELTS examiners to establish its reliability as an AES. It also identifies the strategies…
Descriptors: Artificial Intelligence, Natural Language Processing, Technology Uses in Education, Automation
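Alignment between model-assigned and examiner-assigned scores is commonly summarized with quadratic weighted kappa. A minimal sketch using invented band scores (not data from the study):
```python
# Quadratic weighted kappa between two sets of band scores; values are invented.
from sklearn.metrics import cohen_kappa_score

examiner_bands = [6, 7, 5, 6, 8, 7, 6, 5]  # hypothetical IELTS examiner scores
model_bands    = [6, 6, 5, 7, 8, 7, 6, 6]  # hypothetical ChatGPT-style scores

qwk = cohen_kappa_score(examiner_bands, model_bands, weights="quadratic")
print(f"Quadratic weighted kappa: {qwk:.3f}")
```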
Morrison, Ryan – Online Submission, 2022
Large Language Models (LLMs) -- powerful algorithms that can generate and transform text -- are set to disrupt language education and text-based assessment, as they allow automated generation of text that can meet certain outcomes of many traditional assessments, such as essays. While there is no way to definitively identify text created by this…
Descriptors: Models, Mathematics, Automation, Natural Language Processing
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
David W. Brown; Dean Jensen – International Society for Technology, Education, and Science, 2023
The growth of Artificial Intelligence (AI) chatbots has created a great deal of discussion in the education community. While many have gravitated towards these bots' ability to make learning more interactive, others have grave concerns that student-created essays, long used as a means of assessing students' subject comprehension, may…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Software, Writing (Composition)
Wan, Qian; Crossley, Scott; Banawan, Michelle; Balyan, Renu; Tian, Yu; McNamara, Danielle; Allen, Laura – International Educational Data Mining Society, 2021
The current study explores the ability to predict argumentative claims in structurally annotated student essays to gain insight into the role of argumentation structure in the quality of persuasive writing. Our annotation scheme specified six types of argumentative components based on Toulmin's well-established model of argumentation. We…
Descriptors: Essays, Persuasive Discourse, Automation, Identification
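For readers unfamiliar with Toulmin-style annotation, the sketch below shows one way such labels could be represented in code. The six labels are the classic Toulmin components; the study's exact scheme and span format may differ.
```python
# Hypothetical data model for argument-component annotation (illustrative only).
from dataclasses import dataclass
from enum import Enum

class ToulminComponent(Enum):
    CLAIM = "claim"
    DATA = "data"          # grounds / evidence
    WARRANT = "warrant"
    BACKING = "backing"
    QUALIFIER = "qualifier"
    REBUTTAL = "rebuttal"

@dataclass
class AnnotatedSpan:
    essay_id: str
    start: int             # character offset where the component begins
    end: int               # character offset where it ends
    label: ToulminComponent

example = AnnotatedSpan("essay_001", 0, 42, ToulminComponent.CLAIM)
print(example)
```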
Keith Cochran; Clayton Cohn; Peter Hastings; Noriko Tomuro; Simon Hughes – International Journal of Artificial Intelligence in Education, 2024
To succeed in the information age, students need to learn to communicate their understanding of complex topics effectively. This is reflected in both educational standards and standardized tests. To improve their writing ability for highly structured domains like scientific explanations, students need feedback that accurately reflects the…
Descriptors: Science Process Skills, Scientific Literacy, Scientific Concepts, Concept Formation
Sebastian Gombert; Aron Fink; Tornike Giorgashvili; Ioana Jivet; Daniele Di Mitri; Jane Yau; Andreas Frey; Hendrik Drachsler – International Journal of Artificial Intelligence in Education, 2024
Various studies have empirically demonstrated the value of highly informative feedback for enhancing learner success. However, digital educational technology has yet to catch up, as automated feedback often remains shallow. This paper presents a case study on implementing a pipeline that provides German-speaking university students enrolled in an…
Descriptors: Automation, Student Evaluation, Essays, Feedback (Response)
Kornwipa Poonpon; Paiboon Manorom; Wirapong Chansanam – Contemporary Educational Technology, 2023
Automated essay scoring (AES) has become a valuable tool in educational settings, providing efficient and objective evaluations of student essays. However, the majority of AES systems have primarily focused on native English speakers, leaving a critical gap in the evaluation of non-native speakers' writing skills. This research addresses this gap…
Descriptors: Automation, Essays, Scoring, English (Second Language)
Wan, Qian; Crossley, Scott; Allen, Laura; McNamara, Danielle – Grantee Submission, 2020
In this paper, we extracted content-based and structure-based features of text to predict human annotations for claims and nonclaims in argumentative essays. We compared Logistic Regression, Bernoulli Naive Bayes, Gaussian Naive Bayes, Linear Support Vector Classification, Random Forest, and Neural Networks to train classification models. Random…
Descriptors: Persuasive Discourse, Essays, Writing Evaluation, Natural Language Processing
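A classifier comparison of the kind described can be reproduced in outline with scikit-learn. The snippet below uses toy sentences and TF-IDF features purely for illustration; it is not the feature set, data, or tuning used in the paper.
```python
# Toy claim vs. non-claim comparison across the classifier families named above.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB, GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

sentences = [
    "Schools should require uniforms.",       # claim
    "A 2015 survey covered 300 schools.",     # non-claim
    "Uniforms reduce peer pressure.",         # claim
    "The survey was run by the district.",    # non-claim
    "Homework should be limited.",            # claim
    "Students answered ten questions.",       # non-claim
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = claim, 0 = non-claim

X = TfidfVectorizer().fit_transform(sentences).toarray()  # dense for GaussianNB

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "BernoulliNB": BernoulliNB(),
    "GaussianNB": GaussianNB(),
    "LinearSVC": LinearSVC(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
    "NeuralNetwork": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}

for name, model in models.items():
    acc = cross_val_score(model, X, labels, cv=3).mean()
    print(f"{name}: mean accuracy {acc:.2f}")
```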
Hong Jiao, Editor; Robert W. Lissitz, Editor – IAP - Information Age Publishing, Inc., 2024
With the exponential growth of digital assessment, different types of data beyond item responses have become available in the measurement process. One salient feature of digital assessment is that process data can be easily collected. This non-conventional structured or unstructured data source may bring new perspectives to better…
Descriptors: Artificial Intelligence, Natural Language Processing, Psychometrics, Computer Assisted Testing
Chen, Dandan; Hebert, Michael; Wilson, Joshua – American Educational Research Journal, 2022
We used multivariate generalizability theory to examine the reliability of hand-scoring and automated essay scoring (AES) and to identify how these scoring methods could be used in conjunction to optimize writing assessment. Students (n = 113) included subsamples of struggling writers and non-struggling writers in Grades 3-5 drawn from a larger…
Descriptors: Reliability, Scoring, Essays, Automation
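Generalizability (G) theory decomposes score variance into components for persons, raters, and their interaction. The sketch below estimates a single-facet persons-by-raters design with invented scores; the study's multivariate design is considerably richer and is not reproduced here.
```python
# One-facet (persons x raters) G-theory sketch with invented scores.
import numpy as np

scores = np.array([        # rows = students, columns = raters
    [3.0, 3.5, 3.0],       # e.g., two hand scorers and one AES engine
    [4.5, 4.0, 4.0],
    [2.0, 2.5, 2.0],
    [5.0, 5.0, 4.5],
    [3.5, 3.0, 3.5],
])
n_p, n_r = scores.shape
grand = scores.mean()

# Mean squares for a two-way crossed design with one observation per cell
ms_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
ms_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_r - 1)
ss_total = ((scores - grand) ** 2).sum()
ms_res = (ss_total - (n_p - 1) * ms_p - (n_r - 1) * ms_r) / ((n_p - 1) * (n_r - 1))

# Estimated variance components
var_person = max((ms_p - ms_res) / n_r, 0.0)  # true-score variance
var_residual = ms_res                         # person-by-rater interaction + error

# Relative G coefficient for an average over k raters
k = n_r
g_relative = var_person / (var_person + var_residual / k)
print(f"person var {var_person:.3f}, residual {var_residual:.3f}, G = {g_relative:.3f}")
```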
Lu, Chang; Cutumisu, Maria – International Educational Data Mining Society, 2021
Digitalization and automation of test administration, score reporting, and feedback provision have the potential to benefit large-scale and formative assessments. Many studies on automated essay scoring (AES) and feedback generation systems have been published in the last decade, but few have connected AES and feedback generation within a unified framework.…
Descriptors: Learning Processes, Automation, Computer Assisted Testing, Scoring