Publication Date
In 2025: 15
Since 2024: 30
Since 2021 (last 5 years): 73
Since 2016 (last 10 years): 111
Since 2006 (last 20 years): 143
Publication Type
Reports - Research: 148
Journal Articles: 131
Tests/Questionnaires: 17
Speeches/Meeting Papers: 12
Guides - Non-Classroom: 1
Location
Indonesia: 5
United Kingdom: 5
China: 4
South Korea: 4
Thailand: 4
Germany: 3
Malaysia: 3
Saudi Arabia: 3
Hungary: 2
Iran: 2
Louisiana: 2
Siraprapa Kotmungkun; Wichuta Chompurach; Piriya Thaksanan – English Language Teaching Educational Journal, 2024
This study explores the writing quality of two AI chatbots, OpenAI ChatGPT and Google Gemini. The research assesses the quality of the generated texts based on five essay models with the T.E.R.A. software, focusing on ease of understanding, readability, and reading levels as measured by the Flesch-Kincaid formula. Thirty essays were generated, 15 from…
Descriptors: Plagiarism, Artificial Intelligence, Computer Software, Essays
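The Flesch-Kincaid grade level mentioned in this abstract is a simple function of average sentence length and average syllables per word. Below is a minimal, illustrative Python sketch of that formula; the vowel-group syllable counter and sample sentence are assumptions for demonstration only and do not reflect how T.E.R.A. computes readability.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of consecutive vowels (illustrative assumption).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Split into sentences and words with simple regexes (good enough for a demo).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula.
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

print(round(flesch_kincaid_grade("The cat sat on the mat. It was warm."), 2))
```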
Dadi Ramesh; Suresh Kumar Sanampudi – European Journal of Education, 2024
Automatic essay scoring (AES) is an essential educational application of natural language processing. Automating this process alleviates the grading burden while increasing the reliability and consistency of assessment. With advances in text embedding libraries and neural network models, AES systems have achieved good results in terms of accuracy.…
Descriptors: Scoring, Essays, Writing Evaluation, Memory
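As a rough illustration of the kind of pipeline such AES systems follow (learn to predict human-assigned scores from text features), the sketch below trains a simple regressor; TF-IDF features and ridge regression stand in for the text embeddings and neural networks the abstract describes, and the essays and scores are made up.

```python
# Toy AES sketch: predict human rubric scores from text features (hypothetical data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

essays = [
    "The experiment shows that plants need light to grow.",
    "plants grow good with sun and stuff",
    "Photosynthesis converts light energy into chemical energy stored in glucose.",
]
human_scores = [3.0, 1.0, 4.0]  # hypothetical rubric scores used as training labels

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(essays)
model = Ridge().fit(features, human_scores)

new_essay = ["Light drives photosynthesis, so plants kept in the dark grow poorly."]
print(model.predict(vectorizer.transform(new_essay)))
```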
Elizabeth L. Wetzler; Kenneth S. Cassidy; Margaret J. Jones; Chelsea R. Frazier; Nickalous A. Korbut; Chelsea M. Sims; Shari S. Bowen; Michael Wood – Teaching of Psychology, 2025
Background: Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores. Objective: The purpose of this study was to compare the essay grading scores produced by AI with those of human…
Descriptors: Essays, Writing Evaluation, Scores, Evaluators
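Agreement between AI-generated and instructor scores of the kind this study examines is commonly summarized with a correlation coefficient and quadratic weighted kappa. The sketch below computes both on made-up scores and is only an illustration of those metrics, not the paper's analysis.

```python
# Compare hypothetical AI and human essay scores (illustrative data only).
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = [4, 3, 5, 2, 4, 3, 5, 1]
ai    = [4, 3, 4, 2, 5, 3, 5, 2]

r, p = pearsonr(human, ai)
qwk = cohen_kappa_score(human, ai, weights="quadratic")
print(f"Pearson r = {r:.2f} (p = {p:.3f}), quadratic weighted kappa = {qwk:.2f}")
```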
Wilson, Joshua; Huang, Yue; Palermo, Corey; Beard, Gaysha; MacArthur, Charles A. – International Journal of Artificial Intelligence in Education, 2021
This study examined a naturalistic, districtwide implementation of an automated writing evaluation (AWE) software program called "MI Write" in elementary schools. We specifically examined the degree to which aspects of MI Write were implemented, teacher and student attitudes towards MI Write, and whether MI Write usage along with other…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Computer Software
Wilson, Joshua; Huang, Yue; Palermo, Corey; Beard, Gaysha; MacArthur, Charles A. – Grantee Submission, 2021
This study examined a naturalistic, districtwide implementation of an automated writing evaluation (AWE) software program called "MI Write" in elementary schools. We specifically examined the degree to which aspects of MI Write were implemented, teacher and student attitudes towards MI Write, and whether MI Write usage along with other…
Descriptors: Automation, Writing Evaluation, Feedback (Response), Computer Software
Guy J. Krueger – Thresholds in Education, 2025
Generative AI has become a quotidian discussion topic in many writing departments, and the conversations often focus on the negative aspects or the disruptions it has caused. A growing number of teachers and scholars, though, have embraced the new technology and welcomed it into their classrooms. In the Spring 2024 semester, students in my…
Descriptors: Writing Instruction, Artificial Intelligence, Computer Software, Technology Integration
Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that can consume much time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
Yubin Xu; Lin Liu; Jianwen Xiong; Guangtian Zhu – Journal of Baltic Science Education, 2025
As the development and application of large language models (LLMs) in physics education progress, the well-known AI-based chatbot ChatGPT4 has presented numerous opportunities for educational assessment. Investigating the potential of AI tools in practical educational assessment carries profound significance. This study explored the comparative…
Descriptors: Physics, Artificial Intelligence, Computer Software, Accuracy
Jenel T. Cavazos; Keane A. Hauck; Hannah M. Baskin; Catherine M. Bain – Teaching of Psychology, 2025
Background: The emergence of artificial intelligence (AI) in higher education has sparked numerous discussions about its implications. ChatGPT, a prominent AI conversational model, has attracted significant attention for its ability to generate essays and formulate responses. Objective: The current study sought to explore how and why students are…
Descriptors: Student Attitudes, Artificial Intelligence, Computer Software, Cheating
Daniel Holcombe – Hispania, 2025
Accompanying the recent rise in cautious popularity of Generative Artificial Intelligence (Gen-AI), some language educators are exploring innovative linguistic interactions with Gen-AI. Seeking to add a literature-oriented approach to this body of criticism, this article explores two activities that feature Gen-AI in undergraduate literature courses.…
Descriptors: Undergraduate Students, Artificial Intelligence, Computer Software, Cheating
Fatih Yavuz; Özgür Çelik; Gamze Yavas Çelik – British Journal of Educational Technology, 2025
This study investigates the validity and reliability of generative large language models (LLMs), specifically ChatGPT and Google's Bard, in grading student essays in higher education based on an analytical grading rubric. A total of 15 experienced English as a foreign language (EFL) instructors and two LLMs were asked to evaluate three student…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Computational Linguistics
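Reliability questions like the one in this study are often summarized with a consistency coefficient across raters. The sketch below computes Cronbach's alpha over a made-up matrix of rubric scores, treating raters (for example, instructors plus an LLM) as the "items"; it illustrates the general idea only and is not the study's actual analysis.

```python
import numpy as np

# Rows = essays, columns = raters; all scores are hypothetical.
scores = np.array([
    [78, 80, 75],
    [62, 60, 64],
    [90, 88, 85],
    [70, 73, 69],
    [55, 58, 60],
], dtype=float)

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1).sum()   # sum of per-rater score variances
total_var = scores.sum(axis=1).var(ddof=1)     # variance of per-essay total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha across raters: {alpha:.2f}")
```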
Andrew Potter; Mitchell Shortt; Maria Goldshtein; Rod D. Roscoe – Grantee Submission, 2025
Broadly defined, academic language (AL) is a set of lexical-grammatical norms and registers commonly used in educational and academic discourse. Mastery of academic language in writing is an important aspect of writing instruction and assessment. The purpose of this study was to use Natural Language Processing (NLP) tools to examine the extent to…
Descriptors: Academic Language, Natural Language Processing, Grammar, Vocabulary Skills
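To give a concrete sense of the surface-level indicators NLP tools can extract for academic language, here is a small spaCy-based sketch; the particular indicators (noun ratio, suffix-based nominalization count, type-token ratio) are illustrative assumptions and are not the features used in this study.

```python
# Illustrative lexical-grammatical indicators.
# Requires the model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("The analysis demonstrates that the implementation of the policy "
        "resulted in a significant reduction of measurement error.")
doc = nlp(text)

words = [t for t in doc if t.is_alpha]
nouns = [t for t in words if t.pos_ in ("NOUN", "PROPN")]
nominalizations = [t for t in nouns if t.text.lower().endswith(("tion", "ment", "ity"))]

print("noun ratio:", round(len(nouns) / len(words), 2))
print("nominalizations:", [t.text for t in nominalizations])
print("type-token ratio:", round(len({t.text.lower() for t in words}) / len(words), 2))
```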
Ibrahim, Karim – Language Testing in Asia, 2023
The release of ChatGPT marked the beginning of a new era of AI-assisted plagiarism that disrupts traditional assessment practices in ESL composition. In the face of this challenge, educators are left with little guidance in controlling AI-assisted plagiarism, especially when conventional methods fail to detect AI-generated texts. One approach to…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Artificial Intelligence
David W. Brown; Dean Jensen – International Society for Technology, Education, and Science, 2023
The growth of Artificial Intelligence (AI) chatbots has created a great deal of discussion in the education community. While many have gravitated towards the ability of these bots to make learning more interactive, others have grave concerns that student-created essays, long used as a means of assessing students' subject comprehension, may…
Descriptors: Artificial Intelligence, Natural Language Processing, Computer Software, Writing (Composition)
Chad C. Tossell; Nathan L. Tenhundfeld; Ali Momen; Katrina Cooley; Ewart J. de Visser – IEEE Transactions on Learning Technologies, 2024
This article examined student experiences before and after an essay writing assignment that required the use of ChatGPT within an undergraduate engineering course. Utilizing a pre-post study design, we gathered data from 24 participants to evaluate ChatGPT's support for both completing and grading an essay assignment, exploring its educational…
Descriptors: Student Attitudes, Computer Software, Artificial Intelligence, Grading
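A pre-post design like the one described here is typically analyzed with a paired comparison of each participant's before and after measures. The sketch below runs a paired t-test on made-up pre- and post-assignment attitude ratings; it illustrates the analysis pattern only and is not the article's actual data or analysis.

```python
# Paired t-test on hypothetical pre/post attitude ratings (1-7 scale) for the same students.
from scipy.stats import ttest_rel

pre  = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5, 3, 4]
post = [5, 5, 4, 5, 3, 6, 4, 4, 5, 6, 4, 5]

t, p = ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.4f}")
```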