Publication Date
In 2025: 2
Since 2024: 11
Since 2021 (last 5 years): 27
Since 2016 (last 10 years): 46
Since 2006 (last 20 years): 48
Publication Type
Reports - Research: 44
Journal Articles: 37
Speeches/Meeting Papers: 7
Tests/Questionnaires: 5
Reports - Evaluative: 2
Collected Works - Proceedings: 1
Dissertations/Theses - …: 1
Education Level
Higher Education: 48
Postsecondary Education: 45
Secondary Education: 5
High Schools: 4
Adult Education: 1
Elementary Education: 1
Grade 10: 1
Grade 6: 1
Intermediate Grades: 1
Junior High Schools: 1
Middle Schools: 1
Location
China: 3
Indonesia: 2
Vietnam: 2
Algeria: 1
Australia: 1
California (Long Beach): 1
Finland: 1
France: 1
Illinois: 1
Iraq: 1
Massachusetts (Boston): 1

Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Hosnia M. M. Ahmed; Shaymaa E. Sorour – Education and Information Technologies, 2024
Evaluating the quality of university exam papers is crucial for universities seeking institutional and program accreditation. Currently, exam papers are assessed manually, a process that can be tedious, lengthy, and in some cases, inconsistent. This is often due to the focus on assessing only the formal specifications of exam papers. This study…
Descriptors: Higher Education, Artificial Intelligence, Writing Evaluation, Natural Language Processing
Maira Klyshbekova; Pamela Abbott – Electronic Journal of e-Learning, 2024
There is a current debate about the extent to which ChatGPT, a natural language AI chatbot, can disrupt processes in higher education settings. The chatbot is capable not only of answering queries in a human-like way within seconds but also of producing long tracts of text in the form of essays, emails, and code. In this study, in…
Descriptors: Artificial Intelligence, Higher Education, Technology Uses in Education, Evaluation Methods
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for the evaluation of university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
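The grading workflow described in this entry (each response rated against five criteria, then assigned one grade on a six-point scale from fail to excellent) can be approximated with any chat-completion API. The sketch below assumes an OpenAI-style client; the criterion names, scale labels, and prompt wording are illustrative placeholders, not the rubric actually used by Jauhiainen and Garagorry Guerra.

```python
# Minimal sketch: rubric-based grading of an open-ended response with a
# chat-completion API. Criteria, scale labels, and prompt wording are
# illustrative placeholders, not the study's actual rubric.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CRITERIA = ["relevance", "accuracy", "coherence", "use of evidence", "language"]
SCALE = ["fail", "poor", "satisfactory", "good", "very good", "excellent"]

def grade_response(question: str, answer: str) -> str:
    prompt = (
        f"Question: {question}\n"
        f"Student response: {answer}\n\n"
        f"Rate the response on each criterion ({', '.join(CRITERIA)}) and "
        f"assign one overall grade on this six-point scale: {', '.join(SCALE)}."
    )
    completion = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return completion.choices[0].message.content

# Example call (requires a valid API key):
# print(grade_response("Define formative assessment.", "Formative assessment is ..."))
```

In practice, the study's own rubric and grade descriptors would replace the placeholders, and the model's free-text output would be parsed into structured per-criterion scores.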
Rakovic, Mladen; Iqbal, Sehrish; Li, Tongguang; Fan, Yizhou; Singh, Shaveen; Surendrannair, Surya; Kilgour, Jonathan; Graaf, Joep; Lim, Lyn; Molenaar, Inge; Bannert, Maria; Moore, Johanna; Gašević, Dragan – Journal of Computer Assisted Learning, 2023
Background: Assignments that involve writing based on several texts are challenging for many learners. Formative feedback supporting learners in these tasks should be informed by the characteristics of the evolving written product and by the characteristics of the learning processes learners enacted while developing the product. However, formative feedback…
Descriptors: Artificial Intelligence, Essays, High Achievement, Writing Achievement
Huiying Cai; Xun Yan – Language Testing, 2024
Rater comments tend to be qualitatively analyzed to indicate raters' application of rating scales. This study applied natural language processing (NLP) techniques to quantify meaningful, behavioral information from a corpus of rater comments and triangulated that information with a many-facet Rasch measurement (MFRM) analysis of rater scores. The…
Descriptors: Natural Language Processing, Item Response Theory, Rating Scales, Writing Evaluation
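As a rough illustration of how free-text rater comments can be turned into quantitative, behavioral indicators before being triangulated with an MFRM analysis of scores, the sketch below counts a few simple lexical signals per comment. The negation and rubric-term lexicons are invented for the example and are not the features reported by Cai and Yan.

```python
# Minimal sketch: convert rater comments into simple quantitative features
# (length, negation, rubric-term mentions). Lexicons are illustrative only.
import re

NEGATIONS = {"not", "no", "never", "lacks", "lacking", "fails"}
RUBRIC_TERMS = {"grammar", "cohesion", "organization", "vocabulary", "content"}

def comment_features(comment: str) -> dict:
    tokens = re.findall(r"[a-z']+", comment.lower())
    return {
        "n_tokens": len(tokens),
        "n_negations": sum(t in NEGATIONS for t in tokens),
        "n_rubric_terms": sum(t in RUBRIC_TERMS for t in tokens),
    }

print(comment_features("Organization is weak and the essay lacks cohesion."))
# {'n_tokens': 8, 'n_negations': 1, 'n_rubric_terms': 2}
```

Features of this kind, aggregated per rater, could then be set alongside rater severity and consistency estimates from the MFRM analysis.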
James Ewert Duah; Paul McGivern – International Journal of Information and Learning Technology, 2024
Purpose: This study examines the impact of generative artificial intelligence (GenAI), particularly ChatGPT, on higher education (HE). The ease with which content can be generated using GenAI has raised concerns across academia regarding its role in academic contexts, particularly regarding summative assessments. This research makes a unique…
Descriptors: Artificial Intelligence, Man Machine Systems, Natural Language Processing, Technology Uses in Education
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Andrew Williams – International Journal of Educational Technology in Higher Education, 2024
The value of generative AI tools in higher education has received considerable attention. Although there are many proponents of their value as learning tools, many are concerned about academic integrity and about students using these tools to compose written assessments. This study evaluates and compares the output of three commonly used…
Descriptors: Content Area Writing, Artificial Intelligence, Writing Assignments, Biomedicine
McCaffrey, Daniel F.; Zhang, Mo; Burstein, Jill – Grantee Submission, 2022
Background: This exploratory writing analytics study uses argumentative writing samples from two performance contexts--standardized writing assessments and university English course writing assignments--to compare: (1) linguistic features in argumentative writing; and (2) relationships between linguistic characteristics and academic performance…
Descriptors: Persuasive Discourse, Academic Language, Writing (Composition), Academic Achievement
Yu Tian; Minkyung Kim; Scott Crossley; Qian Wan – Reading and Writing: An Interdisciplinary Journal, 2024
Investigating links between temporal features of the writing process (e.g., bursts and pauses during writing) and the linguistic features found in written products would help us better understand intersections between the writing process and product. However, research on this topic is rare. This article illustrates a method to examine associations…
Descriptors: Second Language Learning, Second Language Instruction, Connected Discourse, Writing Processes
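Temporal features such as pauses and bursts are typically derived from keystroke logs. The sketch below assumes a log of (timestamp, character) events and a 2-second pause threshold, which is a common convention rather than the paper's exact setting; the resulting features could then be related to linguistic measures of the finished text.

```python
# Minimal sketch: pause and burst features from a keystroke log, assumed to
# be a list of (timestamp_seconds, character) events. The 2-second pause
# threshold is an illustrative convention, not the study's exact setting.
def burst_pause_features(events, pause_threshold=2.0):
    pauses, bursts, current_burst = [], [], 0
    for (t_prev, _), (t_curr, _) in zip(events, events[1:]):
        gap = t_curr - t_prev
        if gap >= pause_threshold:
            pauses.append(gap)          # record the pause duration
            bursts.append(current_burst)  # close the current burst
            current_burst = 0
        else:
            current_burst += 1
    bursts.append(current_burst)
    return {
        "n_pauses": len(pauses),
        "mean_pause": sum(pauses) / len(pauses) if pauses else 0.0,
        "mean_burst_length": sum(bursts) / len(bursts),
    }

log = [(0.0, "T"), (0.3, "h"), (0.5, "e"), (3.2, " "), (3.4, "c"), (3.6, "a"), (3.8, "t")]
print(burst_pause_features(log))
```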
Kai Guo; Deliang Wang – Education and Information Technologies, 2024
ChatGPT, the newest pre-trained large language model, has recently attracted unprecedented worldwide attention. Its exceptional performance in understanding human language and completing a variety of tasks in a conversational way has led to heated discussions about its implications for and use in education. This exploratory study represents one of…
Descriptors: Feedback (Response), English (Second Language), Artificial Intelligence, Natural Language Processing
Messina, Cara Marta; Jones, Cherice Escobar; Poe, Mya – Written Communication, 2023
We report on a college-level study of student reflection and instructor prompts using scoring and corpus analysis methods. We collected 340 student reflections and 24 faculty prompts. Reflections were scored using trait and holistic scoring and then reflections and faculty prompts were analyzed using Natural Language Processing to identify…
Descriptors: Reflection, Writing Instruction, Computational Linguistics, Cues
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
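One simple way to quantify the cohesion of a constructed response, in the spirit of the study above though not its actual indices, is average content-word overlap between adjacent sentences. The stopword list and overlap measure below are illustrative assumptions, not the cohesion metrics used by the authors.

```python
# Minimal sketch: a crude cohesion index for a constructed response, computed
# as mean content-word overlap (Jaccard) between adjacent sentences.
import re

STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in",
             "is", "are", "was", "were", "it", "that", "this"}

def content_words(sentence: str) -> set:
    return {w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS}

def adjacent_overlap(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if len(sentences) < 2:
        return 0.0
    overlaps = []
    for s1, s2 in zip(sentences, sentences[1:]):
        w1, w2 = content_words(s1), content_words(s2)
        union = w1 | w2
        overlaps.append(len(w1 & w2) / len(union) if union else 0.0)
    return sum(overlaps) / len(overlaps)

print(adjacent_overlap("The sources disagree about causes. "
                       "Both sources discuss causes of conflict."))
```

An index like this, or more elaborate cohesion measures from tools such as Coh-Metrix, could then be correlated with scores on the integrated writing task.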