Publication Date
In 2025: 12
Since 2024: 39
Since 2021 (last 5 years): 112
Publication Type
Journal Articles: 102
Reports - Research: 96
Tests/Questionnaires: 14
Dissertations/Theses -…: 6
Reports - Evaluative: 4
Information Analyses: 3
Reports - Descriptive: 3
Speeches/Meeting Papers: 2
Education Level
Higher Education: 56
Postsecondary Education: 56
Elementary Education: 16
Secondary Education: 8
Middle Schools: 5
Early Childhood Education: 4
Intermediate Grades: 4
Primary Education: 4
Grade 2: 3
Grade 4: 3
High Schools: 3
Location
China: 12
Iran: 5
Thailand: 5
Japan: 4
South Korea: 4
Vietnam: 4
California: 3
Spain: 3
Texas: 2
Turkey: 2
Afghanistan: 1
Assessments and Surveys
International English…: 5
Test of English as a Foreign…: 3
Wechsler Individual…: 2
Joan Li; Nikhil Kumar Jangamreddy; Ryuto Hisamoto; Ruchita Bhansali; Amalie Dyda; Luke Zaphir; Mashhuda Glencross – Australasian Journal of Educational Technology, 2024
Generative artificial intelligence technologies, such as ChatGPT, bring unprecedented change to education by leveraging the power of natural language processing and machine learning. Employing ChatGPT to assist with marking written assessments presents multiple advantages, including scalability, improved consistency, eliminating biases associated…
Descriptors: Higher Education, Artificial Intelligence, Grading, Scoring Rubrics
Jussi S. Jauhiainen; Agustin Bernardo Garagorry Guerra – Journal of Information Technology Education: Innovations in Practice, 2025
Aim/Purpose: This article investigates the process of identifying and correcting hallucinations in ChatGPT-4's recall of student-written responses as well as its evaluation of these responses, and provision of feedback. Effective prompting is examined to enhance the pre-evaluation, evaluation, and post-evaluation stages. Background: Advanced Large…
Descriptors: Artificial Intelligence, Student Evaluation, Writing Evaluation, Feedback (Response)
Shermis, Mark D. – Journal of Educational Measurement, 2022
One of the challenges of discussing validity arguments for machine scoring of essays centers on the absence of a commonly held definition and theory of good writing. At best, the algorithms attempt to measure select attributes of writing and calibrate them against human ratings with the goal of accurate prediction of scores for new essays.…
Descriptors: Scoring, Essays, Validity, Writing Evaluation
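The Shermis (2022) abstract above describes the generic machine-scoring recipe: measure selected attributes of the writing and calibrate them against human ratings so that scores for new essays can be predicted. The sketch below illustrates that recipe in its simplest form; the surface features, the tiny training set, and the use of least squares are illustrative assumptions, not Shermis's method.

```python
# Minimal sketch (not Shermis's method): calibrate a few surface features of
# essays against human ratings with ordinary least squares, then predict a
# score for an unseen essay. Features and data are made up for illustration.
import numpy as np

def surface_features(essay: str) -> np.ndarray:
    words = essay.split()
    n_words = len(words)
    n_sents = max(essay.count(".") + essay.count("!") + essay.count("?"), 1)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    return np.array([n_words, n_words / n_sents, avg_word_len])

# Hypothetical training set: essays paired with human holistic scores (1-6).
essays = ["Short answer.", "A somewhat longer response with more detail and support."]
human_scores = np.array([2.0, 4.0])

X = np.vstack([surface_features(e) for e in essays])
X = np.column_stack([np.ones(len(X)), X])          # add an intercept column
beta, *_ = np.linalg.lstsq(X, human_scores, rcond=None)

# Predicted score for a new essay, clipped to the rating scale.
new = surface_features("Another unseen essay to be scored automatically.")
pred = float(np.clip(np.r_[1.0, new] @ beta, 1, 6))
print(round(pred, 2))
```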
Jechun An – Society for Research on Educational Effectiveness, 2024
Teachers need instructionally useful data to make timely and appropriate decisions for their students with intensive needs (Filderman et al., 2019). Teachers still experience difficulty in instructional decision making in response to students' CBM data (Gesel et al., 2021). This is because data itself that was used for simply determining…
Descriptors: Educational Research, Research Problems, Elementary School Students, Writing Skills
Wang, Jue; Engelhard, George; Combs, Trenton – Journal of Experimental Education, 2023
Unfolding models are frequently used to develop scales for measuring attitudes. Recently, unfolding models have been applied to examine rater severity and accuracy within the context of rater-mediated assessments. One of the problems in applying unfolding models to rater-mediated assessments is that the substantive interpretations of the latent…
Descriptors: Writing Evaluation, Scoring, Accuracy, Computational Linguistics
Bin Chen; Jinyan Huang – SAGE Open, 2023
This study examined Chinese EFL researchers' English abstract writing in language education. Using open-ended questionnaires, it first investigated 24 Chinese EFL researchers' perceptions of their challenges in writing English abstracts. Using generalizability theory and follow-up interviews, it then invited 16 experienced English journal…
Descriptors: Researchers, Academic Language, Documentation, Attitudes
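The Chen and Huang (2023) entry above uses generalizability theory to study rater consistency in abstract evaluation. For orientation, the standard generalizability coefficient for a persons-by-raters design is shown below; the study's actual design and variance components may differ, and the numbers in the comment are made up.

```latex
% Generalizability coefficient for a persons (p) x raters (r) design,
% a standard G-theory result; the study's own design may differ.
\[
  E\rho^2 \;=\; \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_\delta},
  \qquad
  \sigma^2_\delta \;=\; \frac{\sigma^2_{pr,e}}{n'_r}
\]
% Illustrative numbers: with \sigma^2_p = 0.60, \sigma^2_{pr,e} = 0.80 and
% n'_r = 4 raters, E\rho^2 = 0.60 / (0.60 + 0.20) = 0.75.
```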
Jussi S. Jauhiainen; Agustín Garagorry Guerra – Innovations in Education and Teaching International, 2025
The study highlights ChatGPT-4's potential in educational settings for the evaluation of university students' open-ended written examination responses. ChatGPT-4 evaluated 54 written responses, ranging from 24 to 256 words in English. It assessed each response using five criteria and assigned a grade on a six-point scale from fail to excellent,…
Descriptors: Artificial Intelligence, Technology Uses in Education, Student Evaluation, Writing Evaluation
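The Jauhiainen and Garagorry Guerra (2025) entry above has ChatGPT-4 grade each open-ended response against five criteria on a six-point scale. A minimal sketch of that kind of rubric-prompted grading with the OpenAI Python client follows; the model id, rubric wording, and JSON output format are assumptions for illustration, not the authors' actual prompts or pipeline.

```python
# Minimal sketch of rubric-based grading with an LLM, in the spirit of the
# study above; model id, rubric wording, and output format are assumptions,
# not the authors' prompts or pipeline.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

RUBRIC = (
    "Grade the student's answer on five criteria (relevance, accuracy, "
    "coverage, coherence, language), then give an overall grade on a "
    "six-point scale from 0 (fail) to 5 (excellent). "
    'Reply as JSON: {"criteria": {...}, "overall": <0-5>, "feedback": "..."}'
)

def grade(question: str, answer: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4",  # placeholder model id
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": f"Question: {question}\n\nAnswer: {answer}"},
        ],
        temperature=0,
    )
    # Assumes the model honours the JSON instruction in the rubric prompt.
    return json.loads(resp.choices[0].message.content)
```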
Pinot de Moira, Anne; Wheadon, Christopher; Christodoulou, Daisy – Research in Education, 2022
Writing is generally assessed internationally using rubric-based approaches, but there is a growing body of evidence to suggest that the reliability of such approaches is poor. In contrast, comparative judgement studies suggest that it is possible to assess open ended tasks such as writing with greater reliability. Many previous studies, however,…
Descriptors: Writing Evaluation, Classification, Accuracy, Scoring Rubrics
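The Pinot de Moira, Wheadon and Christodoulou (2022) entry above contrasts rubric scoring with comparative judgement, where judges repeatedly pick the better of two scripts. Such pairwise decisions are commonly scaled with a Bradley-Terry-type model; the sketch below fits one by gradient ascent on made-up comparisons and is not the authors' analysis.

```python
# Minimal sketch of comparative judgement scaling (not the authors' analysis):
# judges pick the better script in each pair; a Bradley-Terry model turns the
# wins into a per-script quality estimate via gradient ascent on the
# log-likelihood. The comparison data below are made up.
import math

# (winner, loser) pairs of script ids from hypothetical judging rounds.
comparisons = [(0, 1), (0, 2), (1, 2), (0, 1), (2, 1), (0, 2)]
n_scripts = 3

theta = [0.0] * n_scripts          # quality parameter per script
lr = 0.1
for _ in range(500):
    grad = [0.0] * n_scripts
    for w, l in comparisons:
        p_w = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))  # P(winner beats loser)
        grad[w] += 1.0 - p_w
        grad[l] -= 1.0 - p_w
    theta = [t + lr * g for t, g in zip(theta, grad)]
    mean = sum(theta) / n_scripts   # centre the scale for identifiability
    theta = [t - mean for t in theta]

print([round(t, 2) for t in theta])  # higher = judged better overall
```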
Wang, Heqiao; Troia, Gary A. – Written Communication, 2023
The primary purpose of this study is to investigate the degree to which register knowledge, register-specific motivation, and diverse linguistic features are predictive of human judgment of writing quality in three registers--narrative, informative, and opinion. The secondary purpose is to compare the evaluation metrics of register-partitioned…
Descriptors: Writing Evaluation, Essays, Elementary School Students, Grade 4
Qing-Ke Fu; Di Zou; Haoran Xie; Gary Cheng – Computer Assisted Language Learning, 2024
Automated writing evaluation (AWE) plays an important role in writing pedagogy and has received considerable research attention recently; however, few reviews have been conducted to systematically analyze the recent publications arising from the many studies in this area. The present review aims to provide a comprehensive analysis of the…
Descriptors: Journal Articles, Automation, Writing Evaluation, Feedback (Response)
Ramy Shabara; Khaled ElEbyary; Deena Boraie – Teaching English with Technology, 2024
Although there are claims that ChatGPT, an AI-based language model, is capable of assessing the writing of L2 learners accurately and consistently in the classroom, a number of recent studies have shown discrepancies between AI and human raters. Furthermore, there is a lack of studies investigating the intra-rater reliability of ChatGPT scores.…
Descriptors: Foreign Countries, Artificial Intelligence, Scoring Rubrics, Student Evaluation
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. In the existing research work on this topic, most researchers agree that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
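The Doewes, Kurdhi and Saxena (2023) entry above takes human-automated score agreement as the benchmark for AES accuracy. Quadratic weighted kappa is one widely used agreement statistic in this literature, though the paper may examine other measures as well; a minimal implementation follows.

```python
# Minimal sketch of quadratic weighted kappa (QWK), a common human-machine
# agreement statistic in AES work; the paper may use other metrics too.
# Scores are assumed to be integers on a shared scale.
import numpy as np

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    human = np.asarray(human) - min_score
    machine = np.asarray(machine) - min_score
    n = max_score - min_score + 1

    observed = np.zeros((n, n))
    for h, m in zip(human, machine):
        observed[h, m] += 1

    weights = np.array([[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)]
                        for i in range(n)])
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    expected *= observed.sum() / expected.sum()   # scale to the same total

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Example: agreement between human and automated scores on a 1-6 scale.
print(round(quadratic_weighted_kappa([3, 4, 5, 2, 6], [3, 4, 4, 2, 5], 1, 6), 3))
```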
Karim Sadeghi; Maryam Esmaeeli – RELC Journal: A Journal of Language Teaching and Research, 2024
Corrective feedback (CF) has long been a hot topic in language education circles and has received extensive research attention. However, there is still controversy over the effectiveness of CF use and error correction in language classes. To address this discrepancy, the current study probed the effectiveness of different CF types in improving…
Descriptors: Writing (Composition), Writing Evaluation, Feedback (Response), Accuracy
Nguyen Huynh Trang; Jessie S. Barrot – RELC Journal: A Journal of Language Teaching and Research, 2024
This quasi-experimental study investigates the effects of pre-writing (Pre-EI) and post-writing explicit instruction (Post-EI) on L2 learners' overall writing accuracy and errors at different severity levels. Situated within process-genre-oriented writing classrooms, a total of three intact groups (N = 101) were designated as two experimental…
Descriptors: Foreign Countries, College Students, English (Second Language), Second Language Instruction
Hailay Tesfay Gebremariam – SAGE Open, 2024
Although written corrective feedback (hereafter referred to as CF) is applauded in many writing courses for fostering students' quality writing, its impact on grammatical accuracy in L2 students' writing remains a debated topic. Thus, this study looked into the effect of CF types on L2 students' grammatical accuracy in writing. To achieve this…
Descriptors: Outcomes of Education, Written Language, Feedback (Response), Error Correction