Publication Date
In 2025: 0
Since 2024: 6
Since 2021 (last 5 years): 11
Since 2016 (last 10 years): 11
Since 2006 (last 20 years): 12
Author
Di Zou: 2
Areej Lhothali: 1
Aryadoust, Vahid: 1
Balfour, Stephen P.: 1
Castro, São Luís: 1
Cordeiro, Carolina: 1
Fan Li: 1
Gary Cheng: 1
Geng, Jingxin: 1
Gilbert Dizon: 1
Han, Turgay: 1
Publication Type
Information Analyses: 12
Journal Articles: 12
Reports - Research: 3
Reports - Evaluative: 1
Education Level
Higher Education: 4
Postsecondary Education: 4
Elementary Education: 1
Elementary Secondary Education: 1
Secondary Education: 1
Yi Xue – Education and Information Technologies, 2024
The era of generative artificial intelligence has sparked a surge of scholarship in education and information technologies. Driven by natural language processing (NLP), automated writing evaluation (AWE) tools have become ubiquitous in intelligent computer-assisted language learning (CALL) environments. Based on…
Descriptors: Literature Reviews, Meta Analysis, Bibliometrics, Artificial Intelligence
Gilbert Dizon; John M. Gayed – Cogent Education, 2024
The use of automated writing evaluation (AWE) in second language (L2) writing contexts has increased dramatically, as evidenced by the large body of research published on the topic over the past decade. Considering this, several systematic reviews on AWE have been published. Nevertheless, none of these review studies has exclusively focused on the…
Descriptors: Literature Reviews, Meta Analysis, Second Language Learning, Writing Evaluation
Thuy Thi-Nhu Ngo; Howard Hao-Jan Chen; Kyle Kuo-Wei Lai – Interactive Learning Environments, 2024
The present study performs a three-level meta-analysis to investigate the overall effectiveness of automated writing evaluation (AWE) on EFL/ESL student writing performance. Twenty-four primary studies representing 85 between-group effect sizes and 34 studies representing 178 within-group effect sizes, located from 1993 to 2021, were meta-analyzed separately.…
Descriptors: Writing Evaluation, Automation, Computer Software, English (Second Language)
Xiaoli Huang; Wei Xu; Fan Li; Zhonggen Yu – Asia-Pacific Education Researcher, 2024
With the rapid advancement of information technologies, automated writing evaluation technologies have matured to the point where they can be applied to writing assessment. However, few studies have pooled the effects of automated writing evaluation on writing performance. Through a PRISMA protocol-based meta-analysis, this study concludes that…
Descriptors: Writing Evaluation, Automation, Writing Attitudes, Anxiety
Huawei, Shi; Aryadoust, Vahid – Education and Information Technologies, 2023
Automated writing evaluation (AWE) systems are developed based on interdisciplinary research and technological advances such as natural language processing, computer science, and latent semantic analysis. Despite a steady increase in research publications in this area, the results of AWE investigations are often mixed, and their validity may be…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Automation
Nunes, Andreia; Cordeiro, Carolina; Limpo, Teresa; Castro, São Luís – Journal of Computer Assisted Learning, 2022
Background: Automated Writing Evaluation (AWE) systems to aid writing learning and instruction in primary and secondary education are growing increasingly popular. However, their effectiveness is hardly known. We conducted a systematic review focusing on the effects of these systems providing writing feedback to students in school settings.…
Descriptors: Writing Evaluation, Automation, Program Evaluation, Writing Instruction
Qing-Ke Fu; Di Zou; Haoran Xie; Gary Cheng – Computer Assisted Language Learning, 2024
Automated writing evaluation (AWE) plays an important role in writing pedagogy and has received considerable research attention recently; however, few reviews have been conducted to systematically analyze the recent publications arising from the many studies in this area. The present review aims to provide a comprehensive analysis of the…
Descriptors: Journal Articles, Automation, Writing Evaluation, Feedback (Response)
Tahani I. Aldosemani; Hussein Assalahi; Areej Lhothali; Maram Albsisi – International Journal of Computer-Assisted Language Learning and Teaching, 2023
This paper explores the literature on AWE feedback, particularly its perceived impact on enhancing EFL student writing proficiency. Prior research highlighted the contribution of AWE in fostering learner autonomy and alleviating teacher workloads, with a substantial focus on student engagement with AWE feedback. This review strives to illuminate…
Descriptors: Automation, Student Evaluation, Writing Evaluation, English (Second Language)
Geng, Jingxin; Razali, Abu Bakar – English Language Teaching, 2022
Automated Writing Evaluation (AWE) programs have gained increasing ground in ESL/EFL writing instruction because of their instructional features, such as instant automated scoring and real-time diagnostic corrective feedback on individual written drafts. However, little is known about how the automated feedback provided…
Descriptors: Program Effectiveness, Automation, Writing Evaluation, Undergraduate Students
Linqian Ding; Di Zou – Education and Information Technologies, 2024
With the burgeoning popularity and swift advancements of automated writing evaluation (AWE) systems in language classrooms, scholarly and practical interest in this area has noticeably increased. This systematic review aims to comprehensively investigate current research on three prominent AWE systems: Grammarly, Pigai, and Criterion. Objectives…
Descriptors: Automation, Writing Evaluation, Literature Reviews, Computer Software
Sari, Elif; Han, Turgay – Reading Matrix: An International Online Journal, 2021
Providing effective feedback and ensuring reliable assessment practices are two central issues in ESL/EFL writing instruction contexts. Giving individual feedback is very difficult in crowded classes, as it demands a great deal of time and effort from instructors. Moreover, instructors may employ inconsistent assessment procedures,…
Descriptors: Automation, Writing Evaluation, Artificial Intelligence, Natural Language Processing
Balfour, Stephen P. – Research & Practice in Assessment, 2013
Two of the largest Massive Open Online Course (MOOC) organizations have chosen different methods for scoring and providing feedback on the essays students submit. EdX, MIT and Harvard's non-profit MOOC federation, recently announced that it will use a machine-based Automated Essay Scoring (AES) application to assess written work in…
Descriptors: Online Courses, Writing Evaluation, Automation, Scoring