Jessie S. Barrot – Education and Information Technologies, 2024
This bibliometric analysis attempts to map out the scientific literature on automated writing evaluation (AWE) systems for teaching, learning, and assessment. A total of 170 documents published between 2002 and 2021 in Social Sciences Citation Index journals were reviewed from four dimensions, namely size (productivity and citations), time…
Descriptors: Educational Trends, Automation, Computer Assisted Testing, Writing Tests
Shujun Liu; Azzeddine Boudouaia; Xinya Chen; Yan Li – Asia-Pacific Education Researcher, 2025
The application of Automated Writing Evaluation (AWE) has recently gained researchers' attention worldwide. However, the impact of AWE feedback on student writing, particularly in languages other than English, remains controversial. This study aimed to compare the impacts of Chinese AWE feedback and teacher feedback on Chinese writing revision,…
Descriptors: Foreign Countries, Middle School Students, Grade 7, Writing Evaluation
Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
Computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. The interpretability of CSWA has drawn extensive attention because it can improve the validity and transparency of academic writing assessment and support knowledge-aware feedback. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for learners of English as a second language. However, research on the accuracy of such software has been both scarce and limited in scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Hong Jiao, Editor; Robert W. Lissitz, Editor – IAP - Information Age Publishing, Inc., 2024
With the exponential increase in digital assessment, different types of data beyond item responses become available in the measurement process. One salient feature of digital assessment is that process data can be easily collected. This non-conventional structured or unstructured data source may bring new perspectives to better…
Descriptors: Artificial Intelligence, Natural Language Processing, Psychometrics, Computer Assisted Testing
Myers, Matthew C.; Wilson, Joshua – International Journal of Artificial Intelligence in Education, 2023
This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called "MI Write." Persuasive essays (N = 100) written by students in grades 7 and 8 were randomized at the sentence level using a script written with Python's NLTK module. Each persuasive essay was randomized 30 times (n =…
Descriptors: Construct Validity, Automation, Writing Evaluation, Algorithms
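The Myers & Wilson entry above mentions randomizing essays at the sentence level with a Python/NLTK script. The sketch below is a minimal illustration of that kind of sentence-level shuffling, assuming NLTK's punkt sentence tokenizer; the function name and all details are hypothetical, not the authors' actual script.

import random
import nltk

nltk.download("punkt", quiet=True)  # sentence tokenizer model (assumed dependency)

def randomize_sentences(essay, seed=None):
    """Split an essay into sentences and return them in shuffled order."""
    sentences = nltk.sent_tokenize(essay)
    rng = random.Random(seed)  # seeded so each of the 30 shuffles is reproducible
    rng.shuffle(sentences)
    return " ".join(sentences)

# Example: one randomized variant of a short persuasive passage.
print(randomize_sentences("Dogs are loyal. They need exercise. They make great pets.", seed=1))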
Mazhar Bal; Emre Öztürk – British Educational Research Journal, 2025
The purpose of this study is to examine the relationship between technology-supported writing instruction at the K-12 level and deep learning approaches and to understand the trends in this field. In the study, 12 articles selected from Web of Science, Scopus, ERIC and EBSCO databases were systematically analysed. The findings reveal that the…
Descriptors: Learning Processes, Writing Processes, Writing Skills, Writing Instruction
Potter, Andrew; Wilson, Joshua – Educational Technology Research and Development, 2021
Automated Writing Evaluation (AWE) provides automatic writing feedback and scoring to support student writing and revising. The purpose of the present study was to analyze a statewide implementation of an AWE system (n = 114,582) in grades 4-11. The goals of the study were to evaluate: (1) to what extent AWE features were used; (2) if equity and…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Scoring
Conijn, Rianne; Martinez-Maldonado, Roberto; Knight, Simon; Buckingham Shum, Simon; Van Waes, Luuk; van Zaanen, Menno – Computer Assisted Language Learning, 2022
Current writing support tools tend to focus on assessing final or intermediate products, rather than the writing process. However, sensing technologies, such as keystroke logging, can enable provision of automated feedback during, and on aspects of, the writing process. Despite this potential, little is known about the critical indicators that can…
Descriptors: Automation, Feedback (Response), Writing Evaluation, Learning Analytics
Mohammadi, Mojtaba; Zarrabi, Maryam; Kamali, Jaber – International Journal of Language Testing, 2023
With the incremental integration of technology into writing assessment, technology-generated feedback has moved steadily toward replacing human corrective feedback and rating. Yet further investigation is needed into its potential use either as a supplement to or a replacement for human feedback. This study aims to…
Descriptors: Formative Evaluation, Writing Evaluation, Feedback (Response), Computer Assisted Testing
Sari, Elif; Han, Turgay – Reading Matrix: An International Online Journal, 2021
Providing effective feedback and reliable assessment are two central issues in ESL/EFL writing instruction. Giving individual feedback is very difficult in crowded classes, as it requires a great deal of instructors' time and effort. Moreover, instructors are likely to employ inconsistent assessment procedures,…
Descriptors: Automation, Writing Evaluation, Artificial Intelligence, Natural Language Processing
Xu, Wenwen; Kim, Ji-Hyun – English Teaching, 2023
This study explored the role of written languaging (WL) in response to automated written corrective feedback (AWCF) in L2 accuracy improvement in English classrooms at a university in China. A total of 254 freshmen enrolled in intermediate composition classes participated; they wrote 4 essays and received AWCF. Half of them engaged in WL…
Descriptors: Grammar, Accuracy, Writing Instruction, Writing Evaluation
Feifei Han; Zehua Wang – OTESSA Conference Proceedings, 2021
This study compared the effects of teacher feedback (TF) and online automated feedback (AF) on the quality of revision of English writing. It also examined the strengths and weaknesses of the two types of feedback as perceived by learners of English as a foreign language (EFL). Sixty-eight Chinese students from two English classes…
Descriptors: Comparative Analysis, Feedback (Response), English (Second Language), Second Language Instruction
Zhang, Zhe – ELT Journal, 2017
In order to benefit from feedback on their writing, students need to engage effectively with it. This article reports a case study on student engagement with computer-generated feedback, known as automated writing evaluation (AWE) feedback, in an EFL context. Differing from previous studies that explored commercially available AWE programs, this…
Descriptors: Writing Evaluation, Computer Mediated Communication, Feedback (Response), Computer Assisted Testing
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. In large classes, it is desirable to identify the students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
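The Seifried et al. entry above describes testing Latent Semantic Analysis (LSA) for identifying poor essays. The sketch below shows one common way to realize such a check with scikit-learn (TF-IDF followed by truncated SVD, then cosine similarity to a reference text); the reference essay, threshold, and pipeline choices are assumptions for illustration, not the authors' method.

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

reference = "A model answer that covers the key concepts of the lesson."
essays = [
    "An answer covering the key concepts of the lesson in some depth.",
    "A short response that drifts off topic entirely.",
]

# Project all texts into a low-dimensional latent semantic space.
lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
vectors = lsa.fit_transform([reference] + essays)

# Compare each essay with the reference essay in that space.
similarities = cosine_similarity(vectors[:1], vectors[1:])[0]

THRESHOLD = 0.5  # assumed cutoff below which an essay is flagged as poor
for essay, sim in zip(essays, similarities):
    flag = "poor" if sim < THRESHOLD else "ok"
    print(f"{flag}: similarity={sim:.2f}")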