Showing 1 to 15 of 34 results
Peer reviewed
PDF full text on ERIC
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. In existing research on this topic, most researchers agree that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
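As context for the agreement benchmark described in the entry above: human-automated agreement in AES research is conventionally reported as quadratic weighted kappa (QWK). A minimal sketch, assuming scikit-learn is available; the scores are invented for illustration:

```python
# Quadratic weighted kappa (QWK): the conventional human-automated
# agreement metric in AES research. Scores below are invented examples.
from sklearn.metrics import cohen_kappa_score

human_scores = [3, 2, 4, 1, 3, 0, 2, 4]    # scores from a human rater (0-4 scale)
machine_scores = [3, 2, 3, 1, 4, 0, 2, 4]  # scores from an automated scorer

# weights="quadratic" penalizes large disagreements more than small ones.
qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"QWK: {qwk:.3f}")  # 1.0 = perfect agreement; 0.0 = chance-level agreement
```

In this literature, a QWK of roughly 0.7 or above is often treated as acceptable human-machine agreement.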
Peer reviewed
Direct link
Uto, Masaki; Okano, Masashi – IEEE Transactions on Learning Technologies, 2021
In automated essay scoring (AES), scores are automatically assigned to essays as an alternative to grading by humans. Traditional AES typically relies on handcrafted features, whereas recent studies have proposed AES models based on deep neural networks to obviate the need for feature engineering. Those AES models generally require training on a…
Descriptors: Essays, Scoring, Writing Evaluation, Item Response Theory
Peer reviewed
Direct link
Yishen Song; Qianta Zhu; Huaibo Wang; Qinhua Zheng – IEEE Transactions on Learning Technologies, 2024
Manually scoring and revising student essays has long been a time-consuming task for educators. With the rise of natural language processing techniques, automated essay scoring (AES) and automated essay revising (AER) have emerged to alleviate this burden. However, current AES and AER models require large amounts of training data and lack…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
Peer reviewed
Direct link
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Peer reviewed
PDF full text on ERIC
Ahmet Can Uyar; Dilek Büyükahiska – International Journal of Assessment Tools in Education, 2025
This study explores the effectiveness of using ChatGPT, an Artificial Intelligence (AI) language model, as an Automated Essay Scoring (AES) tool for grading English as a Foreign Language (EFL) learners' essays. The corpus consists of 50 essays representing various types including analysis, compare and contrast, descriptive, narrative, and opinion…
Descriptors: Artificial Intelligence, Computer Software, Technology Uses in Education, Teaching Methods
Peer reviewed
Direct link
Potter, Andrew; Wilson, Joshua – Educational Technology Research and Development, 2021
Automated Writing Evaluation (AWE) provides automatic writing feedback and scoring to support student writing and revising. The purpose of the present study was to analyze a statewide implementation of AWE software (n = 114,582) in grades 4-11. The goals of the study were to evaluate: (1) to what extent AWE features were used; (2) if equity and…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Scoring
Peer reviewed
PDF full text on ERIC
Mohammadi, Mojtaba; Zarrabi, Maryam; Kamali, Jaber – International Journal of Language Testing, 2023
With the incremental integration of technology into writing assessment, technology-generated feedback has moved steadily closer to replacing human corrective feedback and rating. Yet further investigation is needed regarding its potential use as either a supplement to or a replacement for human feedback. This study aims to…
Descriptors: Formative Evaluation, Writing Evaluation, Feedback (Response), Computer Assisted Testing
Peer reviewed
PDF full text on ERIC
Xu, Wenwen; Kim, Ji-Hyun – English Teaching, 2023
This study explored the role of written languaging (WL) in response to automated written corrective feedback (AWCF) in L2 accuracy improvement in English classrooms at a university in China. A total of 254 freshmen enrolled in intermediate composition classes participated, and they wrote 4 essays and received AWCF. Half of them engaged in WL…
Descriptors: Grammar, Accuracy, Writing Instruction, Writing Evaluation
Peer reviewed
Direct link
Aitken, Adam; Thompson, Darrall G. – International Journal of Technology and Design Education, 2018
First-year undergraduate design students have had difficulty meeting the standards expected for academic writing at university level. An assessment initiative was used to engage students with criteria and standards for a core interdisciplinary design subject notable for its demanding assessment of academic writing. The same graduate…
Descriptors: Undergraduate Students, Design, Assignments, Computer Software
Peer reviewed
Direct link
Feifei Han; Zehua Wang – OTESSA Conference Proceedings, 2021
This study compared the effects of teacher feedback (TF) and online automated feedback (AF) on the quality of revision of English writing. It also examined the strengths and weaknesses of the two types of feedback perceived by English language learners (ELLs) as a foreign language (FL). Sixty-eight Chinese students from two English classes…
Descriptors: Comparative Analysis, Feedback (Response), English (Second Language), Second Language Instruction
Peer reviewed
Direct link
Seifried, Eva; Lenhard, Wolfgang; Spinath, Birgit – Journal of Educational Computing Research, 2017
Writing essays and receiving feedback can be useful for fostering students' learning and motivation. With large class sizes, it is desirable to identify students who might particularly benefit from feedback. In this article, we tested the potential of Latent Semantic Analysis (LSA) for identifying poor essays. A total of 14 teaching…
Descriptors: Computer Assisted Testing, Computer Software, Essays, Writing Evaluation
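To illustrate the LSA approach mentioned in the entry above: a minimal sketch, assuming scikit-learn's TF-IDF and truncated-SVD implementations; the essays, reference text, and flagging threshold are invented for illustration:

```python
# Sketch: flag weak essays by their latent-semantic similarity to a
# reference answer, in the spirit of LSA-based essay screening.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

essays = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Plants make food from sunlight using chlorophyll in their leaves.",
    "My favorite food is pizza and I really like weekends.",
]
reference = ["Photosynthesis is the process by which plants use light, "
             "water, and carbon dioxide to produce glucose and oxygen."]

# Build a TF-IDF matrix over essays plus the reference answer, then
# project into a low-dimensional latent semantic space (the LSA step).
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(essays + reference)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Essays whose latent-space similarity to the reference falls below an
# (invented) threshold are flagged as candidates for targeted feedback.
sims = cosine_similarity(Z[:-1], Z[-1:]).ravel()
for text, sim in zip(essays, sims):
    flag = "REVIEW" if sim < 0.5 else "ok"
    print(f"{sim:5.2f}  {flag}  {text[:40]}")
```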
Peer reviewed
Direct link
Godwin-Jones, Robert – Language Learning & Technology, 2018
This article provides an update to the author's overview of developments in second language (L2) online writing that he wrote in 2008. There has been renewed interest in L2 writing through the wide use of social media, along with the rising popularity of computer-mediated communication (CMC) and telecollaboration (class-based online exchanges).…
Descriptors: Second Language Learning, Computer Mediated Communication, Second Language Instruction, Writing Instruction
Peer reviewed
Direct link
Tsai, Shu-Chiao – Computer Assisted Language Learning, 2019
This study investigates the impact of using Google Translate (GT) on extemporaneous English-language first drafts across three different tasks assigned to Chinese sophomore, junior, and senior English as a Foreign Language (EFL) students majoring in English. Students wrote first in Chinese (Step 1), then drafted corresponding texts in English (Step…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Computer Software
Peer reviewed
Direct link
Razi, Salim – SAGE Open, 2015
Similarity reports of plagiarism detectors should be approached with caution as they may not be sufficient to support allegations of plagiarism. This study developed a 50-item rubric to simplify and standardize the evaluation of academic papers. In the spring semester of the 2011-2012 academic year, 161 freshmen's papers at the English Language Teaching…
Descriptors: Foreign Countries, Scoring Rubrics, Writing Evaluation, Writing (Composition)