Showing 1 to 15 of 49 results
Peer reviewed
Direct link
Almusharraf, Norah; Alotaibi, Hind – Technology, Knowledge and Learning, 2023
Evaluating written texts is believed to be a time-consuming process that can lack consistency and objectivity. Automated essay scoring (AES) can provide solutions to some of the limitations of human scoring. This research aimed to evaluate the performance of one AES system, Grammarly, in comparison to human raters. Both approaches' performances…
Descriptors: Writing Evaluation, Writing Tests, Essay Tests, Essays
Peer reviewed
Direct link
On-Soon Lee – Journal of Pan-Pacific Association of Applied Linguistics, 2024
Despite the increasing interest in using AI tools as assistant agents in instructional settings, the effectiveness of ChatGPT, the generative pretrained AI, for evaluating the accuracy of second language (L2) writing has been largely unexplored in formative assessment. Therefore, the current study aims to examine how ChatGPT, as an evaluator,…
Descriptors: Foreign Countries, Undergraduate Students, English (Second Language), Second Language Learning
Peer reviewed
Direct link
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Jiyeo Yun – English Teaching, 2023
Studies on automatic scoring systems in writing assessment have evaluated the relationship between human and machine scores to establish the reliability of automated essay scoring systems. This study investigated the magnitudes of indices for inter-rater agreement and discrepancy, especially regarding human and machine scoring, in writing assessment.…
Descriptors: Meta Analysis, Interrater Reliability, Essays, Scoring
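The agreement indices meta-analyzed in entries like this one commonly include quadratic weighted kappa (QWK), the standard human-machine agreement statistic in AES research. A minimal pure-Python sketch follows; the human and machine ratings are made-up illustrative values on a 1-5 scale, not data from the study.

```python
# Quadratic weighted kappa (QWK): 1 - (weighted observed disagreement /
# weighted chance disagreement), with weights (i - j)^2 / (n - 1)^2.

def quadratic_weighted_kappa(a, b, min_r, max_r):
    n = max_r - min_r + 1
    # Observed joint score matrix
    obs = [[0.0] * n for _ in range(n)]
    for x, y in zip(a, b):
        obs[x - min_r][y - min_r] += 1
    total = len(a)
    hist_a = [sum(row) for row in obs]                       # rater A marginals
    hist_b = [sum(obs[i][j] for i in range(n)) for j in range(n)]  # rater B marginals
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = (i - j) ** 2 / (n - 1) ** 2
            num += w * obs[i][j] / total                # observed
            den += w * hist_a[i] * hist_b[j] / total ** 2  # expected by chance
    return 1.0 - num / den

human   = [3, 4, 2, 5, 3, 4, 1, 2]
machine = [3, 4, 3, 5, 2, 4, 1, 2]
print(round(quadratic_weighted_kappa(human, machine, 1, 5), 3))  # → 0.917
```

Values near 1 indicate close human-machine agreement; QWK penalizes large score discrepancies quadratically, which is why it is preferred over raw percent agreement on ordinal essay scales.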
Peer reviewed
Direct link
Choi, Ikkyu; Deane, Paul – Language Assessment Quarterly, 2021
Keystroke logs provide a comprehensive record of observable writing processes. Previous studies examining the keystroke logs of young L1 English writers performing experimental writing tasks have identified writing process features predictive of response quality. In contrast, large-scale studies on the dynamic and temporal nature of L2…
Descriptors: Writing Processes, Writing Evaluation, Computer Assisted Testing, Learning Analytics
Peer reviewed
PDF on ERIC Download full text
Ahmet Can Uyar; Dilek Büyükahiska – International Journal of Assessment Tools in Education, 2025
This study explores the effectiveness of using ChatGPT, an Artificial Intelligence (AI) language model, as an Automated Essay Scoring (AES) tool for grading English as a Foreign Language (EFL) learners' essays. The corpus consists of 50 essays representing various types including analysis, compare and contrast, descriptive, narrative, and opinion…
Descriptors: Artificial Intelligence, Computer Software, Technology Uses in Education, Teaching Methods
Peer reviewed
Direct link
Sherkuziyeva, Nasiba; Imamutdinovna Gabidullina, Farida; Ahmed Abdel-Al Ibrahim, Khaled; Bayat, Sania – Language Testing in Asia, 2023
This study aimed to examine the impacts of computerized dynamic assessment (C-DA) and rater-mediated assessment on the test anxiety, writing performance, and oral proficiency of Iranian EFL learners. Based on Preliminary English Test (PET) results, a sample of 64 intermediate participants was chosen from 93 students. Running a convenience sampling…
Descriptors: Comparative Analysis, Computer Assisted Testing, English (Second Language), Second Language Learning
Peer reviewed
Direct link
Gaillat, Thomas; Simpkin, Andrew; Ballier, Nicolas; Stearns, Bernardo; Sousa, Annanda; Bouyé, Manon; Zarrouk, Manel – ReCALL, 2021
This paper focuses on automatically assessing language proficiency levels according to linguistic complexity in learner English. We implement a supervised learning approach as part of an automatic essay scoring system. The objective is to uncover Common European Framework of Reference for Languages (CEFR) criterial features in writings by learners…
Descriptors: Prediction, Rating Scales, English (Second Language), Second Language Learning
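The supervised-learning approach this entry describes maps linguistic complexity features of learner essays to CEFR proficiency levels. A toy illustration of the idea follows; the feature values, labels, and the nearest-centroid classifier are illustrative stand-ins, not the study's actual features or model.

```python
# Toy supervised CEFR-level prediction from linguistic complexity
# features (here: mean word length, type-token ratio). Data and the
# nearest-centroid classifier are made-up stand-ins only.

def centroids(X, y):
    """Average the feature vectors of each CEFR label."""
    sums, counts = {}, {}
    for feats, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(model, feats):
    """Assign the label whose centroid is closest in squared distance."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(feats, model[lab]))
    return min(model, key=dist)

# [mean word length, type-token ratio] per essay, with CEFR labels
X = [[3.9, 0.42], [4.1, 0.45], [4.8, 0.58], [5.0, 0.61]]
y = ["B1", "B1", "C1", "C1"]
model = centroids(X, y)
print(predict(model, [4.9, 0.60]))  # → C1 (closer to the C1 centroid)
```

In the actual study a richer feature set and model would be trained on rated learner corpora; the sketch only shows the features-to-level mapping that "criterial features" research relies on.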
Peer reviewed
PDF on ERIC Download full text
Mohammadi, Mojtaba; Zarrabi, Maryam; Kamali, Jaber – International Journal of Language Testing, 2023
With the incremental integration of technology into writing assessment, technology-generated feedback has moved steadily toward replacing human corrective feedback and rating. Yet further investigation is needed into its potential use either as a supplement to or a replacement for human feedback. This study aims to…
Descriptors: Formative Evaluation, Writing Evaluation, Feedback (Response), Computer Assisted Testing
Peer reviewed
Direct link
Sari, Elif; Han, Turgay – Reading Matrix: An International Online Journal, 2021
Providing effective feedback and ensuring reliable assessment practices are two central issues in ESL/EFL writing instruction contexts. Giving individual feedback is very difficult in crowded classes, as it requires a great amount of time and effort from instructors. Moreover, instructors are likely to employ inconsistent assessment procedures,…
Descriptors: Automation, Writing Evaluation, Artificial Intelligence, Natural Language Processing
Peer reviewed
Direct link
He, Tung-hsien – SAGE Open, 2019
This study employed a mixed-design approach and the Many-Facet Rasch Measurement (MFRM) framework to investigate whether rater bias occurred between the onscreen scoring (OSS) mode and the paper-based scoring (PBS) mode. Nine human raters analytically marked scanned scripts and paper scripts using a six-category (i.e., six-criterion) rating…
Descriptors: Computer Assisted Testing, Scoring, Item Response Theory, Essays
Peer reviewed
PDF on ERIC Download full text
Xu, Wenwen; Kim, Ji-Hyun – English Teaching, 2023
This study explored the role of written languaging (WL) in response to automated written corrective feedback (AWCF) in improving L2 accuracy in English classrooms at a university in China. A total of 254 freshmen enrolled in intermediate composition classes participated; they wrote 4 essays and received AWCF. Half of them engaged in WL…
Descriptors: Grammar, Accuracy, Writing Instruction, Writing Evaluation
Peer reviewed
PDF on ERIC Download full text
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland on the basis of 2 tasks with 2 associated prompts, each from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Peer reviewed
PDF on ERIC Download full text
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
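The summary score this entry describes is a linear combination of a human holistic score and computer-generated feature scores. A minimal sketch of such a composite follows; the feature names, weights, and values are made-up illustrations, not the weights from the study.

```python
# Composite writing score as a weighted linear combination of a human
# holistic score and machine-generated essay features (e.g., rate of
# grammatical errors per word). All numbers below are illustrative.

def composite_score(human_score, machine_features, weights, intercept=0.0):
    """intercept + w_human * human + sum(w_f * feature_f)."""
    score = intercept + weights["human"] * human_score
    for name, value in machine_features.items():
        score += weights[name] * value
    return score

features = {"errors_per_word": 0.02, "essay_length_log": 5.7}
weights  = {"human": 0.7, "errors_per_word": -20.0, "essay_length_log": 0.5}
print(composite_score(4.0, features, weights, intercept=0.5))
```

In operational settings the weights would be estimated (e.g., by regression against a criterion such as estimated true scores) rather than chosen by hand, which is the kind of prediction problem the entry's descriptors point to.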
Peer reviewed
Direct link
Feifei Han; Zehua Wang – OTESSA Conference Proceedings, 2021
This study compared the effects of teacher feedback (TF) and online automated feedback (AF) on the quality of revision of English writing. It also examined the strengths and weaknesses of the two types of feedback perceived by English language learners (ELLs) as a foreign language (FL). Sixty-eight Chinese students from two English classes…
Descriptors: Comparative Analysis, Feedback (Response), English (Second Language), Second Language Instruction