Showing 1 to 15 of 16 results
Peer reviewed
Direct link
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Peer reviewed
PDF on ERIC
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
Yi Gui – ProQuest LLC, 2024
This study explores using transfer learning in machine learning for natural language processing (NLP) to create generic automated essay scoring (AES) models, providing instant online scoring for statewide writing assessments in K-12 education. The goal is to develop an instant online scorer that is generalizable to any prompt, addressing the…
Descriptors: Writing Tests, Natural Language Processing, Writing Evaluation, Scoring
Peer reviewed
Direct link
Potter, Andrew; Wilson, Joshua – Educational Technology Research and Development, 2021
Automated Writing Evaluation (AWE) provides automatic writing feedback and scoring to support student writing and revising. The purpose of the present study was to analyze a statewide implementation of an AWE software (n = 114,582) in grades 4-11. The goals of the study were to evaluate: (1) to what extent AWE features were used; (2) if equity and…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Scoring
Peer reviewed
PDF on ERIC
Sumner, Josh – Research-publishing.net, 2021
Comparative Judgement (CJ) has emerged as a technique that typically makes use of holistic judgement to assess difficult-to-specify constructs such as production (speaking and writing) in Modern Foreign Languages (MFL). In traditional approaches, markers assess candidates' work one-by-one in an absolute manner, assigning scores to different…
Descriptors: Holistic Approach, Student Evaluation, Comparative Analysis, Decision Making
Peer reviewed
PDF on ERIC
Tywoniw, Rurik; Crossley, Scott – Language Education & Assessment, 2019
Cohesion features were calculated for a corpus of 960 essays by 480 test-takers from the Test of English as a Foreign Language (TOEFL) in order to examine differences in the use of cohesion devices between integrated (source-based) writing and independent writing samples. Cohesion indices were measured using an automated textual analysis tool, the…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Connected Discourse
Peer reviewed
Direct link
García Botero, Gustavo; Botero Restrepo, Margarita Alexandra; Zhu, Chang; Questier, Frederik – Computer Assisted Language Learning, 2021
Learners need diligence when going solo in technology-enhanced learning environments. Nevertheless, self-regulation and scaffolding are two under-researched concepts when it comes to mobile learning. To tackle this knowledge gap, this study focuses on self-regulation and scaffolding for mobile assisted language learning (MALL). Fifty-two students…
Descriptors: Computer Assisted Instruction, Teaching Methods, Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Ebadi, Saman; Rahimi, Masoud – Computer Assisted Language Learning, 2019
Drawing on Vygotskian sociocultural theory of mind and social constructivism, and adopting a sequential exploratory mixed-methods approach, this study explored the impact of online dynamic assessment (DA) on EFL learners' academic writing skills through one-on-one individual and online synchronous DA sessions over Google Docs. It also investigated…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Sociocultural Patterns
Peer reviewed
PDF on ERIC
Tang, Jinlan; Rich, Changhua Sun – JALT CALL Journal, 2017
This paper reports a series of research studies on the use of automated writing evaluation (AWE) in secondary and university settings in China. The secondary school study featured the use of AWE in six intact classes of 268 senior high school students for one academic year. The university study group comprised 460 students from five universities…
Descriptors: English (Second Language), Second Language Learning, Second Language Instruction, Revision (Written Composition)
Peer reviewed
PDF on ERIC
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
Peer reviewed
PDF on ERIC
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
Peer reviewed
PDF on ERIC
Bagheridoust, Esmaeil; Husseini, Zahra – English Language Teaching, 2011
Writing as one important skill in language proficiency demands validity, hence high schools are real places in which valid results are needed for high-stake decisions. Unrealistic and non-viable tests result in improper and invalid interpretation and use. Illustrations without any written research have proved their effectiveness in whatsoever…
Descriptors: Foreign Countries, English (Second Language), Second Language Instruction, Second Language Learning
Peer reviewed
Direct link
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
Peer reviewed
Direct link
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"[R] essay feature variables in the context of the TOEFL[R] computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Peer reviewed
PDF on ERIC
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests