| Publication Date | Count |
| --- | ---: |
| In 2025 | 0 |
| Since 2024 | 3 |
| Since 2021 (last 5 years) | 5 |
| Since 2016 (last 10 years) | 5 |
| Since 2006 (last 20 years) | 9 |
| Descriptor | Count |
| --- | ---: |
| Automation | 9 |
| Student Evaluation | 9 |
| Writing Evaluation | 9 |
| Essays | 6 |
| Scoring | 5 |
| Computer Software | 4 |
| Reliability | 4 |
| Artificial Intelligence | 3 |
| Computer Assisted Testing | 3 |
| Educational Technology | 3 |
| Essay Tests | 3 |
| Source | Count |
| --- | ---: |
| Journal of Technology,… | 4 |
| Interactive Learning… | 1 |
| International Journal of… | 1 |
| International Journal of… | 1 |
| Reading and Writing: An… | 1 |
| Turkish Online Journal of… | 1 |
| Author | Count |
| --- | ---: |
| Areej Lhothali | 1 |
| Attali, Yigal | 1 |
| Boulanger, David | 1 |
| Burstein, Jill | 1 |
| Dikli, Semire | 1 |
| Garcia, Veronica | 1 |
| Grimes, Douglas | 1 |
| Howard Hao-Jan Chen | 1 |
| Hussein Assalahi | 1 |
| Kumar, Vivekanandan S. | 1 |
| Kyle Kuo-Wei Lai | 1 |
| Publication Type | Count |
| --- | ---: |
| Journal Articles | 9 |
| Reports - Research | 5 |
| Information Analyses | 2 |
| Reports - Descriptive | 2 |
| Reports - Evaluative | 1 |
| Education Level | Count |
| --- | ---: |
| Higher Education | 6 |
| Postsecondary Education | 6 |
| Middle Schools | 2 |
| Elementary Secondary Education | 1 |
| High Schools | 1 |
| Junior High Schools | 1 |
| Secondary Education | 1 |
| Location | Count |
| --- | ---: |
| California | 1 |
| China | 1 |
| Assessments and Surveys | Count |
| --- | ---: |
| Graduate Management Admission… | 1 |
Thuy Thi-Nhu Ngo; Howard Hao-Jan Chen; Kyle Kuo-Wei Lai – Interactive Learning Environments, 2024
The present study performs a three-level meta-analysis to investigate the overall effectiveness of automated writing evaluation (AWE) on EFL/ESL student writing performance. Twenty-four primary studies representing 85 between-group effect sizes and 34 studies representing 178 within-group effect sizes, published between 1993 and 2021, were meta-analyzed separately.…
Descriptors: Writing Evaluation, Automation, Computer Software, English (Second Language)
Li Dong – Reading and Writing: An Interdisciplinary Journal, 2024
Within the context of Chinese university education, effective communication in the field of second language writing heavily relies on lexical complexity, yet the role of writing feedback perception in relation to lexical complexity remains elusive. This study introduces a comprehensive writing feedback perception model encompassing perceptions of…
Descriptors: Foreign Countries, College Students, Feedback (Response), Writing Instruction
Tahani I. Aldosemani; Hussein Assalahi; Areej Lhothali; Maram Albsisi – International Journal of Computer-Assisted Language Learning and Teaching, 2023
This paper explores the literature on AWE feedback, particularly its perceived impact on enhancing EFL student writing proficiency. Prior research highlighted the contribution of AWE in fostering learner autonomy and alleviating teacher workloads, with a substantial focus on student engagement with AWE feedback. This review strives to illuminate…
Descriptors: Automation, Student Evaluation, Writing Evaluation, English (Second Language)
Kumar, Vivekanandan S.; Boulanger, David – International Journal of Artificial Intelligence in Education, 2021
This article investigates the feasibility of using automated scoring methods to evaluate the quality of student-written essays. In 2012, Kaggle hosted an Automated Student Assessment Prize contest to find effective solutions to automated testing and grading. This article: a) analyzes the datasets from the contest -- which contained hand-graded…
Descriptors: Automation, Scoring, Essays, Writing Evaluation
Traci C. Eshelman – Turkish Online Journal of Educational Technology - TOJET, 2024
The purpose of this multiple intrinsic case study was to describe how Northeastern United States middle school teachers and students engaged with a new automated writing evaluation tool used to score and provide feedback on extended essay assignments, with the aim of improving the teaching and learning of writing. Richard Elmore's (1993) instructional core framework is…
Descriptors: High School Teachers, Learner Engagement, Automation, Writing Evaluation
Grimes, Douglas; Warschauer, Mark – Journal of Technology, Learning, and Assessment, 2010
Automated writing evaluation (AWE) software uses artificial intelligence (AI) to score student essays and support revision. We studied how an AWE program called MY Access![R] was used in eight middle schools in Southern California over a three-year period. Although many teachers and students considered automated scoring unreliable, and teachers'…
Descriptors: Automation, Writing Evaluation, Essays, Artificial Intelligence
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation