Publication Date
In 2025 (1)
Since 2024 (1)
Since 2021, last 5 years (4)
Since 2016, last 10 years (5)
Since 2006, last 20 years (8)
Descriptor
Scoring (9)
Essays (8)
Evaluators (6)
Writing Evaluation (6)
English (Second Language) (4)
Scores (4)
Second Language Learning (4)
Computer Assisted Testing (3)
Computer Software (3)
Artificial Intelligence (2)
Comparative Analysis (2)
Source
Language Testing (9)
Author
Attali, Yigal (1)
Barkaoui, Khaled (1)
Bond, Trevor (1)
Chan, Kinnie Kin Yee (1)
Enright, Mary K. (1)
Gierl, Mark (1)
Gierl, Mark J. (1)
Latifi, Syed (1)
Lewis, Will (1)
Quinlan, Thomas (1)
Razi, Salim (1)
Publication Type
Journal Articles (9)
Reports - Research (6)
Reports - Evaluative (2)
Reports - Descriptive (1)
Education Level
Secondary Education (3)
Higher Education (2)
Postsecondary Education (2)
Elementary Education (1)
Grade 6 (1)
High Schools (1)
Junior High Schools (1)
Middle Schools (1)
Location
Turkey (1)
Assessments and Surveys
Graduate Record Examinations (1)
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
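The logistic transformation named in this abstract is, at bottom, a mapping of bounded raw scores onto an unbounded logit scale so they can sit alongside Rasch-style rater measures. A minimal sketch of that transformation follows; the 0-6 score range and function name are illustrative assumptions, not details taken from the study.

```python
import numpy as np

def raw_to_logit(raw, min_score, max_score, eps=1e-6):
    """Map a bounded raw score onto the logit (log-odds) scale.

    The score is first rescaled to a proportion p in (0, 1),
    then transformed with ln(p / (1 - p)). eps keeps endpoint
    scores away from 0 and 1, where the logit is undefined.
    """
    p = (np.asarray(raw, dtype=float) - min_score) / (max_score - min_score)
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

# Example: raw AES scores on a hypothetical 0-6 scale.
print(raw_to_logit([1, 3, 5], min_score=0, max_score=6))
```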
Shin, Jinnie; Gierl, Mark J. – Language Testing, 2021
Automated essay scoring (AES) has emerged as a secondary or as a sole marker for many high-stakes educational assessments, in native and non-native testing, owing to remarkable advances in feature engineering using natural language processing, machine learning, and deep-neural algorithms. The purpose of this study is to compare the effectiveness…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
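For a sense of the feature-engineering end of the spectrum this study compares, here is a minimal scikit-learn sketch: shallow n-gram features feed a linear model, and agreement with human marks is checked with quadratic weighted kappa, the usual AES agreement statistic. The toy data, features, and model are illustrative assumptions, not the systems compared in the study.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import cohen_kappa_score

# Toy essays with human holistic scores (illustrative only).
essays = ["The experiment was designed carefully and reported clearly.",
          "i think it good because it is good",
          "A well structured argument, supported with concrete evidence.",
          "words words words"]
human = np.array([5, 2, 6, 1])

# Shallow feature engineering: TF-IDF-weighted word n-grams.
X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(essays)
model = Ridge(alpha=1.0).fit(X, human)

# Round predictions back onto the 1-6 scale, then report agreement
# with the human marks as quadratic weighted kappa.
pred = np.clip(np.rint(model.predict(X)), 1, 6).astype(int)
print(cohen_kappa_score(human, pred, weights="quadratic"))
```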
Taichi Yamashita – Language Testing, 2025
With the rapid development of generative artificial intelligence (AI) frameworks (e.g., the generative pre-trained transformer [GPT]), a growing number of researchers have started to explore its potential as an automated essay scoring (AES) system. While previous studies have investigated the alignment between human ratings and GPT ratings, few…
Descriptors: Artificial Intelligence, English (Second Language), Second Language Learning, Second Language Instruction
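GPT-based AES of the kind this abstract describes is typically wired up as a prompted chat completion. A hedged sketch using the openai Python client follows; the model name, rubric wording, and 1-6 scale are assumptions for illustration, not the study's actual setup.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def gpt_score(essay: str, model: str = "gpt-4o") -> str:
    """Ask a GPT model for a holistic essay score on a 1-6 scale."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are an essay rater. Score the essay "
                        "holistically from 1 to 6. Reply with the "
                        "number only."},
            {"role": "user", "content": essay},
        ],
    )
    return response.choices[0].message.content

# Alignment with human ratings is then assessed by correlating
# gpt_score outputs with human marks over a set of essays.
```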
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
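Coh-Metrix itself is a standalone analysis tool, but the flavor of the descriptive language features it reports can be approximated with a few hand-rolled indices. The proxies below are loose illustrations under that assumption, not Coh-Metrix's actual computations.

```python
import re

def simple_indices(essay: str) -> dict:
    """Crude proxies for descriptive Coh-Metrix-style indices."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[A-Za-z']+", essay.lower())
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "mean_word_length": sum(map(len, words)) / max(len(words), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

print(simple_indices("The cat sat. The cat sat again, quietly."))
```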
Sahan, Özgür; Razi, Salim – Language Testing, 2020
This study examines the decision-making behaviors of raters with varying levels of experience while assessing EFL essays of distinct qualities. The data were collected from 28 raters with varying levels of rating experience and working at the English language departments of different universities in Turkey. Using a 10-point analytic rubric, each…
Descriptors: Decision Making, Essays, Writing Evaluation, Evaluators
Attali, Yigal; Lewis, Will; Steier, Michael – Language Testing, 2013
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This…
Descriptors: Scoring, Essay Tests, Reliability, High Stakes Tests
Barkaoui, Khaled – Language Testing, 2010
This study adopted a multilevel modeling (MLM) approach to examine the contribution of rater and essay factors to variability in ESL essay holistic scores. Previous research aiming to explain variability in essay holistic scores has focused on either rater or essay factors. The few studies that have examined the contribution of more than one…
Descriptors: Performance Based Assessment, English (Second Language), Second Language Learning, Holistic Approach
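A multilevel model of the kind this abstract describes nests essay scores within raters, so rater-level and essay-level sources of score variability can be separated. A minimal sketch with statsmodels follows; the toy data, variable names, and single essay-level predictor are assumptions, and the study's actual model is considerably richer.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: each row is one rater's holistic score
# for one essay (illustrative values only).
df = pd.DataFrame({
    "score":     [4, 5, 3, 4, 2, 3, 5, 6, 3, 4, 4, 5],
    "essay_len": [250, 410, 180, 300, 120, 200,
                  450, 520, 210, 330, 290, 400],
    "rater":     ["r1"] * 4 + ["r2"] * 4 + ["r3"] * 4,
})

# A random intercept per rater captures rater-level variability;
# essay_len stands in for essay-level predictors.
model = smf.mixedlm("score ~ essay_len", data=df, groups=df["rater"])
result = model.fit()
print(result.summary())
```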
Enright, Mary K.; Quinlan, Thomas – Language Testing, 2010
E-rater® is an automated essay scoring system that uses natural language processing techniques to extract features from essays and to model statistically human holistic ratings. Educational Testing Service has investigated the use of e-rater, in conjunction with human ratings, to score one of the two writing tasks on the TOEFL-iBT® writing…
Descriptors: Second Language Learning, Scoring, Essays, Language Processing
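Operationally, using an AES engine "in conjunction with human ratings" often means averaging the two scores and routing large discrepancies to an additional human rater. The sketch below illustrates that general pattern only; the threshold and equal weighting are assumptions, not ETS's published procedure.

```python
def combine(human: float, machine: float, threshold: float = 1.5):
    """Average a human and a machine score, or flag the essay for
    adjudication when the two disagree by more than the threshold."""
    if abs(human - machine) > threshold:
        return None, "adjudicate"   # route to an additional rater
    return (human + machine) / 2, "ok"

print(combine(4.0, 4.5))   # -> (4.25, 'ok')
print(combine(2.0, 5.0))   # -> (None, 'adjudicate')
```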
Schoonen, Rob – Language Testing, 2005
The assessment of writing ability is notoriously difficult. Different facets of the assessment seem to influence its outcome. Besides the writer's writing proficiency, the topic of the assignment, the features or traits scored (e.g., content or language use) and even the way in which these traits are scored (e.g., holistically or analytically)…
Descriptors: Grade 6, Scoring, Essays, Writing Ability