Publication Date
  In 2025: 0
  Since 2024: 2
  Since 2021 (last 5 years): 8
  Since 2016 (last 10 years): 14
  Since 2006 (last 20 years): 24
Descriptor
  Essays: 22
  English (Second Language): 18
  Second Language Learning: 18
  Writing Evaluation: 17
  Evaluators: 12
  Scores: 12
  Language Tests: 11
  Writing Tests: 10
  Scoring: 8
  Writing (Composition): 7
  Correlation: 6
Source
  Language Testing: 26
Author
  Attali, Yigal: 2
  Barkaoui, Khaled: 2
  Crossley, Scott: 2
  Gebril, Atta: 2
  Ann Tai Choe: 1
  Beigman Klebanov, Beata: 1
  Bilki, Zeynep: 1
  Bond, Trevor: 1
  Botarleanu, Robert-Mihai: 1
  Chan, Kinnie Kin Yee: 1
  Chang, Jianfang: 1
Publication Type
  Journal Articles: 26
  Reports - Research: 18
  Reports - Evaluative: 5
  Reports - Descriptive: 2
  Information Analyses: 1
  Tests/Questionnaires: 1
Education Level
  Higher Education: 9
  Secondary Education: 4
  Postsecondary Education: 3
  Elementary Education: 1
  Grade 11: 1
  Grade 6: 1
  Grade 7: 1
  Grade 9: 1
  High Schools: 1
  Junior High Schools: 1
  Middle Schools: 1
Assessments and Surveys
  Test of English as a Foreign Language: 4
  Graduate Record Examinations: 2
  Test of Written English: 1
Chan, Kinnie Kin Yee; Bond, Trevor; Yan, Zi – Language Testing, 2023
We investigated the relationship between the scores assigned by an Automated Essay Scoring (AES) system, the Intelligent Essay Assessor (IEA), and grades allocated by trained, professional human raters to English essay writing by instigating two procedures novel to written-language assessment: the logistic transformation of AES raw scores into…
Descriptors: Computer Assisted Testing, Essays, Scoring, Scores
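The logistic transformation named in the abstract above can be sketched in a few lines. The score bounds and clamping constant below are illustrative assumptions, not the values Chan et al. actually used; the snippet is truncated before those details.

```python
import math

def raw_to_logit(raw: float, min_score: float = 0.0, max_score: float = 100.0) -> float:
    """Map a bounded raw AES score onto the logit (log-odds) scale.

    The 0-100 score range and the clamping epsilon are hypothetical;
    the truncated abstract does not give the study's constants.
    """
    eps = 1e-6  # keep the proportion strictly inside (0, 1)
    p = (raw - min_score) / (max_score - min_score)
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))  # logit = inverse of the logistic function

# A raw score of 75/100 sits about 1.1 logits above the scale midpoint.
print(raw_to_logit(75.0))  # ~1.0986
```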
Monteiro, Kátia; Crossley, Scott; Botarleanu, Robert-Mihai; Dascalu, Mihai – Language Testing, 2023
Lexical frequency benchmarks have been extensively used to investigate second language (L2) lexical sophistication, especially in language assessment studies. However, indices based on semantic co-occurrence, which may be a better representation of the experience language users have with lexical items, have not been sufficiently tested as…
Descriptors: Second Language Learning, Second Languages, Native Language, Semantics
Takanori Sato – Language Testing, 2024
Assessing the content of learners' compositions is a common practice in second language (L2) writing assessment. However, the construct definition of content in L2 writing assessment potentially underrepresents the target competence in content and language integrated learning (CLIL), which aims to foster not only L2 proficiency but also critical…
Descriptors: Language Tests, Content and Language Integrated Learning, Writing Evaluation, Writing Tests
Yu-Tzu Chang; Ann Tai Choe; Daniel Holden; Daniel R. Isbell – Language Testing, 2024
In this Brief Report, we describe an evaluation of and revisions to a rubric adapted from Jacobs et al.'s (1981) ESL COMPOSITION PROFILE, with four rubric categories and 20-point rating scales, in the context of an intensive English program writing placement test. Analysis of 4 years of rating data (2016-2021, including 434 essays) using…
Descriptors: Language Tests, Rating Scales, Second Language Learning, English (Second Language)
Shin, Jinnie; Gierl, Mark J. – Language Testing, 2021
Automated essay scoring (AES) has emerged as a secondary or as a sole marker for many high-stakes educational assessments, in native and non-native testing, owing to remarkable advances in feature engineering using natural language processing, machine learning, and deep-neural algorithms. The purpose of this study is to compare the effectiveness…
Descriptors: Scoring, Essays, Writing Evaluation, Computer Software
Yan, Xun; Chuang, Ping-Lin – Language Testing, 2023
This study employed a mixed-methods approach to examine how rater performance develops during a semester-long rater certification program for an English as a Second Language (ESL) writing placement test at a large US university. From 2016 to 2018, we tracked three groups of novice raters (n = 30) across four rounds in the certification program.…
Descriptors: Evaluators, Interrater Reliability, Item Response Theory, Certification
Latifi, Syed; Gierl, Mark – Language Testing, 2021
An automated essay scoring (AES) program is a software system that uses techniques from corpus and computational linguistics and machine learning to grade essays. In this study, we aimed to describe and evaluate particular language features of Coh-Metrix for a novel AES program that would score junior and senior high school students' essays from…
Descriptors: Writing Evaluation, Computer Assisted Testing, Scoring, Essays
Lukácsi, Zoltán – Language Testing, 2021
In second language writing assessment, rating scales and scores from human-mediated assessment have been criticized for a number of shortcomings including problems with adequacy, relevance, and reliability (Hamp-Lyons, 1990; McNamara, 1996; Weigle, 2002). In its testing practice, Euroexam International also detected that the rating scales for…
Descriptors: Test Construction, Test Validity, Test Items, Check Lists
Plakans, Lia; Gebril, Atta; Bilki, Zeynep – Language Testing, 2019
The present study investigates integrated writing assessment performances with regard to the linguistic features of complexity, accuracy, and fluency (CAF). Given the increasing presence of integrated tasks in large-scale and classroom assessments, validity evidence is needed for the claim that their scores reflect targeted language abilities.…
Descriptors: Accuracy, Language Tests, Scores, Writing Evaluation
Sahan, Özgür; Razi, Salim – Language Testing, 2020
This study examines the decision-making behaviors of raters with varying levels of experience while assessing EFL essays of distinct qualities. The data were collected from 28 raters with varying levels of rating experience and working at the English language departments of different universities in Turkey. Using a 10-point analytic rubric, each…
Descriptors: Decision Making, Essays, Writing Evaluation, Evaluators
Beigman Klebanov, Beata; Ramineni, Chaitanya; Kaufer, David; Yeoh, Paul; Ishizaki, Suguru – Language Testing, 2019
Essay writing is a common type of constructed-response task used frequently in standardized writing assessments. However, the impromptu timed nature of the essay writing tests has drawn increasing criticism for the lack of authenticity for real-world writing in classroom and workplace settings. The goal of this paper is to contribute evidence to a…
Descriptors: Test Validity, Writing Tests, Writing Skills, Persuasive Discourse
Attali, Yigal – Language Testing, 2016
A short training program for evaluating responses to an essay writing task consisted of scoring 20 training essays with immediate feedback about the correct score. The same scoring session also served as a certification test for trainees. Participants with little or no previous rating experience completed this session and 14 trainees who passed an…
Descriptors: Writing Evaluation, Writing Tests, Standardized Tests, Evaluators
Kyle, Kristopher; Crossley, Scott – Language Testing, 2017
Over the past 45 years, the construct of syntactic sophistication has been assessed in L2 writing using what Bulté and Housen (2012) refer to as absolute complexity (Lu, 2011; Ortega, 2003; Wolfe-Quintero, Inagaki, & Kim, 1998). However, it has been argued that making inferences about learners based on absolute complexity indices (e.g., mean…
Descriptors: Syntax, Verbs, Second Language Learning, Word Frequency
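For readers unfamiliar with the absolute complexity indices Kyle and Crossley contrast with usage-based alternatives, the sketch below computes one of the simplest, mean sentence length in words. The punctuation-based sentence splitter is a deliberately naive, illustrative assumption, not a tool used in the study.

```python
import re

def mean_sentence_length(text: str) -> float:
    """Mean length of sentence in words, a simple absolute
    syntactic complexity index. Sentence splitting here is
    naive (punctuation-based) and purely illustrative."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    return sum(len(s.split()) for s in sentences) / len(sentences)

# (2 + 7) / 2 = 4.5 words per sentence
print(mean_sentence_length("I ran. The dog that barked ran away too."))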
Attali, Yigal; Lewis, Will; Steier, Michael – Language Testing, 2013
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This…
Descriptors: Scoring, Essay Tests, Reliability, High Stakes Tests
Jarvis, Scott – Language Testing, 2017
The present study discusses the relevance of measures of lexical diversity (LD) to the assessment of learner corpora. It also argues that existing measures of LD, many of which have become specialized for use with language corpora, are fundamentally measures of lexical repetition, are based on an etic perspective of language, and lack construct…
Descriptors: Computational Linguistics, English (Second Language), Second Language Learning, Native Speakers
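As a concrete instance of the measures Jarvis critiques as fundamentally measures of lexical repetition, the sketch below computes the classic type-token ratio. The whitespace tokenizer is an illustrative simplification, not the operationalization used in the paper.

```python
def type_token_ratio(text: str) -> float:
    """Type-token ratio (TTR): unique word forms divided by total
    word tokens. Heavy repetition pushes the value toward 0, which
    is the property Jarvis's critique turns on."""
    tokens = text.lower().split()  # naive whitespace tokenization
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# 4 types / 6 tokens = 0.67: repeating "the" and "cat" lowers the ratio.
print(type_token_ratio("the cat saw the other cat"))
```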