Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 0
Since 2016 (last 10 years) | 0
Since 2006 (last 20 years) | 10
Descriptor
Computer Assisted Testing | 12
Writing Evaluation | 12
Essay Tests | 8
Essays | 7
Scoring | 7
Writing Tests | 5
Comparative Analysis | 4
Educational Technology | 4
Measurement | 4
Student Placement | 4
Validity | 4
Source
Assessing Writing | 12
Author
Ramineni, Chaitanya | 2
Burke, Jennifer N. | 1
Chen, Jing | 1
Chun, Young | 1
Cizek, Gregory J. | 1
Condon, William | 1
Deane, Paul | 1
Deess, Perry | 1
Elliot, Norbert | 1
Joshi, Kamal | 1
Klobucar, Andrew | 1
Publication Type
Journal Articles | 12
Reports - Evaluative | 7
Reports - Research | 4
Reports - Descriptive | 1
Education Level
Higher Education | 6
Elementary Secondary Education | 5
Postsecondary Education | 5
Adult Education | 2
Elementary Education | 1
Grade 6 | 1
Audience
Practitioners | 1
Location
Canada (Toronto) | 1
Assessments and Surveys
National Assessment of… | 1
Test of English as a Foreign… | 1
What Works Clearinghouse Rating
Does not meet standards | 1
Ramineni, Chaitanya; Williamson, David M. – Assessing Writing, 2013
In this paper, we provide an overview of psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for…
Descriptors: Educational Testing, Guidelines, Scoring, Psychometrics
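Agreement evidence of the kind ETS describes is typically summarized with statistics such as quadratic weighted kappa and exact- and adjacent-agreement rates between machine and human scores. A minimal sketch of such a check in Python; the scores and the 0.7 kappa floor are illustrative assumptions, not ETS's published data or criteria:

```python
# Sketch: agreement statistics commonly reported when evaluating automated
# essay scoring against human ratings. Scores and the 0.7 kappa floor are
# illustrative assumptions, not ETS criteria.
import numpy as np
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 4, 2, 5, 3, 4, 1, 3, 4])    # human ratings on a 1-6 scale
machine = np.array([3, 4, 3, 2, 5, 3, 4, 2, 3, 5])  # machine scores for the same essays

qwk = cohen_kappa_score(human, machine, weights="quadratic")
exact = np.mean(human == machine)                 # exact agreement rate
adjacent = np.mean(np.abs(human - machine) <= 1)  # agreement within one point

print(f"quadratic weighted kappa: {qwk:.3f}")
print(f"exact agreement: {exact:.1%}, adjacent agreement: {adjacent:.1%}")

MIN_QWK = 0.7  # hypothetical operational floor
print("meets illustrative threshold" if qwk >= MIN_QWK else "flag for further review")
```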
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"® Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the "Criterion"® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Chen, Jing; White, Sheida; McCloskey, Michael; Soroui, Jaleh; Chun, Young – Assessing Writing, 2011
This study investigated the comparability of paper and computer versions of a functional writing assessment administered to adults 16 and older. Three writing tasks were administered in both paper and computer modes to volunteers in the field test of an assessment of adult literacy in 2008. One set of analyses examined mode effects on scoring by…
Descriptors: Writing Evaluation, Writing Tests, Computer Assisted Testing, Educational Technology
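A mode-effect analysis like this one often reduces to a paired comparison of scores the same examinees earned under both conditions. A minimal sketch assuming a paired t-test on fabricated scores (the study's actual analyses are not reproduced here):

```python
# Sketch: paired comparison of scores the same examinees earned on paper
# and on computer. All scores are fabricated for illustration.
import numpy as np
from scipy import stats

paper = np.array([3.5, 4.0, 2.5, 3.0, 4.5, 3.5, 2.0, 4.0])
computer = np.array([3.0, 4.0, 2.5, 3.5, 4.0, 3.5, 2.5, 4.0])

t_stat, p_value = stats.ttest_rel(paper, computer)  # paired t-test across modes
print(f"mean difference (paper - computer): {np.mean(paper - computer):+.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```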
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. Such claims about the reliability of machine scoring are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
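The claim McCurry scrutinizes can be made concrete by computing human-human and human-machine agreement on the same responses. A small sketch, with all scores invented for illustration:

```python
# Sketch: contrasting human-human agreement with human-machine agreement
# on the same essays. All scores are invented for illustration.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rater_a = np.array([4, 3, 5, 2, 4, 3, 5, 4])  # first human rater
rater_b = np.array([4, 3, 4, 2, 5, 3, 5, 4])  # second human rater
machine = np.array([4, 4, 4, 2, 4, 3, 5, 5])  # machine scores

hh = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
hm = cohen_kappa_score(rater_a, machine, weights="quadratic")
print(f"human-human QWK:   {hh:.3f}")
print(f"human-machine QWK: {hm:.3f}")
```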
Lewiecki-Wilson, Cynthia; Sommers, Jeff; Tassoni, John Paul – Assessing Writing, 2000
Describes reasons for resisting computer editing tests and suggests possible problems with using only directed student self-placement in open access institutions. Presents a sample student profile to illustrate the interaction and negotiation among writing teachers as they read profiles and reach an agreement about their placement recommendation…
Descriptors: Case Studies, Computer Assisted Testing, Higher Education, Rhetoric
Li, Jiang – Assessing Writing, 2006
The present study investigated the influence of word processing on the writing of English as a second language (ESL) students and on its assessment. Twenty-one adult Mandarin-Chinese speakers with advanced English proficiency living in Toronto participated in the study. Each participant wrote two comparable writing tasks under…
Descriptors: Writing Evaluation, Protocol Analysis, Writing Tests, Evaluation Methods
Burke, Jennifer N.; Cizek, Gregory J. – Assessing Writing, 2006
This study was conducted to gather evidence regarding effects of the mode of writing (handwritten vs. word-processed) on compositional quality in a sample of sixth grade students. Questionnaire data and essay scores were gathered to examine the effect of composition mode on essay scores of students of differing computer skill levels. The study was…
Descriptors: Computer Assisted Testing, High Stakes Tests, Writing Processes, Grade 6
Lee, H. K. – Assessing Writing, 2004
This study aimed to comprehensively investigate the impact of a word processor on an ESL writing assessment, covering comparisons of inter-rater reliability, the quality of written products, the writing process across testing occasions using different writing media, and students' perceptions of a computer-delivered test. Writing samples of…
Descriptors: Writing Evaluation, Student Attitudes, Writing Tests, Testing