Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 3
Descriptor
Computer Assisted Testing: 3
Computer Software Evaluation: 3
Essay Tests: 3
Scoring: 3
Writing Evaluation: 3
Writing Tests: 3
Automation: 2
College Entrance Examinations: 2
Computer Software: 2
Educational Technology: 2
Essays: 2
Author
Attali, Yigal: 1
Burstein, Jill: 1
Garcia, Veronica: 1
McCurry, Doug: 1
Rudner, Lawrence M.: 1
Welch, Catherine: 1
Publication Type
Journal Articles: 3
Reports - Descriptive: 1
Reports - Evaluative: 1
Reports - Research: 1
Education Level
Higher Education: 2
Postsecondary Education: 2
Elementary Secondary Education: 1
Assessments and Surveys
Graduate Management Admission…: 1
National Assessment of…: 1
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
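The human-machine agreement claim McCurry examines is usually quantified with statistics such as exact agreement rates or quadratic weighted kappa, the measure most commonly reported for essay scoring. A minimal sketch of the latter, with made-up scores; nothing below is drawn from the article's data:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Agreement between two integer score vectors; 1.0 = perfect agreement."""
    n = len(rater_a)
    observed = Counter(zip(rater_a, rater_b))       # joint (a, b) score counts
    hist_a, hist_b = Counter(rater_a), Counter(rater_b)
    span = (max_score - min_score) ** 2
    disagree_obs = disagree_exp = 0.0
    for i in range(min_score, max_score + 1):
        for j in range(min_score, max_score + 1):
            w = (i - j) ** 2 / span                 # quadratic distance penalty
            disagree_obs += w * observed[(i, j)] / n
            disagree_exp += w * (hist_a[i] / n) * (hist_b[j] / n)
    return 1.0 - disagree_obs / disagree_exp

# Illustrative only: five essays on a 1-6 scale, human vs. machine scores.
human = [4, 3, 5, 2, 4]
machine = [4, 3, 4, 2, 5]
print(quadratic_weighted_kappa(human, machine, 1, 6))   # ~0.81
```

Kappa-style statistics discount agreement expected by chance, which is why they are preferred over raw percent agreement when comparing machine and human scoring.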
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
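One of the report's comparison points is a Bayesian scoring system. As a generic illustration of that family of models (not the report's actual system), a minimal multinomial naive Bayes scorer over bag-of-words counts with Laplace smoothing:

```python
import math
from collections import Counter, defaultdict

def train(essays, scores):
    """essays: list of token lists; scores: parallel integer labels."""
    word_counts = defaultdict(Counter)   # score -> word frequency table
    score_counts = Counter(scores)       # score -> number of essays
    vocab = set()
    for tokens, score in zip(essays, scores):
        word_counts[score].update(tokens)
        vocab.update(tokens)
    return word_counts, score_counts, vocab

def predict(tokens, word_counts, score_counts, vocab):
    """Most probable score under the model, with Laplace smoothing."""
    total = sum(score_counts.values())
    best_score, best_lp = None, float("-inf")
    for score, n_essays in score_counts.items():
        lp = math.log(n_essays / total)                  # log prior
        denom = sum(word_counts[score].values()) + len(vocab)
        for w in tokens:                                 # log likelihood
            lp += math.log((word_counts[score][w] + 1) / denom)
        if lp > best_lp:
            best_score, best_lp = score, lp
    return best_score

# Toy usage: three training essays as token lists, scored 2 or 5.
model = train([["clear", "thesis"], ["vague", "weak"], ["clear", "strong"]],
              [5, 2, 5])
print(predict(["clear", "argument"], *model))   # -> 5
```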
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
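The abstract's key point is that e-rater V.2 scores essays from a small, interpretable feature set. A toy sketch of that general design, with invented surface features and weights standing in for ETS's actual ones:

```python
def extract_features(essay):
    """Toy surface features; e-rater's real features are far richer."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sent_len": len(words) / max(len(sentences), 1),
    }

# Illustrative weights only; a real system derives weights from training data.
WEIGHTS = {"n_words": 0.01, "avg_word_len": 0.3, "avg_sent_len": 0.05}

def score(essay):
    feats = extract_features(essay)
    return sum(WEIGHTS[name] * value for name, value in feats.items())

print(round(score("Automated scoring is useful. It is also contested."), 2))
```

A small weighted-sum model of this kind is easy to inspect and explain, which is the interpretability advantage the abstract highlights over more opaque scoring systems.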