Publication Date
| Date range | Count |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 0 |
| Since 2022 (last 5 years) | 0 |
| Since 2017 (last 10 years) | 0 |
| Since 2007 (last 20 years) | 5 |
Descriptor
| Descriptor | Count |
| --- | --- |
| Computer Software Evaluation | 8 |
| Essay Tests | 8 |
| Computer Assisted Testing | 7 |
| Essays | 5 |
| Scoring | 5 |
| Writing Evaluation | 5 |
| Automation | 3 |
| College Entrance Examinations | 3 |
| Writing Tests | 3 |
| College Freshmen | 2 |
| Comparative Analysis | 2 |
Source
| Source | Count |
| --- | --- |
| Assessing Writing | 3 |
| Journal of Technology, Learning, and Assessment | 2 |
| Assessment & Evaluation in Higher Education | 1 |
| English Language Teaching | 1 |
| Online Submission | 1 |
Publication Type
| Publication Type | Count |
| --- | --- |
| Journal Articles | 7 |
| Reports - Research | 6 |
| Reports - Descriptive | 1 |
| Reports - Evaluative | 1 |
| Speeches/Meeting Papers | 1 |
Education Level
| Education Level | Count |
| --- | --- |
| Higher Education | 4 |
| Postsecondary Education | 4 |
| Elementary Secondary Education | 2 |
| Elementary Education | 1 |
| Grade 5 | 1 |
| High Schools | 1 |
| Intermediate Grades | 1 |
| Middle Schools | 1 |
| Secondary Education | 1 |
Assessments and Surveys
| Assessment | Count |
| --- | --- |
| Graduate Management Admission Test (GMAT) | 1 |
| National Assessment of… | 1 |
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the Criterion® Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Prvinchandar, Sunita; Ayub, Ahmad Fauzi Mohd – English Language Teaching, 2014
This study compared the effectiveness of two types of computer software for improving the English writing skills of pupils in a Malaysian primary school. Sixty students who participated in the seven-week training course were divided into two groups, with the experimental group using the StyleWriter software and the control group using the…
Descriptors: Writing Skills, Courseware, Writing Improvement, Elementary School Students
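The Prvinchandar and Ayub study above is a two-group comparison, so its core analysis reduces to testing whether the experimental (StyleWriter) group outscored the control group. A minimal sketch of such a test, assuming simple post-test scores and using invented values rather than the study's data:

```python
from scipy import stats

# Hypothetical post-test writing scores for the two groups described above
# (illustrative values only, not data from the study).
stylewriter_group = [72, 68, 75, 80, 71, 69, 77, 74, 70, 73]
control_group     = [65, 70, 66, 72, 64, 68, 71, 63, 67, 69]

# Independent-samples t-test: do the group means differ?
t_stat, p_value = stats.ttest_ind(stylewriter_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

In practice a study like this would more likely compare pre-to-post gains or use ANCOVA with the pre-test as a covariate; the t-test is the simplest form of the comparison.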
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
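McCurry's question turns on how human–human agreement is measured against machine–human agreement. A minimal sketch of the two statistics usually reported for essay scores, exact agreement and quadratically weighted kappa, computed on invented scores rather than data from the article:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-6 scale scores for ten essays (illustrative values only).
human_a = np.array([4, 3, 5, 2, 4, 6, 3, 4, 5, 2])
human_b = np.array([4, 4, 5, 2, 3, 6, 3, 4, 4, 2])
machine = np.array([4, 3, 5, 3, 4, 5, 3, 4, 5, 2])

def agreement(x, y):
    exact = np.mean(x == y)  # proportion of identical scores
    qwk = cohen_kappa_score(x, y, weights="quadratic")  # chance-corrected
    return exact, qwk

print("human-human  : exact=%.2f, QWK=%.2f" % agreement(human_a, human_b))
print("human-machine: exact=%.2f, QWK=%.2f" % agreement(human_a, machine))
```

McCurry's point is that such comparisons are typically made on narrow, constrained prompts, so parity on these statistics there does not establish parity on broader writing tasks.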
Garcia Laborda, Jesus; Magal Royo, Teresa; Enriquez Carrasco, Emilia – Online Submission, 2010
This paper presents the results of writing processing among 260 high school senior students, their degree of satisfaction with the new trial version of the Computer Based University Entrance Examination in Spain, and their degree of motivation towards written online test tasks. Currently, this is one of the closing studies to verify whether…
Descriptors: Foreign Countries, Curriculum Development, High Stakes Tests, Student Motivation
Shermis, Mark D.; Mzumara, Howard R.; Olson, Jennifer; Harrington, Susanmarie – Assessment & Evaluation in Higher Education, 2001 (peer reviewed)
Examined Project Essay Grade (PEG) software for evaluating Web-based student essays that serve as placement tests. In the first experiment, a sample of student essays was used to create a statistical model for the PEG software; the second experiment compared computer and human ratings of essays. Found that the software is an efficient means for…
Descriptors: Computer Software Evaluation, Essay Tests, Grading, World Wide Web
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric℠ automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test™ (GMAT™). The IntelliMetric system's performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
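Rudner, Garcia, and Welch compare IntelliMetric against human raters and a Bayesian system. The abstract does not specify that system's design, but as a hedged illustration of the general idea, a multinomial naive Bayes classifier over bag-of-words features can assign essays to score bands. The corpus, labels, and test sentence below are all invented:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus: essays labeled with a holistic score band.
essays = [
    "The argument is well organized and supported with clear evidence.",
    "Good points but the structure is loose and transitions are weak.",
    "Ideas are unclear and the essay lacks development.",
    "Strong thesis, coherent paragraphs, and precise word choice.",
]
scores = ["high", "mid", "low", "high"]

# Bag-of-words features fed to a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(essays, scores)

print(model.predict(["The essay is coherent and the evidence is clear."]))
```

A real system would train on thousands of human-scored essays and use far richer features; this only shows the shape of a Bayesian text-scoring approach.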
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that differs from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
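Attali and Burstein describe e-rater V.2 as built on a small, interpretable feature set. Without claiming to reproduce e-rater's actual features or weights, a sketch of the underlying technique, a linear model fit to human holistic scores over a handful of hand-picked features, might look like this (feature names and all numbers are invented):

```python
import numpy as np

# Hypothetical feature matrix for five essays: each row is
# (grammar errors per 100 words, avg word length, essay length in words,
#  development score), standing in for a small, interpretable feature set.
X = np.array([
    [1.2, 4.8, 420, 3.5],
    [3.5, 4.1, 250, 2.0],
    [0.8, 5.0, 510, 4.0],
    [2.7, 4.3, 300, 2.5],
    [1.9, 4.6, 380, 3.0],
])
y = np.array([5.0, 2.5, 5.5, 3.0, 4.0])  # human holistic scores

# Fit feature weights by least squares, with an intercept column appended.
X1 = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Score a new essay from its features (last 1.0 is the intercept term).
new_essay = np.array([1.0, 4.9, 450, 3.8, 1.0])
print("predicted score: %.2f" % (new_essay @ w))
```

The appeal of this design, as the abstract suggests, is that each weight is attached to a nameable writing trait, so the model's behavior can be inspected and explained rather than treated as a black box.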

