Showing all 10 results
Peer reviewed
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Peer reviewed
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"[R] Online Writing Evaluation Service was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with one another. Such claims about the reliability of machine scoring are usually based on specific and constrained writing tasks, and there is reason to ask whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
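McCurry's question turns on how human-machine agreement is actually measured. As a point of reference (not drawn from the article itself), the two statistics most often reported in this literature are the exact-agreement rate and quadratically weighted kappa; a minimal Python sketch of both, with invented sample scores:

```python
# Minimal sketch of two common human-machine agreement statistics.
# The score data below are invented for illustration.

def exact_agreement(a, b):
    """Fraction of essays where the two raters give identical scores."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def quadratic_weighted_kappa(a, b, min_score, max_score):
    """Chance-corrected agreement that penalizes large disagreements more."""
    n_cats = max_score - min_score + 1
    observed = [[0.0] * n_cats for _ in range(n_cats)]
    for x, y in zip(a, b):
        observed[x - min_score][y - min_score] += 1
    n = len(a)
    hist_a = [sum(row) for row in observed]
    hist_b = [sum(observed[i][j] for i in range(n_cats)) for j in range(n_cats)]
    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            w = ((i - j) ** 2) / ((n_cats - 1) ** 2)   # quadratic weight
            expected = hist_a[i] * hist_b[j] / n        # chance agreement
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den

human   = [4, 3, 5, 2, 4, 4, 3, 5]
machine = [4, 3, 4, 2, 5, 4, 3, 5]
print(exact_agreement(human, machine))                   # -> 0.75
print(quadratic_weighted_kappa(human, machine, 1, 6))
```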
Peer reviewed
Aiken, Lewis R. – Educational and Psychological Measurement, 1996
This article describes a set of 11 menu-driven procedures, written in BASICA for MS-DOS-based microcomputers, for constructing several types of rating scales, attitude scales, and checklists, and for scoring responses to the constructed instruments. The uses of the program are described in detail. (SLD)
Descriptors: Attitude Measures, Check Lists, Computer Assisted Testing, Computer Software
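As context for the kind of scoring such procedures automate, here is a minimal sketch (hypothetical, not Aiken's BASICA code) of summed Likert scoring for an attitude scale, with reverse-keyed items flipped before summation:

```python
# Hypothetical summed-scoring sketch for a Likert attitude scale.

SCALE_MAX = 5           # 5-point scale, assumed for illustration
REVERSE_KEYED = {1, 3}  # item indices scored in the opposite direction

def score_attitude_scale(responses):
    """Total score for one respondent; responses are 1..SCALE_MAX."""
    total = 0
    for item, value in enumerate(responses):
        if item in REVERSE_KEYED:
            value = SCALE_MAX + 1 - value  # flip a reverse-keyed item
        total += value
    return total

print(score_attitude_scale([5, 1, 4, 2, 3]))  # -> 5+5+4+4+3 = 21
```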
Peer reviewed
Harasym, Peter H.; And Others – Journal of Educational Computing Research, 1993
Discussion of the use of human markers to mark responses on write-in questions focuses on a study that determined the feasibility of using a computer program to mark write-in responses for the Medical Council of Canada Qualifying Examination. The computer performance was compared with that of physician markers. (seven references) (LRW)
Descriptors: Comparative Analysis, Computer Assisted Testing, Computer Software Development, Computer Software Evaluation
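The abstract does not describe the marking algorithm itself. As a rough illustration of the simplest form of automated write-in marking, a hedged sketch that normalizes a response and matches it against accepted key variants (the answer key below is invented, not from the study):

```python
# Hypothetical sketch of write-in marking by normalized key matching.
import re

ANSWER_KEY = {
    "q1": {"myocardial infarction", "heart attack", "mi"},  # illustrative
}

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def mark(question_id, response):
    """Return 1 if the normalized response matches an accepted answer."""
    return int(normalize(response) in ANSWER_KEY[question_id])

print(mark("q1", "Heart attack."))             # -> 1
print(mark("q1", "  myocardial  infarction"))  # -> 1
print(mark("q1", "angina"))                    # -> 0
```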
Peer reviewed
PDF on ERIC
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
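As background for the Bayesian comparison system the report mentions, a minimal sketch of a multinomial naive Bayes scorer over word counts, with score points treated as categories; the training data below are toy placeholders, not GMAT essays:

```python
# Minimal naive Bayes essay-scoring sketch (toy data, not GMAT essays).
import math
from collections import Counter

def train(essays_by_score):
    """essays_by_score: {score: [token lists]} -> fitted model."""
    priors, word_counts, totals = {}, {}, {}
    n_essays = sum(len(v) for v in essays_by_score.values())
    vocab = {w for essays in essays_by_score.values()
               for e in essays for w in e}
    for score, essays in essays_by_score.items():
        priors[score] = len(essays) / n_essays
        word_counts[score] = Counter(w for e in essays for w in e)
        totals[score] = sum(word_counts[score].values())
    return priors, word_counts, totals, vocab

def predict(model, tokens):
    priors, word_counts, totals, vocab = model
    best_score, best_lp = None, -math.inf
    for score in priors:
        lp = math.log(priors[score])
        for w in tokens:
            # Laplace smoothing so unseen words don't zero the product
            lp += math.log((word_counts[score][w] + 1) /
                           (totals[score] + len(vocab)))
        if lp > best_lp:
            best_score, best_lp = score, lp
    return best_score

model = train({
    2: [["bad", "short", "essay"], ["short", "vague"]],
    5: [["clear", "thesis", "evidence"], ["strong", "clear", "argument"]],
})
print(predict(model, ["clear", "evidence"]))  # -> 5
```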
Peer reviewed
PDF on ERIC
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
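To illustrate the general approach the paper describes, combining a small, interpretable feature set into a single score, a hedged sketch follows; the feature names, weights, and scale mapping are illustrative assumptions, not e-rater's actual feature set or coefficients:

```python
# Hypothetical feature-based essay scoring sketch (not e-rater's model).

# Standardized feature values for one essay (invented)
features = {
    "development":   0.8,   # length / elaboration
    "organization":  0.5,   # discourse-unit structure
    "grammar":      -0.3,   # error burden (negative = more errors)
    "word_choice":   0.2,   # vocabulary sophistication
}

weights = {
    "development":  0.40,
    "organization": 0.30,
    "grammar":      0.20,
    "word_choice":  0.10,
}

def score(features, weights, scale_min=1, scale_max=6):
    """Weighted sum of standardized features, mapped onto the score scale."""
    raw = sum(weights[name] * value for name, value in features.items())
    # Map a roughly [-1, 1] raw value onto the reporting scale (assumed)
    mapped = scale_min + (raw + 1) / 2 * (scale_max - scale_min)
    return round(min(max(mapped, scale_min), scale_max), 1)

print(score(features, weights))  # -> 4.6 on the assumed 1-6 scale
```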
Hansen, Kim – 1992
One hundred and sixteen test administrators in Job Service offices throughout the United States who are currently using the automated typing test software were contacted by telephone about the software. Sixty-nine percent have used the software less than 1 year, and 21 percent have used it more than 1 year. In 78 percent of the offices, there is…
Descriptors: Computer Assisted Testing, Computer Software, Computer Software Evaluation, Employment Qualifications
Solano-Flores, Guillermo; Raymond, Bruce; Schneider, Steven A. – 1997
The need for effective ways of monitoring the quality of portfolio scoring resulted in the development of a software package that provides scoring leaders with updated information on their assessors' scoring quality. Assessors with computers enter data as they score, and this information is analyzed and reported to scoring leaders. The…
Descriptors: Art Teachers, Computer Assisted Testing, Computer Software, Computer Software Evaluation
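As an illustration of the kind of report such a package might generate for scoring leaders, a minimal sketch comparing one assessor's scores against leader check-scores; the statistics chosen here are assumptions, not the package's actual output:

```python
# Hypothetical assessor-quality summary against leader check-scores.

def assessor_report(assessor_scores, check_scores):
    """Summary statistics for one assessor vs. leader re-scores."""
    diffs = [a - c for a, c in zip(assessor_scores, check_scores)]
    n = len(diffs)
    return {
        "n_checked": n,
        "mean_deviation": sum(diffs) / n,              # >0 = too lenient
        "exact_agreement": sum(d == 0 for d in diffs) / n,
        "max_discrepancy": max(abs(d) for d in diffs),
    }

# Toy data: one assessor's scores vs. the leader's check-scores
print(assessor_report([4, 3, 5, 4, 2], [4, 3, 4, 3, 2]))
# -> {'n_checked': 5, 'mean_deviation': 0.4, 'exact_agreement': 0.6, ...}
```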
de-la-Torre, Roberto; Vispoel, Walter P. – 1991
The development and preliminary evaluation of the Computerized Adaptive Testing System (CATSYS), a new testing package for IBM-compatible microcomputers, are described. CATSYS can be used to administer and score operational adaptive tests or to conduct on-line computer simulation studies. The package incorporates several innovative features,…
Descriptors: Adaptive Testing, Computer Assisted Testing, Computer Simulation, Computer Software Development
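As background on what an adaptive testing engine like CATSYS does at its core, a minimal sketch (not CATSYS itself) of item selection and ability estimation under the Rasch (1PL) model: pick the unused item whose difficulty is closest to the current ability estimate (the most informative item under 1PL), then re-estimate ability after each response:

```python
# Minimal adaptive-testing loop under the Rasch model (illustrative).
import math

GRID = [t / 10 for t in range(-40, 41)]  # ability grid from -4 to 4

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def likelihood(theta, responses, items):
    L = 1.0
    for item, u in responses:
        p = p_correct(theta, items[item])
        L *= p if u else (1 - p)
    return L

def eap_theta(responses, items):
    """Expected-a-posteriori ability with a standard-normal prior."""
    posts = [likelihood(t, responses, items) * math.exp(-t * t / 2)
             for t in GRID]
    total = sum(posts)
    return sum(t * w for t, w in zip(GRID, posts)) / total

def next_item(theta, items, used):
    """Most informative unused item under 1PL: difficulty nearest theta."""
    return min((i for i in items if i not in used),
               key=lambda i: abs(items[i] - theta))

items = {"i1": -1.0, "i2": 0.0, "i3": 0.5, "i4": 1.5}  # difficulties
pattern = iter([1, 1, 0])  # canned responses for illustration
theta, used, responses = 0.0, set(), []
for _ in range(3):
    item = next_item(theta, items, used)
    used.add(item)
    responses.append((item, next(pattern)))
    theta = eap_theta(responses, items)
    print(item, round(theta, 2))
```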