Showing all 7 results
Peer reviewed
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
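As background on what "AES models" of this kind generally involve: holistic human scores are regressed onto automatically extracted text features, and the fitted model then scores new essays. The Python sketch below is purely illustrative; the features, data, and model are hypothetical and are not those used in the study.

```python
# Toy sketch of a feature-based AES model: regress human holistic scores
# onto shallow text features. Illustrative only -- the study's actual
# models are far richer; all data here is made up.
from sklearn.linear_model import LinearRegression

def features(essay: str) -> list[float]:
    words = essay.split()
    sents = [s for s in essay.split(".") if s.strip()]
    return [
        len(words),                              # essay length
        len(set(w.lower() for w in words)),      # vocabulary size
        len(words) / max(len(sents), 1),         # mean sentence length
    ]

# Hypothetical training data: essays paired with human holistic scores.
essays = ["Short essay text.", "A somewhat longer essay with more words and variety."]
human_scores = [2.0, 4.0]

model = LinearRegression().fit([features(e) for e in essays], human_scores)
print(model.predict([features("A new unscored placement essay.")]))
```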
Peer reviewed
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the Criterion® Online Writing Evaluation Service, was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Peer reviewed
Wang, Y.; Harrington, M.; White, P. – Journal of Computer Assisted Learning, 2012
This paper introduces "CTutor", an automated writing evaluation (AWE) tool for detecting breakdowns in local coherence and reports on a study that applies it to the writing of Chinese L2 English learners. The program is based on Centering theory (CT), a theory of local coherence and salience. The principles of CT are first introduced and…
Descriptors: Foreign Countries, Educational Technology, Expertise, Feedback (Response)
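To make the Centering-theory idea concrete, the toy Python sketch below flags adjacent sentence pairs that share no candidate "center" (here, naively, any recurring content word). This is a crude approximation for illustration only, not CTutor's actual algorithm.

```python
# Crude illustration of Centering-style coherence checking: flag adjacent
# sentence pairs with no shared candidate center. Real Centering-theory
# systems track ranked entity references; this only sketches the idea.
import re

STOPWORDS = {"the", "a", "an", "it", "this", "that", "and", "but", "is", "was"}

def candidate_centers(sentence: str) -> set[str]:
    tokens = re.findall(r"[A-Za-z]+", sentence.lower())
    return {t for t in tokens if t not in STOPWORDS}

def coherence_breaks(text: str) -> list[int]:
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    breaks = []
    for i in range(len(sentences) - 1):
        if not candidate_centers(sentences[i]) & candidate_centers(sentences[i + 1]):
            breaks.append(i + 1)  # index of the sentence that breaks continuity
    return breaks

print(coherence_breaks("John bought a car. The car was red. Dinner was late."))
# -> [2]: no shared center between the second and third sentences
```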
Peer reviewed
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that demanded a high degree of analytical reasoning and evaluation.…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
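For readers unfamiliar with LSA, the sketch below shows its core operation: texts are projected into a low-dimensional latent semantic space via SVD of a term-document matrix, and a student answer is scored by cosine similarity to a reference answer. The corpus, answers, and parameters are hypothetical; the study's tool is considerably more elaborate.

```python
# Minimal LSA sketch (hypothetical data): SVD of a TF-IDF matrix gives a
# latent semantic space; a student answer is scored by cosine similarity
# to a reference answer in that space.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Reinforcement shapes behaviour through consequences.",
    "Behaviour followed by reward becomes more frequent.",
    "Punishment reduces the frequency of a behaviour.",
    "Plants convert sunlight into chemical energy.",
]
reference_answer = "Rewarded behaviour occurs more often."
student_answer = "When a behaviour is rewarded, it is repeated more frequently."

tfidf = TfidfVectorizer().fit(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0).fit(tfidf.transform(corpus))

vecs = lsa.transform(tfidf.transform([reference_answer, student_answer]))
print(cosine_similarity(vecs[:1], vecs[1:])[0, 0])  # semantic similarity score
```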
Peer reviewed
Ferster, Bill; Hammond, Thomas C.; Alexander, R. Curby; Lyman, Hunt – Journal of Interactive Learning Research, 2012
The hurried pace of the modern classroom does not permit formative feedback on writing assignments with the frequency or quality recommended by the research literature. One solution for increasing individual feedback to students is to incorporate some form of computer-generated assessment. This study explores the use of automated assessment of…
Descriptors: Feedback (Response), Scripts, Formative Evaluation, Essays
Peer reviewed
van der Geest, Thea; Remmers, Tim – Computers and Composition, 1994
Examines the use of Prep-Editor, a computer program to help undergraduate science writing students communicate with their peers about drafts. Finds that the program did not increase time spent on various writing activities. Notes that the Prep group reported a number of computer-related problems, whereas the non-Prep group reported more…
Descriptors: Computer Software Evaluation, Computer Uses in Education, Higher Education, Peer Evaluation
Peer reviewed
PDF full text available on ERIC
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric℠ automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test™ (GMAT™). The IntelliMetric system's performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
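Evaluations of this kind conventionally report exact agreement, adjacent agreement (scores within one point of each other), and quadratic weighted kappa between machine and human scores. The minimal Python sketch below computes all three on made-up scores, not data from the report.

```python
# Standard human-machine agreement statistics for AES evaluation:
# exact agreement, adjacent agreement, and quadratic weighted kappa.
# The score vectors here are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

human = np.array([4, 3, 5, 2, 4, 3, 5, 1])
machine = np.array([4, 3, 4, 2, 5, 3, 5, 2])

exact = np.mean(human == machine)               # identical scores
adjacent = np.mean(np.abs(human - machine) <= 1)  # within one score point
qwk = cohen_kappa_score(human, machine, weights="quadratic")

print(f"exact={exact:.2f} adjacent={adjacent:.2f} QWK={qwk:.2f}")
```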