Showing all 7 results
Peer reviewed
PDF on ERIC: Download full text
Hunt, Jared; Tompkins, Patrick – Inquiry, 2014
The plagiarism detection programs SafeAssign and Turnitin are commonly used at the collegiate level to detect improper use of outside sources. In order to determine whether either program is superior, this study evaluated the programs using four standards: (1) the ability to detect legitimate plagiarism, (2) the ability to avoid false positives,…
Descriptors: Comparative Analysis, Computer Software, Plagiarism, Computational Linguistics
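The study above compares two commercial detectors but does not describe their internals. As a rough illustration of how text-matching detection can work in principle, the sketch below flags overlap between a submission and a source by shared word n-grams; the tokenization, n-gram size, and scoring rule are illustrative assumptions, not SafeAssign's or Turnitin's actual method.

```python
# Minimal sketch of n-gram overlap matching, a generic technique for flagging
# copied passages. NOT the algorithm used by SafeAssign or Turnitin (the
# abstract does not describe their internals); it only illustrates comparing
# a submission to a source by shared word n-grams.

def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    src = ngrams(source, n)
    return len(sub & src) / len(sub) if sub else 0.0

if __name__ == "__main__":
    source = "the quick brown fox jumps over the lazy dog near the river bank"
    submission = "my essay says the quick brown fox jumps over the lazy dog today"
    # Prints the fraction of the submission's 5-grams found in the source.
    print(f"overlap: {overlap_score(submission, source):.2f}")
```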
Peer reviewed
Direct link
Wang, Y.; Harrington, M.; White, P. – Journal of Computer Assisted Learning, 2012
This paper introduces "CTutor", an automated writing evaluation (AWE) tool for detecting breakdowns in local coherence and reports on a study that applies it to the writing of Chinese L2 English learners. The program is based on Centering theory (CT), a theory of local coherence and salience. The principles of CT are first introduced and…
Descriptors: Foreign Countries, Educational Technology, Expertise, Feedback (Response)
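The snippet above does not detail how CTutor detects coherence breakdowns. As a loose illustration only, the sketch below marks a sentence that shares no content word with its predecessor as a candidate break; this entity-continuity heuristic is a crude stand-in for Centering Theory's center tracking, and the stopword list and tokenization are assumed.

```python
# Minimal sketch of flagging local-coherence breakdowns as missing entity
# continuity between adjacent sentences. A crude proxy only; NOT the CTutor
# implementation, and not full Centering Theory (which ranks forward- and
# backward-looking centers using syntactic information).

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "is", "was", "it", "to"}

def content_words(sentence):
    return {w.strip(".,;").lower() for w in sentence.split()} - STOPWORDS

def coherence_breaks(sentences):
    """Return indices of sentences sharing no content word with the previous one."""
    breaks = []
    for i in range(1, len(sentences)):
        if not content_words(sentences[i]) & content_words(sentences[i - 1]):
            breaks.append(i)
    return breaks

if __name__ == "__main__":
    essay = [
        "The experiment measured reaction time in students.",
        "Reaction time decreased after practice sessions.",
        "Coffee is grown mainly in tropical climates.",  # abrupt topic shift
    ]
    print(coherence_breaks(essay))  # [2]
```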
Peer reviewed
Direct link
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high amount of analytical reasoning and evaluation.…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
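For readers unfamiliar with LSA-based scoring, the sketch below shows the general idea with scikit-learn: texts are projected into a low-dimensional semantic space and a student answer is compared to a reference answer by cosine similarity. The corpus, dimensionality, and comparison rule are illustrative assumptions, not the German-language tool evaluated by Seifried et al.

```python
# Minimal LSA-style similarity sketch: TF-IDF vectors reduced by truncated SVD,
# then cosine similarity between each answer and a reference answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

reference = "Reinforcement strengthens behaviour by following it with a reward."
answers = [
    "A reward given after a behaviour makes that behaviour more likely.",
    "Photosynthesis converts sunlight into chemical energy in plants.",
]

# Fit the semantic space on all texts (a real system would use a large corpus).
docs = [reference] + answers
tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

for answer, vec in zip(answers, lsa[1:]):
    score = cosine_similarity([lsa[0]], [vec])[0, 0]
    print(f"{score:+.2f}  {answer[:40]}")
```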
Peer reviewed
Direct link
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
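The comparison McCurry questions is usually reported as an agreement statistic such as quadratic weighted kappa, computed both between two human raters and between the machine scorer and a human. A minimal sketch with invented toy scores (not data from the article):

```python
# Quadratic weighted kappa for human-human vs. machine-human agreement.
# The score vectors are invented toy data for illustration only.
from sklearn.metrics import cohen_kappa_score

human_a = [4, 3, 5, 2, 4, 3, 5, 1]
human_b = [4, 4, 5, 2, 3, 3, 4, 1]
machine = [4, 3, 4, 2, 4, 4, 5, 2]

print("human-human QWK: ",
      round(cohen_kappa_score(human_a, human_b, weights="quadratic"), 2))
print("machine-human QWK:",
      round(cohen_kappa_score(human_a, machine, weights="quadratic"), 2))
```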
Peer reviewed
Direct link
Ferster, Bill; Hammond, Thomas C.; Alexander, R. Curby; Lyman, Hunt – Journal of Interactive Learning Research, 2012
The hurried pace of the modern classroom does not permit formative feedback on writing assignments at the frequency or quality recommended by the research literature. One solution for increasing individual feedback to students is to incorporate some form of computer-generated assessment. This study explores the use of automated assessment of…
Descriptors: Feedback (Response), Scripts, Formative Evaluation, Essays
Peer reviewed
Direct link
Cotos, Elena – CALICO Journal, 2011
This paper presents an empirical evaluation of automated writing evaluation (AWE) feedback used for L2 academic writing teaching and learning. It introduces the Intelligent Academic Discourse Evaluator (IADE), a new web-based AWE program that analyzes the introduction section to research articles and generates immediate, individualized, and…
Descriptors: Evidence, Feedback (Response), Academic Discourse, Writing (Composition)
Peer reviewed
PDF on ERIC: Download full text
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
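The abstract notes that e-rater V.2 relies on a small, interpretable feature set. The sketch below shows the general shape of such a feature-plus-linear-model scorer; the three features, the training data, and the resulting weights are illustrative assumptions, not e-rater's actual design.

```python
# Minimal sketch of feature-based essay scoring: a few interpretable surface
# features combined by a linear model. Illustrative only; NOT e-rater V.2's
# feature set or scoring model.
from sklearn.linear_model import LinearRegression

def features(essay):
    words = essay.split()
    return [
        len(words),                                        # essay length
        sum(len(w) for w in words) / len(words),           # mean word length
        len(set(w.lower() for w in words)) / len(words),   # type-token ratio
    ]

# Tiny hand-made training set of (essay, human score) pairs.
train = [
    ("Short vague answer.", 1),
    ("This essay develops a clear argument with varied vocabulary and support.", 4),
    ("A longer response that explains the claim, gives an example, and concludes with a summary of the main point.", 5),
]
X = [features(e) for e, _ in train]
y = [s for _, s in train]
model = LinearRegression().fit(X, y)

new_essay = "The author argues the policy failed because evidence was ignored."
print(round(model.predict([features(new_essay)])[0], 1))
```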