Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 11 |
Source
Assessing Writing | 3 |
Journal of Technology,… | 2 |
Australasian Journal of… | 1 |
CALICO Journal | 1 |
Inquiry | 1 |
Journal of Computer Assisted… | 1 |
Journal of Educational… | 1 |
Journal of Interactive… | 1 |
Author
Alexander, R. Curby | 1 |
Attali, Yigal | 1 |
Baier, Herbert | 1 |
Burrows, Steven | 1 |
Burstein, Jill | 1 |
Cotos, Elena | 1 |
Deess, Perry | 1 |
Elliot, Norbert | 1 |
Ferster, Bill | 1 |
Garcia, Veronica | 1 |
Hammond, Thomas C. | 1 |
Publication Type
Journal Articles | 11 |
Reports - Research | 6 |
Reports - Evaluative | 4 |
Reports - Descriptive | 1 |
Education Level
Higher Education | 8 |
Postsecondary Education | 8 |
Elementary Secondary Education | 2 |
Middle Schools | 1 |
Two Year Colleges | 1 |
Audience
Teachers | 1 |
Assessments and Surveys
Graduate Management Admission… | 1 |
National Assessment of… | 1 |
Hunt, Jared; Tompkins, Patrick – Inquiry, 2014
The plagiarism detection programs SafeAssign and Turnitin are commonly used at the collegiate level to detect improper use of outside sources. To determine whether either program is superior, this study evaluated both programs against four standards: (1) the ability to detect genuine plagiarism, (2) the ability to avoid false positives,…
Descriptors: Comparative Analysis, Computer Software, Plagiarism, Computational Linguistics
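Neither vendor's matching algorithm is public, so as an illustration only (not SafeAssign's or Turnitin's actual method), a common baseline for this kind of source-overlap detection is word n-gram matching:

```python
def ngrams(text, n=3):
    """Split text into the set of overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=3):
    """Fraction of the submission's n-grams that also appear in the source.

    1.0 means every trigram is shared; 0.0 means none are. A real detector
    would compare against a large document index, not a single source.
    """
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

source = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping cat"
original = "a completely different sentence about gardening"

print(overlap_score(copied, source))    # high overlap flags possible copying
print(overlap_score(original, source))  # → 0.0, no shared trigrams
```

The trade-off the study's standards capture is the threshold choice: a low overlap cutoff catches more real plagiarism but also produces more false positives on common phrases.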
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement
Klobucar, Andrew; Elliot, Norbert; Deess, Perry; Rudniy, Oleksandr; Joshi, Kamal – Assessing Writing, 2013
This study investigated the use of automated essay scoring (AES) to identify at-risk students enrolled in a first-year university writing course. An application of AES, the "Criterion"[R] Online Writing Evaluation Service was evaluated through a methodology focusing on construct modelling, response processes, disaggregation, extrapolation,…
Descriptors: Writing Evaluation, Scoring, Writing Instruction, Essays
Wang, Y.; Harrington, M.; White, P. – Journal of Computer Assisted Learning, 2012
This paper introduces "CTutor", an automated writing evaluation (AWE) tool for detecting breakdowns in local coherence and reports on a study that applies it to the writing of Chinese L2 English learners. The program is based on Centering theory (CT), a theory of local coherence and salience. The principles of CT are first introduced and…
Descriptors: Foreign Countries, Educational Technology, Expertise, Feedback (Response)
On the Reliability and Validity of Human and LSA-Based Evaluations of Complex Student-Authored Texts
Seifried, Eva; Lenhard, Wolfgang; Baier, Herbert; Spinath, Birgit – Journal of Educational Computing Research, 2012
This study investigates the potential of a software tool based on Latent Semantic Analysis (LSA; Landauer, McNamara, Dennis, & Kintsch, 2007) to automatically evaluate complex German texts. A sample of N = 94 German university students provided written answers to questions that involved a high amount of analytical reasoning and evaluation.…
Descriptors: Foreign Countries, Computer Software, Computer Software Evaluation, Computer Uses in Education
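The LSA pipeline such tools build on — a term-document matrix, truncated SVD, and cosine similarity in the latent space — can be sketched on a toy corpus with numpy (this is a minimal illustration, not the authors' tool, features, or data):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# A real LSA evaluator derives the space from a large reference corpus.
docs = [
    "evaluation requires analytical reasoning",
    "reasoning and evaluation of arguments",
    "the cat sat on the mat",
]
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(t) for d in docs] for t in vocab], dtype=float)

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # each row: one document in latent space

def cos(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Documents 0 and 1 share reasoning/evaluation vocabulary; document 2 does not,
# so its latent vector should be much less similar to document 0's.
print(cos(doc_vecs[0], doc_vecs[1]))
print(cos(doc_vecs[0], doc_vecs[2]))
```

In an evaluation setting, a student answer would be folded into the same latent space and scored by its similarity to expert reference answers.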
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
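Agreement claims of the kind McCurry questions often rest on "adjacent" rather than exact agreement, a distinction easy to see on invented scores (the statistics are standard; the data below are not from the article):

```python
def exact_agreement(a, b):
    """Proportion of essays given the identical score by both raters."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def adjacent_agreement(a, b):
    """Proportion scored within one point -- the looser statistic often
    quoted in machine-scoring reliability claims."""
    return sum(abs(x - y) <= 1 for x, y in zip(a, b)) / len(a)

# Hypothetical scores on a 1-6 scale for eight essays.
human   = [4, 3, 5, 2, 4, 3, 5, 1]
machine = [4, 4, 5, 2, 3, 3, 4, 2]

print(exact_agreement(human, machine))     # → 0.5
print(adjacent_agreement(human, machine))  # → 1.0
```

Here the machine never exceeds one point of disagreement, so adjacent agreement is perfect even though it matches the human rater on only half the essays — which is why the choice of statistic matters when comparing machine-human to human-human agreement.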
Ferster, Bill; Hammond, Thomas C.; Alexander, R. Curby; Lyman, Hunt – Journal of Interactive Learning Research, 2012
The hurried pace of the modern classroom does not permit formative feedback on writing assignments at the frequency or quality recommended by the research literature. One solution for increasing individual feedback to students is to incorporate some form of computer-generated assessment. This study explores the use of automated assessment of…
Descriptors: Feedback (Response), Scripts, Formative Evaluation, Essays
Burrows, Steven; Shortis, Mark – Australasian Journal of Educational Technology, 2011
Online marking and feedback systems are critical for providing timely and accurate feedback to students and maintaining the integrity of results in large class teaching. Previous investigations have involved much in-house development and more consideration is needed for deploying or customising off-the-shelf solutions. Furthermore, keeping up to…
Descriptors: Foreign Countries, Integrated Learning Systems, Feedback (Response), Evaluation Criteria
Cotos, Elena – CALICO Journal, 2011
This paper presents an empirical evaluation of automated writing evaluation (AWE) feedback used for L2 academic writing teaching and learning. It introduces the Intelligent Academic Discourse Evaluator (IADE), a new web-based AWE program that analyzes the introduction section to research articles and generates immediate, individualized, and…
Descriptors: Evidence, Feedback (Response), Academic Discourse, Writing (Composition)
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
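The abstract describes combining a small set of intuitive features into a score. E-rater's actual features and model are not given here, so the following is a hypothetical sketch of the general approach: fit per-feature weights to human scores by least squares (feature names and data are invented for illustration):

```python
import numpy as np

# Hypothetical features for six training essays (NOT e-rater's feature set):
# [word count / 100, errors per 100 words, type-token ratio].
X = np.array([
    [1.5, 8.0, 0.40],
    [2.5, 5.0, 0.45],
    [3.0, 3.0, 0.55],
    [4.0, 2.0, 0.60],
    [4.5, 1.0, 0.65],
    [2.0, 6.0, 0.42],
])
y = np.array([2, 3, 4, 5, 6, 3], dtype=float)  # human scores, 1-6 scale

# Append an intercept column and solve for one weight per feature.
Xb = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def score(features):
    """Predicted score for a new essay's feature vector."""
    return float(np.append(features, 1.0) @ w)

print(round(score([3.5, 2.5, 0.58]), 1))  # predicted score for a new essay
```

A small, interpretable feature set like this is what makes the weights inspectable — one of the design points the paper highlights for e-rater V.2 relative to black-box scoring systems.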