Deane, Paul – ETS Research Report Series, 2014
This paper explores automated methods for measuring features of student writing and determining their relationship to writing quality and to other aspects of literacy, such as reading test scores. In particular, it uses the "e-rater"™ automatic essay scoring system to measure "product" features (measurable traits of the final…
Descriptors: Writing Processes, Writing Evaluation, Student Evaluation, Writing Skills
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents work on the development of a new corpus of non-native English writing, useful for native language identification as well as for grammatical error detection and correction and automatic essay scoring. The corpus is described in detail in this report.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
Westhuizen, Duan vd – Commonwealth of Learning, 2016
This work starts with a brief overview of education in developing countries to contextualise the use of the guidelines. Although the document is intended as a practical tool, some theoretical analysis of the concept of online assessment is necessary; this is given in Sections 3 and 4, together with the identification and…
Descriptors: Guidelines, Student Evaluation, Computer Assisted Testing, Evaluation Methods
Wolf, Kenneth; Dunlap, Joanna; Stevens, Ellen – Journal of Effective Teaching, 2012
This article describes ten key assessment practices for advancing student learning that all professors should be familiar with and strategically incorporate in their classrooms and programs. Each practice or concept is explained with examples and guidance for putting it into practice. The ten are: learning outcomes, performance assessments,…
Descriptors: Educational Assessment, Student Evaluation, Educational Practices, Outcomes of Education
Chao, K.-J.; Hung, I.-C.; Chen, N.-S. – Journal of Computer Assisted Learning, 2012
Online learning has developed rapidly in the last decade. However, very little literature is available on the actual adoption of online synchronous assessment approaches or on guidelines for effective assessment design and implementation. This paper aims at designing and evaluating the possibility of applying online synchronous…
Descriptors: Electronic Learning, Student Evaluation, Online Courses, Computer Software
Marking Essays on Screen: An Investigation into the Reliability of Marking Extended Subjective Texts
Johnson, Martin; Nadas, Rita; Bell, John F. – British Journal of Educational Technology, 2010
There is a growing body of research literature that considers how the mode of assessment, either computer-based or paper-based, might affect candidates' performances. Despite this, there is a fairly narrow literature that shifts the focus of attention to those making assessment judgements and which considers issues of assessor consistency when…
Descriptors: English Literature, Examiners, Evaluation Research, Evaluators
McPherson, Douglas – Interactive Technology and Smart Education, 2009
Purpose: The purpose of this paper is to describe how and why Texas A&M University at Qatar (TAMUQ) has developed a system aiming to effectively place students in freshman and developmental English programs. The placement system includes: triangulating data from external test scores, with scores from a panel-marked hand-written essay (HWE),…
Descriptors: Student Placement, Educational Testing, English (Second Language), Second Language Instruction
Johnson, Martin; Nadas, Rita – Learning, Media and Technology, 2009
Within large scale educational assessment agencies in the UK, there has been a shift towards assessors marking digitally scanned copies rather than the original paper scripts that were traditionally used. This project uses extended essay examination scripts to consider whether the mode in which an essay is read potentially influences the…
Descriptors: Reading Comprehension, Educational Assessment, Internet, Essay Tests
Kobrin, Jennifer L.; Deng, Hui; Shaw, Emily J. – Journal of Applied Testing Technology, 2007
This study was designed to address two frequent criticisms of the SAT essay--that essay length is the best predictor of scores, and that there is an advantage in using more "sophisticated" examples as opposed to personal experience. The study was based on 2,820 essays from the first three administrations of the new SAT. Each essay was…
Descriptors: Testing Programs, Computer Assisted Testing, Construct Validity, Writing Skills
Rudner, Lawrence M.; Garcia, Veronica; Welch, Catherine – Journal of Technology, Learning, and Assessment, 2006
This report provides a two-part evaluation of the IntelliMetric[SM] automated essay scoring system based on its performance scoring essays from the Analytic Writing Assessment of the Graduate Management Admission Test[TM] (GMAT[TM]). The IntelliMetric system performance is first compared to that of individual human raters, a Bayesian system…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores the written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater[R] has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Chase, Clinton I. – 1999
This book provides basic skills and knowledge about assessment so that teachers can expand their ability to deal with appraisal problems in their own settings. The first section deals with the basic principles of assessment. The second section concerns creating and applying assessment tools. The third section reviews issues in understanding and…
Descriptors: Academic Achievement, Computer Assisted Testing, Educational Assessment, Elementary Secondary Education