Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 2
Since 2006 (last 20 years) | 19
Descriptor
Computer Assisted Testing | 25
Essays | 25
Scoring | 12
Writing Evaluation | 12
Evaluation Methods | 8
Writing Tests | 8
Grading | 7
Second Language Learning | 6
Validity | 6
Comparative Analysis | 5
English (Second Language) | 5
Author
Davies, Phil | 2
James, Cindy L. | 2
Attali, Yigal | 1
Bridgeman, Brent | 1
Brown, Gavin T. L. | 1
Burrows, Steven | 1
Chung, Gregory K. W. K. | 1
Condon, William | 1
Coniam, David | 1
Cope, Bill | 1
Deane, Paul | 1
Publication Type
Reports - Evaluative | 25
Journal Articles | 21
Speeches/Meeting Papers | 1
Education Level
Higher Education | 9
Postsecondary Education | 5
Elementary Secondary Education | 3
Grade 11 | 1
Secondary Education | 1
Location
United Kingdom | 2
Australia | 1
Hong Kong | 1
Taiwan | 1
Assessments and Surveys
Test of English as a Foreign Language | 4
Graduate Record Examinations | 2
National Assessment of… | 1
SAT (College Admission Test) | 1
Doewes, Afrizal; Kurdhi, Nughthoh Arfawi; Saxena, Akrati – International Educational Data Mining Society, 2023
Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. In existing research on this topic, most researchers agree that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores. To measure the performance of…
Descriptors: Essays, Writing Evaluation, Evaluators, Accuracy
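An illustrative aside on the agreement benchmark this abstract refers to: quadratic weighted kappa (QWK) is the statistic most commonly reported for human-automated score agreement in the AES literature, though the paper itself may use other measures. A minimal sketch in Python, assuming a 1-6 essay score scale; the scores below are invented for illustration and are not from the paper.

```python
from collections import Counter

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    """Chance-corrected agreement between two raters on an ordinal scale."""
    n = len(human)
    observed = Counter(zip(human, machine))   # joint score counts
    h_marg = Counter(human)                   # human marginal counts
    m_marg = Counter(machine)                 # machine marginal counts
    span = (max_score - min_score) ** 2
    num = den = 0.0
    for i in range(min_score, max_score + 1):
        for j in range(min_score, max_score + 1):
            w = (i - j) ** 2 / span           # quadratic disagreement penalty
            num += w * observed[(i, j)] / n
            den += w * (h_marg[i] / n) * (m_marg[j] / n)
    return 1.0 - num / den                    # 1 = perfect, 0 = chance-level

# Hypothetical human vs. machine scores on a 1-6 scale.
human = [4, 3, 5, 2, 4, 3, 5, 4]
machine = [4, 3, 4, 2, 5, 3, 5, 4]
print(round(quadratic_weighted_kappa(human, machine, 1, 6), 3))
```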
Moncaleano, Sebastian; Russell, Michael – Journal of Applied Testing Technology, 2018
2017 marked a century since the development and administration of the first large-scale, group-administered standardized test. Since that time, both the importance of testing and the technology of testing have advanced significantly. This paper traces the technological advances that have led to the large-scale administration of educational tests in…
Descriptors: Technological Advancement, Standardized Tests, Computer Assisted Testing, Automation
Deane, Paul – Assessing Writing, 2013
This paper examines the construct measured by automated essay scoring (AES) systems. AES systems measure features of the text structure, linguistic structure, and conventional print form of essays; as such, the systems primarily measure text production skills. In the current state of the art, AES systems provide little direct evidence about such matters…
Descriptors: Scoring, Essays, Text Structure, Writing (Composition)
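To make concrete the kind of "text production" features Deane describes, here is a hedged sketch of a few surface features an AES system might extract. The feature set is invented for illustration; production engines such as e-rater use much richer linguistic analysis.

```python
import re

def surface_features(essay: str) -> dict:
    """A few surface text-production features of the kind AES systems measure."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "avg_word_length": sum(map(len, words)) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # Vocabulary variety: unique word forms over total words.
        "type_token_ratio": len({w.lower() for w in words}) / max(len(words), 1),
    }

print(surface_features("The essay begins well. It then develops its argument carefully."))
```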
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the "Criterion"® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Weigle, Sara Cushing – Assessing Writing, 2013
This article presents considerations for using automated scoring systems to evaluate second language writing. A distinction is made between English language learners in English-medium educational systems and those studying English in their own countries for a variety of purposes, and between learning-to-write and writing-to-learn in a second…
Descriptors: Scoring, Second Language Learning, Second Languages, English Language Learners
Cope, Bill; Kalantzis, Mary – Open Review of Educational Research, 2015
This article sets out to explore a shift in the sources of evidence-of-learning in the era of networked computing. One of the key features of recent developments has been popularly characterized as "big data". We begin by examining, in general terms, the frame of reference of contemporary debates on machine intelligence and the role of…
Descriptors: Data Analysis, Evidence, Computer Uses in Education, Artificial Intelligence
Brown, Gavin T. L. – Higher Education Quarterly, 2010
The use of timed, essay examinations is a well-established means of evaluating student learning in higher education. The reliability of essay scoring is highly problematic, and it appears that essay examination grades are highly dependent on language and organisational components of writing. Computer-assisted scoring of essays makes use of language…
Descriptors: Higher Education, Essay Tests, Validity, Scoring
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and e-rater® essay feature variables in the context of the TOEFL® computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
James, Cindy L. – Assessing Writing, 2008
The scoring of student essays by computer has generated much debate and subsequent research. The majority of the research thus far has focused on validating the automated scoring tools by comparing the electronic scores to human scores of writing or other measures of writing skills, and exploring the predictive validity of the automated scores.…
Descriptors: Predictive Validity, Scoring, Electronic Equipment, Essays
Ockey, Gary J. – Modern Language Journal, 2009
Computer-based testing (CBT) to assess second language ability has undergone remarkable development since Garrett (1991) described its purpose as "the computerized administration of conventional tests" in "The Modern Language Journal." For instance, CBT has made possible the delivery of more authentic tests than traditional paper-and-pencil tests.…
Descriptors: Second Language Learning, Adaptive Testing, Computer Assisted Testing, Language Aptitude
Davies, Phil – Assessment & Evaluation in Higher Education, 2009
This article details the implementation and use of a "Review Stage" within the CAP (computerised assessment by peers) tool as part of the assessment process for a post-graduate module in e-learning. It reports on the effect of providing students with a "second chance" to mark and comment on their peers' essays after having been able to view the…
Descriptors: Feedback (Response), Student Evaluation, Computer Assisted Testing, Peer Evaluation
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
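As an illustration of the generic approach this abstract describes (one feature set and one weighting shared across all prompts), here is a minimal sketch; the feature names and weights are hypothetical and do not reproduce the actual e-rater model.

```python
# One shared weight vector: the same features and weights score every prompt.
FEATURE_WEIGHTS = {
    "organization": 0.30,   # hypothetical weights, for illustration only
    "development": 0.30,
    "word_usage": 0.20,
    "conventions": 0.20,
}

def generic_score(features: dict) -> float:
    """Combine (already standardized) feature values into a prompt-independent score."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

# The same function scores essays from any prompt, existing or new.
print(generic_score({"organization": 4.0, "development": 3.5,
                     "word_usage": 4.2, "conventions": 3.8}))
```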
Burrows, Steven; Shortis, Mark – Australasian Journal of Educational Technology, 2011
Online marking and feedback systems are critical for providing timely and accurate feedback to students and maintaining the integrity of results in large class teaching. Previous investigations have involved much in-house development, and more consideration is needed for deploying or customising off-the-shelf solutions. Furthermore, keeping up to…
Descriptors: Foreign Countries, Integrated Learning Systems, Feedback (Response), Evaluation Criteria
Coniam, David – Educational Research and Evaluation, 2009
This paper describes a study comparing paper-based marking (PBM) and onscreen marking (OSM) in Hong Kong utilising English language essay scripts drawn from the live 2007 Hong Kong Certificate of Education Examination (HKCEE) Year 11 English Language Writing Paper. In the study, 30 raters from the 2007 HKCEE Writing Paper marked on paper 100…
Descriptors: Student Attitudes, Foreign Countries, Essays, Comparative Analysis
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests