Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 4
Descriptor
Automation: 5
Essay Tests: 5
Scoring: 4
Factor Analysis: 3
Language Tests: 3
College Entrance Examinations: 2
English (Second Language): 2
Essays: 2
Test Scoring Machines: 2
Writing Tests: 2
Business Administration…: 1
Author
Attali, Yigal: 3
Burstein, Jill: 1
Fatima, Sameen S.: 1
Ghosh, Siddhartha: 1
Higgins, Derrick: 1
Quinlan, Thomas: 1
Wolff, Susanne: 1
Publication Type
Reports - Research: 3
Journal Articles: 2
Reports - Evaluative: 2
Education Level
Higher Education: 3
Postsecondary Education: 3
Elementary Education: 1
Elementary Secondary Education: 1
Grade 6: 1
Grade 7: 1
Grade 8: 1
Grade 9: 1
Grade 10: 1
Grade 11: 1
Grade 12: 1
Location
India: 1
Assessments and Surveys
Test of English as a Foreign Language (TOEFL): 5
Graduate Record Examinations: 2
Graduate Management Admission Test: 1
Attali, Yigal – Educational Testing Service, 2011
The e-rater® automated essay scoring system is used operationally in the scoring of TOEFL iBT® independent essays. Previous research has found support for a 3-factor structure of the e-rater features. This 3-factor structure has an attractive hierarchical linguistic interpretation with a word choice factor, a grammatical convention within a…
Descriptors: Essay Tests, Language Tests, Test Scoring Machines, Automation
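As an illustration of what a 3-factor structure over scoring features looks like in practice, here is a minimal Python sketch that fits a three-factor model to a matrix of feature scores and prints the loadings. The feature names and data are hypothetical stand-ins, and the exploratory analysis shown here is only a rough analogue of the confirmatory analysis in the report above.

# Hypothetical illustration: fit a 3-factor model to essay feature scores and
# inspect which features load on which factor. Names and data are stand-ins,
# not the operational e-rater feature set.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
feature_names = ["word_length", "vocabulary_level", "grammar_errors",
                 "usage_errors", "mechanics_errors", "development", "organization"]
X = rng.normal(size=(500, len(feature_names)))  # one row per essay

fa = FactorAnalysis(n_components=3, random_state=0).fit(X)

# Loadings: rows = features, columns = the three factors (e.g., word choice,
# conventions, and fluency/organization in the interpretation described above).
for name, loadings in zip(feature_names, fa.components_.T):
    print(f"{name:18s}", np.round(loadings, 2))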
Attali, Yigal – Educational Testing Service, 2011
This paper proposes an alternative content measure for essay scoring, based on the "difference" in the relative frequency of a word in high-scored versus low-scored essays. The "differential word use" (DWU) measure is the average of these differences across all words in the essay. A positive value indicates the essay is using…
Descriptors: Scoring, Essay Tests, Word Frequency, Content Analysis
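The measure described above lends itself to a direct computation. Below is a minimal Python sketch of the idea, under assumed inputs (a tokenized essay plus two reference sets of tokenized high- and low-scored essays); the function names and frequency estimates are illustrative, not Attali's operational definitions.

from collections import Counter

def relative_frequencies(essays):
    # Relative frequency of each word across a list of tokenized essays.
    counts = Counter(word for essay in essays for word in essay)
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

def differential_word_use(essay_tokens, high_scored, low_scored):
    # Average, over the words in the essay, of the word's relative frequency
    # in high-scored essays minus its relative frequency in low-scored essays.
    # A positive value suggests vocabulary more typical of high-scored essays.
    high = relative_frequencies(high_scored)
    low = relative_frequencies(low_scored)
    diffs = [high.get(w, 0.0) - low.get(w, 0.0) for w in essay_tokens]
    return sum(diffs) / len(diffs) if diffs else 0.0

Calling differential_word_use(tokens, high_essays, low_essays) yields one value per essay, which could then serve as a content feature alongside other scoring features.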
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Ghosh, Siddhartha; Fatima, Sameen S. – Journal of Educational Technology, 2007
Automated essay grading and scoring systems are no longer a myth; they are a reality. Today, essays written on a computer (rather than by hand) are evaluated not only by examiners and teachers but also by machines. The TOEFL exam is one of the best examples of this application. The students' essays are evaluated both by human and web-based automated…
Descriptors: Foreign Countries, Essays, Grading, Automation
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis
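To make the idea of a feature set combined with a model building approach concrete, here is a minimal, hypothetical Python sketch in which an essay's machine score is a weighted combination of standardized feature values fit against human holistic scores. The feature definitions, standardization, and weight estimation used in the e-rater versions compared in the paper are not reproduced here.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 8))      # stand-in feature scores, one row per essay
y_train = rng.integers(1, 7, size=1000)   # stand-in human holistic scores on a 1-6 scale

scaler = StandardScaler().fit(X_train)
model = LinearRegression().fit(scaler.transform(X_train), y_train)

def machine_score(features):
    # Predicted holistic score for a single essay's feature vector.
    z = scaler.transform(np.asarray(features, dtype=float).reshape(1, -1))
    return float(model.predict(z)[0])

Comparing two such models built from different feature sets, as the paper does for v.1.3 and v.2.0, would then amount to comparing their agreement with the human scores on held-out essays.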