Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the e-rater® scoring engine were built and evaluated for the GRE® argument and issue writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
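For readers unfamiliar with the evaluation statistics this abstract names, the sketch below computes them for paired human and machine essay scores. It assumes the quadratic flavor of weighted kappa, a 1-6 integer score scale, and fabricated scores; none of these details come from the report itself.

```python
import numpy as np

def quadratic_weighted_kappa(human, machine, min_score=1, max_score=6):
    """Agreement between two raters' integer scores, penalizing
    larger disagreements quadratically (1.0 = perfect agreement)."""
    human = np.asarray(human)
    machine = np.asarray(machine)
    n = max_score - min_score + 1

    # Observed joint distribution of (human, machine) score pairs.
    observed = np.zeros((n, n))
    for h, m in zip(human, machine):
        observed[h - min_score, m - min_score] += 1
    observed /= observed.sum()

    # Expected joint distribution if the two raters were independent.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))

    # Quadratic disagreement weights: zero on the diagonal,
    # growing with the squared distance between the two scores.
    i = np.arange(n)
    weights = (i[:, None] - i[None, :]) ** 2 / (n - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Fabricated scores on a 1-6 scale, purely for illustration.
human = [4, 3, 5, 4, 2, 6, 3, 4]
machine = [4, 3, 4, 5, 2, 6, 3, 3]
h, m = np.asarray(human, float), np.asarray(machine, float)

print(f"weighted kappa:      {quadratic_weighted_kappa(human, machine):.3f}")
print(f"Pearson correlation: {np.corrcoef(h, m)[0, 1]:.3f}")
# One common reading of "standardized difference": the machine-minus-human
# mean difference scaled by the human-score standard deviation.
print(f"standardized diff:   {(m.mean() - h.mean()) / h.std():.3f}")
```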
Attali, Yigal – ETS Research Report Series, 2007
Because there is no commonly accepted view of what makes for good writing, automated essay scoring (AES) ideally should be able to accommodate different theoretical positions, certainly at the level of state standards but also perhaps among teachers at the classroom level. This paper presents a practical approach and an interactive computer…
Descriptors: Computer Assisted Testing, Automation, Essay Tests, Scoring
Ben-Simon, Anat; Bennett, Randy Elliott – Journal of Technology, Learning, and Assessment, 2007
This study evaluated a "substantively driven" method for scoring NAEP writing assessments automatically. The study used variations of an existing commercial program, e-rater®, to compare the performance of three approaches to automated essay scoring: a "brute-empirical" approach in which variables are selected and weighted solely according to…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
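The contrast this abstract sets up, empirically fitted feature weights versus substantively chosen ones, can be sketched in a few lines under the assumption that both approaches reduce to weighted linear combinations of essay features. The features, weights, and scores below are fabricated placeholders, not e-rater's actual variables.

```python
import numpy as np

# Fabricated essay features (columns: development, mechanics errors,
# vocabulary sophistication) and human scores; placeholders only.
X = np.array([
    [0.8, 0.1, 0.6],
    [0.4, 0.3, 0.5],
    [0.9, 0.0, 0.8],
    [0.3, 0.4, 0.3],
    [0.6, 0.2, 0.7],
])
human = np.array([5.0, 3.0, 6.0, 2.0, 4.0])

X1 = np.column_stack([np.ones(len(X)), X])  # add an intercept column

# "Brute-empirical" weighting: least-squares fit to human scores,
# with no substantive constraints on the weights.
empirical_w, *_ = np.linalg.lstsq(X1, human, rcond=None)

# Substantively driven weighting: intercept and weights fixed in advance
# to reflect a theory of writing quality (values here are arbitrary).
substantive_w = np.array([1.0, 4.0, -3.0, 2.0])

print("empirical scores:   ", np.round(X1 @ empirical_w, 2))
print("substantive scores: ", np.round(X1 @ substantive_w, 2))
```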
