Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
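The abstract above asks whether trait scores add information beyond the single total e-rater score. As a hedged illustration only (the report's actual analysis is not shown in this snippet), the Python sketch below treats "value added" as incremental variance explained: it compares how well human scores are predicted by the total score alone versus the total score plus one hypothetical trait score, using made-up data.

```python
# Minimal sketch, assuming "value added" is probed as incremental R^2 of a
# trait score beyond the total automated score. Toy data only; this is not
# the report's method or code.
import numpy as np

rng = np.random.default_rng(0)
n = 200
total = rng.normal(4.0, 1.0, n)                              # hypothetical total e-rater scores
trait = 0.6 * total + rng.normal(0.0, 0.8, n)                # hypothetical trait score (e.g., grammar)
human = 0.7 * total + 0.2 * trait + rng.normal(0.0, 0.5, n)  # hypothetical human scores

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

r2_total = r_squared(total[:, None], human)
r2_both = r_squared(np.column_stack([total, trait]), human)
print(f"R^2 (total only):    {r2_total:.3f}")
print(f"R^2 (total + trait): {r2_both:.3f}  -> increment: {r2_both - r2_total:.3f}")
```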
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
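The second abstract names weighted kappas, Pearson correlations, and a standardized difference among the evaluation statistics for the e-rater scoring models. The sketch below computes a quadratically weighted kappa, a Pearson correlation, and a standardized mean difference (assumed here to be the statistic behind the truncated "standardized difference in…") on toy human versus e-rater score pairs; it illustrates these standard agreement statistics and is not the report's own code.

```python
# Sketch of common human/machine agreement statistics on made-up score pairs.
import numpy as np

def quadratic_weighted_kappa(a, b, min_score, max_score):
    """Cohen's kappa with quadratic weights for ordinal scores."""
    a = np.asarray(a)
    b = np.asarray(b)
    k = max_score - min_score + 1
    obs = np.zeros((k, k))
    for x, y in zip(a, b):                      # joint frequency of (human, machine) scores
        obs[x - min_score, y - min_score] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # expected under independence
    w = np.array([[(i - j) ** 2 for j in range(k)] for i in range(k)]) / (k - 1) ** 2
    return 1.0 - (w * obs).sum() / (w * exp).sum()

human = np.array([4, 3, 5, 2, 4, 4, 3, 5, 1, 4])    # toy human scores on a 1-6 scale
erater = np.array([4, 3, 4, 2, 5, 4, 3, 5, 2, 4])   # toy automated scores

qwk = quadratic_weighted_kappa(human, erater, 1, 6)
r = np.corrcoef(human, erater)[0, 1]
std_diff = (erater.mean() - human.mean()) / human.std(ddof=1)   # standardized mean difference
print(f"weighted kappa: {qwk:.3f}, Pearson r: {r:.3f}, std. difference: {std_diff:.3f}")
```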