| Publication Date | Results |
| --- | --- |
| In 2025 | 0 |
| Since 2024 | 0 |
| Since 2021 (last 5 years) | 0 |
| Since 2016 (last 10 years) | 1 |
| Since 2006 (last 20 years) | 2 |
| Descriptor | Results |
| --- | --- |
| College Entrance Examinations | 2 |
| Correlation | 2 |
| Evaluation Methods | 2 |
| Scoring | 2 |
| Weighted Scores | 2 |
| Automation | 1 |
| Computation | 1 |
| Computer Assisted Testing | 1 |
| Data | 1 |
| Demography | 1 |
| Design | 1 |
| Source | Results |
| --- | --- |
| ETS Research Report Series | 2 |

| Author | Results |
| --- | --- |
| Bridgeman, Brent | 2 |
| Breyer, F. Jay | 1 |
| Davey, Tim | 1 |
| Ramineni, Chaitanya | 1 |
| Rupp, André A. | 1 |
| Trapani, Catherine S. | 1 |
| Williamson, David M. | 1 |

| Publication Type | Results |
| --- | --- |
| Journal Articles | 2 |
| Reports - Research | 2 |

| Education Level | Results |
| --- | --- |
| Higher Education | 2 |
| Postsecondary Education | 1 |
| Assessments and Surveys | Results |
| --- | --- |
| Graduate Record Examinations | 2 |
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
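The abstract above describes a contributory approach in which human and machine essay scores are combined at the task and section levels. The snippet below is a minimal sketch of that general idea in Python, assuming an illustrative equal weighting, a hypothetical adjudication threshold, and the GRE Analytical Writing half-point reporting scale; the actual weights and combination rules used in the report are not reproduced here.

```python
# Sketch of a contributory (combined human + machine) essay scoring scheme.
# The weight, adjudication gap, and rounding rule are illustrative assumptions,
# not the model evaluated in the report.

def contributory_task_score(human: float, machine: float,
                            human_weight: float = 0.5,
                            adjudication_gap: float = 1.5) -> float:
    """Combine one human rating and one machine (e.g., e-rater) rating for a task.

    If the two ratings differ by more than `adjudication_gap`, real operations
    would route the essay for an additional human rating; here we just raise
    to signal that case.
    """
    if abs(human - machine) > adjudication_gap:
        raise ValueError("Discrepant scores; route essay for human adjudication")
    return human_weight * human + (1.0 - human_weight) * machine


def section_score(issue_task: float, argument_task: float) -> float:
    """Average the two task scores and round to the nearest half point,
    matching the GRE Analytical Writing reporting scale."""
    raw = (issue_task + argument_task) / 2.0
    return round(raw * 2) / 2


# Example: issue task rated 4.0 (human) / 4.5 (machine); argument task 3.5 / 3.0.
issue = contributory_task_score(4.0, 4.5)      # 4.25
argument = contributory_task_score(3.5, 3.0)   # 3.25
print(section_score(issue, argument))          # 4.0
```

The design choice to average rather than adjudicate every essay is what makes the scoring "contributory": the machine score always contributes to the reported score instead of serving only as a check on the human rater.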
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
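The second abstract names weighted kappas, Pearson correlations, and standardized differences as model-evaluation statistics. The sketch below shows how such agreement statistics could be computed for paired human and machine scores; the sample data are made up, and the quadratic weighting and pooled-SD form of the standardized difference are assumptions rather than the exact definitions used in the report.

```python
# Agreement statistics for comparing human and machine essay scores:
# quadratic weighted kappa, Pearson correlation, standardized mean difference.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Made-up paired scores for ten essays (illustration only).
human = np.array([4, 3, 5, 2, 4, 3, 5, 4, 3, 2])
machine = np.array([4, 3, 4, 2, 5, 3, 5, 4, 2, 2])

# Quadratic weighted kappa: chance-corrected agreement that penalizes
# large disagreements more heavily than adjacent ones.
qwk = cohen_kappa_score(human, machine, weights="quadratic")

# Pearson correlation between the two sets of scores.
r, _ = pearsonr(human, machine)

# Standardized mean difference: (machine mean - human mean) / pooled SD.
pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
smd = (machine.mean() - human.mean()) / pooled_sd

print(f"Quadratic weighted kappa: {qwk:.3f}")
print(f"Pearson r: {r:.3f}")
print(f"Standardized difference: {smd:.3f}")
```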

