Ramineni, Chaitanya; Williamson, David – ETS Research Report Series, 2018
Notable mean score differences between the "e-rater"® automated scoring engine and human raters were observed for essays from certain demographic groups on the "GRE"® General Test in use before its major 2011 revision (the revised test is known as the rGRE). The use of e-rater as a check-score model with discrepancy thresholds prevented an adverse impact…
Descriptors: Scores, Computer Assisted Testing, Test Scoring Machines, Automation
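
In the check-score arrangement this abstract describes, the machine score is used only to flag discrepant human scores for additional human rating. A minimal Python sketch follows; the threshold value, function name, and routing rule are illustrative assumptions, not details taken from the report.

    # A minimal sketch of a check-score model with a discrepancy threshold.
    # The 1.5-point threshold and the routing rule are assumptions, not
    # ETS's operational procedure.

    def check_score(human_score: float, machine_score: float,
                    threshold: float = 1.5) -> tuple[float, bool]:
        """Report the human score; flag the essay for a second human
        rating when the machine score disagrees by more than the
        discrepancy threshold."""
        needs_second_rater = abs(human_score - machine_score) > threshold
        return human_score, needs_second_rater

    # A 4.0 human score against a 2.0 e-rater score exceeds the threshold,
    # so the essay would be routed to an additional human rater.
    reported, flagged = check_score(human_score=4.0, machine_score=2.0)
    print(reported, flagged)  # 4.0 True

Under this design the machine never determines the reported score directly; it only triggers extra human review, which is what limits any adverse impact from machine scoring.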
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue writing tasks. Prompt-specific models, generic models, and generic models with prompt-specific intercepts were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
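
The evaluation statistics named in this abstract (weighted kappa, Pearson correlation, standardized mean difference) can be computed with standard libraries. The following Python sketch uses toy data; the quadratic weighting and the pooled-SD convention for the standardized difference are common choices in automated-scoring evaluation, not necessarily the exact formulas used in the report.

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    human = np.array([3, 4, 2, 5, 4, 3, 4, 2])    # human rater scores (toy data)
    machine = np.array([3, 4, 3, 4, 4, 3, 5, 2])  # e-rater scores (toy data)

    # Quadratic weighted kappa: chance-corrected agreement that penalizes
    # large disagreements more heavily than adjacent ones.
    qwk = cohen_kappa_score(human, machine, weights="quadratic")

    # Pearson correlation between the two score series.
    r, _ = pearsonr(human, machine)

    # Standardized mean difference, divided here by the pooled standard
    # deviation (one common convention).
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
    std_diff = (machine.mean() - human.mean()) / pooled_sd

    print(f"quadratic weighted kappa: {qwk:.3f}")
    print(f"Pearson r: {r:.3f}")
    print(f"standardized mean difference: {std_diff:.3f}")

A standardized difference near zero indicates that machine and human mean scores agree; large positive or negative values signal that the engine systematically over- or under-scores relative to human raters.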