Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 3
Descriptor
Evaluation Methods: 3
Scoring: 3
Weighted Scores: 3
College Entrance Examinations: 2
Computer Assisted Testing: 2
Correlation: 2
Essays: 2
Standardized Tests: 2
Test Scoring Machines: 2
Writing Evaluation: 2
Automation: 1
Author
Bridgeman, Brent: 3
Attali, Yigal: 1
Breyer, F. Jay: 1
Davey, Tim: 1
Ramineni, Chaitanya: 1
Rupp, André A.: 1
Trapani, Catherine: 1
Trapani, Catherine S.: 1
Williamson, David M.: 1
Publication Type
Journal Articles: 3
Reports - Research: 2
Reports - Evaluative: 1
Education Level
Higher Education: 3
Postsecondary Education: 1
Assessments and Surveys
Graduate Record Examinations: 3
Test of English as a Foreign Language: 1
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for using a contributory scoring approach for the two-essay analytical writing section of the "GRE"® test, in which human and machine scores are combined to create scores at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
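The report's actual combination rule is not reproduced in this abstract, but the core idea of contributory scoring, blending a human rating with a machine score at the task level and then aggregating to the section level, can be sketched as follows. The weights, the half-point rounding rule, and the function names are illustrative assumptions, not the operational ETS specification.

```python
import math

# Hypothetical sketch of contributory scoring: one human rating and one
# machine score contribute to each task score; the two task scores are then
# averaged to a section score on a half-point scale (assumed rounding rule).

def task_score(human: float, machine: float, machine_weight: float = 0.5) -> float:
    """Blend a human rating with a machine score for a single essay task."""
    return (1 - machine_weight) * human + machine_weight * machine

def section_score(issue_task: float, argument_task: float) -> float:
    """Average the two task scores and round to the nearest half point."""
    mean = (issue_task + argument_task) / 2
    return math.floor(mean * 2 + 0.5) / 2  # round half up onto a .0/.5 grid

# Example: issue task rated 4.0 (human) and 4.5 (machine);
# argument task rated 3.0 (human) and 3.5 (machine).
issue = task_score(4.0, 4.5)           # 4.25
argument = task_score(3.0, 3.5)        # 3.25
print(section_score(issue, argument))  # mean 3.75 rounds to 4.0
```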
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue writing tasks. Prompt-specific, generic, and generic-with-prompt-specific-intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
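The abstract names the standard agreement statistics used to evaluate such models. As a rough illustration of how they are typically computed for integer essay ratings, here is a self-contained sketch; the report's exact formulas, score scales, and acceptance thresholds are not given in the abstract, so the details below are common operationalizations rather than ETS's.

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_levels: int) -> float:
    """Quadratic weighted kappa between two integer rating vectors in 0..n_levels-1."""
    observed = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        observed[i, j] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    idx = np.arange(n_levels)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_levels - 1) ** 2
    return 1 - (weights * observed).sum() / (weights * expected).sum()

def standardized_difference(human, machine) -> float:
    """Machine-minus-human mean difference in pooled standard-deviation units."""
    h, m = np.asarray(human, float), np.asarray(machine, float)
    pooled_sd = np.sqrt((h.var(ddof=1) + m.var(ddof=1)) / 2)
    return (m.mean() - h.mean()) / pooled_sd

human   = [3, 4, 4, 5, 2, 3, 4]   # illustrative ratings on a 0-6 scale
machine = [3, 4, 5, 5, 2, 4, 4]
print(quadratic_weighted_kappa(human, machine, n_levels=7))
print(np.corrcoef(human, machine)[0, 1])   # Pearson correlation
print(standardized_difference(human, machine))
```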
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
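The generic model described here amounts to one fixed feature set and one fixed weight vector applied to every prompt, which is what gives scores the same meaning across prompts. A minimal sketch of that idea follows; the feature names and weights are invented for illustration, since e-rater's actual features and weighting are not given in the abstract.

```python
# Hypothetical generic scoring model: the same features and weights score an
# essay from any prompt, existing or new, so scores are comparable across prompts.
GENERIC_WEIGHTS = {
    "organization": 0.30,   # illustrative weights only
    "development": 0.25,
    "grammar": 0.20,
    "usage": 0.15,
    "mechanics": 0.10,
}

def generic_score(features: dict) -> float:
    """Weighted sum of (already standardized) feature values."""
    return sum(GENERIC_WEIGHTS[name] * value for name, value in features.items())

# The same function, with the same weights, scores essays from different prompts:
essay_prompt_a = {"organization": 4.1, "development": 3.8, "grammar": 4.5,
                  "usage": 4.0, "mechanics": 4.2}
essay_prompt_b = {"organization": 2.9, "development": 3.1, "grammar": 3.6,
                  "usage": 3.3, "mechanics": 3.8}
print(generic_score(essay_prompt_a), generic_score(essay_prompt_b))
```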