Haberman, Shelby J. – ETS Research Report Series, 2020
Best linear prediction (BLP) and penalized best linear prediction (PBLP) are techniques for combining sources of information to produce task scores, section scores, and composite test scores. The report examines issues to consider in operational implementation of BLP and PBLP in testing programs administered by ETS [Educational Testing Service].
Descriptors: Prediction, Scores, Tests, Testing Programs
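Best linear prediction combines several observed scores into a single predicted score by choosing the weights that minimize mean squared prediction error; a penalized variant shrinks those weights for stability. The following is a minimal sketch of this idea using made-up task scores and a simple ridge-style penalty (the specific penalty form used operationally at ETS is not described in the abstract, so this is an illustrative assumption).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 task scores per examinee plus a criterion score.
n = 500
tasks = rng.normal(size=(n, 3))
criterion = tasks @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)

def blp_weights(X, y, penalty=0.0):
    """BLP weights from covariances; penalty > 0 gives a ridge-style
    penalized BLP (illustrative, not the exact ETS formulation)."""
    Xc = X - X.mean(axis=0)          # center predictors
    yc = y - y.mean()                # center criterion
    cov_xx = Xc.T @ Xc / len(X)      # predictor covariance matrix
    cov_xy = Xc.T @ yc / len(X)      # predictor-criterion covariances
    return np.linalg.solve(cov_xx + penalty * np.eye(X.shape[1]), cov_xy)

w = blp_weights(tasks, criterion)               # plain BLP
w_pen = blp_weights(tasks, criterion, 0.1)      # penalized BLP
```

The penalty shrinks the weight vector toward zero, trading a little bias for lower variance when the score covariances are estimated from limited data.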
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally to score the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. This study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built, and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
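Weighted kappa, one of the evaluation statistics named above, measures agreement between two raters (e.g. human and machine scores) while penalizing large disagreements more than small ones. Below is a minimal sketch with hypothetical score vectors; the quadratic weighting shown is a common convention, not necessarily the exact variant used in the report.

```python
import numpy as np

def weighted_kappa(a, b, n_cats, scheme="quadratic"):
    """Weighted kappa between two raters' integer scores in 0..n_cats-1."""
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((n_cats, n_cats))
    for i, j in zip(a, b):
        obs[i, j] += 1                       # observed agreement table
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance-expected table
    idx = np.arange(n_cats)
    diff = idx[:, None] - idx[None, :]
    if scheme == "quadratic":
        w = diff**2 / (n_cats - 1) ** 2      # quadratic disagreement weights
    else:
        w = np.abs(diff) / (n_cats - 1)      # linear disagreement weights
    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical human vs. machine essay scores on a 0-3 scale.
human = [0, 1, 2, 2, 3, 1]
machine = [0, 1, 2, 3, 3, 1]
k = weighted_kappa(human, machine, n_cats=4)
```

A kappa of 1 indicates perfect agreement, 0 indicates chance-level agreement; automated-scoring evaluations typically also compare human-machine kappa against human-human kappa.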