Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the e-rater® scoring engine as a check score in place of all-human scoring for the Graduate Record Examinations® (GRE®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
Attali, Yigal; Powers, Don; Hawthorn, John – ETS Research Report Series, 2008
Registered examinees for the GRE® General Test answered open-ended sentence-completion items. For half of the items, participants received immediate feedback on the correctness of their answers and up to two opportunities to revise their answers. A significant feedback-and-revision effect was found. Participants were able to correct many of their…
Descriptors: College Entrance Examinations, Graduate Study, Sentences, Psychometrics
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation