Ramineni, Chaitanya; Williamson, David – ETS Research Report Series, 2018
Notable mean score differences for the "e-rater"® automated scoring engine and for humans for essays from certain demographic groups were observed for the "GRE"® General Test in use before the major revision of 2012, called rGRE. The use of e-rater as a check-score model with discrepancy thresholds prevented an adverse impact…
Descriptors: Scores, Computer Assisted Testing, Test Scoring Machines, Automation
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M. – ETS Research Report Series, 2015
Automated scoring models were trained and evaluated for the essay task in the "Praxis I"® writing test. Prompt-specific and generic "e-rater"® scoring models were built, and evaluation statistics, such as quadratic weighted kappa, Pearson correlation, and standardized differences in mean scores, were examined to evaluate the…
Descriptors: Writing Tests, Licensing Examinations (Professions), Teacher Competency Testing, Scoring
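The evaluation statistics named in this abstract (quadratic weighted kappa, Pearson correlation, and standardized mean score differences between human and automated scores) can be illustrated with a short sketch. This is a minimal, self-contained example using made-up score vectors, not ETS's actual evaluation code; the `human` and `machine` arrays and the 1–6 score scale are assumptions for illustration.

```python
import numpy as np

def quadratic_weighted_kappa(a, b, min_rating, max_rating):
    """Quadratic weighted kappa between two integer score vectors."""
    a, b = np.asarray(a), np.asarray(b)
    n = max_rating - min_rating + 1
    # Observed joint score distribution
    O = np.zeros((n, n))
    for x, y in zip(a, b):
        O[x - min_rating, y - min_rating] += 1
    O /= O.sum()
    # Expected distribution under independence (outer product of marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # Quadratic disagreement weights: squared distance between score levels
    i, j = np.indices((n, n))
    W = (i - j) ** 2 / (n - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

def standardized_mean_difference(human, machine):
    """Mean difference scaled by the pooled standard deviation."""
    human = np.asarray(human, float)
    machine = np.asarray(machine, float)
    pooled_sd = np.sqrt((human.var(ddof=1) + machine.var(ddof=1)) / 2)
    return (machine.mean() - human.mean()) / pooled_sd

# Hypothetical human and e-rater scores for ten essays on a 1-6 scale
human   = [3, 4, 4, 5, 2, 3, 4, 5, 3, 4]
machine = [3, 4, 5, 5, 2, 3, 3, 5, 4, 4]

qwk = quadratic_weighted_kappa(human, machine, 1, 6)
r   = np.corrcoef(human, machine)[0, 1]
smd = standardized_mean_difference(human, machine)
```

In operational evaluations these statistics are typically compared against preset thresholds (for example, a minimum kappa and correlation with human scores, and a maximum standardized mean difference) before an automated model is approved for use.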
Beigman Klebanov, Beata; Ramineni, Chaitanya; Kaufer, David; Yeoh, Paul; Ishizaki, Suguru – Language Testing, 2019
Essay writing is a common type of constructed-response task used frequently in standardized writing assessments. However, the impromptu timed nature of the essay writing tests has drawn increasing criticism for the lack of authenticity for real-world writing in classroom and workplace settings. The goal of this paper is to contribute evidence to a…
Descriptors: Test Validity, Writing Tests, Writing Skills, Persuasive Discourse
Ramineni, Chaitanya; Williamson, David M. – Assessing Writing, 2013
In this paper, we provide an overview of psychometric procedures and guidelines Educational Testing Service (ETS) uses to evaluate automated essay scoring for operational use. We briefly describe the e-rater system, the procedures and criteria used to evaluate e-rater, implications for a range of potential uses of e-rater, and directions for…
Descriptors: Educational Testing, Guidelines, Scoring, Psychometrics
Ramineni, Chaitanya – Assessing Writing, 2013
In this paper, I describe the design and evaluation of automated essay scoring (AES) models for an institution's writing placement program. Information was gathered on admitted student writing performance at a science and technology research university in the northeastern United States. Under timed conditions, first-year students (N = 879) were…
Descriptors: Validity, Comparative Analysis, Internet, Student Placement