Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 4 |
Since 2006 (last 20 years) | 5 |
Descriptor
Automation | 6 |
Classification | 6 |
Documentation | 3 |
Scoring | 3 |
Essays | 2 |
Natural Language Processing | 2 |
Prediction | 2 |
Regression (Statistics) | 2 |
Accuracy | 1 |
Artificial Intelligence | 1 |
Case Studies | 1 |
Source
ETS Research Report Series | 6 |
Author
Beigman Klebanov, Beata | 1 |
Bejar, Isaac I. | 1 |
Bruno, James V. | 1 |
Cahill, Aoife | 1 |
Chen, Lei | 1 |
Deane, Paul | 1 |
Dorans, Neil J. | 1 |
Flor, Michael | 1 |
Futagi, Yoko | 1 |
Gyawali, Binod | 1 |
Hao, Jiangang | 1 |
Publication Type
Journal Articles | 6 |
Reports - Research | 6 |
Numerical/Quantitative Data | 1 |
Education Level
Higher Education | 2 |
Postsecondary Education | 1 |
Assessments and Surveys
Graduate Record Examinations | 1 |
Wang, Wei; Dorans, Neil J. – ETS Research Report Series, 2021
Agreement statistics and measures of prediction accuracy are often used to assess the quality of two measures of a construct. Agreement statistics are appropriate for measures that are supposed to be interchangeable, whereas prediction accuracy statistics are appropriate for situations where one variable is the target and the other variables are…
Descriptors: Classification, Scaling, Prediction, Accuracy
Song, Yi; Deane, Paul; Beigman Klebanov, Beata – ETS Research Report Series, 2017
This project focuses on laying the foundations for automated analysis of argumentation schemes, supporting identification and classification of the arguments being made in a text, for the purpose of scoring the quality of written analyses of arguments. We developed annotation protocols for 20 argument prompts from a college-level test under the…
Descriptors: Scoring, Automation, Persuasive Discourse, Documentation
Hao, Jiangang; Chen, Lei; Flor, Michael; Liu, Lei; von Davier, Alina A. – ETS Research Report Series, 2017
Conversations in collaborative problem-solving activities can be used to probe the collaboration skills of the team members. Annotating the conversations into different collaboration skills by human raters is laborious and time consuming. In this report, we describe our work on developing an automated annotation system, CPS-rater, for conversational…
Descriptors: Problem Solving, Cooperative Learning, Teamwork, Documentation
Bruno, James V.; Cahill, Aoife; Gyawali, Binod – ETS Research Report Series, 2016
We present an annotation scheme for classifying differences in the outputs of syntactic constituency parsers when a gold standard is unavailable or undesired, as in the case of texts written by nonnative speakers of English. We discuss its automated implementation and the results of a case study that uses the scheme to choose a parser best suited…
Descriptors: Documentation, Classification, Differences, Syntax
Williamson, David M.; Bejar, Isaac I.; Sax, Anne – ETS Research Report Series, 2004
As automated scoring of complex constructed-response examinations reaches operational status, the process of evaluating the quality of resultant scores, particularly in contrast to scores of expert human graders, becomes as complex as the data itself. Using a vignette from the Architectural Registration Examination (ARE), this paper explores the…
Descriptors: Automation, Scoring, Tests, Classification
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko; Hemat, Ramin; Zuckerman, Daniel – ETS Research Report Series, 2006
This paper describes the development, implementation, and evaluation of an automated system for predicting the acceptability status of candidate reading-comprehension stimuli extracted from a database of journal and magazine articles. The system uses a combination of classification and regression techniques to predict the probability that a given…
Descriptors: Automation, Prediction, Reading Comprehension, Classification