Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 3
Since 2016 (last 10 years): 10
Since 2006 (last 20 years): 26
Descriptor
Essays: 29
Scoring: 22
Writing Tests: 14
Computer Assisted Testing: 12
Correlation: 11
Scores: 11
Writing Evaluation: 10
Automation: 9
Prompting: 9
English (Second Language): 8
Language Tests: 8
Source
ETS Research Report Series: 29
Author
Zhang, Mo: 5
Attali, Yigal: 4
Ramineni, Chaitanya: 3
Sinharay, Sandip: 3
Williamson, David M.: 3
Breyer, F. Jay: 2
Bridgeman, Brent: 2
Davey, Tim: 2
Deane, Paul: 2
Haberman, Shelby J.: 2
Kantor, Robert: 2
Publication Type
Journal Articles: 29
Reports - Research: 29
Tests/Questionnaires: 6
Numerical/Quantitative Data: 1
Education Level
Higher Education: 12
Postsecondary Education: 11
Secondary Education: 8
Junior High Schools: 5
Middle Schools: 5
Elementary Education: 4
High Schools: 4
Grade 8: 3
Grade 4: 2
Intermediate Grades: 2
Grade 10: 1
Location
Indiana: 2
California (Los Angeles): 1
Canada: 1
Georgia: 1
Germany: 1
Iowa: 1
Michigan: 1
Minnesota: 1
New Jersey: 1
New York: 1
Pennsylvania: 1
Assessments and Surveys
Test of English as a Foreign Language: 7
Graduate Record Examinations: 6
Praxis Series: 2
National Merit Scholarship Qualifying Test: 1
Preliminary Scholastic Aptitude Test: 1
Ling, Guangming; Williams, Jean; O'Brien, Sue; Cavalie, Carlos F. – ETS Research Report Series, 2022
Recognizing the appealing features of a tablet (e.g., an iPad), including size, mobility, touch screen display, and virtual keyboard, more educational professionals are moving away from larger laptop and desktop computers and turning to the iPad for their daily work, such as reading and writing. Following the results of a recent survey of…
Descriptors: Tablet Computers, Computers, Essays, Scoring
Wang, Wei; Dorans, Neil J. – ETS Research Report Series, 2021
Agreement statistics and measures of prediction accuracy are often used to assess the quality of two measures of a construct. Agreement statistics are appropriate for measures that are supposed to be interchangeable, whereas prediction accuracy statistics are appropriate for situations where one variable is the target and the other variables are…
Descriptors: Classification, Scaling, Prediction, Accuracy
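The Wang and Dorans entry above turns on the distinction between agreement statistics, for measures meant to be interchangeable, and prediction-accuracy statistics, for a target-and-predictor setup. A minimal sketch of that distinction in plain NumPy, with made-up scores rather than the authors' data or code:

```python
# Contrast agreement statistics (interchangeable measures) with
# prediction-accuracy statistics (one measure is the target).
# The score vectors below are illustrative, not from the report.
import numpy as np

human = np.array([3, 4, 2, 5, 3, 4, 1, 4, 3, 2])    # hypothetical human ratings, 1-5 scale
machine = np.array([3, 4, 3, 4, 3, 4, 2, 4, 3, 2])  # hypothetical machine scores

# Agreement view: exact agreement and quadratic-weighted kappa.
exact = np.mean(human == machine)

cats = np.arange(1, 6)
obs = np.zeros((5, 5))
for h, m in zip(human, machine):
    obs[h - 1, m - 1] += 1
obs /= obs.sum()                              # observed joint proportions
ph, pm = obs.sum(axis=1), obs.sum(axis=0)     # marginal distributions
exp = np.outer(ph, pm)                        # chance-expected table
w = (cats[:, None] - cats[None, :]) ** 2 / (5 - 1) ** 2  # quadratic weights
qwk = 1 - (w * obs).sum() / (w * exp).sum()

# Prediction view: treat the human score as the target to be predicted.
rmse = np.sqrt(np.mean((human - machine) ** 2))
r = np.corrcoef(human, machine)[0, 1]

print(f"exact={exact:.2f}, QWK={qwk:.2f}, RMSE={rmse:.2f}, r={r:.2f}")
```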
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
Wendler, Cathy; Glazer, Nancy; Cline, Frederick – ETS Research Report Series, 2019
One of the challenges in scoring constructed-response (CR) items and tasks is ensuring that rater drift does not occur during or across scoring windows. Rater drift reflects changes in how raters interpret and use established scoring criteria to assign essay scores. Calibration is a process used to help control rater drift and, as such, serves as…
Descriptors: College Entrance Examinations, Graduate Study, Accuracy, Test Reliability
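Calibration, as Wendler, Glazer, and Cline describe it, has raters score pre-validated essays and compares their ratings to the established scores before live scoring. A minimal sketch of such a check, with an assumed tolerance and pass rate rather than any operational criteria:

```python
# A toy calibration gate: the tolerance and minimum pass rate are
# illustrative assumptions, not ETS's operational thresholds.
def passes_calibration(rater_scores: list[float], true_scores: list[float],
                       tolerance: float = 0.5, min_rate: float = 0.8) -> bool:
    """True if the rater matches enough calibration essays within tolerance."""
    hits = [abs(r - t) <= tolerance for r, t in zip(rater_scores, true_scores)]
    return sum(hits) / len(hits) >= min_rate

# A rater who drifts high on two of five calibration essays fails the check.
print(passes_calibration([4.0, 5.0, 3.0, 5.5, 4.5], [4.0, 4.0, 3.0, 4.0, 4.0]))
```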
Cao, Yi; Chen, Jianshen; Zhang, Mo; Li, Chen – ETS Research Report Series, 2020
Scenario-based writing assessment has two salient characteristics by design: a lead-in/essay scaffolding structure and a unified scenario/topic throughout. In this study, we examine whether the scenario-based assessment design would impact students' essay scores compared to its alternative conditions, which intentionally broke the scaffolding…
Descriptors: Writing Processes, Vignettes, Writing Evaluation, Regression (Statistics)
Song, Yi; Deane, Paul; Beigman Klebanov, Beata – ETS Research Report Series, 2017
This project focuses on laying the foundations for automated analysis of argumentation schemes, supporting identification and classification of the arguments being made in a text, for the purpose of scoring the quality of written analyses of arguments. We developed annotation protocols for 20 argument prompts from a college-level test under the…
Descriptors: Scoring, Automation, Persuasive Discourse, Documentation
Zhang, Mo; Chen, Jing; Ruan, Chunyi – ETS Research Report Series, 2016
Successful detection of unusual responses is critical for using machine scoring in the assessment context. This study evaluated the utility of approaches to detecting unusual responses in automated essay scoring. Two research questions were pursued. One question concerned the performance of various prescreening advisory flags, and the other…
Descriptors: Essays, Scoring, Automation, Test Scoring Machines
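Prescreening advisory flags of the kind Zhang, Chen, and Ruan evaluate route atypical responses away from machine scoring. A minimal sketch with invented checks and thresholds; the operational e-rater flags are not described in this snippet:

```python
# Hypothetical prescreening flags for automated essay scoring.
# Every check and threshold here is an illustrative assumption.
from collections import Counter

def advisory_flags(text: str, min_words: int = 50,
                   max_top_word_share: float = 0.10) -> list[str]:
    """Return the advisory flags raised for one essay response."""
    flags = []
    words = text.lower().split()

    if len(words) < min_words:
        flags.append("too_short")             # not enough text to score reliably

    if words:
        top_share = Counter(words).most_common(1)[0][1] / len(words)
        if top_share > max_top_word_share:
            flags.append("excessive_repetition")   # one token dominates

        if len(set(words)) < 0.3 * len(words):
            flags.append("low_vocabulary_diversity")  # looping or pasted text

    return flags

# Responses raising any flag would be routed to a human rater
# instead of being machine-scored automatically.
print(advisory_flags("the " * 80))
```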
Rupp, André A.; Casabianca, Jodi M.; Krüger, Maleika; Keller, Stefan; Köller, Olaf – ETS Research Report Series, 2019
In this research report, we describe the design and empirical findings for a large-scale study of essay writing ability with approximately 2,500 high school students in Germany and Switzerland on the basis of 2 tasks with 2 associated prompts, each from a standardized writing assessment whose scoring involved both human and automated components.…
Descriptors: Automation, Foreign Countries, English (Second Language), Language Tests
Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
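The summary score Yao, Haberman, and Zhang describe is a linear combination of a human holistic score and machine-extracted feature scores. A worked toy example with made-up weights and features, not the report's actual model:

```python
# Hypothetical summary writing score: a linear combination of one human
# holistic score and two machine-generated essay features. All weights,
# features, and the intercept are illustrative assumptions.
human_holistic = 4.0
features = {
    "grammar_errors_per_word": 0.012,    # hypothetical machine-extracted rate
    "vocabulary_sophistication": 0.65,   # hypothetical scaled feature
}
weights = {
    "human": 0.6,
    "grammar_errors_per_word": -20.0,    # more errors lower the score
    "vocabulary_sophistication": 1.5,
}
intercept = 1.0

summary = (intercept
           + weights["human"] * human_holistic
           + sum(weights[k] * v for k, v in features.items()))
# 1.0 + 0.6*4.0 - 20.0*0.012 + 1.5*0.65 = 4.135
print(f"summary writing score: {summary:.2f}")
```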
Zhu, Mengxiao; Zhang, Mo; Deane, Paul – ETS Research Report Series, 2019
The research on using event logs and item response time to study test-taking processes is rapidly growing in the field of educational measurement. In this study, we analyzed the keystroke logs collected from 761 middle school students in the United States as they completed a persuasive writing task. Seven variables were extracted from the…
Descriptors: Keyboarding (Data Entry), Data Collection, Data Analysis, Writing Processes
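Keystroke-log analysis of the kind Zhu, Zhang, and Deane report turns raw timestamped events into process variables. A minimal sketch with an assumed log format and three illustrative features; the study's seven variables are not reproduced here:

```python
# Toy keystroke-log feature extraction. The event format, the pause
# threshold, and the three features are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class KeyEvent:
    time_ms: int     # timestamp of the keystroke
    key: str         # character typed, or "BACKSPACE"

def extract_features(log: list[KeyEvent], pause_ms: int = 2000) -> dict:
    """Compute simple writing-process features from one student's log."""
    if len(log) < 2:
        return {"total_time_s": 0.0, "long_pauses": 0, "deletion_rate": 0.0}

    gaps = [b.time_ms - a.time_ms for a, b in zip(log, log[1:])]
    deletions = sum(e.key == "BACKSPACE" for e in log)
    return {
        "total_time_s": (log[-1].time_ms - log[0].time_ms) / 1000,
        "long_pauses": sum(g >= pause_ms for g in gaps),  # planning pauses
        "deletion_rate": deletions / len(log),            # revision behavior
    }

log = [KeyEvent(0, "T"), KeyEvent(150, "h"), KeyEvent(300, "e"),
       KeyEvent(3500, " "), KeyEvent(3700, "BACKSPACE")]
print(extract_features(log))
```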
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
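The check-score design Breyer et al. investigate uses the machine score to verify a single human score rather than averaging into it. A minimal sketch of one such rule, with an assumed discrepancy threshold and adjudication step, not the operational GRE procedure:

```python
# Toy check-score workflow: the machine score verifies the human score;
# a large discrepancy triggers a second human rating. The threshold and
# the adjudication rule are illustrative assumptions.
def final_score(human: float, machine: float,
                second_human: float | None = None,
                threshold: float = 1.0) -> float:
    """Return the reported score under a simple check-score rule."""
    if abs(human - machine) <= threshold:
        return human                       # human score stands; machine agrees
    if second_human is None:
        raise ValueError("discrepancy exceeds threshold; second rating needed")
    return (human + second_human) / 2      # adjudicate with a second human

print(final_score(4.0, 4.5))                     # 4.0: within threshold
print(final_score(4.0, 6.0, second_human=5.0))   # 4.5: adjudicated
```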
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of "TOEFL iBT"® independent and integrated tasks. In this study we explored the psychometric added value of reporting four trait scores for each of these two tasks, beyond the total e-rater score.The four trait scores are word choice, grammatical…
Descriptors: Writing Tests, Scores, Language Tests, English (Second Language)
Heilman, Michael; Madnani, Nitin – ETS Research Report Series, 2012
Many writing assessments use generic prompts about social issues. However, we currently lack an understanding of how test takers respond to such prompts. In the absence of such an understanding, automated scoring systems may not be as reliable as they could be and may worsen over time. To move toward a deeper understanding of responses to generic…
Descriptors: Writing Evaluation, Scoring, Prompting, Responses
Fu, Jianbin; Chung, Seunghee; Wise, Maxwell – ETS Research Report Series, 2013
The Cognitively Based Assessment of, for, and as Learning ("CBAL"™) research initiative is aimed at developing an innovative approach to K-12 assessment based on cognitive competency models. Because the choice of scoring and equating approaches depends on test dimensionality, the dimensional structure of CBAL tests must be understood.…
Descriptors: Cognitive Measurement, Cognitive Ability, Scoring, Grade 4
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these tasks, this study explored the value added of reporting 4 trait scores for each of these 2 tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar