Showing all 8 results
Peer reviewed
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
Peer reviewed
Rotou, Ourania; Rupp, André A. – ETS Research Report Series, 2020
This research report provides a description of the processes of evaluating the "deployability" of automated scoring (AS) systems from the perspective of large-scale educational assessments in operational settings. It discusses a comprehensive psychometric evaluation that entails analyses that take into consideration the specific purpose…
Descriptors: Computer Assisted Testing, Scoring, Educational Assessment, Psychometrics
Peer reviewed
Hao, Jiangang; Liu, Lei; Kyllonen, Patrick; Flor, Michael; von Davier, Alina A. – ETS Research Report Series, 2019
Collaborative problem solving (CPS) is an important 21st-century skill that is crucial for both career and academic success. However, developing a large-scale and standardized assessment of CPS that can be administered on a regular basis is very challenging. In this report, we introduce a set of psychometric considerations and a general scoring…
Descriptors: Scoring, Psychometrics, Cooperation, Problem Solving
Peer reviewed
Chen, Lei; Zechner, Klaus; Yoon, Su-Youn; Evanini, Keelan; Wang, Xinhao; Loukina, Anastassia; Tao, Jidong; Davis, Lawrence; Lee, Chong Min; Ma, Min; Mundkowsky, Robert; Lu, Chi; Leong, Chee Wee; Gyawali, Binod – ETS Research Report Series, 2018
This research report provides an overview of the R&D efforts at Educational Testing Service related to its capability for automated scoring of nonnative spontaneous speech with the "SpeechRater"℠ automated scoring service since its initial version was deployed in 2006. While most aspects of this R&D work have been published in…
Descriptors: Computer Assisted Testing, Scoring, Test Scoring Machines, Speech Tests
Peer reviewed
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
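The Chen, Zhang, and Bejar abstract above describes scoring as a function of macrofeatures aggregated from NLP-extracted microfeatures. A minimal sketch of that general idea is shown below; the feature names, normalization, and weights are purely illustrative assumptions, not e-rater's actual features or model.

```python
# Illustrative sketch of feature-based essay scoring (NOT e-rater's
# actual implementation): raw microfeature counts are aggregated into
# length-normalized macrofeatures, which a linear model combines.

def aggregate_macrofeature(micro_counts, essay_length):
    """Aggregate microfeature counts into one rate-style macrofeature."""
    return sum(micro_counts) / max(essay_length, 1)

def score_essay(macrofeatures, weights, intercept=0.0):
    """Weighted linear combination of macrofeature values."""
    return intercept + sum(w * f for w, f in zip(weights, macrofeatures))

# Hypothetical example: grammar-error and word-usage macrofeatures
# built from microfeature counts in a 300-word essay.
grammar = aggregate_macrofeature([3, 1, 2], essay_length=300)
usage = aggregate_macrofeature([12, 8], essay_length=300)
score = score_essay([grammar, usage], weights=[-50.0, 30.0], intercept=3.0)
print(round(score, 2))
```

In practice the weights would be estimated from human-scored essays (e.g., by regression), which is the kind of modeling question the report examines.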
Peer reviewed
Bejar, Isaac I.; VanWinkle, Waverely; Madnani, Nitin; Lewis, William; Steier, Michael – ETS Research Report Series, 2013
The paper applies a natural language computational tool to study a potential construct-irrelevant response strategy, namely the use of "shell language." Although the study is motivated by the impending increase in the volume of scoring of students' responses from assessments to be developed in response to the Race to the Top initiative,…
Descriptors: Responses, Language Usage, Natural Language Processing, Computational Linguistics
Peer reviewed
Blanchard, Daniel; Tetreault, Joel; Higgins, Derrick; Cahill, Aoife; Chodorow, Martin – ETS Research Report Series, 2013
This report presents work on the development of a new corpus of non-native English writing. It will be useful for the task of native language identification, as well as grammatical error detection and correction, and automatic essay scoring. In this report, the corpus is described in detail.
Descriptors: Language Tests, Second Language Learning, English (Second Language), Writing Tests
Peer reviewed
Sukkarieh, Jane Z.; von Davier, Matthias; Yamamoto, Kentaro – ETS Research Report Series, 2012
This document describes a language-independent solution to a problem in the automatic content scoring of the multilingual character-by-character highlighting item type. The solution represents a significant enhancement: it not only facilitates automatic scoring but also plays an important role in clustering students' responses;…
Descriptors: Scoring, Multilingualism, Test Items, Role