Showing all 6 results
Peer reviewed
Paul Deane; Duanli Yan; Katherine Castellano; Yigal Attali; Michelle Lamar; Mo Zhang; Ian Blood; James V. Bruno; Chen Li; Wenju Cui; Chunyi Ruan; Colleen Appel; Kofi James; Rodolfo Long; Farah Qureshi – ETS Research Report Series, 2024
This paper presents a multidimensional model of variation in writing quality, register, and genre in student essays, trained and tested via confirmatory factor analysis of 1.37 million essay submissions to ETS' digital writing service, Criterion®. The model was also validated with several other corpora, which indicated that it provides a…
Descriptors: Writing (Composition), Essays, Models, Elementary School Students
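A minimal sketch of the confirmatory-factor-analysis technique this entry describes, assuming the third-party `semopy` package; the feature names, the two-dimension structure, and the simulated data are invented for illustration and are far simpler than the report's actual model of quality, register, and genre.

```python
# Illustrative CFA sketch, assuming the third-party `semopy` package.
# Feature names and the two-factor structure are invented; the report's
# multidimensional model is much larger.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 500
quality = rng.normal(size=n)    # simulated latent writing quality
register = rng.normal(size=n)   # simulated latent register

data = pd.DataFrame({
    "vocab_level":   quality + rng.normal(scale=0.5, size=n),
    "grammar_acc":   quality + rng.normal(scale=0.5, size=n),
    "organization":  quality + rng.normal(scale=0.5, size=n),
    "formality":     register + rng.normal(scale=0.5, size=n),
    "nominal_style": register + rng.normal(scale=0.5, size=n),
})

# lavaan-style description: each latent dimension loads on its indicators.
desc = """
quality  =~ vocab_level + grammar_acc + organization
register =~ formality + nominal_style
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())           # loadings and factor (co)variances
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```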
Peer reviewed
Hao, Jiangang; Liu, Lei; Kyllonen, Patrick; Flor, Michael; von Davier, Alina A. – ETS Research Report Series, 2019
Collaborative problem solving (CPS) is an important 21st-century skill that is crucial for both career and academic success. However, developing a large-scale and standardized assessment of CPS that can be administered on a regular basis is very challenging. In this report, we introduce a set of psychometric considerations and a general scoring…
Descriptors: Scoring, Psychometrics, Cooperation, Problem Solving
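The abstract stops short of the scoring details, but purely as an illustration, here is one way chat turns coded for CPS skills could be aggregated into per-student subscores; the skill codes and weights below are invented, not the report's scoring rules.

```python
# Hypothetical aggregation of CPS-coded chat turns into subscores.
# Skill categories and weights are invented for illustration.
from collections import Counter

CPS_WEIGHTS = {
    "sharing_information": 1.0,
    "negotiating": 1.5,
    "regulating": 1.5,
    "maintaining_communication": 0.5,
}

def cps_subscores(coded_turns):
    """coded_turns: list of (student_id, skill_code) pairs."""
    counts = {}
    for student, code in coded_turns:
        counts.setdefault(student, Counter())[code] += 1
    return {
        student: sum(CPS_WEIGHTS.get(code, 0.0) * n for code, n in c.items())
        for student, c in counts.items()
    }

log = [("s1", "sharing_information"), ("s2", "negotiating"),
       ("s1", "regulating"), ("s2", "maintaining_communication")]
print(cps_subscores(log))  # {'s1': 2.5, 's2': 2.0}
```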
Peer reviewed
Chen, Lei; Zechner, Klaus; Yoon, Su-Youn; Evanini, Keelan; Wang, Xinhao; Loukina, Anastassia; Tao, Jidong; Davis, Lawrence; Lee, Chong Min; Ma, Min; Mundkowsky, Robert; Lu, Chi; Leong, Chee Wee; Gyawali, Binod – ETS Research Report Series, 2018
This research report provides an overview of the R&D efforts at Educational Testing Service related to its capability for automated scoring of nonnative spontaneous speech with the "SpeechRater"℠ automated scoring service since its initial version was deployed in 2006. While most aspects of this R&D work have been published in…
Descriptors: Computer Assisted Testing, Scoring, Test Scoring Machines, Speech Tests
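By way of illustration only, a sketch of the kind of fluency features such a system might derive from time-aligned recognizer output; the feature set, threshold, and toy alignment are assumptions, not SpeechRater's actual implementation.

```python
# Hypothetical fluency features from time-aligned ASR output.
# The 0.5 s pause threshold and feature names are illustrative only.

def fluency_features(words, total_dur):
    """words: list of (token, start_sec, end_sec) from a forced alignment."""
    n = len(words)
    speech_time = sum(end - start for _, start, end in words)
    pauses = [b_start - a_end
              for (_, _, a_end), (_, b_start, _) in zip(words, words[1:])]
    long_pauses = [p for p in pauses if p > 0.5]
    return {
        "speaking_rate": n / total_dur,                  # words per second
        "articulation_rate": n / max(speech_time, 1e-9), # excludes pauses
        "long_pause_rate": len(long_pauses) / n,
        "mean_long_pause": (sum(long_pauses) / len(long_pauses)
                            if long_pauses else 0.0),
    }

aligned = [("the", 0.0, 0.2), ("test", 0.3, 0.7), ("was", 1.6, 1.8),
           ("hard", 1.9, 2.4)]
print(fluency_features(aligned, total_dur=2.5))
```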
Peer reviewed
Bruno, James V.; Cahill, Aoife; Gyawali, Binod – ETS Research Report Series, 2016
We present an annotation scheme for classifying differences in the outputs of syntactic constituency parsers when a gold standard is unavailable or undesired, as in the case of texts written by nonnative speakers of English. We discuss its automated implementation and the results of a case study that uses the scheme to choose a parser best suited…
Descriptors: Documentation, Classification, Differences, Syntax
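The report's scheme is a manual annotation protocol, but the underlying task of comparing parser outputs without a gold standard can be illustrated with a simple labeled-span overlap between two constituency parses; this toy evalb-style F1, assuming `nltk` for tree handling, is an illustration rather than the scheme itself.

```python
# Toy comparison of two constituency parses via labeled-span overlap,
# assuming `nltk` for bracketed-tree parsing.
from nltk import Tree

def labeled_spans(tree):
    """Return the set of (label, start, end) constituents in a parse."""
    spans = set()
    pos = 0
    def walk(t):
        nonlocal pos
        if isinstance(t, str):  # leaf token: advance position
            pos += 1
            return
        start = pos
        for child in t:
            walk(child)
        spans.add((t.label(), start, pos))
    walk(tree)
    return spans

# Two parsers bracket the same sentence differently.
parse_a = Tree.fromstring("(S (NP (DT the) (NN dog)) (VP (VBD ran)))")
parse_b = Tree.fromstring("(S (NP (DT the) (NN dog) (VBD ran)))")

a, b = labeled_spans(parse_a), labeled_spans(parse_b)
overlap = len(a & b)
precision, recall = overlap / len(b), overlap / len(a)
print(f"F1 = {2 * precision * recall / (precision + recall):.2f}")
```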
Peer reviewed
Chen, Jing; Zhang, Mo; Bejar, Isaac I. – ETS Research Report Series, 2017
Automated essay scoring (AES) generally computes essay scores as a function of macrofeatures derived from a set of microfeatures extracted from the text using natural language processing (NLP). In the "e-rater"® automated scoring engine, developed at "Educational Testing Service" (ETS) for the automated scoring of essays, each…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essay Tests
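The microfeature-to-macrofeature architecture the abstract describes can be sketched as follows: NLP-derived microfeatures are aggregated into macrofeatures, and a weighted combination of the macrofeatures yields the score. The features and weights here are invented stand-ins, not e-rater's actual feature set.

```python
# Illustrative micro -> macro -> score pipeline; all features and
# weights are invented, not e-rater's actual implementation.
import re

def microfeatures(essay):
    words = re.findall(r"[A-Za-z']+", essay)
    sents = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    return {
        "n_words": len(words),
        "n_sents": len(sents),
        "n_long_words": sum(len(w) > 7 for w in words),
        "n_types": len({w.lower() for w in words}),
    }

def macrofeatures(m):
    return {
        "fluency": m["n_words"] / max(m["n_sents"], 1),  # avg sentence length
        "vocabulary": m["n_long_words"] / max(m["n_words"], 1),
        "diversity": m["n_types"] / max(m["n_words"], 1),
    }

WEIGHTS = {"fluency": 0.1, "vocabulary": 4.0, "diversity": 2.0}

def score(essay):
    macros = macrofeatures(microfeatures(essay))
    return sum(WEIGHTS[k] * v for k, v in macros.items())

print(round(score("The experiment demonstrated unexpected results. "
                  "Participants improved substantially."), 2))
```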
Peer reviewed
Sheehan, Kathleen M.; Kostin, Irene; Futagi, Yoko; Hemat, Ramin; Zuckerman, Daniel – ETS Research Report Series, 2006
This paper describes the development, implementation, and evaluation of an automated system for predicting the acceptability status of candidate reading-comprehension stimuli extracted from a database of journal and magazine articles. The system uses a combination of classification and regression techniques to predict the probability that a given…
Descriptors: Automation, Prediction, Reading Comprehension, Classification
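A minimal sketch of the probability-prediction step, using a logistic classifier over two invented passage features as a stand-in for the report's actual combination of classification and regression techniques; the training data below are fabricated solely to make the example runnable.

```python
# Illustrative acceptability screening with scikit-learn; features,
# labels, and the single-model setup are assumptions, not the report's.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [avg sentence length, rare-word rate]
# per passage, labeled 1 if accepted as a reading-comprehension stimulus.
X = np.array([[14.0, 0.02], [31.0, 0.12], [18.0, 0.04],
              [28.0, 0.10], [15.0, 0.03], [26.0, 0.09]])
y = np.array([1, 0, 1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)

candidate = np.array([[20.0, 0.05]])
p_accept = clf.predict_proba(candidate)[0, 1]
print(f"P(acceptable) = {p_accept:.2f}")  # screen passages above a threshold
```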