Showing all 13 results
Peer reviewed
PDF on ERIC Download full text
Attali, Yigal – ETS Research Report Series, 2020
Principles of skill acquisition dictate that raters should be provided with frequent feedback about their ratings. However, in current operational practice, raters rarely receive immediate feedback about their scores owing to the prohibitive effort required to generate such feedback. An approach for generating and administering feedback responses…
Descriptors: Feedback (Response), Evaluators, Accuracy, Scores
Peer reviewed
PDF on ERIC Download full text
Feng, Gary; Joe, Jilliam; Kitchen, Christopher; Mao, Liyang; Roohr, Katrina Crotts; Chen, Lei – ETS Research Report Series, 2019
This proof-of-concept study examined the feasibility of a new scoring procedure designed to reduce the time of scoring a video-based public speaking assessment task. Instead of scoring the video in its entirety, the performance was evaluated based on content-related (e.g., speech organization, word choice) and delivery-related (e.g., vocal…
Descriptors: Scoring, Public Speaking, Video Technology, Evaluation Methods
Peer reviewed
PDF on ERIC Download full text
Qi, Yi; Bell, Courtney A.; Jones, Nathan D.; Lewis, Jennifer M.; Witherspoon, Margaret W.; Redash, Amanda – ETS Research Report Series, 2018
Teacher observations are being used for high-stakes purposes in states across the country, and administrators often serve as raters in teacher evaluation systems. This paper examines how the cognitive aspects of administrators' use of an observation instrument, a modified version of Charlotte Danielson's Framework for Teaching, interact with the…
Descriptors: Teacher Evaluation, Classroom Observation Techniques, Observation, Evaluation Methods
Peer reviewed
PDF on ERIC Download full text
Breyer, F. Jay; Rupp, André A.; Bridgeman, Brent – ETS Research Report Series, 2017
In this research report, we present an empirical argument for the use of a contributory scoring approach for the 2-essay writing assessment of the analytical writing section of the "GRE"® test in which human and machine scores are combined for score creation at the task and section levels. The approach was designed to replace a currently…
Descriptors: College Entrance Examinations, Scoring, Essay Tests, Writing Evaluation
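As a loose illustration only (the report's actual weighting and rounding rules are not reproduced here), a contributory approach blends a human rating and a machine rating into each task score and then aggregates tasks into a section score. The sketch below is hypothetical: the function names, the machine_weight parameter, and the score values are invented for the example and are not taken from the study.

```python
# Hypothetical sketch of a contributory scoring scheme: blend human and
# machine ratings per task, then average task scores into a section score.
# All weights, rounding rules, and values here are illustrative assumptions.
def contributory_task_score(human: float, machine: float,
                            machine_weight: float = 0.5) -> float:
    """Blend one human rating and one machine rating into a task score."""
    return (1 - machine_weight) * human + machine_weight * machine

def section_score(task_scores: list[float]) -> float:
    """Average the blended task scores to form a section-level score."""
    return round(sum(task_scores) / len(task_scores), 1)

# Two writing tasks (issue and argument), each with one human and one
# machine score in this illustration.
issue = contributory_task_score(human=4.0, machine=4.5)
argument = contributory_task_score(human=3.5, machine=3.5)
print(section_score([issue, argument]))  # 3.9 for these illustrative inputs
```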
Peer reviewed
PDF on ERIC Download full text
Ackerman, Debra J. – ETS Research Report Series, 2020
Over the past 8 years, U.S. kindergarten classrooms have been impacted by policies mandating or recommending the administration of a specific kindergarten entry assessment (KEA) in the initial months of school as well as the increasing reliance on digital technology in the form of mobile apps, touchscreen devices, and online data platforms. Using…
Descriptors: Kindergarten, School Readiness, Computer Assisted Testing, Preschool Teachers
Peer reviewed
PDF on ERIC Download full text
Oliveri, María Elena; Lawless, René – ETS Research Report Series, 2018
In this paper, we first examine the challenges of score comparability associated with the use of assessments that are exported. By exported assessments, we mean assessments that are developed for domestic use and are then administered in other countries in either the same or a different language. Second, we provide suggestions to better support…
Descriptors: Scores, Scoring, Higher Education, College Students
Peer reviewed
PDF on ERIC Download full text
Kannan, Priya – ETS Research Report Series, 2016
Federal accountability requirements after the No Child Left Behind (NCLB) Act of 2001 and the need to report progress for various disaggregated subgroups of students have meant that the methods used to set and articulate performance standards across the grades must be revisited. Several solutions that involve either "a priori" deliberations…
Descriptors: Evaluation Methods, Cutting Scores, Elementary Secondary Education, Student Evaluation
Peer reviewed
PDF on ERIC Download full text
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
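The evaluation statistics named in this abstract (and again in the companion TOEFL e-rater report below) compare paired human and machine scores. A minimal sketch of how such statistics can be computed is shown here; the score arrays are invented for illustration and only standard scipy/scikit-learn routines are used, not anything specific to the e-rater engine.

```python
# Minimal sketch: agreement statistics for paired human and machine scores --
# quadratic weighted kappa, Pearson correlation, and the standardized
# difference in mean scores. The ratings below are illustrative, not study data.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([4, 3, 5, 2, 4, 3, 5, 4])    # hypothetical human ratings
machine = np.array([4, 3, 4, 2, 5, 3, 5, 3])  # hypothetical machine scores

qwk = cohen_kappa_score(human, machine, weights="quadratic")
r, _ = pearsonr(human, machine)

# Standardized mean difference: difference in means over the pooled SD.
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + machine.std(ddof=1) ** 2) / 2)
smd = (machine.mean() - human.mean()) / pooled_sd

print(f"Quadratic weighted kappa: {qwk:.3f}")
print(f"Pearson correlation:      {r:.3f}")
print(f"Standardized difference:  {smd:.3f}")
```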
Peer reviewed
PDF on ERIC Download full text
Sheehan, Kathleen M. – ETS Research Report Series, 2016
The "TextEvaluator"® text analysis tool is a fully automated text complexity evaluation tool designed to help teachers and other educators select texts that are consistent with the text complexity guidelines specified in the Common Core State Standards (CCSS). This paper provides an overview of the TextEvaluator measurement approach and…
Descriptors: Automation, Evaluation Methods, Reading Material Selection, Common Core State Standards
Peer reviewed
PDF on ERIC Download full text
Guzman-Orth, Danielle; Lopez, Alexis A.; Tolentino, Florencia – ETS Research Report Series, 2017
Dual language learners (DLLs) and the various educational programs that serve them are increasing in number across the country. This framework lays out a conceptual approach for dual language assessment tasks designed to measure the language and literacy skills of young DLLs entering kindergarten in the United States. Although our examples focus…
Descriptors: Bilingualism, Second Language Learning, English Language Learners, Spanish Speaking
Peer reviewed
PDF on ERIC Download full text
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
Peer reviewed
PDF on ERIC Download full text
Xi, Xiaoming; Higgins, Derrick; Zechner, Klaus; Williamson, David M. – ETS Research Report Series, 2008
This report presents the results of a research and development effort for SpeechRater℠ Version 1.0 (v1.0), an automated scoring system for the spontaneous speech of English language learners used operationally in the Test of English as a Foreign Language™ (TOEFL®) Practice Online assessment (TPO). The report includes a summary of the validity…
Descriptors: Speech, Scoring, Scoring Rubrics, Scoring Formulas
Peer reviewed
PDF on ERIC Download full text
Zechner, Klaus; Bejar, Isaac I.; Hemat, Ramin – ETS Research Report Series, 2007
The increasing availability and performance of computer-based testing has prompted more research on the automatic assessment of language and speaking proficiency. In this investigation, we evaluated the feasibility of using an off-the-shelf speech-recognition system for scoring speaking prompts from the LanguEdge field test of 2002. We first…
Descriptors: Role, Computer Assisted Testing, Language Proficiency, Oral Language