McCaffrey, Daniel F.; Casabianca, Jodi M.; Ricker-Pedley, Kathryn L.; Lawless, René R.; Wendler, Cathy – ETS Research Report Series, 2022
This document describes a set of best practices for developing, implementing, and maintaining the critical process of scoring constructed-response tasks. These practices address both the use of human raters and automated scoring systems as part of the scoring process and cover the scoring of written, spoken, performance, or multimodal responses.…
Descriptors: Best Practices, Scoring, Test Format, Computer Assisted Testing
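The report's abstract notes that its practices cover both human raters and automated systems. As a concrete illustration of one widely used human-scoring practice in this area, the sketch below implements double scoring with adjudication: each response receives two independent ratings, and pairs that disagree by more than a set threshold are routed to a third rater. The function name and the one-point threshold are illustrative assumptions, not details taken from the report itself.

from typing import Optional

def resolve_score(rating1: int, rating2: int, max_discrepancy: int = 1) -> Optional[float]:
    """Return a final score from two independent human ratings, or None if
    the pair must be routed to a third, adjudicating rater."""
    if abs(rating1 - rating2) > max_discrepancy:
        return None  # discrepant pair: send to adjudication
    return (rating1 + rating2) / 2  # exact or adjacent agreement: average

if __name__ == "__main__":
    for pair in [(4, 4), (3, 4), (2, 5)]:
        final = resolve_score(*pair)
        print(pair, "->", "adjudicate" if final is None else final)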
Williamson, David M.; Xi, Xiaoming; Breyer, F. Jay – Educational Measurement: Issues and Practice, 2012
The article provides a framework for the evaluation and use of automated scoring of constructed-response tasks, entailing both evaluation of automated scoring and guidelines for its implementation and maintenance in the context of constantly evolving technologies. Validity issues and challenges associated with automated scoring are…
Descriptors: Automation, Scoring, Evaluation, Guidelines
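Evaluation frameworks of this kind typically compare how well an automated engine agrees with a human rater against how well two human raters agree with each other. A statistic commonly used for that comparison is quadratic weighted kappa; the sketch below is a minimal, self-contained implementation with fabricated scores for illustration, not data or code from the article.

import numpy as np

def quadratic_weighted_kappa(a: np.ndarray, b: np.ndarray, num_levels: int) -> float:
    """Agreement between two integer score vectors on a 0..num_levels-1 scale,
    penalizing larger disagreements quadratically."""
    # Joint distribution of the two raters' scores.
    observed = np.zeros((num_levels, num_levels))
    for i, j in zip(a, b):
        observed[i, j] += 1
    # Chance agreement implied by each rater's marginal score distribution.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / len(a)
    # Quadratic penalty: a two-point disagreement costs four times a one-point one.
    idx = np.arange(num_levels)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (num_levels - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Illustrative comparison: does the engine agree with rater 1 about as well
# as a second human does? Scores here are made up for the example.
human1  = np.array([3, 2, 4, 3, 1, 2, 4, 3])
human2  = np.array([3, 3, 4, 2, 1, 2, 3, 3])
machine = np.array([3, 2, 3, 3, 1, 3, 4, 2])
print("human-human QWK: ", round(quadratic_weighted_kappa(human1, human2, 5), 3))
print("human-machine QWK:", round(quadratic_weighted_kappa(human1, machine, 5), 3))

Under a framework like the one the abstract describes, a human-machine kappa that falls well below the human-human benchmark would flag the engine for further review; the specific criteria and thresholds in the article are not reproduced here.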