Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 2
Descriptor
Program Validation: 2
Academic Standards: 1
Achievement Rating: 1
Correlation: 1
Criterion Referenced Tests: 1
Essays: 1
Evaluators: 1
Expertise: 1
Interrater Reliability: 1
Mathematics Education: 1
Novices: 1
Source
Applied Measurement in Education: 2
Author
Bostic, Jonathan: 1
Carney, Michele: 1
Duchnowski, Matthew P.: 1
Escoffery, David S.: 1
Krupa, Erin Elizabeth: 1
Powers, Donald E.: 1
Publication Type
Journal Articles: 2
Reports - Descriptive: 1
Reports - Research: 1
Education Level
Higher Education: 1
Postsecondary Education: 1
Krupa, Erin Elizabeth; Carney, Michele; Bostic, Jonathan – Applied Measurement in Education, 2019
This article provides a brief introduction to the set of four articles in the special issue. To provide a foundation for the issue, key terms are defined, a brief historical overview of validity is given, and several different validation approaches used in the issue are described. Finally, the contribution of the articles to…
Descriptors: Test Items, Program Validation, Test Validity, Mathematics Education
Powers, Donald E.; Escoffery, David S.; Duchnowski, Matthew P. – Applied Measurement in Education, 2015
By far, the most frequently used method of validating (the interpretation and use of) automated essay scores has been to compare them with scores awarded by human raters. Although this practice is questionable, human-machine agreement is still often regarded as the "gold standard." Our objective was to refine this model and apply it to…
Descriptors: Essays, Test Scoring Machines, Program Validation, Criterion Referenced Tests