Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 4
Source
  Assessing Writing: 1
  Council of Chief State School…: 1
  Journal of Experimental…: 1
  Learning Policy Institute: 1
  Rowman & Littlefield…: 1
Publication Type
  Reports - Descriptive: 3
  Journal Articles: 2
  Reports - Evaluative: 2
  Books: 1
Education Level
  Elementary Secondary Education: 4
  Higher Education: 3
  Postsecondary Education: 3
  High Schools: 2
  Secondary Education: 2
Location
  Australia: 2
  Connecticut: 2
  New Hampshire: 2
  New York: 2
  Rhode Island: 2
  United Kingdom (England): 2
  Vermont: 2
  Singapore: 1
Laws, Policies, & Programs
  Every Student Succeeds Act…: 2
  No Child Left Behind Act 2001: 1
Assessments and Surveys
  National Assessment of…: 5
  New York State Regents…: 2
Darling-Hammond, Linda – Learning Policy Institute, 2017
After passage of the Every Student Succeeds Act (ESSA) in 2015, states assumed greater responsibility for designing their own accountability and assessment systems. ESSA requires states to measure "higher order thinking skills and understanding" and encourages the use of open-ended performance assessments, which are essential for…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
Darling-Hammond, Linda – Council of Chief State School Officers, 2017
The Every Student Succeeds Act (ESSA) opened up new possibilities for how student and school success are defined and supported in American public education. States have greater responsibility for designing and building their assessment and accountability systems. These new opportunities to develop performance assessments are critically important…
Descriptors: Performance Based Assessment, Accountability, Portfolios (Background Materials), Task Analysis
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
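
The agreement claim McCurry scrutinizes is usually operationalized as a comparison of rater-pair statistics: how often two humans give an essay the same score versus how often the machine matches each human. The sketch below is a minimal, hypothetical illustration of that comparison; the scores, rater labels, and the exact_agreement helper are invented and are not drawn from the article, which also notes that reported figures typically come from constrained writing tasks and chance-corrected statistics rather than raw agreement rates.

    # Illustrative sketch only: invented 1-6 scale scores for ten essays.
    # The question McCurry raises: is machine-human agreement really as high
    # as human-human agreement once the writing task is less constrained?
    import numpy as np

    def exact_agreement(a, b):
        """Proportion of essays on which two raters assign exactly the same score."""
        a, b = np.asarray(a), np.asarray(b)
        return float(np.mean(a == b))

    human_1 = [3, 4, 2, 5, 3, 4, 4, 2, 5, 3]   # hypothetical first human rater
    human_2 = [3, 4, 3, 5, 3, 4, 5, 2, 4, 3]   # hypothetical second human rater
    machine = [3, 4, 2, 5, 4, 4, 4, 2, 5, 4]   # hypothetical machine scores

    print("human 1 vs. human 2:", exact_agreement(human_1, human_2))
    print("machine vs. human 1:", exact_agreement(machine, human_1))
    print("machine vs. human 2:", exact_agreement(machine, human_2))

In published work the comparison is more often made with a chance-corrected index such as quadratic-weighted kappa, but the raw exact-agreement rate above shows the structure of the argument.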
Shermis, Mark D.; DiVesta, Francis J. – Rowman & Littlefield Publishers, Inc., 2011
"Classroom Assessment in Action" clarifies the multi-faceted roles of measurement and assessment and their applications in a classroom setting. Comprehensive in scope, Shermis and Di Vesta explain basic measurement concepts and show students how to interpret the results of standardized tests. From these basic concepts, the authors then…
Descriptors: Student Evaluation, Standardized Tests, Scores, Measurement

Page, Ellis Batten – Journal of Experimental Education, 1994
National Assessment of Educational Progress writing sample essays from 1988 and 1990 (495 and 599 essays) were subjected to computerized grading and human ratings. Cross-validation suggests that computer scoring is superior to a two-judge panel, a finding encouraging for large programs of essay evaluation. (SLD)
Descriptors: Computer Assisted Testing, Computer Software, Essays, Evaluation Methods
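
Page's comparison can be made concrete with a small, hypothetical sketch: fit a regression model on surface features of essays, produce cross-validated machine scores, and see how well those scores and a single (noisier) judge each track a human panel. All feature names, data, and model choices below are simulated for illustration; they are not Page's actual variables or procedure.

    # Hypothetical illustration of cross-validated machine scoring vs. a single judge.
    # Data and features are simulated; nothing here reproduces Page's 1994 analysis.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n_essays = 200

    # Invented surface features (think: essay length, word variety, sentence length).
    features = rng.normal(size=(n_essays, 3))
    panel_score = features @ np.array([1.0, 0.6, 0.4]) + rng.normal(scale=0.8, size=n_essays)
    single_judge = panel_score + rng.normal(scale=1.2, size=n_essays)  # a noisier lone rater

    # Cross-validated predictions: each essay is scored by a model that never saw it.
    machine_score = cross_val_predict(LinearRegression(), features, panel_score, cv=5)

    print("machine vs. panel r:", round(pearsonr(machine_score, panel_score)[0], 2))
    print("judge   vs. panel r:", round(pearsonr(single_judge, panel_score)[0], 2))

The key design point is cross_val_predict: every essay's machine score comes from a model fit on the other folds, which is what makes the comparison against human ratings a genuine cross-validation rather than a fit to the same data.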