Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 1
Since 2006 (last 20 years): 4
Descriptor
Pass Fail Grading: 4
Scoring: 4
Item Analysis: 3
Scoring Formulas: 2
Scoring Rubrics: 2
Standard Setting (Scoring): 2
Test Interpretation: 2
Testing: 2
Achievement Tests: 1
Adaptive Testing: 1
Alignment (Education): 1
Source
Educational Testing Service: 1
English Teaching Forum: 1
Practical Assessment, Research & Evaluation: 1
Society for Research on Educational Effectiveness: 1
Publication Type
Journal Articles: 2
Reports - Descriptive: 2
Reports - Evaluative: 1
Reports - Research: 1
Education Level
Elementary Secondary Education: 1
Higher Education: 1
Location
New York: 1
New York (New York): 1
Litschwartz, Sophie – Society for Research on Educational Effectiveness, 2021
Background/Context: Programs administering pass/fail standardized exams frequently rescore failing exams selectively and retest failing examinees. This practice distorts the test score distribution and can mislead those who analyze these distributions. In 2011, the Wall Street Journal showed large discontinuities in the New York City Regents test score…
Descriptors: Standardized Tests, Pass Fail Grading, Scoring Rubrics, Scoring Formulas
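To make the distortion described in the abstract above concrete, here is a minimal simulation sketch (not drawn from the paper) of how selectively rescoring exams just below a passing cutoff produces a spike at the cutoff and a deficit just beneath it. The 0-100 scale, the cutoff of 65, the three-point rescoring window, and the 60% bump rate are all assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: integer scores on a 0-100 scale with a passing cutoff of 65.
CUTOFF = 65
true_scores = np.clip(rng.normal(70, 12, 50_000).round(), 0, 100).astype(int)

# Selective rescoring: assume exams falling just below the cutoff are reread
# and some fraction are bumped to exactly the passing score.
rescored = true_scores.copy()
near_miss = (rescored >= CUTOFF - 3) & (rescored < CUTOFF)
bumped = near_miss & (rng.random(rescored.size) < 0.6)  # assumed 60% bump rate
rescored[bumped] = CUTOFF

# Compare counts around the cutoff: the rescored distribution shows a deficit
# just below the cutoff and a spike exactly at it, i.e. a discontinuity.
for s in range(CUTOFF - 4, CUTOFF + 3):
    print(s, int((true_scores == s).sum()), int((rescored == s).sum()))
```

The printed counts make the discontinuity visible directly; in practice it would show up as the kind of jump in the observed score histogram that the abstract describes.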
Rogler, Dawn – English Teaching Forum, 2014
This article presents principles and practices of effective assessment, outlining seven key concepts--usefulness, reliability, validity, practicality, washback, authenticity, and transparency--and demonstrating how to apply them in creating an exam blueprint. The article also discusses the importance of providing feedback after a test has been…
Descriptors: Testing, Student Evaluation, Validity, Reliability
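As a concrete companion to the reliability concept listed in the article above, the following is a minimal sketch (not taken from the article) of one common internal-consistency estimate, Cronbach's alpha, computed for a hypothetical examinee-by-item matrix of dichotomously scored responses; with 0/1 items this is equivalent to KR-20.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 0/1 responses for 6 examinees on 5 items (illustration only).
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
])
print(round(cronbach_alpha(responses), 3))
```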
Judd, Wallace – Practical Assessment, Research & Evaluation, 2009
Over the past twenty years in performance testing, a specific item type with distinguishing characteristics has arisen time and time again. It has been invented independently by dozens of test development teams, yet it is not recognized in the research literature. This article is an invitation to investigate the item type, evaluate…
Descriptors: Test Items, Test Format, Evaluation, Item Analysis
Dorans, Neil J.; Liang, Longjuan; Puhan, Gautam – Educational Testing Service, 2010
Scores are the most visible and widely used products of a testing program. The choice of score scale has implications for test specifications, equating, and test reliability and validity, as well as for test interpretation. At the same time, the score scale should be viewed as infrastructure likely to require repair at some point. In this report…
Descriptors: Testing Programs, Standard Setting (Scoring), Test Interpretation, Certification
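Since the report above centers on the choice of score scale, here is a minimal sketch (not from the report) of the simplest option it implies: a linear transformation from raw scores to a reporting scale, rounded and clamped to the scale endpoints. The 0-60 raw range and the 200-800 reporting scale are assumed purely for illustration.

```python
def to_reported_scale(raw: float, raw_min: int = 0, raw_max: int = 60,
                      scale_min: int = 200, scale_max: int = 800) -> int:
    """Linearly map a raw score onto an assumed 200-800 reporting scale,
    rounding to the nearest integer and clamping to the scale endpoints."""
    slope = (scale_max - scale_min) / (raw_max - raw_min)
    scaled = scale_min + slope * (raw - raw_min)
    return int(min(max(round(scaled), scale_min), scale_max))

print([to_reported_scale(r) for r in (0, 30, 45, 60)])  # -> [200, 500, 650, 800]
```

A fixed linear scale like this keeps reported scores interpretable across forms, but, as the report notes, the scale itself is infrastructure that may need repair, for example after equating drift or a change in test specifications.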