Publication Date
In 2025 | 0
Since 2024 | 0
Since 2021 (last 5 years) | 1
Since 2016 (last 10 years) | 4
Since 2006 (last 20 years) | 4
Descriptor
Comparative Analysis | 5
Evaluation Methods | 5
Cutting Scores | 2
Simulation | 2
Test Bias | 2
Achievement Tests | 1
Bayesian Statistics | 1
Computation | 1
Decision Making | 1
Elementary Education | 1
Evaluators | 1
Source
Educational Measurement: Issues and Practice | 5
Author
Wyse, Adam E. | 2
Babcock, Ben | 1
Cho, Sun-Joo | 1
Lee, Woo-yeol | 1
Linn, Robert L. | 1
Suh, Youngsuk | 1
Walker, A. Adrienne | 1
Wind, Stefanie A. | 1
Publication Type
Journal Articles | 5
Reports - Research | 4
Reports - Evaluative | 1
Wind, Stefanie A.; Walker, A. Adrienne – Educational Measurement: Issues and Practice, 2021
Many large-scale performance assessments include score resolution procedures for resolving discrepancies in rater judgments. The goal of score resolution is conceptually similar to that of person-fit analysis: to identify students for whom observed scores may not accurately reflect their achievement. Previously, researchers have observed that…
Descriptors: Goodness of Fit, Performance Based Assessment, Evaluators, Decision Making
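As a hedged illustration of the kind of adjudication rule that score resolution can involve (not the procedures studied by Wind and Walker), the sketch below flags a response when two independent ratings differ by more than a threshold and, once a third rating is obtained, averages the two closest ratings. The threshold of 1, the averaging rule, and the function names are assumptions of this example.

```python
from typing import Optional

def needs_resolution(rating_a: int, rating_b: int, threshold: int = 1) -> bool:
    """Flag a response when two independent ratings differ by more than the threshold."""
    return abs(rating_a - rating_b) > threshold

def resolved_score(rating_a: int, rating_b: int, rating_c: Optional[int] = None) -> float:
    """Illustrative resolution rule: average non-discrepant ratings; otherwise
    obtain a third rating and average the two closest ratings."""
    if not needs_resolution(rating_a, rating_b):
        return (rating_a + rating_b) / 2
    if rating_c is None:
        raise ValueError("Discrepant ratings require a third (resolution) rating.")
    pairs = [(rating_a, rating_b), (rating_a, rating_c), (rating_b, rating_c)]
    closest = min(pairs, key=lambda p: abs(p[0] - p[1]))
    return sum(closest) / 2

# Ratings of 2 and 4 on a 6-point rubric are discrepant at threshold 1, so the
# third rating of 4 is used and the two closest ratings are averaged.
print(resolved_score(2, 4, rating_c=4))  # 4.0
```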
Wyse, Adam E.; Babcock, Ben – Educational Measurement: Issues and Practice, 2017
This article provides an overview of the Hofstee standard-setting method and illustrates several situations in which the method produces undefined cut scores. These situations arise when the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…
Descriptors: Cutting Scores, Evaluation Methods, Standard Setting (Scoring), Comparative Analysis
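To make the undefined-cut-score situation concrete, here is a minimal sketch of the Hofstee geometry, assuming averaged panelist judgments k_min and k_max (lowest and highest acceptable cut scores) and f_min and f_max (lowest and highest acceptable failure percentages): the cut score is the point where the line segment from (k_min, f_max) to (k_max, f_min) crosses the empirical failure-rate curve, and it is undefined when no crossing exists. The function name, grid resolution, and simulated score distributions are assumptions of this illustration, not the article's analysis.

```python
import numpy as np

def hofstee_cut_score(scores, k_min, k_max, f_min, f_max):
    """Intersect the Hofstee line segment from (k_min, f_max) to (k_max, f_min)
    with the empirical failure-rate curve. Returns None when the segment never
    crosses the curve, i.e. the 'undefined cut score' situation."""
    scores = np.asarray(scores, dtype=float)
    xs = np.linspace(k_min, k_max, 501)
    # Percentage of examinees who would fail at each candidate cut score.
    fail_pct = np.array([100.0 * np.mean(scores < x) for x in xs])
    # Hofstee line: failure rate falls linearly from f_max at k_min to f_min at k_max.
    line = f_max + (f_min - f_max) * (xs - k_min) / (k_max - k_min)
    diff = fail_pct - line
    crossings = np.where(np.diff(np.sign(diff)) != 0)[0]
    if len(crossings) == 0:
        return None  # segment and curve do not intersect: cut score undefined
    i = crossings[0]
    # Linear interpolation between the candidate cut scores that bracket the crossing.
    x0, x1, d0, d1 = xs[i], xs[i + 1], diff[i], diff[i + 1]
    return float(x0 - d0 * (x1 - x0) / (d1 - d0))

# Example with simulated scores: if nearly everyone scores above k_max, the
# failure-rate curve stays below the Hofstee line and no cut score is returned.
rng = np.random.default_rng(0)
easy_test = rng.normal(85, 5, size=1000)      # failure curve near 0% over [50, 70]
print(hofstee_cut_score(easy_test, 50, 70, 5, 30))    # None (undefined)
harder_test = rng.normal(72, 10, size=1000)
print(hofstee_cut_score(harder_test, 50, 70, 5, 30))  # a defined cut score
```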
Cho, Sun-Joo; Suh, Youngsuk; Lee, Woo-yeol – Educational Measurement: Issues and Practice, 2016
The purpose of this ITEMS module is to provide an introduction to differential item functioning (DIF) analysis using mixture item response models. The mixture item response models for DIF analysis involve comparing item profiles across latent groups, instead of manifest groups. First, an overview of DIF analysis based on latent groups, called…
Descriptors: Test Bias, Research Methodology, Evaluation Methods, Models
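The module concerns mixture IRT models, which estimate the groups latently from the response data. Purely as a point of contrast, the hedged sketch below screens one item for DIF across a manifest (observed) grouping variable using the Mantel-Haenszel common odds ratio on the ETS delta scale; the function name, the rest-score matching, and the simulated data are assumptions of this example, not material from the ITEMS module.

```python
import numpy as np

def mantel_haenszel_dif(responses, group, studied_item):
    """Manifest-group DIF screening: the Mantel-Haenszel common odds ratio for the
    studied item, stratifying examinees on their rest score, reported on the ETS
    delta scale (MH D-DIF)."""
    responses = np.asarray(responses)
    group = np.asarray(group)
    item = responses[:, studied_item]
    rest = responses.sum(axis=1) - item          # matching variable: rest score
    num = den = 0.0
    for s in np.unique(rest):
        m = rest == s
        a = np.sum((group[m] == 0) & (item[m] == 1))   # reference group, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))   # reference group, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))   # focal group, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))   # focal group, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    if den == 0 or num == 0:
        return float("nan")
    alpha = num / den                 # MH common odds ratio
    return -2.35 * np.log(alpha)      # ETS delta scale (MH D-DIF)

# Example with arbitrary simulated 0/1 responses (200 examinees, 5 items) and an
# observed two-group variable; a mixture IRT analysis would instead estimate the
# grouping latently from the responses themselves.
rng = np.random.default_rng(1)
resp = rng.integers(0, 2, size=(200, 5))
grp = rng.integers(0, 2, size=200)
print(mantel_haenszel_dif(resp, grp, studied_item=0))
```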
Wyse, Adam E. – Educational Measurement: Issues and Practice, 2017
This article illustrates five different methods for estimating Angoff cut scores using item response theory (IRT) models. These include maximum likelihood (ML), expected a priori (EAP), modal a priori (MAP), and weighted maximum likelihood (WML) estimators, as well as the most commonly used approach based on translating ratings through the test…
Descriptors: Cutting Scores, Item Response Theory, Bayesian Statistics, Maximum Likelihood Statistics
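As a hedged sketch of only one of the five approaches, the code below implements the test characteristic curve (TCC) translation named in the abstract: the panel's mean Angoff ratings are summed to a raw cut score, and the 3PL TCC is inverted numerically to place that cut on the theta scale. The ML, EAP, MAP, and WML estimators compared in the article are not shown, and the item parameters, ratings, and D = 1.7 scaling constant are assumptions of the example.

```python
import numpy as np
from scipy.optimize import brentq

def tcc_angoff_cut(ratings, a, b, c=None, D=1.7):
    """TCC translation: sum the panel's mean Angoff ratings to get a raw cut score,
    then invert the 3PL test characteristic curve numerically to place that cut on
    the theta scale."""
    ratings = np.asarray(ratings, dtype=float)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.zeros_like(a) if c is None else np.asarray(c, dtype=float)
    # Mean over panelists (if a panelist-by-item matrix is given), then sum over items.
    raw_cut = ratings.mean(axis=0).sum() if ratings.ndim == 2 else ratings.sum()

    def tcc(theta):
        p = c + (1 - c) / (1 + np.exp(-D * a * (theta - b)))   # 3PL item response functions
        return p.sum()

    # Solve TCC(theta) = raw_cut on a wide theta interval.
    return brentq(lambda t: tcc(t) - raw_cut, -6.0, 6.0)

# Example: five 2PL items (c = 0) and one panelist's ratings.
a_params = [1.0, 1.2, 0.8, 1.5, 1.1]
b_params = [-0.5, 0.0, 0.3, 0.8, 1.2]
ratings = [0.80, 0.65, 0.60, 0.45, 0.40]             # raw cut score = 2.90
print(tcc_angoff_cut(ratings, a_params, b_params))   # theta where the TCC equals 2.90
```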

Linn, Robert L.; And Others – Educational Measurement: Issues and Practice, 1990
Results of a 1987 report, which indicated that elementary students in all 50 states were above the national average, were assessed via two national mail and telephone surveys. Although data from 35 states support the general findings of the 1987 report, the more specific results appear less sensational. (TJH)
Descriptors: Achievement Tests, Comparative Analysis, Elementary Education, Evaluation Methods