Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 3 |
Descriptor
Computation | 3 |
Educational Testing | 3 |
Testing Programs | 3 |
Error of Measurement | 2 |
Scores | 2 |
Standardized Tests | 2 |
Achievement Tests | 1 |
Content Analysis | 1 |
Correlation | 1 |
Data Analysis | 1 |
Differences | 1 |
Author
Chan, Tsze | 1 |
Cohen, Jon | 1 |
Haberman, Shelby J. | 1 |
Jaciw, Andrew P. | 1 |
Jiang, Tao | 1 |
Olsen, Robert B. | 1 |
Price, Cristofer | 1 |
Seburn, Mary | 1 |
Unlu, Fatih | 1 |
Publication Type
Journal Articles | 2 |
Reports - Research | 2 |
Numerical/Quantitative Data | 1 |
Reports - Evaluative | 1 |
Education Level
Elementary Secondary Education | 1 |
Location
Arizona | 1 |
California | 1 |
Missouri | 1 |
Olsen, Robert B.; Unlu, Fatih; Price, Cristofer; Jaciw, Andrew P. – National Center for Education Evaluation and Regional Assistance, 2011
This report examines the differences in impact estimates and standard errors that arise when they are derived from state achievement tests only (as pre-tests and post-tests), from study-administered tests only, or from some combination of state- and study-administered tests. State tests may yield different evaluation results relative to a test that is…
Descriptors: Achievement Tests, Standardized Tests, State Standards, Reading Achievement
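The comparison described in this abstract typically centers on an impact regression of post-test scores on a treatment indicator plus a pre-test covariate, where swapping the state test for a study-administered test changes the outcome and/or covariate and hence the estimate and its standard error. The sketch below is a minimal illustration of that setup using simulated data; the variable names and effect sizes are hypothetical and are not taken from the report.

```python
# Minimal sketch (not the report's actual analysis): the impact estimate is the
# treatment coefficient from an OLS regression of post-test scores on a
# treatment indicator and a pre-test covariate. Replacing the state test with a
# study-administered test changes posttest and/or pretest, which can shift both
# the estimate and its standard error.
import numpy as np

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, size=n)             # 1 = treatment, 0 = control
pretest = rng.normal(0, 1, size=n)             # hypothetical pre-test score
posttest = 0.25 * treat + 0.7 * pretest + rng.normal(0, 1, size=n)

X = np.column_stack([np.ones(n), treat, pretest])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)

resid = posttest - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])      # residual variance
cov = sigma2 * np.linalg.inv(X.T @ X)          # OLS covariance matrix

print(f"impact estimate: {beta[1]:.3f}")
print(f"standard error:  {np.sqrt(cov[1, 1]):.3f}")
```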
Haberman, Shelby J. – Journal of Educational and Behavioral Statistics, 2008
In educational tests, subscores are often generated from a portion of the items in a larger test. Guidelines based on mean squared error are proposed to indicate whether subscores are worth reporting. Alternatives considered are direct reports of subscores, estimates of subscores based on total score, combined estimates based on subscores and…
Descriptors: Testing Programs, Regression (Statistics), Scores, Student Evaluation
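The guideline mentioned in this abstract is usually summarized as a mean-squared-error comparison: a subscore is worth reporting only if the observed subscore predicts the examinee's true subscore better than the observed total score does. The block below is a paraphrased sketch of that criterion, not a quotation from the article; the notation is illustrative.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Paraphrased sketch of the mean-squared-error guideline for subscore reporting:
% report a subscore only if the observed subscore predicts the true subscore
% better (in proportional reduction of mean squared error) than the total does.
\[
\operatorname{PRMSE}_{\mathrm{sub}} = \rho^{2}\!\left(s_{\mathrm{true}},\, s_{\mathrm{obs}}\right),
\qquad
\operatorname{PRMSE}_{\mathrm{tot}} = \rho^{2}\!\left(s_{\mathrm{true}},\, x_{\mathrm{obs}}\right)
\]
\[
\text{Report the subscore when } \operatorname{PRMSE}_{\mathrm{sub}} > \operatorname{PRMSE}_{\mathrm{tot}}.
\]
\end{document}
```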
Cohen, Jon; Chan, Tsze; Jiang, Tao; Seburn, Mary – Applied Psychological Measurement, 2008
U.S. state educational testing programs administer tests to track student progress and hold schools accountable for educational outcomes. Methods from item response theory, especially Rasch models, are usually used to equate different forms of a test. The most popular method for estimating Rasch models yields inconsistent estimates and relies on…
Descriptors: Testing Programs, Educational Testing, Item Response Theory, Computation
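For context on the model this abstract discusses: under the Rasch model, the probability of a correct response depends only on the difference between a person's ability and an item's difficulty. The sketch below simulates Rasch responses and evaluates the joint log-likelihood; it is an illustration of the model itself, not the consistent estimator proposed in the article, and all names and values are hypothetical.

```python
# Minimal Rasch-model sketch (illustrative, not the article's estimator):
# P(correct) = exp(theta - b) / (1 + exp(theta - b)). We simulate a 0/1
# response matrix and evaluate the joint log-likelihood at the true parameters.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_items = 500, 20
theta = rng.normal(0, 1, size=n_persons)       # person abilities
b = rng.normal(0, 1, size=n_items)             # item difficulties

def rasch_prob(theta, b):
    """P(X = 1 | theta, b) under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

p = rasch_prob(theta, b)
responses = rng.binomial(1, p)                 # simulated response matrix

def joint_log_likelihood(responses, theta, b):
    p = rasch_prob(theta, b)
    return np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

print(f"joint log-likelihood at true parameters: "
      f"{joint_log_likelihood(responses, theta, b):.1f}")
```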