Hawley, Leslie R.; Bovaird, James A.; Wu, ChaoRong – Applied Measurement in Education, 2017
Value-added assessment methods have been criticized by researchers and policy makers for a number of reasons. One issue is the sensitivity of model results across different outcome measures. This study examined the utility of incorporating multivariate latent variable approaches within a traditional value-added framework. We evaluated the…
Descriptors: Value Added Models, Reliability, Multivariate Analysis, Scaling
Hurtz, Gregory M.; Jones, J. Patrick – Applied Measurement in Education, 2009
Standard setting methods such as the Angoff method rely on judgments of item characteristics; item response theory empirically estimates item characteristics and displays them in item characteristic curves (ICCs). This study evaluated several indexes of rater fit to ICCs as a method for judging rater accuracy in their estimates of expected item…
Descriptors: Standard Setting (Scoring), Item Response Theory, Reliability, Measurement
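The kind of comparison this abstract describes can be sketched in a few lines: compute model-implied expected item performance from item characteristic curves at a borderline ability level, then score a rater's Angoff-style judgments against them with a simple fit index. This is a minimal illustration, not the authors' actual indexes; a 2PL model is assumed, and all item parameters and rater judgments below are invented.

```python
import numpy as np

# Invented 2PL item parameters (a = discrimination, b = difficulty).
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
b = np.array([-0.5, 0.0, 0.3, 0.8, 1.2])
theta_cut = 0.5  # hypothetical ability of the borderline examinee

def icc(theta, a, b):
    """2PL item characteristic curve: P(correct) at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Model-implied expected item performance at the cut score.
model_p = icc(theta_cut, a, b)

# Invented Angoff-style rater judgments for the same five items.
rater_p = np.array([0.70, 0.55, 0.60, 0.35, 0.40])

# One simple fit index: mean absolute deviation between the rater's
# estimates and the ICC values (smaller = closer fit to the model).
mad = np.mean(np.abs(rater_p - model_p))
print(round(mad, 3))
```

In practice such an index would be computed per rater and compared across the panel; raters with large deviations from the ICCs would be flagged for discussion or down-weighting.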
Shumate, Steven R.; Surles, James; Johnson, Robert L.; Penny, Jim – Applied Measurement in Education, 2007
Increasingly, assessment practitioners use generalizability coefficients to estimate the reliability of scores from performance tasks. Little research, however, examines the relation between the estimation of generalizability coefficients and the number of rubric scale points and score distributions. The purpose of the present research is to…
Descriptors: Generalizability Theory, Monte Carlo Methods, Measures (Individuals), Program Effectiveness
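The simulation design this abstract points to can be sketched as a small Monte Carlo: generate person-by-rater scores, collapse them onto a rubric with a chosen number of scale points, and estimate a generalizability coefficient from ANOVA variance components. This is a one-facet (persons x raters) sketch under invented variance components, not the authors' study conditions.

```python
import numpy as np

rng = np.random.default_rng(42)

n_persons, n_raters = 200, 3
scale_points = 4  # number of rubric scale points (a study factor)

# Simulate continuous scores: person effects, rater severity, residual.
person = rng.normal(0, 1.0, size=(n_persons, 1))
rater = rng.normal(0, 0.3, size=(1, n_raters))
error = rng.normal(0, 0.7, size=(n_persons, n_raters))
continuous = person + rater + error

# Collapse onto 1..scale_points rubric categories at pooled quantiles.
edges = np.quantile(continuous, np.linspace(0, 1, scale_points + 1)[1:-1])
scores = np.digitize(continuous, edges) + 1

# ANOVA mean squares for the crossed p x r design.
grand = scores.mean()
p_means = scores.mean(axis=1)
r_means = scores.mean(axis=0)
ms_p = n_raters * np.sum((p_means - grand) ** 2) / (n_persons - 1)
resid = scores - p_means[:, None] - r_means[None, :] + grand
ms_pr = np.sum(resid ** 2) / ((n_persons - 1) * (n_raters - 1))

# Variance components and the relative G coefficient for the
# mean score over n_raters raters.
var_p = max((ms_p - ms_pr) / n_raters, 0.0)
var_pr = ms_pr
g_coef = var_p / (var_p + var_pr / n_raters)
print(round(g_coef, 3))
```

Repeating this over many replications while varying `scale_points` and the score distribution is the basic shape of the Monte Carlo study the abstract describes.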

Schiel, Jeffrey L.; Shaw, Dale G. – Applied Measurement in Education, 1992
Changes in information retention resulting from changes in reliability and number of intervals in scale construction were studied to provide quantitative information to help in decisions about choosing intervals. Information retention reached a maximum when the number of intervals was about 8 or more and reliability was near 1.0. (SLD)
Descriptors: Decision Making, Knowledge Level, Mathematical Models, Monte Carlo Methods
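The trade-off this abstract reports (retention rising with both reliability and number of intervals) can be illustrated with a small simulation: generate observed scores at a target reliability, cut them into k equal-probability intervals, and use the squared correlation between true scores and the interval codes as a rough proxy for retained information. This is a sketch under assumed normal scores, not the authors' exact retention measure.

```python
import numpy as np

rng = np.random.default_rng(0)

def retention(reliability, k, n=50_000):
    """Proxy for information retained after cutting scores into k intervals."""
    true = rng.normal(size=n)
    # Error variance chosen so var(true)/var(observed) = reliability.
    err_sd = np.sqrt((1 - reliability) / reliability)
    observed = true + rng.normal(scale=err_sd, size=n)
    # Equal-probability interval boundaries on the observed scores.
    edges = np.quantile(observed, np.linspace(0, 1, k + 1)[1:-1])
    coded = np.digitize(observed, edges)
    # Squared correlation between true scores and interval codes.
    return np.corrcoef(true, coded)[0, 1] ** 2

low = retention(0.7, 3)    # few intervals, modest reliability
high = retention(0.95, 8)  # ~8 intervals, reliability near 1.0
print(round(low, 3), round(high, 3))
```

Consistent with the finding summarized above, retention in this sketch climbs toward its ceiling once the number of intervals reaches about 8 and reliability approaches 1.0.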