Raudenbush, Stephen – Carnegie Foundation for the Advancement of Teaching, 2013
This brief considers the problem of using value-added scores to compare teachers who work in different schools. The author focuses on whether such comparisons can be regarded as fair, or, in statistical language, "unbiased." An unbiased measure does not systematically favor teachers because of the backgrounds of the students they are…
Descriptors: Educational Research, Achievement Gains, Teacher Effectiveness, Comparative Analysis

Gorard, Stephen – British Educational Research Journal, 2010
This paper considers the model of school effectiveness (SE) currently dominant in research, policy and practice in England (although the concerns it raises are international). It shows, principally through consideration of initial and propagated error, that SE results cannot be relied upon. By considering the residual difference between the…
Descriptors: School Effectiveness, Foreign Countries, Scores, Educational Policy

Huynh, Huynh – Psychometrika, 1986
Under the assumption of normality, a formula is derived for the reliability of the maximum score. It is shown that the maximum score is more reliable than each of the single observations but less reliable than their composite score.
Descriptors: Error of Measurement, Mathematical Models, Reliability, Scores

Stevens, Joseph J.; Aleamoni, Lawrence M. – Educational and Psychological Measurement, 1986
Prior standardization of scores when an aggregate score is formed has been criticized. This article presents a demonstration of the effects of differential weighting of aggregate components that clarifies the need for prior standardization. The role of standardization in statistics and the use of aggregate scores in research are discussed.…
Descriptors: Correlation, Error of Measurement, Factor Analysis, Raw Scores

Blixt, Sonya L.; Shama, Deborah D. – Educational and Psychological Measurement, 1986
Methods of estimating the standard error at different ability levels were compared. Overall, it was found that at a given ability level the standard errors calculated using different formulas are not appreciably different. Further, for most situations the traditional method of calculating a standard error probably provides sufficient precision.…
Descriptors: College Freshmen, Error of Measurement, Higher Education, Mathematics Achievement

Misanchuk, Earl R. – 1978
Multiple matrix sampling of three subscales of the California Psychological Inventory was used to investigate the effects of four variables on error estimates of the mean (EEM) and variance (EEV). The four variables were examinee population size (600, 450, 300, 150, 100, and 75); number of subtests (2, 3, 4, 5, 6, and 7), hence the number of…
Descriptors: Adults, Analysis of Variance, Error of Measurement, Item Sampling

Lord, Frederic M.; Wild, Cheryl L. – 1985
This study compares the contribution to measurement accuracy of the verbal score of each of four verbal item types included in the Graduate Record Examinations (GRE) General Test. Comparisons are based on item response theory, a methodology that allows the researcher to look at the accuracy of individual points on the score scale. This methodology…
Descriptors: College Entrance Examinations, Error of Measurement, Graduate Study, Higher Education

Jaeger, Richard M.; Busch, John Christian – 1986
This study explores the use of the modified caution index (MCI) for identifying judges whose patterns of recommendations suggest that their judgments might be based on incomplete information, flawed reasoning, or inattention to their standard-setting tasks. It also examines the effect on test standards and passing rates when the test standards of…
Descriptors: Criterion Referenced Tests, Error of Measurement, Evaluation Methods, High Schools