Gillmore, Gerald M. – New Directions for Testing and Measurement, 1983
The unique conceptual framework and language of generalizability theory are presented. While this chapter is relevant to any area in which generalizability theory is applicable, it emphasizes evaluation research, and most examples come from that area. (Author/PN)
Descriptors: Achievement Tests, Analysis of Variance, Decision Making, Error of Measurement

Goodwin, Laura D.; Goodwin, William L. – Journal of Early Intervention, 1991
Four approaches to estimating interrater reliability in early childhood special education research are illustrated and compared: correlation, comparison of means, percentage of agreement, and generalizability theory techniques. Generalizability theory techniques are proposed as a method for estimating the amount of variance attributable to…
Descriptors: Analysis of Variance, Disabilities, Early Childhood Education, Educational Research
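Two of the four approaches named in the Goodwin and Goodwin abstract, percentage of agreement and correlation, can be sketched in a few lines. This is a minimal illustration only; the two raters' scores below are invented for the example, and the Pearson correlation is computed from first principles rather than via a statistics library.

```python
# Hypothetical ratings from two raters on the same eight observations
scores_a = [3, 4, 4, 2, 5, 3, 4, 1]
scores_b = [3, 4, 3, 2, 5, 3, 4, 2]

# Percentage of exact agreement: proportion of items both raters scored identically
agree = sum(a == b for a, b in zip(scores_a, scores_b)) / len(scores_a)

# Pearson correlation between the two raters' scores
n = len(scores_a)
mean_a = sum(scores_a) / n
mean_b = sum(scores_b) / n
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(scores_a, scores_b))
var_a = sum((a - mean_a) ** 2 for a in scores_a)
var_b = sum((b - mean_b) ** 2 for b in scores_b)
r = cov / (var_a * var_b) ** 0.5

print(f"agreement = {agree:.2f}, r = {r:.2f}")
```

Note that the two indices answer different questions: agreement is sensitive to absolute score matches, while correlation reflects only how consistently the raters rank the observations; the generalizability theory techniques the abstract proposes go further by partitioning the score variance into its sources.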

Assor, Avi; And Others – Child Development, 1990
Addresses three issues concerning the assessment of the overrating and underrating of academic competence: (1) the impossibility of separating effects of overrating and underrating from effects of perceived and actual competence; (2) the questionable validity of Connell and Ilardi's method; and (3) the proposal of a new method and its implications…
Descriptors: Academic Ability, Achievement Rating, Analysis of Variance, Children

Thompson, Bruce – 1987
This paper evaluates the logic underlying various criticisms of statistical significance testing and makes specific recommendations for scientific and editorial practice that might better increase the knowledge base. Reliance on the traditional hypothesis testing model has led to a major bias against nonsignificant results and to misinterpretation…
Descriptors: Analysis of Variance, Data Interpretation, Editors, Effect Size