Perkins, Dennis N. T. – Evaluation Quarterly, 1977
Based on a theoretical model of program development, this taxonomy of evaluation relates six evaluation objectives or purposes to nine methodological alternatives. Use of the taxonomy may increase the efficiency of evaluation, improve communication, and assist in the choice of a methodology appropriate to the evaluation purposes…
Descriptors: Classification, Data Analysis, Educational Assessment, Evaluation
Boruch, Robert F.; And Others – Evaluation Quarterly, 1978
Bibliographic references list 300 randomized field experiments undertaken to study programs in schools, hospitals, prisons, and other social settings. The bibliography provides evidence supporting the feasibility and the broad scope of randomized field tests conducted by economists, sociologists, psychologists, and educational researchers…
Descriptors: Bibliographies, Citations (References), Educational Programs, Field Studies
Powers, Donald E.; Alderman, Donald L. – Evaluation Quarterly, 1979
This article presents practical methods for implementing true experimental designs in evaluation settings where such designs are rarely used, with particular attention to educational settings. (Author/JKS)
Descriptors: Analysis of Variance, Educational Experiments, Evaluation Methods, Field Studies
St. Pierre, Robert G. – Evaluation Quarterly, 1978
Data from the national evaluation of Project Follow Through were analyzed using analysis of covariance, both with and without correcting the pretest for unreliability. Such corrections led to some changes in conclusions. The literature contains many disagreements about the appropriateness of correcting for unreliability. (Author/CTM)
Descriptors: Analysis of Covariance, Data Analysis, Error Patterns, Pretests Posttests
Cook, Thomas D.; Gruder, Charles L. – Evaluation Quarterly, 1978
Four projects that evaluated the technical quality of recent summative evaluations are discussed, and models of metaevaluation are presented. Common technical problems are identified and practical methods for solving them are outlined, though these methods are limited by the current state of the art. (Author/CTM)
Descriptors: Consultants, Data Analysis, Evaluators, Meta Evaluation
Moran, R. Allen – Evaluation Quarterly, 1977
This paper argues that program evaluation research should be broadened to deal systematically with the economics of resource use at the level of the individual social service delivery agency. The argument is strengthened by the recognition that input use can be established by the same methods used to measure service delivery. (Author/MV)
Descriptors: Administrator Role, Agency Role, Cost Effectiveness, Delivery Systems
Wilson, Steve – Evaluation Quarterly, 1979
Claims made for case studies are analyzed, and some of the obstacles to using this kind of evaluation information are discussed. The analysis is based on the evaluation of a federally funded project in which evaluation was combined with attempts to use the resulting information to help other schools. (Author/CTM)
Descriptors: Audiences, Case Studies, Diffusion, Educational Innovation