Trochim, William M. K.; Visco, Ronald J. – New Directions for Testing and Measurement, 1985
The auditing profession has developed useful quality control practices that can be applied to the research and evaluation process. Identifying and controlling sources of bias and noise in the system of inquiry can enhance the quality of evaluative evidence for both present and future use. (Author/BS)
Descriptors: Evaluation Methods, Program Evaluation, Quality Control, Research Design
Holland, Sherry – New Directions for Testing and Measurement, 1985
Information that describes a program's implementation and client audience can help the evaluator tailor the study to the constraints of particular settings and increase the likelihood that the evaluation can be implemented as planned. (Author/BS)
Descriptors: Evaluation Methods, National Programs, Planning, Program Descriptions
Cordray, David S.; Sonnefeld, L. Joseph – New Directions for Testing and Measurement, 1985
Planning an impact evaluation involves numerous micro-level methodological decisions. Quantitative synthesis methods can be used to construct an actuarial database for establishing the likelihood of achieving desired sample sizes, statistical power, and measurement characteristics. (Author/BS)
Descriptors: Effect Size, Evaluation Criteria, Evaluation Methods, Meta Analysis
Orwin, Robert G. – New Directions for Testing and Measurement, 1985
The manner in which results and methods are reported influences the ability to synthesize prior studies when planning new evaluations. Confidence ratings, coding conventions, and supplemental evidence can partially overcome these difficulties. Planners must acknowledge the influence of their own judgment in using prior research. (Author)
Descriptors: Decision Making, Evaluation Methods, Evaluators, Meta Analysis
Lipsey, Mark W.; And Others – New Directions for Testing and Measurement, 1985
A representative sample of studies drawn from the published program evaluation literature is examined. It is concluded that weak designs, low statistical power, ad hoc measurement, and neglect of treatment implementation and program theory characterize the state of the art in program evaluation. (Author/BS)
Descriptors: Effect Size, Evaluation Methods, Measurement Techniques, Program Evaluation