Showing all 9 results
Peer reviewed
Wortman, Paul M.; And Others – New Directions for Program Evaluation, 1980
What a realistic training program should require, given the broad scope of research methodology, is discussed. The diversity of methodologies at the core of evaluation research training raises issues about what should be taught and how students should be involved in actual projects. (Author/GK)
Descriptors: Curriculum, Evaluators, Graduate Study, Models
Peer reviewed
Brown, Elizabeth D. – New Directions for Program Evaluation, 1980
Although fundamentally a scientific enterprise, evaluation research also requires artistic skills. These skills are more essential to in-house evaluators than to extrainstitutional evaluators. A training model is suggested. (Author/GK)
Descriptors: Evaluators, Interprofessional Relationship, Models, Professional Training
Peer reviewed
Rindskopf, David – New Directions for Program Evaluation, 1986
Modeling the process by which participants are selected into groups, rather than adjusting for preexisting group differences, provides the basis for several new approaches to the analysis of data from nonrandomized studies. Econometric approaches, the propensity scores approach, and the relative assignment variable approach to the modeling of…
Descriptors: Effect Size, Experimental Groups, Intelligence Quotient, Mathematical Models
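A minimal sketch of the propensity-score idea the abstract mentions: model the probability of selection into treatment from observed covariates, then compare groups only within strata of similar selection probability. The data, covariates, and five-stratum scheme below are hypothetical illustrations, not taken from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical covariates (e.g., pretest scores, background variables)
X = rng.normal(size=(500, 3))

# Nonrandom selection into treatment, driven by the covariates
treated = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=500)) > 0

# Model the selection process itself, not the outcome
ps_model = LogisticRegression().fit(X, treated)
propensity = ps_model.predict_proba(X)[:, 1]   # estimated P(treated | X)

# Stratify on the estimated propensity score (quintiles here), so that
# treated and control cases are compared within comparable strata
edges = np.quantile(propensity, [0.2, 0.4, 0.6, 0.8])
strata = np.digitize(propensity, edges)
for s in range(5):
    frac = treated[strata == s].mean()
    print(f"stratum {s}: treated fraction {frac:.2f}")
```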
Peer reviewed
Reichardt, Charles; Gollob, Harry – New Directions for Program Evaluation, 1986
Causal models often omit variables that should be included, use variables that are measured fallibly, and ignore time lags. Such practices can lead to severely biased estimates of effects. The discussion explains these biases and shows how to take them into account. (Author)
Descriptors: Effect Size, Error of Measurement, High Schools, Mathematical Models
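One standard instance of the measurement-error bias Reichardt and Gollob describe (a textbook errors-in-variables result, not necessarily the article's own notation): a fallibly measured predictor attenuates the estimated effect toward zero.

```latex
% Classical errors-in-variables: observe x = x^* + e with independent
% error e, while the true model is y = \beta x^* + u.
\[
\operatorname{plim}\hat{\beta}
  = \beta \,\frac{\sigma_{x^*}^{2}}{\sigma_{x^*}^{2} + \sigma_{e}^{2}}
  = \beta \,\rho_{xx},
\]
% where \rho_{xx} is the reliability of x. Whenever the measure is
% fallible (\sigma_e^2 > 0), \hat{\beta} is biased toward zero.
```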
Peer reviewed
Hedges, Larry V. – New Directions for Program Evaluation, 1984
The adequacy of traditional effect size measures for research synthesis is challenged. Analogues to analysis of variance and multiple regression analysis for effect sizes are presented. The importance of tests for the consistency of effect sizes in interpreting results, and problems in obtaining well-specified models for meta-analysis are…
Descriptors: Analysis of Variance, Effect Size, Mathematical Models, Meta Analysis
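The consistency test the abstract refers to is usually written as the fixed-effects homogeneity statistic; the formulation below is the standard one and may differ from the article's exact notation. With k studies, effect sizes d_i, and estimated variances v_i:

```latex
% Weighted mean effect size and homogeneity test across k studies
\[
w_i = \frac{1}{v_i}, \qquad
\bar{d} = \frac{\sum_{i=1}^{k} w_i d_i}{\sum_{i=1}^{k} w_i}, \qquad
Q = \sum_{i=1}^{k} w_i \left( d_i - \bar{d} \right)^{2}.
\]
% Under the hypothesis that all studies share a single effect size,
% Q is approximately chi-square with k - 1 degrees of freedom;
% a large Q signals inconsistent effects across studies.
```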
Peer reviewed
Gerhard, Ronald J. – New Directions for Program Evaluation, 1981
The need for and applicability of general systems theory in human services evaluation are discussed. The role of evaluation in human services agencies is described, and the necessity of combining the programs to be evaluated and the evaluation process itself into a single unifying conceptual model is demonstrated. (Author/AL)
Descriptors: Change Strategies, Evaluation Methods, Human Services, Integrated Activities
Peer reviewed
Kiresuk, Thomas J.; And Others – New Directions for Program Evaluation, 1981
The consumer of a service is of primary importance when defining groups most concerned with program impact. Program effectiveness for the consumer may be increased through the use of certain guiding principles based on the extension of existing quality assurance and program evaluation methodologies. (Author/RL)
Descriptors: Accountability, Delivery Systems, Evaluation Methods, Human Services
Peer reviewed
Larson, Richard C.; Kaplan, Edward H. – New Directions for Program Evaluation, 1981
Evaluation is discussed as an information-gathering process. Currently popular evaluation programs are reviewed in relation to decision making, and various approaches that seem to contribute to the decision utility of evaluation (e.g., classical approaches, Bayesian approaches, adaptive designs, and model-based evaluations) are described. (Author/AL)
Descriptors: Bayesian Statistics, Decision Making, Evaluation Methods, Formative Evaluation
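As a hedged illustration of the Bayesian approaches mentioned above (the prior, data, and program are invented for the example, not drawn from the article): conjugate updating turns evaluation data into a full posterior distribution that a decision maker can act on, rather than a single point estimate.

```python
# Bayesian updating of a hypothetical program's success rate.
# Prior Beta(a, b) over the success probability; observing s successes
# in n trials gives posterior Beta(a + s, b + n - s) by conjugacy.
from scipy import stats

a, b = 2, 2          # mildly informative prior centered at 0.5
s, n = 34, 50        # hypothetical data: 34 of 50 clients improved

posterior = stats.beta(a + s, b + n - s)
print(f"posterior mean success rate: {posterior.mean():.3f}")
print(f"90% credible interval: {posterior.interval(0.90)}")
# Acting on the whole posterior (e.g., via expected utility) is the
# sense in which Bayesian designs add decision utility to evaluation.
```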
Peer reviewed
Cameron, Kim – New Directions for Program Evaluation, 1981
Organizational effectiveness is not a clearly defined concept. The author illustrates how the four most widely used models are not uniformly applicable and argues that the evaluator must make certain critical choices explicit when measuring effectiveness; these choices reveal the operative definition of effectiveness and what is being measured. (DWH)
Descriptors: Evaluation Criteria, Models, Organizational Effectiveness, Organizational Objectives