
Wortman, Paul M.; And Others – New Directions for Program Evaluation, 1980
The requirements of a realistic training program, given the broad scope of research methodology, are discussed. The diversity of methodologies at the core of evaluation research training raises issues about what should be taught and how students should be involved in actual projects. (Author/GK)
Descriptors: Curriculum, Evaluators, Graduate Study, Models

Brown, Elizabeth D. – New Directions for Program Evaluation, 1980
Although evaluation research is fundamentally a scientific enterprise, it also requires artistic skills, which are more essential to in-house evaluators than to extrainstitutional evaluators. A training model is suggested. (Author/GK)
Descriptors: Evaluators, Interprofessional Relationship, Models, Professional Training

Rindskopf, David – New Directions for Program Evaluation, 1986
Modeling the process by which participants are selected into groups, rather than adjusting for preexisting group differences, provides the basis for several new approaches to the analysis of data from nonrandomized studies. Econometric approaches, the propensity scores approach, and the relative assignment variable approach to the modeling of…
Descriptors: Effect Size, Experimental Groups, Intelligence Quotient, Mathematical Models
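The propensity-score approach mentioned in this abstract can be illustrated with a minimal sketch. The data and the frequency-based score estimate below are invented for illustration and are not from the article; in practice the score is usually estimated with a logistic model over many covariates.

```python
# Hypothetical records: (covariate, treated, outcome). Treated units are
# more likely to have covariate == 1, so the raw treated-vs-control mean
# difference is confounded by the covariate.
data = [
    (0, 0, 10), (0, 0, 11), (0, 1, 12),
    (1, 0, 20), (1, 1, 23), (1, 1, 22),
]

def propensity(data):
    """Estimate P(treated | covariate) by within-stratum frequency."""
    scores = {}
    for x in {rec[0] for rec in data}:
        stratum = [t for cx, t, _ in data if cx == x]
        scores[x] = sum(stratum) / len(stratum)
    return scores

def stratified_effect(data):
    """Size-weighted average of within-stratum mean differences --
    a crude adjustment via stratification on the selection variable."""
    total, n = 0.0, 0
    for x in {rec[0] for rec in data}:
        treated = [y for cx, t, y in data if cx == x and t == 1]
        control = [y for cx, t, y in data if cx == x and t == 0]
        if treated and control:
            diff = sum(treated) / len(treated) - sum(control) / len(control)
            k = len(treated) + len(control)
            total += diff * k
            n += k
    return total / n
```

Here the naive pooled difference is about 5.3, while the stratified estimate is 2.0, showing how modeling the selection process changes the estimated effect.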

Reichardt, Charles; Gollob, Harry – New Directions for Program Evaluation, 1986
Causal models often omit variables that should be included, use variables that are measured fallibly, and ignore time lags. Such practices can lead to severely biased estimates of effects. The discussion explains these biases and shows how to take them into account. (Author)
Descriptors: Effect Size, Error of Measurement, High Schools, Mathematical Models
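One of the biases this abstract refers to, attenuation from fallibly measured variables, can be demonstrated in a short simulation. The numbers and the classical disattenuation correction below are a standard textbook illustration, not taken from the article.

```python
import random

random.seed(0)
# True model: y = 2 * x_true + noise, but we only observe x with
# measurement error. Regressing y on the fallible measure biases
# the slope toward zero by the reliability ratio.
n = 10000
x_true = [random.gauss(0, 1) for _ in range(n)]
y = [2 * xt + random.gauss(0, 0.5) for xt in x_true]
x_obs = [xt + random.gauss(0, 1) for xt in x_true]  # fallible measure

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

naive = slope(x_obs, y)          # attenuated: roughly 2 * 0.5 = 1.0
reliability = 1 / (1 + 1 ** 2)   # var(x_true) / var(x_obs) = 0.5 here
corrected = naive / reliability  # classical disattenuation, near 2
```

With the reliability known (or estimated), dividing the naive slope by it recovers an approximately unbiased estimate, which is the kind of accounting for bias the abstract describes.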

Hedges, Larry V. – New Directions for Program Evaluation, 1984
The adequacy of traditional effect size measures for research synthesis is challenged. Analogues to analysis of variance and multiple regression analysis for effect sizes are presented. The importance of tests for the consistency of effect sizes in interpreting results, and problems in obtaining well-specified models for meta-analysis are…
Descriptors: Analysis of Variance, Effect Size, Mathematical Models, Meta Analysis
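The tests of effect-size consistency that this abstract highlights can be sketched with a fixed-effect summary and the homogeneity statistic Q. The study values below are invented for illustration.

```python
# Hypothetical study-level effect sizes d_i with sampling variances v_i.
studies = [(0.30, 0.01), (0.45, 0.02), (0.25, 0.015), (0.40, 0.01)]

def fixed_effect_summary(studies):
    """Inverse-variance weighted mean effect and the homogeneity
    statistic Q = sum of w_i * (d_i - d_bar)^2, which is compared to a
    chi-square distribution with k - 1 degrees of freedom."""
    weights = [1 / v for _, v in studies]
    d_bar = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    q = sum(w * (d - d_bar) ** 2 for (d, _), w in zip(studies, weights))
    return d_bar, q

d_bar, q = fixed_effect_summary(studies)
```

A large Q relative to the chi-square reference signals inconsistent effect sizes, in which case a single pooled estimate would be misleading; this is the interpretive role of consistency tests stressed in the abstract.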

Gerhard, Ronald J. – New Directions for Program Evaluation, 1981
The need for and applicability of general systems theory in human services evaluation is discussed. The role of evaluation in human services agencies is described and the necessity of combining the programs to be evaluated and the evaluation process itself in a single unifying conceptual model is demonstrated. (Author/AL)
Descriptors: Change Strategies, Evaluation Methods, Human Services, Integrated Activities

Kiresuk, Thomas J.; And Others – New Directions for Program Evaluation, 1981
The consumer of a service is of primary importance when defining groups most concerned with program impact. Program effectiveness for the consumer may be increased through the use of certain guiding principles based on the extension of existing quality assurance and program evaluation methodologies. (Author/RL)
Descriptors: Accountability, Delivery Systems, Evaluation Methods, Human Services

Larson, Richard C.; Kaplan, Edward H. – New Directions for Program Evaluation, 1981
Evaluation is discussed as an information-gathering process. Currently popular evaluation programs are reviewed in relation to decision making, and various approaches that seem to contribute to the decision utility of evaluation (e.g., classical approaches, Bayesian approaches, adaptive designs, and model-based evaluations) are described. (Author/AL)
Descriptors: Bayesian Statistics, Decision Making, Evaluation Methods, Formative Evaluation
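The Bayesian approach listed among the decision-oriented methods can be illustrated with the simplest conjugate update. The counts below are hypothetical and the example is not from the article.

```python
# Hypothetical: a program "succeeds" for 14 of 20 clients. Starting from
# a uniform Beta(1, 1) prior on the success rate, conjugate updating
# yields a Beta(15, 7) posterior, whose mean can feed a decision rule.
def update_beta(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update of a success-rate belief."""
    return alpha + successes, beta + failures

alpha, beta = update_beta(1, 1, 14, 6)
posterior_mean = alpha / (alpha + beta)  # 15 / 22
```

The posterior mean (about 0.68) summarizes current belief about program effectiveness and can be re-updated as new clients are observed, which is what makes such methods attractive for the sequential, decision-focused evaluations the abstract discusses.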

Cameron, Kim – New Directions for Program Evaluation, 1981
Organizational effectiveness is not a clearly defined concept. The author illustrates how the four most widely used models are not uniformly applicable, and states that the evaluator must make explicit certain critical choices when measuring effectiveness; these choices reveal the operative definition of effectiveness and what is actually being measured. (DWH)
Descriptors: Evaluation Criteria, Models, Organizational Effectiveness, Organizational Objectives