Showing 1 to 15 of 20 results
Peer reviewed
Lincoln, Yvonna S.; Guba, Egon G. – New Directions for Program Evaluation, 1986
The emergence of a new, naturalistic paradigm of inquiry has led to a demand for rigorous criteria that meet traditional standards of inquiry. Two sets are suggested: the "trustworthiness" criteria, which parallel conventional criteria, and the "authenticity" criteria, which are implied directly by the new paradigm…
Descriptors: Evaluation Criteria, Models, Observation, Program Evaluation
Peer reviewed
Wortman, Paul M.; And Others – New Directions for Program Evaluation, 1980
The requirements of a realistic training program, given the broad scope of research methodology, are discussed. Within the core of evaluation research training, the diversity of methodologies raises issues about what should be taught and how students should be involved in actual projects. (Author/GK)
Descriptors: Curriculum, Evaluators, Graduate Study, Models
Peer reviewed
Brown, Elizabeth D. – New Directions for Program Evaluation, 1980
Although considered fundamentally a scientific enterprise, evaluation research also requires artistic skills, which are more essential to in-house evaluators than to extrainstitutional evaluators. A training model is suggested. (Author/GK)
Descriptors: Evaluators, Interprofessional Relationship, Models, Professional Training
Peer reviewed
Cordray, David S. – New Directions for Program Evaluation, 1986
The role of human judgment in the development and synthesis of evidence has not been adequately developed or acknowledged within quasi-experimental analysis. Corrective solutions need to confront the fact that causal analysis within complex environments will require a more active assessment that entails reasoning and statistical modeling.…
Descriptors: Evaluative Thinking, Models, Program Effectiveness, Program Evaluation
Peer reviewed
Yeaton, William; Sechrest, Lee – New Directions for Program Evaluation, 1987
In no-difference research, no differences are found among groups or conditions. This article summarizes the existing commentary on such research. The characteristics of no-difference research, its acceptance by the research community, strategies for conducting such studies, and its centrality within the experimental and nonexperimental paradigms…
Descriptors: Evaluation Methods, Literature Reviews, Models, Program Evaluation
Peer reviewed
Bickman, Leonard – New Directions for Program Evaluation, 1987
Ten functions of program theory are reviewed: contributing to social science knowledge; assisting policymakers; discriminating between theory failure and program failure; identifying the problem and target group; providing program implementation description; uncovering unintended effects; specifying intervening variables; improving formative use…
Descriptors: Evaluation Methods, Evaluation Utilization, Models, Problem Solving
Peer reviewed
Gerhard, Ronald J. – New Directions for Program Evaluation, 1981
The need for and applicability of general systems theory in human services evaluation is discussed. The role of evaluation in human services agencies is described and the necessity of combining the programs to be evaluated and the evaluation process itself in a single unifying conceptual model is demonstrated. (Author/AL)
Descriptors: Change Strategies, Evaluation Methods, Human Services, Integrated Activities
Peer reviewed
Shadish, William R., Ed. – New Directions for Program Evaluation, 1994
This special issue deals with the preparation of professional evaluators. Evaluation is considered as a profession that warrants specialized training. Aspects of necessary training and descriptions of current programs are discussed. (SLD)
Descriptors: Criteria, Evaluation Methods, Evaluators, Foreign Countries
Peer reviewed
Scheirer, Mary Ann – New Directions for Program Evaluation, 1987
Program theory and implementation process theory contribute, respectively, to specifying the what and the why of program delivery. This article discusses the relationship between these two theories and the types of data collection methods that can be derived from each in order to contribute to more comprehensive evaluation designs. (JAZ)
Descriptors: Data Collection, Evaluation Methods, Evaluators, Measurement Techniques
Peer reviewed
Thompson, Bruce W. – New Directions for Program Evaluation, 1981
To enable decision makers to become more aware of the levels of confidence that can be placed in the results of particular computerized decision models, methods for establishing confidence levels are discussed. The U.S. General Accounting Office has developed criteria and review procedures for evaluating computerized policy analysis models.…
Descriptors: Computer Oriented Programs, Decision Making, Evaluation Methods, Federal Programs
Peer reviewed
Lipsey, Mark W. – New Directions for Program Evaluation, 1993
Explores the role of theory in strengthening causal interpretations in nonexperimental research. Evaluators must conduct theory-driven research, concentrating on "small theory," in that the focus is on the explanation of processes specific to the program being evaluated. Theory-guided treatment research must be programmatic and…
Descriptors: Causal Models, Effect Size, Evaluators, Generalization
Peer reviewed
Conrad, Kendon J.; Miller, Todd Q. – New Directions for Program Evaluation, 1987
A short history of program theory in evaluation is reviewed. Two related problems in program evaluation are discussed and illustrated: (1) how to specify in measurable terms the consensus of theories and values that guides the program; and (2) how to construct a theoretical framework that specifies the use of these measurements. (Author/JAZ)
Descriptors: Evaluation Methods, Evaluation Problems, History, Measurement Techniques
Peer reviewed
Kiresuk, Thomas J.; And Others – New Directions for Program Evaluation, 1981
The consumer of a service is of primary importance when defining groups most concerned with program impact. Program effectiveness for the consumer may be increased through the use of certain guiding principles based on the extension of existing quality assurance and program evaluation methodologies. (Author/RL)
Descriptors: Accountability, Delivery Systems, Evaluation Methods, Human Services
Peer reviewed
Larson, Richard C.; Kaplan, Edward H. – New Directions for Program Evaluation, 1981
Evaluation is discussed as an information-gathering process. Currently popular evaluation programs are reviewed in relation to decision making, and various approaches that appear to contribute to the decision utility of evaluation (e.g., classical approaches, Bayesian approaches, adaptive designs, and model-based evaluations) are described. (Author/AL)
Descriptors: Bayesian Statistics, Decision Making, Evaluation Methods, Formative Evaluation
Peer reviewed
Sechrest, Lee, Ed. – New Directions for Program Evaluation, 1993
Two chapters of this issue consider critical multiplism as a research strategy with links to meta-analysis and generalizability theory. The unifying perspective it can provide for quantitative and qualitative evaluation is discussed. The third chapter explores meta-analysis as a way to improve causal inferences in nonexperimental data. (SLD)
Descriptors: Causal Models, Evaluation Methods, Generalizability Theory, Inferences