Descriptor
Research Design | 15 |
Evaluation Methods | 11 |
Program Evaluation | 10 |
Research Methodology | 7 |
Research Problems | 6 |
Stakeholders | 3 |
Statistical Analysis | 3 |
Validity | 3 |
Case Studies | 2 |
Data Analysis | 2 |
Data Collection | 2 |
Source
New Directions for Program Evaluation | 15 |
Author
Conner, Ross F. | 2 |
Weiss, Carol H. | 2 |
Andrew, Loyd D. | 1 |
Bickman, Leonard | 1 |
Cohen, David K. | 1 |
Conrad, Kendon J., Ed. | 1 |
Fortune, Jim C. | 1 |
Light, Richard J. | 1 |
Lipsey, Mark W. | 1 |
McBee, Janice K. | 1 |
McCleary, Richard | 1 |
Publication Type
Journal Articles | 15 |
Opinion Papers | 6 |
Reports - Evaluative | 4 |
Guides - Non-Classroom | 3 |
Information Analyses | 3 |
Reports - Research | 3 |
Collected Works - Serials | 1 |
Reports - Descriptive | 1 |
Location
Australia | 1 |

Yeaton, William; Sechrest, Lee – New Directions for Program Evaluation, 1987
In no-difference research, no differences are found among groups or conditions. This article summarizes the existing commentary on such research. The characteristics of no-difference research, its acceptance by the research community, strategies for conducting such studies, and its centrality within the experimental and nonexperimental paradigms…
Descriptors: Evaluation Methods, Literature Reviews, Models, Program Evaluation
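A "no difference" conclusion is only as credible as the study's power to detect a real effect, which is one reason such findings meet resistance. The sketch below is purely illustrative (group size, effect size, and simulation count are assumed, not taken from the article); it uses numpy and scipy to estimate how often a two-sample t test would detect a modest true effect in small groups.

    # Illustrative power check behind "no-difference" claims; all values are assumed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_per_group = 30      # assumed group size
    true_effect = 0.4     # assumed true standardized mean difference
    n_sims = 5000

    detections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        _, p_value = stats.ttest_ind(treatment, control)
        if p_value < 0.05:
            detections += 1

    print(f"Estimated power: {detections / n_sims:.2f}")
    # Low power means a nonsignificant result is weak evidence of "no difference."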

Yeaton, William H.; Wortman, Paul M. – New Directions for Program Evaluation, 1984
Solutions to methodological problems in research synthesis of medical technologies, relating to temporal change, persons receiving treatment, research design, analytic procedures, and threats to validity, are presented. These solutions should help with the planning and methodology of research synthesis in other areas. (BS)
Descriptors: Medical Research, Meta Analysis, Patients, Research Design
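A core analytic procedure in research synthesis is inverse-variance (fixed-effect) pooling of study effect sizes. The sketch below is a generic illustration with invented effect sizes and variances, not data from the medical studies discussed.

    # Fixed-effect (inverse-variance) pooling of study effects; illustrative data only.
    import numpy as np

    effects = np.array([0.30, 0.12, 0.45, 0.05])    # assumed study effect sizes
    variances = np.array([0.04, 0.02, 0.09, 0.03])  # assumed sampling variances

    weights = 1.0 / variances
    pooled = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))

    print(f"Pooled effect: {pooled:.3f} "
          f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")

A random-effects model would widen the interval when studies are heterogeneous; the fixed-effect version here only shows the basic weighting logic.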

Conner, Ross F. – New Directions for Program Evaluation, 1985
The author compares his experiences evaluating Peace Corps volunteers in Kenya and evaluating a health promotion program for adolescents in Orange County, California. Doing international evaluations can improve domestic evaluations by heightening awareness of often unrecognized cultural variations in the domestic setting. (BS)
Descriptors: Comparative Analysis, Cultural Awareness, Evaluation Methods, Experimenter Characteristics

Conner, Ross F. – New Directions for Program Evaluation, 1980
Is it ethical to select clients at random for a beneficial social service, then deny the benefits to a control group for the sake of science? Participation of control groups in planning, implementation and evaluation of social programs may resolve ethical issues. (Author/CP)
Descriptors: Control Groups, Ethics, Evaluation Methods, Program Evaluation
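One design that the ethics discussion points toward is a waiting-list control: every eligible client eventually receives the service, and randomization only decides who receives it first. A minimal sketch, with hypothetical client identifiers and a fixed seed so the draw can be audited:

    # Random assignment to an immediate-service group and a waiting-list control group.
    # Client identifiers are hypothetical.
    import random

    clients = [f"client_{i:02d}" for i in range(1, 21)]
    rng = random.Random(42)   # fixed seed makes the assignment reproducible and auditable
    rng.shuffle(clients)

    midpoint = len(clients) // 2
    immediate = sorted(clients[:midpoint])   # receives the service now
    waiting = sorted(clients[midpoint:])     # control group; receives the service later

    print("Immediate service:", immediate)
    print("Waiting list (control):", waiting)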

Light, Richard J. – New Directions for Program Evaluation, 1984
Using examples from government program evaluation studies, six areas where research synthesis is more effective than individual studies are presented. Research synthesis can: (1) help match treatment type with recipient type; (2) explain which treatment features matter; (3) explain conflicting results; (4) determine critical outcomes; (5) assess…
Descriptors: Aptitude Treatment Interaction, Evaluation Methods, Federal Programs, Meta Analysis

Weiss, Carol H. – New Directions for Program Evaluation, 1983
Analysis of the assumptions underlying the stakeholder approach to evaluation, combined with the limited experience in testing the approach reported in this volume, suggests that some claims are cogent and others problematic. (Author)
Descriptors: Case Studies, Decision Making, Evaluation Criteria, Evaluation Methods

Lipsey, Mark W. – New Directions for Program Evaluation, 1993
Explores the role of theory in strengthening causal interpretations in nonexperimental research. Evaluators must conduct theory-driven research, concentrating on "small theory" that focuses on explaining processes specific to the program being evaluated. Theory-guided treatment research must be programmatic and…
Descriptors: Causal Models, Effect Size, Evaluators, Generalization
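Effect size estimation is central to the kind of theory-guided treatment research described here. A generic sketch of a standardized mean difference (Cohen's d with a pooled standard deviation), computed on assumed outcome scores:

    # Standardized mean difference (Cohen's d) with a pooled SD; data are invented.
    import numpy as np

    treatment = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.7])
    control = np.array([10.4, 11.9, 10.1, 12.6, 11.2, 10.8])

    n1, n2 = len(treatment), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    d = (treatment.mean() - control.mean()) / pooled_sd
    print(f"Cohen's d = {d:.2f}")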

New Directions for Program Evaluation, 1980
Representative models of program evaluation are described by their approach to values, and categorized by empirical style: positivism versus humanism. The models are: social process audit; experimental/quasi-experimental research design; goal-free evaluation; systems evaluation; cost-benefit analysis; and accountability program evaluation. (CP)
Descriptors: Accountability, Cost Effectiveness, Critical Thinking, Evaluation Methods
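Of the models listed, cost-benefit analysis is the most directly computational. A toy net-present-value comparison (all figures and the discount rate are assumed) shows the basic form of the calculation:

    # Toy cost-benefit comparison: discounted benefit stream versus program cost.
    # All figures are assumed for illustration.
    annual_benefits = [40_000, 55_000, 60_000]  # assumed benefits over three years
    program_cost = 120_000                      # assumed up-front cost
    discount_rate = 0.05

    discounted_benefits = sum(
        b / (1 + discount_rate) ** (year + 1)
        for year, b in enumerate(annual_benefits)
    )
    net_benefit = discounted_benefits - program_cost
    print(f"Discounted benefits: {discounted_benefits:,.0f}  Net benefit: {net_benefit:,.0f}")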

Fortune, Jim C.; McBee, Janice K. – New Directions for Program Evaluation, 1984
Twenty-nine steps necessary for preparing data files for secondary analysis are discussed. The complexity of the preparation varies with database characteristics and planned use. Required techniques (file verification, sample verification, file merger, data aggregation, file modification, and variable controls) and seven associated pitfalls are defined…
Descriptors: Computer Storage Devices, Data Analysis, Data Collection, Data Processing
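Several of the listed techniques (file verification, file merger, data aggregation) correspond to routine data-frame operations. A minimal sketch, assuming two hypothetical extract files keyed by a student identifier (file names and columns are invented):

    # File verification, merger, and aggregation for secondary analysis (hypothetical files).
    import pandas as pd

    students = pd.read_csv("students.csv")   # assumed columns: student_id, district
    outcomes = pd.read_csv("outcomes.csv")   # assumed columns: student_id, test_score

    # File verification: the merge key must be present and unique in each file.
    assert students["student_id"].is_unique, "duplicate IDs in students file"
    assert outcomes["student_id"].is_unique, "duplicate IDs in outcomes file"

    # File merger: keep records present in both files and report the match rate.
    merged = students.merge(outcomes, on="student_id", how="inner")
    print(f"Matched {len(merged)} of {len(students)} student records")

    # Data aggregation: collapse to the district level for analysis.
    print(merged.groupby("district")["test_score"].mean())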

Andrew, Loyd D. – New Directions for Program Evaluation, 1984
A detailed examination of the Higher Education General Information Survey (HEGIS) data base--its content, development, and past and potential uses for researchers--is presented. Because knowledge of the data elements is crucial for successful secondary analysis, procedures for collecting needed information on the data in HEGIS are suggested. (BS)
Descriptors: Data Analysis, Data Collection, Databases, Educational Research

Weiss, Carol H. – New Directions for Program Evaluation, 1983
The promise of the stakeholder approach to evaluation lies in its potential to counter criticism that evaluation is too narrow, unrealistic in its standards for success, unfair to program staff and participants, and irrelevant to decision makers. (Author)
Descriptors: Elementary Secondary Education, Evaluation Methods, Evaluation Utilization, History

Cohen, David K. – New Directions for Program Evaluation, 1983
Critiquing the stakeholder idea, the author states: if government chooses to take account of competing views in social program evaluation, it can get a better result if it encourages the competing views to find a voice of their own, not to speak through the government's chosen instrument. (Author/PN)
Descriptors: Case Studies, Educational Policy, Evaluation Methods, Evaluation Utilization

McCleary, Richard; Riggs, James E. – New Directions for Program Evaluation, 1982
Time series analysis is applied to an assessment of the temporary and permanent impact of the 1975 Australian Family Law Act and its effect on the number of divorces. The application and construct validity of the model are examined. (Author/PN)
Descriptors: Court Litigation, Demography, Divorce, Evaluation Methods
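The article uses ARIMA-style intervention models; a simpler stand-in that conveys the same logic is an interrupted time series (segmented) regression with a level-shift term at the intervention point. The sketch below runs on simulated monthly counts, not the Australian divorce data.

    # Interrupted time series sketch: trend plus a level shift at an intervention point.
    # All series values are simulated, not the divorce data analyzed in the article.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n_months = 120
    intervention = 60                      # assumed month the law takes effect

    t = np.arange(n_months)
    step = (t >= intervention).astype(float)
    y = 200 + 0.5 * t + 40 * step + rng.normal(0, 10, n_months)

    X = sm.add_constant(np.column_stack([t, step]))
    fit = sm.OLS(y, X).fit()
    print(fit.params)  # intercept, pre-existing trend, estimated level shift

A full intervention analysis would also model serial dependence in the errors, as the ARIMA approach does; the OLS version here only illustrates the level-shift idea.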

Conrad, Kendon J., Ed. – New Directions for Program Evaluation, 1994
The nine articles of this theme issue stem from an alcohol and drug abuse initiative that involved 14 projects, 10 of which began with randomized clinical trials. These papers describe implementation problems associated with experimentation in field research and the focus on ensuring internal validity. (SLD)
Descriptors: Alcohol Abuse, Drug Abuse, Evaluation Methods, Experiments

Bickman, Leonard – New Directions for Program Evaluation, 1985
This chapter describes the implementation of, and lessons learned from, three field experiments in education. All three evaluation designs used randomized assignment. Results showed that, even under very adverse and unstable conditions, randomized designs can be maintained. (LMO)
Descriptors: Attrition (Research Studies), Educational Assessment, Elementary Secondary Education, Evaluation Methods
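A common way to keep a randomized design intact under unstable field conditions is to randomize within small blocks (for example, within each school), so problems at one site do not unbalance the whole study. A minimal sketch with invented site and student identifiers:

    # Blocked (within-site) random assignment; site and student IDs are hypothetical.
    import random

    sites = {
        "school_a": [f"a{i}" for i in range(1, 9)],
        "school_b": [f"b{i}" for i in range(1, 7)],
    }

    rng = random.Random(7)
    assignment = {}
    for site, students in sites.items():
        shuffled = students[:]
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        assignment[site] = {"treatment": shuffled[:half], "control": shuffled[half:]}

    for site, groups in assignment.items():
        print(site, groups)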