
Mark, Melvin M. – New Directions for Program Evaluation, 1986
Several validity typologies are overviewed and then integrated. The integrated framework is used to contrast the positions of Campbell and Cronbach. The current practice and logic of quasi-experimentation are critiqued, and expansions beyond the primary focus of dominant validity typologies are suggested. (BS)
Descriptors: Evaluative Thinking, Generalization, Program Evaluation, Quasiexperimental Design

Cordray, David S. – New Directions for Program Evaluation, 1986
The role of human judgment in the development and synthesis of evidence has not been adequately developed or acknowledged within quasi-experimental analysis. Corrective solutions need to confront the fact that causal analysis within complex environments will require a more active assessment that entails reasoning and statistical modeling.…
Descriptors: Evaluative Thinking, Models, Program Effectiveness, Program Evaluation

Yeaton, William; Sechrest, Lee – New Directions for Program Evaluation, 1987
In no-difference research, no differences are found among groups or conditions. This article summarizes the existing commentary on such research. The characteristics of no-difference research, its acceptance by the research community, strategies for conducting such studies, and its centrality within the experimental and nonexperimental paradigms…
Descriptors: Evaluation Methods, Literature Reviews, Models, Program Evaluation

Shadish, William R., Jr.; And Others – New Directions for Program Evaluation, 1986
Because no defensible option for performing a task within quasi-experimentation is usually unbiased, it is desirable to select several options that reflect biases in different directions. The benefits of applying a critical multiplism approach to causal hypotheses, group nonequivalence, and units of analysis in quasi-experimentation are discussed.…
Descriptors: Bias, Matched Groups, Program Evaluation, Quasiexperimental Design

Campbell, Donald T. – New Directions for Program Evaluation, 1986
Confusion about the meaning of validity in quasi-experimental research can be addressed by carefully relabeling types of validity. Internal validity can more aptly be termed "local molar causal validity." More tentatively, the "principle of proximal similarity" can be substituted for the concept of external validity. (Author)
Descriptors: Definitions, Quasiexperimental Design, Sampling, Social Science Research

Rindskopf, David – New Directions for Program Evaluation, 1986
Modeling the process by which participants are selected into groups, rather than adjusting for preexisting group differences, provides the basis for several new approaches to the analysis of data from nonrandomized studies. Econometric approaches, the propensity scores approach, and the relative assignment variable approach to the modeling of…
Descriptors: Effect Size, Experimental Groups, Intelligence Quotient, Mathematical Models

Yeaton, William H.; Wortman, Paul M. – New Directions for Program Evaluation, 1984
Solutions to methodological problems in medical technologies research synthesis relating to temporal change, persons receiving treatment, research design, analytic procedures, and threats to validity are presented. These solutions should help with the planning and methodology of research synthesis in other areas. (BS)
Descriptors: Medical Research, Meta Analysis, Patients, Research Design

Conner, Ross F. – New Directions for Program Evaluation, 1985
The author compares his experiences evaluating Peace Corps volunteers in Kenya and evaluating a health promotion program for adolescents in Orange County, California. Doing international evaluations can improve domestic evaluations by heightening awareness of often unrecognized cultural variations in the domestic setting. (BS)
Descriptors: Comparative Analysis, Cultural Awareness, Evaluation Methods, Experimenter Characteristics

Conner, Ross F. – New Directions for Program Evaluation, 1980
Is it ethical to select clients at random for a beneficial social service, then deny the benefits to a control group for the sake of science? Participation of control groups in planning, implementation and evaluation of social programs may resolve ethical issues. (Author/CP)
Descriptors: Control Groups, Ethics, Evaluation Methods, Program Evaluation

Reichardt, Charles; Gollob, Harry – New Directions for Program Evaluation, 1986
Causal models often omit variables that should be included, use variables that are measured fallibly, and ignore time lags. Such practices can lead to severely biased estimates of effects. The discussion explains these biases and shows how to take them into account. (Author)
Descriptors: Effect Size, Error of Measurement, High Schools, Mathematical Models

Light, Richard J. – New Directions for Program Evaluation, 1984
Using examples from government program evaluation studies, six areas where research synthesis is more effective than individual studies are presented. Research synthesis can: (1) help match treatment type with recipient type; (2) explain which treatment features matter; (3) explain conflicting results; (4) determine critical outcomes; (5) assess…
Descriptors: Aptitude Treatment Interaction, Evaluation Methods, Federal Programs, Meta Analysis

Weiss, Carol H. – New Directions for Program Evaluation, 1983
Analysis of the assumptions underlying the stakeholder approach to evaluation, combined with the limited experience in testing the approach reported in this volume, suggests that some claims are cogent and others problematical. (Author)
Descriptors: Case Studies, Decision Making, Evaluation Criteria, Evaluation Methods

Lipsey, Mark W. – New Directions for Program Evaluation, 1993
Explores the role of theory in strengthening causal interpretations in nonexperimental research. Evaluators must conduct theory-driven research, concentrating on "small theory," in that the focus is on the explanation of processes specific to the program being evaluated. Theory-guided treatment research must be programmatic and…
Descriptors: Causal Models, Effect Size, Evaluators, Generalization

Bryant, Fred B.; Wortman, Paul M. – New Directions for Program Evaluation, 1984
Methods for selecting relevant and appropriate quasi-experimental studies for inclusion in research synthesis using the threats-to-validity approach are presented. Effects of including and excluding studies are evaluated. (BS)
Descriptors: Evaluation Criteria, Meta Analysis, Quasiexperimental Design, Research Methodology

New Directions for Program Evaluation, 1980
Representative models of program evaluation are described by their approach to values, and categorized by empirical style: positivism versus humanism. The models are: social process audit; experimental/quasi-experimental research design; goal-free evaluation; systems evaluation; cost-benefit analysis; and accountability program evaluation. (CP)
Descriptors: Accountability, Cost Effectiveness, Critical Thinking, Evaluation Methods