Mohr, L. B. – Evaluation and Program Planning, 2000
Suggests that there is a tendency in social science and program evaluation to adhere to some methodological practices by force of custom rather than because of their reasoned applicability. The practices discussed involve regression artifacts, random measurement error, and change or gain scores. (Author/SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
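
For readers unfamiliar with the regression artifact Mohr discusses, the following is a minimal simulation sketch (invented for illustration, not drawn from the article): selecting cases on an error-laden pretest makes their posttest scores drift back toward the mean even when nothing about them has changed.

```python
# Minimal sketch (hypothetical data, not from Mohr's paper): random measurement
# error alone produces a regression artifact when cases are selected on an
# extreme observed pretest score.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true_score = rng.normal(50, 10, n)           # stable underlying trait
pretest = true_score + rng.normal(0, 5, n)   # observed score = truth + random error
posttest = true_score + rng.normal(0, 5, n)  # fresh, independent error at posttest

# Select the bottom 10% on the *observed* pretest, as a remedial program might.
low = pretest < np.percentile(pretest, 10)

print(f"selected group, mean pretest : {pretest[low].mean():.2f}")
print(f"selected group, mean posttest: {posttest[low].mean():.2f}")
# The posttest mean is noticeably higher than the pretest mean even though no
# intervention occurred: the extreme pretest scores were partly extreme error.
```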

Reichardt, Charles S. – Evaluation and Program Planning, 2000
Agrees with L. Mohr that researchers are too quick to assume that measurement error is random, but disagrees with the claim that the idea of regression toward the mean has been a distraction and with the notion that change-score analysis should be avoided in favor of regression analysis. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology
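
The change-score versus regression-analysis question at the center of this exchange can be illustrated with a small simulated comparison; the sketch below is hypothetical and uses randomized assignment, where the two estimators agree, rather than the non-equivalent-group settings the authors debate.

```python
# Minimal sketch (simulated data, not from Reichardt's article): a change-score
# estimate versus a regression estimate that adjusts the posttest for the pretest.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000

true_score = rng.normal(50, 10, n)
treated = rng.random(n) < 0.5                    # randomized assignment
effect = 3.0                                     # true treatment effect

pretest = true_score + rng.normal(0, 5, n)
posttest = true_score + effect * treated + rng.normal(0, 5, n)

# Change-score estimator: mean gain (posttest - pretest), treated minus control.
gain = posttest - pretest
change_score_est = gain[treated].mean() - gain[~treated].mean()

# Regression (ANCOVA-style) estimator: regress posttest on treatment and pretest.
X = np.column_stack([np.ones(n), treated.astype(float), pretest])
beta, *_ = np.linalg.lstsq(X, posttest, rcond=None)
regression_est = beta[1]

print(f"change-score estimate: {change_score_est:.2f}")
print(f"regression estimate  : {regression_est:.2f}")
# Under randomization both estimators recover the true effect (about 3.0);
# the debate concerns which is preferable when groups differ at pretest.
```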

Mohr, L. B. – Evaluation and Program Planning, 2000
Responds to C. S. Reichardt's discussion of regression artifacts, random measurement error, and change scores. Emphasizes that attention to regression artifacts in program evaluation is almost bound to be problematic and proposes some arguments in support of this position. (SLD)
Descriptors: Error of Measurement, Program Evaluation, Regression (Statistics), Research Methodology

Kytle, Jackson; Millman, Ernest Joel – Evaluation and Program Planning, 1986
This paper focuses on the discrepancy the authors personally experienced between the stated principles of social research and their work on several applied social research projects over the last 10 years. Three cases of applied social research are presented and critiqued, and two types of structural problems are identified. (Author/LMO)
Descriptors: Educational Principles, Evaluation Criteria, Program Evaluation, Research Methodology

Ginsberg, Pauline E., Ed. – Evaluation and Program Planning, 1988
Program evaluations outside North America are described in five papers. A social intervention and evaluation program in Shanghai, differences between social science research in the United States and Ireland, program evaluation in Africa, an assessment of health promotion/disease prevention programs in Europe, and a cross-cultural perspective on…
Descriptors: Cross Cultural Studies, Cultural Differences, Health Services, International Studies

Ostrander, Susan A.; And Others – Evaluation and Program Planning, 1978
An information feedback model and an ideology of evaluation are posed and contrasted with experiences in evaluation research. Impact on policy is the criterion for success of an evaluation, and political constraints block this impact. Some ways social researchers might conceptualize and construct an appropriate base of power are considered.…
Descriptors: Evaluators, Feedback, Grants, Individual Power

Apsler, Robert – Evaluation and Program Planning, 1978
Strasser and Deniston's own analysis (TM 504 254) shows that post-planned evaluations are unsuitable substitutes for pre-planned evaluations. When viewed as post-experimental interviews, however, post-planned evaluations can produce valuable information which complements traditional experimental and quasi-experimental evaluations. (MH)
Descriptors: Data Collection, Evaluation Methods, Objectives, Program Effectiveness

Klay, William Earle – Evaluation and Program Planning, 1991
Evaluation and strategic management are related in that each attempts to improve the quality of policy decisions and each has evolved from a product-centered focus to one in which implementation and utilization are vital. Evaluators should understand strategic management theory to evaluate its applications appropriately. (SLD)
Descriptors: Cooperation, Decision Making, Evaluation Methods, Evaluation Utilization

Nguyen, Tuan D. – Evaluation and Program Planning, 1978
Criticizes Strasser and Deniston's post-planned evaluation (TM 504 253) because of their: (1) emphasis on evaluation research; (2) imposition of experimental rigor; (3) inapplicability to human service projects; (4) inattention to congruity between the program and its environment; (5) distinct characteristics of program evaluation; and (6)…
Descriptors: Environmental Influences, Evaluation Criteria, Evaluation Methods, Formative Evaluation

Britan, Gerald M. – Evaluation and Program Planning, 1978
The distinction between experimental evaluation (which employs generalizable analyses of discrete causes and effects) and contextual evaluation (which examines particular program operations holistically) is articulated. Program characteristics are examined and linked to experimental and contextual models of program evaluation. (Author/MH)
Descriptors: Content Analysis, Context Clues, Evaluation Methods, Holistic Evaluation

Schulberg, Herbert C. – Evaluation and Program Planning, 1978
Based on Strasser and Deniston's guidelines (TM 504 251), the author discusses pre-planned and post-planned program evaluation decisions based on: (1) attitude of the organization being evaluated; (2) availability of resources; and (3) usefulness of the findings for programmatic decision making. (MH)
Descriptors: Decision Making, Educational Assessment, Employer Attitudes, Evaluation Methods

Altschuld, James W.; And Others – Evaluation and Program Planning, 1993
A mail survey completed by 62 administrators in colleges of education in Ohio provides descriptive and correlational results about the nature of utilization of needs assessment. A moderately high multiple correlation is found for attitude and involvement, but other independent variables contribute minimally to prediction. (SLD)
Descriptors: Administrator Attitudes, College Administration, Correlation, Educational Assessment
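
As a purely illustrative aside, the kind of multiple correlation this study reports can be computed from an ordinary least-squares fit; the sketch below uses invented attitude, involvement, and utilization scores, not the Ohio survey data.

```python
# Minimal sketch (synthetic data, not the survey results): multiple correlation R
# for predicting utilization of needs assessment from attitude and involvement.
import numpy as np

rng = np.random.default_rng(2)
n = 62  # matches the number of responding administrators; the data are invented

attitude = rng.normal(0, 1, n)
involvement = 0.4 * attitude + rng.normal(0, 1, n)
utilization = 0.5 * attitude + 0.4 * involvement + rng.normal(0, 1, n)

# Ordinary least squares with the two predictors plus an intercept.
X = np.column_stack([np.ones(n), attitude, involvement])
beta, *_ = np.linalg.lstsq(X, utilization, rcond=None)
predicted = X @ beta

# Multiple correlation R = correlation between observed and predicted values.
R = np.corrcoef(utilization, predicted)[0, 1]
print(f"multiple correlation R = {R:.2f}, R^2 = {R**2:.2f}")
```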