Peer reviewed
Schochet, Peter; Burghardt, John – Evaluation Review, 2007
This article discusses the use of propensity scoring in experimental program evaluations to estimate impacts for subgroups defined by program features and participants' program experiences. The authors discuss estimation issues and provide specification tests. They also discuss the use of an overlooked data collection design--obtaining predictions…
Descriptors: Program Effectiveness, Scoring, Experimental Programs, Control Groups
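The propensity-scoring idea sketched in this abstract can be illustrated briefly. The following is a hedged sketch, not the authors' estimator: it assumes simulated data in a pandas DataFrame with hypothetical column names, models subgroup membership (observed only for the treatment group) from baseline covariates, and weights control-group members by their predicted membership probability.

```python
# Hedged sketch of propensity scoring for a subgroup defined by program
# experience (observed only for the treatment group). Column names are
# hypothetical; this is illustrative, not the authors' exact procedure.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),   # random assignment
    "x1": rng.normal(size=n),         # baseline covariates
    "x2": rng.normal(size=n),
})
df["y"] = 1.0 + 0.5 * df["x1"] + 0.3 * df["treat"] + rng.normal(size=n)
# Subgroup membership (e.g., completed the program), seen only for treatments.
df["subgroup"] = ((df["x1"] + rng.normal(size=n)) > 0).astype(int)

treated = df[df["treat"] == 1]
controls = df[df["treat"] == 0]

# 1) Model subgroup membership from baseline covariates among treatments.
X_t = sm.add_constant(treated[["x1", "x2"]])
pscore_model = sm.Logit(treated["subgroup"], X_t).fit(disp=0)

# 2) Predict each control member's probability of belonging to the subgroup.
X_c = sm.add_constant(controls[["x1", "x2"]])
p_c = pscore_model.predict(X_c)

# 3) Weight controls by the predicted probability so their baseline mix
#    resembles the subgroup's, then compare means to estimate the impact.
sub_treated_mean = treated.loc[treated["subgroup"] == 1, "y"].mean()
weighted_control_mean = np.average(controls["y"], weights=p_c)
print("Estimated subgroup impact:", sub_treated_mean - weighted_control_mean)
```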
Peer reviewed
Heilman, John G. – Evaluation Review, 1980
The choice of either experimental research or process-oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)
Descriptors: Evaluation Methods, Evaluative Thinking, Models, Program Evaluation
Peer reviewed
Nagel, Stuart S. – Evaluation Review, 1984
Introspective interviewing can often determine the magnitude of relations more meaningfully than statistical analysis. Deduction from empirically validated premises avoids many research design problems. Guesswork can be combined with sensitivity analysis to determine the effects of guesses and missing information on conclusions. (Author/DWH)
Descriptors: Deduction, Evaluation Methods, Intuition, Policy Formation
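The combination of guesswork with sensitivity analysis described here can be made concrete with a minimal, hedged sketch. The benefit and cost figures below are invented; the point is only to check whether the conclusion survives over a plausible range of the guessed quantity.

```python
# Hedged sketch: sensitivity analysis around a guessed parameter.
# The numbers are invented for illustration only.
import numpy as np

cost_per_participant = 1200.0   # assumed known program cost
guessed_benefit = 1500.0        # the "guess" under scrutiny

# Vary the guess over a plausible range and check whether the
# benefit-cost conclusion changes.
for benefit in np.linspace(0.5 * guessed_benefit, 1.5 * guessed_benefit, 11):
    net = benefit - cost_per_participant
    verdict = "benefits exceed costs" if net > 0 else "costs exceed benefits"
    print(f"benefit={benefit:7.1f}  net={net:7.1f}  {verdict}")
```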
Peer reviewed
Leviton, Laura C.; Hughes, Edward F. X. – Evaluation Review, 1981
The use of evaluations for policy and program development and change is critically reviewed. Existing conceptions of utilization are discussed and improvements in the methods of detecting use are suggested. Five clusters of variables affecting utilization are described and hypotheses about the reasons for their effects are outlined. (Author/AL)
Descriptors: Cluster Grouping, Definitions, Literature Reviews, Methods
Peer reviewed
Alemi, Farrokh – Evaluation Review, 1987
Trade-offs are implicit in choosing a subjective or objective method for evaluating social programs. The differences between Bayesian and traditional statistics, decision and cost-benefit analysis, and anthropological and traditional case systems illustrate trade-offs in choosing methods because of limited resources. (SLD)
Descriptors: Bayesian Statistics, Case Studies, Evaluation Methods, Program Evaluation
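To make the Bayesian-versus-traditional trade-off concrete, here is a hedged, minimal sketch with invented data: a Beta-Binomial posterior for a program success rate alongside the usual frequentist point estimate and normal-approximation interval.

```python
# Hedged sketch: Bayesian vs. traditional estimates of a success rate.
# Data are invented for illustration.
import numpy as np
from scipy import stats

successes, n = 36, 50

# Traditional: point estimate with a normal-approximation 95% interval.
p_hat = successes / n
se = np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian: Beta(1, 1) prior updated to a Beta posterior; 95% credible interval.
posterior = stats.beta(1 + successes, 1 + n - successes)
cred = posterior.interval(0.95)

print(f"frequentist: {p_hat:.3f}  95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"bayesian mean: {posterior.mean():.3f}  95% credible ({cred[0]:.3f}, {cred[1]:.3f})")
```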
Peer reviewed
Dennis, Michael L. – Evaluation Review, 1990
Six potential problems with the use of randomized experiments to evaluate programs in the field are addressed. Problems include treatment dilution, treatment contamination or confounding, inaccurate case flow and power estimates, violations of the random assignment processes, changes in the environmental context, and changes in the treatment…
Descriptors: Drug Rehabilitation, Evaluation Problems, Experiments, Field Studies
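One of the listed problems, inaccurate case flow and power estimates, lends itself to a short hedged sketch. The effect size and attrition rate below are assumptions for illustration; the calculation simply inflates the required analyzable sample to an intake target.

```python
# Hedged sketch: sample size for a field experiment, inflated for expected
# attrition. Effect size and attrition rate are assumed values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.25, alpha=0.05, power=0.80)

expected_attrition = 0.30  # share of cases lost before follow-up (assumed)
intake_per_group = n_per_group / (1 - expected_attrition)

print(f"analyzable cases needed per group: {n_per_group:.0f}")
print(f"intake needed per group at {expected_attrition:.0%} attrition: {intake_per_group:.0f}")
```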
Peer reviewed
Proper, Elizabeth C.; Pierre, Robert G. – Evaluation Review, 1980
This response to TM 505 708 briefly reviews the five major points of that article, and adds seven points that evaluators should consider when preparing reports. Illustrations are taken from Project Follow Through. (BW)
Descriptors: Analysis of Covariance, Data Analysis, Predictor Variables, Program Evaluation
Peer reviewed
Murray, David M.; And Others – Evaluation Review, 1994
This article presents a synopsis of each of seven presentations given at a conference on design and analysis in community trial studies. Papers identify problems with community trials and discuss strengths and weaknesses associated with design and analysis strategies. Areas of consensus are summarized. (SLD)
Descriptors: Cohort Analysis, Conferences, Evaluation Methods, Intervention
Peer reviewed
DiCostanzo, James L.; Eichelberger, R. Tony – Evaluation Review, 1980
Design, analysis, and reporting considerations for the application of analysis of covariance (ANCOVA) techniques in educational settings are described. Numerous examples are drawn from the national Follow Through evaluation, and suggestions for improving reports using ANCOVA-type techniques are presented. (Author/BW)
Descriptors: Analysis of Covariance, Data Analysis, Error of Measurement, Predictor Variables
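As a hedged illustration of the ANCOVA-type analysis described, here is a minimal sketch with simulated data and hypothetical variable names: a posttest regressed on a pretest covariate and a treatment indicator, the standard form such an analysis takes.

```python
# Hedged sketch of an ANCOVA-type analysis: posttest regressed on pretest
# and treatment. Data are simulated; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({"treat": rng.integers(0, 2, n)})
df["pretest"] = rng.normal(50, 10, n)
df["posttest"] = 5 + 0.9 * df["pretest"] + 3.0 * df["treat"] + rng.normal(0, 5, n)

# The C(treat) coefficient is the covariate-adjusted treatment effect.
model = smf.ols("posttest ~ pretest + C(treat)", data=df).fit()
print(model.summary().tables[1])
```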
Peer reviewed
McClelland, Lou; Cook, Stuart W. – Evaluation Review, 1980
Electricity conservation programs were implemented in matched pairs of office-classroom-laboratory buildings and dormitories. The methodological problems of predicting consumption levels, interpreting why changes in consumption occurred, and estimating initial waste levels are discussed along with their implications for the conduct of behavioral…
Descriptors: College Buildings, Electrical Appliances, Electricity, Energy Conservation
Peer reviewed
Trochim, William M.K. – Evaluation Review, 1982
Meta-analysis of Title I program evaluations shows that the norm-referenced model overestimates positive effectiveness, while the regression-discontinuity design underestimates it. Potential biases include residual regression artifacts, attrition and time-of-testing problems in the norm-referenced design, and assignment, measurement, and data…
Descriptors: Compensatory Education, Data Collection, Elementary Secondary Education, Evaluation Methods
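A hedged sketch of the regression-discontinuity design discussed (not Trochim's analysis itself): assignment to compensatory services falls below a pretest cutoff, and the program effect is read off as the jump at the cutoff. Data, cutoff, and coefficients are invented.

```python
# Hedged sketch of a regression-discontinuity analysis. Assignment is by a
# pretest cutoff; the effect is the discontinuity at the cutoff. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
pretest = rng.normal(50, 10, n)
cutoff = 45.0
treat = (pretest < cutoff).astype(int)   # compensatory services below the cutoff
posttest = 10 + 0.8 * pretest + 4.0 * treat + rng.normal(0, 5, n)

df = pd.DataFrame({"posttest": posttest, "treat": treat,
                   "centered": pretest - cutoff})

# Separate slopes on each side of the cutoff; the treat coefficient is the
# estimated discontinuity (program effect) at the cutoff.
model = smf.ols("posttest ~ treat * centered", data=df).fit()
print(model.params["treat"])
```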
Peer reviewed
Ormala, Erkki – Evaluation Review, 1994
Trends in European practice that relate to qualitative assessment in the evaluation of the impact of research and innovation are discussed and analyzed. To date, European evaluations have been concerned mainly with quality and direct impact, with few assessments of medium- or long-term impact. (SLD)
Descriptors: Data Analysis, Data Collection, Evaluation Methods, Foreign Countries
Peer reviewed
Hansen, William B.; And Others – Evaluation Review, 1990
A meta-analysis of school-based substance abuse prevention studies revealed that the mean proportion of subjects retained dropped from 81.4 percent at the three-month followup to 67.5 percent at the three-year followup. Researchers should interpret their results in light of these normative data and adopt second-effort strategies to reduce…
Descriptors: Attrition (Research Studies), Cohort Analysis, Educational Research, Followup Studies
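The retention norms reported in the abstract translate directly into planning arithmetic. In the hedged sketch below, the intake figure is invented; the retention rates are the ones quoted above.

```python
# Hedged sketch: applying the reported retention norms to a planned cohort.
# The intake figure is invented; the retention rates come from the abstract.
intake = 1000
retained_3_months = intake * 0.814   # 81.4% retained at three months
retained_3_years = intake * 0.675    # 67.5% retained at three years

print(f"retained at 3 months: {retained_3_months:.0f}")
print(f"retained at 3 years:  {retained_3_years:.0f}")
print(f"cases lost by 3 years: {intake - retained_3_years:.0f}")
```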
Peer reviewed
Moffitt, Robert – Evaluation Review, 1991
Statistical methods for program evaluation with nonexperimental data are reviewed with emphasis on circumstances in which nonexperimental data are valid. Three solutions are proposed for problems of selection bias, and implications for evaluation design and data collection and analysis are discussed. (SLD)
Descriptors: Bias, Cohort Analysis, Equations (Mathematics), Estimation (Mathematics)
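To make the selection-bias problem concrete, here is a hedged sketch of one widely used correction, a Heckman-style two-step control function, offered purely as an illustration rather than as any of the three solutions the article proposes: a probit for program participation supplies an inverse Mills ratio that enters the outcome regression for the selected sample.

```python
# Hedged sketch of a Heckman-style two-step selection correction; illustrative
# only, not necessarily one of the article's proposed solutions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(3)
n = 3000
z = rng.normal(size=n)   # instrument affecting participation only
x = rng.normal(size=n)   # covariate in the outcome equation
u = rng.normal(size=n)   # unobservable driving both selection and outcome
d = ((0.8 * z + 0.5 * x + u) > 0).astype(int)      # program participation
y = 1.0 + 0.5 * x + 0.7 * u + rng.normal(size=n)   # outcome, used only for participants

df = pd.DataFrame({"y": y, "x": x, "z": z, "d": d})

# Step 1: probit for participation; inverse Mills ratio for the selected sample.
X_sel = sm.add_constant(df[["x", "z"]])
probit = sm.Probit(df["d"], X_sel).fit(disp=0)
xb = X_sel.dot(probit.params)
df["imr"] = norm.pdf(xb) / norm.cdf(xb)

# Step 2: outcome regression on the selected sample, adding the Mills ratio
# to absorb the selection term; compare with the naive regression without it.
sel = df[df["d"] == 1]
naive = sm.OLS(sel["y"], sm.add_constant(sel[["x"]])).fit()
corrected = sm.OLS(sel["y"], sm.add_constant(sel[["x", "imr"]])).fit()
print("naive x coefficient:    ", round(naive.params["x"], 3))
print("corrected x coefficient:", round(corrected.params["x"], 3))
```

The true x coefficient in the simulation is 0.5; the naive regression on the selected sample drifts away from it, while the corrected regression moves back toward it.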
Peer reviewed
Dwyer, James H. – Evaluation Review, 1984
A solution to the problem of specification error due to excluded variables in statistical models of treatment effects in nonrandomized (nonequivalent) control group designs is presented. It involves longitudinal observation with at least two pretests. A maximum likelihood estimation program such as LISREL may provide reasonable estimates of…
Descriptors: Control Groups, Mathematical Models, Maximum Likelihood Statistics, Monte Carlo Methods
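The article's remedy is maximum likelihood estimation of a latent-variable model (e.g., in LISREL), which is not reproduced here. The hedged sketch below instead uses plain OLS on simulated data only to convey why a second pretest helps when the pretest is an error-prone measure of the trait driving selection; the true program effect is set to zero.

```python
# Hedged sketch (not the LISREL/ML approach in the article): simulated
# nonequivalent groups where selection depends on a latent trait measured
# with error; compare covariate adjustment using one pretest vs. two.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 5000
trait = rng.normal(size=n)                        # latent trait driving selection
treat = (trait + rng.normal(size=n) > 0).astype(int)
pre1 = trait + rng.normal(scale=1.0, size=n)      # two error-prone pretests
pre2 = trait + rng.normal(scale=1.0, size=n)
post = trait + 0.0 * treat + rng.normal(size=n)   # true program effect is zero

df = pd.DataFrame({"post": post, "treat": treat, "pre1": pre1, "pre2": pre2})
one = smf.ols("post ~ treat + pre1", data=df).fit()
two = smf.ols("post ~ treat + pre1 + pre2", data=df).fit()
print("effect adjusting for one pretest: ", round(one.params["treat"], 3))
print("effect adjusting for two pretests:", round(two.params["treat"], 3))
```

Neither adjustment fully removes the bias, but the two-pretest estimate sits closer to zero, which is the intuition behind using repeated pretests in these designs.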